Retro Reboot #1: Set It & Forget It – Scheduling FDMEE Tasks

As with most nostalgic items, a reboot is the next best thing. From video game consoles to television shows, old favorites are getting a modern facelift and a new prime-time seat on television.  I have jumped on that bandwagon to revitalize a previous post authored by Tony Scalese: Set It & Forget It – Scheduling FDM Tasks.

As with most reboots, there must be flair and alluring content to capture old and new audiences. Since Oracle Financial Data Quality Management Enterprise Edition (FDMEE) has been in the Enterprise Performance Management (EPM) space for a while and has moved into the Cloud, this is a great time for its reboot!

Oh Great…A Reboot. Now What?

Scheduling tasks in FDMEE has never been easier. Oracle provides several ways to do this for a variety of out-of-the-box activities.  Is there a report that you want to run and email every hour?  A script that needs to run hourly?  Batch automation every 15 minutes?  No worries!  FDMEE can handle all of that with out-of-the-box functionality.

Let us pause for a moment and determine what is needed to make this happen:

  1. Is there a business case and justification for what is about to be scheduled?
  2. Who benefits and how will they be notified of the results?
  3. Is there a defined frequency at which the activity must take place?

Getting Started

First, understand that scheduling in FDMEE is built directly into the Graphical User Interface (GUI); it is available anywhere you see the “SCHEDULE” button. Unlike its FDM predecessor, which required an independent scheduling utility to be installed and configured, having it in the web interface removes that complexity.

A word of caution:  while this screen allows items to be scheduled, there isn’t a screen that shows what has already been scheduled.  To see that, access to Oracle Data Integrator (ODI) is needed – but more on this later.

Initially, the screen shows the types of schedules that can be created and their relevant inputs.

Retro Reboot Screen Shot 1

Below is a reference guide to outline FDMEE’s scheduling capabilities.

Schedule Type          | Inputs                                        | Notes / Examples
Simple                 | TimeZone, Date, HH:MM:SS, AM/PM               | Single run based on the specified inputs. Example: run 08/02/2018 at 11 AM.
Hourly                 | TimeZone, MM:SS                               | Repeating run at the specified MM:SS of every hour. Example: run every hour at the 22-minute mark.
Daily                  | TimeZone, HH:MM:SS, AM/PM                     | Every day at the specified time. Example: run every day at 11 AM.
Weekly                 | TimeZone, Day of the Week, HH:MM:SS, AM/PM    | Every specified day at the specified time. Example: run every Monday through Friday at 11 AM.
Monthly (day of month) | TimeZone, Date, HH:MM:SS, AM/PM               | Specified day of the month at the specified time. Example: run on the 2nd day of every month at 11 AM.
Monthly (week day)     | TimeZone, Iteration, Weekday, HH:MM:SS, AM/PM | Specified interval and weekday at the specified time. Example: run every third Tuesday at 11 AM.

Why Does the Job Run Under My UserID?

Because the system assigns the credentials of the user who created the schedule. What can go wrong with that, right?!  Well, if that user no longer exists or the password is changed, the existing jobs will no longer run.

The following considerations should be observed:

  1. Dedicate a service account, not used by any employee, for server/automation actions.
  2. This account can be a “native” user; since it is only used internally for EPM products, a domain account is not needed.
  3. A non-expiring password is best.

It is Scheduled…Now What?

After the item is scheduled, what really happens? The action executes at the scheduled time!  Actions can easily be monitored via the FDMEE Process Details screen.  Now all the possibilities of scheduling the following can be explored:

  1. Data Load Rules
  2. Script Executions
  3. Batch Executions
  4. Report Executions

Also, as mentioned earlier, there is no way to see scheduled jobs inside of FDMEE. That information can be retrieved in a few ways; the easiest is to use ODI Studio.

The ODI Studio provides details as seen in the screen shot below:

Retro Reboot Screen Shot 2

Any scheduled tasks will be listed under “All Schedules.” Simply double-click one to obtain details related to that task.

Retro Reboot Screen Shot 3

Another effective option is to write a custom report that displays the information. My previous post, Easy Value with FDMEE Reports, provides further details on FDMEE report options and their value.  This approach allows a user-friendly report of scheduled tasks to be executed on demand.

Seriously … What Now?

By now, you may have noticed from the previous blog post (http://classic.fdmguru.com/ups-shell/) that the upsShell process is quite handy.  It allows other tools – perhaps a corporate scheduler – to control FDM jobs.  Now that most organizations have a corporate scheduler, the FDMEE equivalents below are worth learning:

Command                 | Purpose
executescript.bat / .sh | Executes an FDMEE custom script
importmapping.bat / .sh | Imports Maps from a text file
loaddata.bat / .sh      | Executes a Data Load Rule
loadhrdata.bat / .sh    | Executes an HR Data Load Rule
loadmetadata.bat / .sh  | Executes a Metadata Load Rule
runbatch.bat / .sh      | Executes a defined Batch
runreport.bat / .sh     | Executes a defined Report

*All files are stored in the EPM_ORACLE_HOME\products\FinancialDataQuality\bin\

In the example below, the command, when launched, executes a Data Load Rule for Jan-2012 through Mar-2012:

Retro Reboot Screen Shot 4
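For reference, a corporate scheduler can wrap that same utility in a small script. Below is a minimal Python sketch of such a wrapper; the install path, credentials, rule name, and argument order are illustrative placeholders only – the exact parameter list for loaddata varies by release, so verify it against the Oracle documentation for your environment.

import subprocess

# All values below are hypothetical -- substitute your environment's details.
LOADDATA = r"E:\Oracle\Middleware\EPMSystem11R1\products\FinancialDataQuality\bin\loaddata.bat"

args = [
    LOADDATA,
    "svc_fdmee",       # dedicated service account (see the considerations above)
    "Password123",     # or a reference to an encrypted password file
    "DLR_GL_ACTUALS",  # hypothetical Data Load Rule name
    "Y",               # import from source
    "Y",               # export to target
    "Jan-2012",        # start period
    "Mar-2012",        # end period
    "SYNC",            # run synchronously so the scheduler sees the exit code
]

result = subprocess.run(args, capture_output=True, text=True)
print("exit code:", result.returncode)

Because the call runs synchronously, a non-zero exit code can be used by the scheduler to trigger alerts or retries.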

There still must be a better solution…right? Things to overcome:

  1. What happens if the scheduler is Windows-based and the server is Linux?
  2. How does a separate scheduling server communicate with EPM? Does it have to be installed on each EPM Server?
  3. How can we monitor and get details of a job once it is kicked off?

What Happens if You Don’t Want to Run the .BAT/.SH Files?

You’re in luck! With the introduction of new functionality to FDMEE, RESTful APIs are now available.  With the RESTful APIs, not only can you execute a job, but you can also loop and monitor for the results.  This enhances the previous .BAT/.SH file routines and provides a cleaner, more elegant solution.

REST Operation      | Purpose
Running Data Rules  | Execute a Data Load Rule
Running Batch Rules | Execute a Batch Definition
Import Data Mapping | Import Maps
Export Data Mapping | Export Maps
Execute Reports     | Execute a Report

*URL construct: https://<SERVICE_NAME>/aif/rest/V1

The example below simply queries for a process:

Retro Reboot Screen Shot 5
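As a rough illustration of the loop-and-monitor pattern, the Python sketch below starts a Data Load Rule and polls for its status. The endpoint paths and payload fields follow the general pattern of Oracle’s REST documentation, but the service name, credentials, rule name, and field names here are assumptions – verify them against the REST API reference for your release.

import time
import requests

BASE = "https://<SERVICE_NAME>/aif/rest/V1"   # URL construct noted above
AUTH = ("svc_fdmee", "Password123")           # illustrative credentials

# Kick off a Data Load Rule (payload fields are assumptions based on the docs).
payload = {
    "jobType": "DATARULE",
    "jobName": "DLR_GL_ACTUALS",   # hypothetical Data Load Rule name
    "startPeriod": "Jan-2012",
    "endPeriod": "Mar-2012",
    "importMode": "REPLACE",
    "exportMode": "STORE_DATA",
}
job = requests.post(BASE + "/jobs", json=payload, auth=AUTH).json()
job_id = job["jobId"]

# Loop and monitor until the job reaches a terminal status.
while True:
    status = requests.get(f"{BASE}/jobs/{job_id}", auth=AUTH).json()
    if status.get("jobStatus") not in ("RUNNING", "PENDING"):
        break
    time.sleep(30)

print("final status:", status.get("jobStatus"))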

The Future…

As Oracle moves forward to enhance the RESTful APIs, many doors continue to open for FDMEE and tool scheduling. At Edgewater Ranzal, we fully embrace the RESTful concept and continue to evolve our solutions to utilize this functionality.  The result is improved support and flexibility for FDMEE and for the future of Oracle Cloud products.

Contact us at info@ranzal.com with questions about this product or its capabilities.

Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest

In the previous post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, I showed you how to rebuild ARCS down to the Profile Segments to speed things up. This time we’re slowing everything down…

So grab a glass of wine and throw on your Marvin Gaye vinyl because we’re getting it on with automation. Oh yeahhh…

The sexiest topic of account reconciliations (didn’t think you’d ever see that sentence, did ya?) consistently revolves around automation. Yes, ARCS provides a central repository. Yes, ARCS is auditable. YES, ARCS shows a traceable workflow throughout the reconciliation cycle. All of these features are highly useful and absolutely a prerequisite to an enterprise-worthy solution, but if you want to really grab people’s attention in a design session, start talking about the things they won’t have to do. ARCS provides both out-of-the-box functionality and customizable tools that help preparers focus on high-importance reconciliations rather than spending time on low value-add or monotonous items.

Automation occurs in two areas: outside of ARCS (e.g. data feeds) and within ARCS (e.g. auto reconciliations and rules). Setting up the former enhances the latter. Either Cloud Data Management (CDM) or Financial Data Quality Management Enterprise Edition (FDMEE) can be used to load data to ARCS, albeit in different manners, but how this is accomplished is beyond the scope of this post. This data can be sourced from a variety of general ledgers and sub ledgers/subsystems including Financials Cloud, E-Business Suite (EBS), PeopleSoft, JD Edwards, and even *gasp* Excel (…if we have to…). By automating these data feeds directly from the source, management can be confident in the validity of the data (accuracy, no manual intervention or “massaging,” timeliness, etc.) and, with scheduling, administrators have one less task to worry about. The latest application data is up-to-date by the time the office doors open. Additionally, data refreshes can occur multiple times throughout the reconciliation cycle without concern for loss of work. ARCS will only update reconciliations with differences from the last data load and will change the workflow status if data has been modified and needs to be looked at again.

Within ARCS, the “bread and butter” for gaining efficiencies in the reconciliation cycle is the out-of-the-box Auto Reconciliation Method property on the Profiles. This sets the conditions under which a reconciliation will automatically change its workflow status to “closed,” allowing preparers to focus on the remaining “open” reconciliations that require attention. Which conditions are available for selection depends on the Format type. Furthermore, this field can easily be updated after the fact: using the Actions pane, the property can be applied en masse to Profiles based on custom filtering.

Automation in ARCS 1

[Screenshot 10a: The “Set Attribute…” functionality from the Actions pane is a powerful tool that can be used to make mass updates from the user interface.]

 

Automation in ARCS 2

[Screenshot 10b: In this example, the “Set Attribute…” functionality can be used to make updates to the Auto Reconciliation Method property for all Profiles, selected Profiles, or Profiles that fit customized criteria.]

The “Set Attribute” functionality is a powerful tool for making changes across multiple Profiles within the ARCS user interface. In many instances, this is a preferable alternative to extracting the Profiles to a text file to modify offline.  Screenshots 10a – 10b show how it can be used to update the Auto Reconciliation Method attribute specifically, but a plethora of other attributes can be updated in this manner.

The last puzzle piece to the trinity of automation is customized rules. Similar to custom attributes, rules can be added in a variety of places within your reconciliations to further enhance and streamline the process for both end-users and application administrators. Attributes, formats, profiles, and even specific transaction types (ex. on Subsystem Adjustments, but not on Source System Adjustments) can contain separate sets of rules.

Automation in ARCS 3

[Screenshot 11a: Rules can be added at a Format level.]

 

Automation in ARCS 4

[Screenshot 11b: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a Format level.]

 

Automation in ARCS 5

[Screenshot 11c: Rules can be added at a specific transaction type. In this screenshot, any rules created here would only affect Subsystem Adjustments and would not affect System Adjustments.]

 

Automation in ARCS 6

[Screenshot 11d: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a specific transaction type level.]

Thus, rules can be used for anything from sweeping, application-wide changes down to differences at a transaction-by-transaction basis, as seen in Screenshots 11a – 11d. If ARCS is a suit, then rules are custom tailoring; they are made to fit your company’s specific needs.

The most common rule I see relates to Auto-Submission (as opposed to Auto Reconciliation). The out-of-the-box auto reconciliation methods previously discussed are set on Profiles and can be used to “close” a reconciliation for the period if the criteria are met. However, sometimes a reconciliation still needs reviewing, such as when it is considered higher risk or only during certain periods in the fiscal year. Customized rules can dynamically determine which reconciliations can skip the preparer and be assigned directly to the reviewer, and which are clear to be automatically “closed” for the month (i.e. without approval by a preparer or reviewer). Tailoring rules in this manner still helps the preparers reduce their workload while giving management the confidence that the higher priority reconciliations are being reviewed – the best of both worlds!

No Mistakes with Modularity from “Day 1” to “Day 100”
So, there you have it: the four main manifestations of ARCS’ modularity. While nothing will replace proper planning, ARCS does not permanently punish any application decisions you (or your partner) have made in the past. The tool is able to grow with your company and accommodate your needs as they arise. There’s no reason to pick “today” or “tomorrow” – have them both.

Am I right? Am I off my rocker? You tell me! Answer in the comments below if ARCS (or ARM! We haven’t forgotten you…) has been able to accommodate the changes that come with your company’s growth.

If you like what you’ve read, please consider sharing this article through social media. And let me know in the comments what topic(s) you would like to see covered in future posts.

*Screenshots taken from the patch 1806 release.

Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up

We talked about adding new scope in New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons and modifying your application inside (i.e. changing reconciliation methods) and outside of ARCS (i.e. new data feeds) in Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning.

Today, we’re going to tear it down and rebuild from the ground up.

Let me start with this:  redesign IS possible. ARCS does not permanently punish any design decisions made on “Day 1”…but not all changes are equal in complexity, nor can all changes be made without consequence. A successful implementation ensures that the application design is sound for today and that a well-laid roadmap is in place for tomorrow. Many “one-off” changes can be made directly to a deployed reconciliation (i.e. only within a single period) or permanently going forward (i.e. to the profile). The “catch” is the key property set on a profile or reconciliation – the Account ID. The Account ID represents the granularity at which the reconciliation is being performed, such as [Business Unit]-[Account] or [Entity]-[Natural Account]-[Subaccount].

ARCS From the Ground Up 1

[Screenshot 6: The Account ID is a unique identifier for the reconciliation.]

The Account ID is fundamental to the reconciliation, as indicated by the asterisks (i.e. “*”) in Screenshot 6. Changing it in any way will break the Prior Reconciliation “link” with previously completed instances of the reconciliation.

But let’s push that idea one step further – what if I want to change the key properties themselves – that is to say – change the actual Profile Segments? The Profile Segments determine the name (ex. from “Company” to “Business Unit”), number (ex. from 2 to 3 segments), and even type of values (ex. setting up the Business Unit segment to always be an integer) that are viable for use when setting up an Account ID. Therefore, if this was set up incorrectly or if the granularity at which reconciliations are performed has changed since the initial implementation, then redesigning the Profile Segments may become a requirement.

ARCS even makes this type of redesign possible, but at a cost. An administrator needs to first delete all Profiles; only then will the application allow a modification to the Profile Segments in the Configuration card.

ARCS From the Ground Up 2

[Screenshot 7a: Unable to modify the Name of Profile Segment 1 which is currently named “Company.” The field appears grayed out. This is because Profiles are currently using these Profile Segments.]

ARCS From the Ground Up 3

[Screenshot 7b: After removing the Profiles, Profile Segment 1 is now able to be modified. In the example, Profile Segment 1 is renamed to “Business Unit.”]

While Screenshots 7a & 7b show that this is possible, there are repercussions. As with changing Account IDs, this change will break any links to previously completed reconciliations. Additionally, existing mappings in outside integration solutions such as Cloud Data Management or FDMEE, or references to Profile Segments in customized attributes or rules, may be affected. This type of redesign should only be done after carefully considering all options.

Other common questions relate to redesigning an attribute, typically system attributes such as Process or Account Type. This is a straightforward change as it relates to updating the property on the Profiles; however, it is important to note that any reference to an existing artifact (an artifact can be a format, a custom attribute, an attribute member, etc.) within ARCS will prevent the deletion of said artifact. As an example, if the Account Type structure requires redesigning, but there is a reference to any of its members (such as in a historical period), then those members cannot be deleted without first removing the references. This can be tedious when there are multiple years of reconciliations to consider.

ARCS From the Ground Up 4

[Screenshot 8: When trying to remove the Custom Attribute named “PLACE CUSTOM ATTRIBUTE HERE,” ARCS prevents this deletion and cites which artifact is using the Custom Attribute. In this example, the Bank Reconciliation format is using this Custom Attribute – thus, it cannot be deleted.]

Unlike many system messages, ARCS actually provides useful troubleshooting information as seen in Screenshot 8. However, it still may not be worth it to you to retroactively make this change. A recommendation is to “archive” artifacts that will not be used going forward by renaming them with “Old” or “Hist,” then create a separate artifact to use going forward.

ARCS From the Ground Up 5

[Screenshot 9: A work-around to deleting previously used artifacts is to rename them and then use a new artifact going forward. In this example, the suffix “- Old” is added to this Custom Attribute to indicate that it is no longer in use.]

Previous uses of the artifact, such as in completed reconciliations, will update to reflect the name change. In the example provided in Screenshot 9, this custom attribute will show the “- Old” suffix in historical periods, indicating to ARCS administrators that it is no longer in use but was used historically.

ARCS is a flexible application solution that allows for nearly any change to be made, though the effort and complexity will vary. While sound design can prevent many issues, it should be a comfort to know that there is “wiggle room” if the requirements change in the future.

Join me in the last post of the ARCS modularity series – a real crowd pleaser: Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest

*Screenshots taken from the patch 1806 release.

Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning

In the last post, New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons, we discussed how ARCS sets you up to easily add on additional scope to your existing application and scale your solution. However, not all changes are brand new. Clients are often concerned with being pigeonholed based on their “Day 1” decisions. A common question I am asked during a design session is “Can I manually enter this reconciliation today, but create new feeds to automatically load the data tomorrow?” The answer is a resounding YES, and it provides clear added value to the next phase of any ARCS (or ARM) project. It can be a viable project strategy to set up reconciliations using an Account Analysis format on “Day 1” and change to a Balance Comparison format when automated data loads are built on “Day 100.”

Modifications in ARCS 1

[Screenshot 5a: Reconciliation 100-1000 is set up with a Balance Comparison format in Sep 2017.*]

Modifications in ARCS 2

[Screenshot 5b: The previous period’s reconciliation can be viewed in the Prior Reconciliations tab.*]

Modifications in ARCS 3

[Screenshot 5c: Reconciliation 100-1000 was previously set up with an Account Analysis format in Aug 2017. The format of a profile can be changed while maintaining the Prior Reconciliations link.*]

Depending on how this change is made, it is even possible to keep the modified reconciliation “linked” to the previously completed reconciliations even though the Format has changed, such as in Screenshots 5a – 5c. The ease with which ARCS allows you to change Reconciliation Methods (via Formats) gives you the flexibility to not bite off more than you can chew in the beginning of a project.

Changing Reconciliation Methods is often related to new integrations. Moving from the manual “fat fingering” of data to directly loading general ledger and sub ledger balances through Financial Data Quality Management Enterprise Edition (FDMEE) or Data Management, combined with the built-in auto-reconciliation tools, can bring a “quality of life” change for end users as well as added confidence in the data’s integrity.  It is always a best practice to pull data from the source. Creating the integration from the general ledger is typically part of the initial scope.  The usual candidates for building additional feeds after the first project phase are the sub ledgers related to fixed assets, accounts receivable, and accounts payable.  However, the most “bang for your buck” as it relates to which integrations to build depends on your line of business and specific company requirements.*

*Note that adding multiple general ledger feeds introduces additional complexities beyond the scope of this article. Please consult with your Oracle partner before adding to your application.

In some cases, the greatest efficiencies to your existing reconciliation process are gained by utilizing the power of ARCS Transaction Matching. This module is better suited to handle massive data volumes at a transactional level. As an example, instead of performing just a reconciliation of the balance sheet’s intercompany balances in ARCS Reconciliation Compliance at the end of the month, an enhancement to this process could be to perform a daily matching process in ARCS Transaction Matching to clear up issues in real time as they arise. This simplifies the month-end reconciliation. ARCS Transaction Matching is a powerful supplement to an existing reconciliation system and continues to receive special attention from Oracle, as seen with the major release of new functionality in Patch 1805.

Just as there are many ways your company can change, ARCS can be modified to match your needs even in a live application. However, sometimes changes are more fundamental than a bit of tweaking such as in an acquisition or the introduction of a new, company-wide general ledger. Or, perhaps, you are just not satisfied with the solution design. Join me in the next post as we discuss the dangerous topic of redesign in ARCS – what is possible…and what it costs.

In the next post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, learn how redesign IS possible in ARCS.

*Screenshots taken from the patch 1806 release.

Don’t Let Incremental Overtime Plague Your Healthcare Organization!

Get to the Root Cause: Increase Productivity and Patient Care While Reducing Labor Costs

The Causes and Consequences of Incremental Overtime

Incremental overtime may be costing your healthcare organization thousands of dollars unnecessarily and may result in decreased employee morale and poor productivity, so it’s important to understand its root causes by gaining the ability to track overtime. A Labor Productivity/Labor Management solution that delivers key analytics provides specific answers to the root causes of incremental overtime.  Common causes include:

  • Early clock-in/late clock-out
  • Inability to complete required tasks by end of shift
  • Shift transition conflicts (i.e. last minute attending to patient needs or handoff not yet completed)

The Solution and its Benefits

A Labor Productivity solution provides labor-hour data from which ratios can be derived based on each organization’s definition of incremental overtime. This leads to a clear understanding of the root causes of incremental overtime so that corrective action can be taken, including:

  • Ensure management visibility at change of shifts
  • Employee coaching/staff meetings to aid time management skills
  • Provide daily reports/analysis to managers to establish protocol for handling incremental overtime risks
  • Designate a synchronized clock that employees should rely on (i.e. department wall clock)
  • Educate employees on OT authorizations – cite repeated behavior in performance evaluations

Incremental Overtime 1

By addressing the causes of incremental overtime using data provided by a Labor Productivity solution, providers can deliver great patient care while decreasing labor costs by thousands of dollars and increasing productivity.

Incremental Overtime 2.jpg

 

Don’t Fear the Statistics – Using OBI for Statistical Analysis Part 2

Nearly every client Edgewater Ranzal partners with uses statistical averages in their analytic and reporting solutions. As far as statistical functions go, the average is probably the easiest to understand; however, its limitation is that it can be difficult to rate the individual performance of contributors to that average.  Consider the following examples:

  • The average cost of a gallon of milk is $3.20 and the corner convenience store is selling it for $3.45, is that a significant deviation from the average?
  • If the average NFL player’s base salary is $1.86 million and the Tennessee Titans’ Marcus Mariota made $5.5 million, is this an exceptional payout? Is the salary significant when his role as the team’s starting quarterback is considered?
  • Suppose the average gross margin percent for a company’s business units is 58% and one particular business unit’s actual gross margin is 46%. Is that business unit truly underperforming?

It turns out that performance relative to the average of a particular measurement is very subjective. In this post, we explore how the standard deviation of the average can be used to mitigate subjectivity and how it can be incorporated into data visualizations to identify true outliers.

The NASDAQ-100 is comprised of the largest domestic and international non-financial companies (based on market capitalization) listed on the Nasdaq Stock Exchange. It includes technology giants such as Apple and Alphabet (parent company of Google) along with consumer services such as Bed, Bath, & Beyond.  The quarterly gross margin percent from 2007 to Q3 2016 was downloaded and loaded into a data mart leveraged by Oracle Business Intelligence Enterprise Edition (OBIEE) 12c.  (Q4 2016 data was not available for all companies).  With the exception of Figure 1, the following visualizations were created in OBIEE 12c.

The standard deviation can be thought of as defining ranges that can be used to classify the individual contributors to an average. For instance, the average gross margin percent for the NASDAQ-100 in Q4 2014 was calculated to be 59.9% with a standard deviation of 22.7%.  This can be visualized on a number line as such:

Figure 1 NASDAQ-100 Q4 2014 Gross Margin % Performance Ranges

dont-fear-statistics-part-2-figure-1

Many real-world events that have variability follow a predictable distribution pattern. For instance, it is expected that approximately 34.1% of the contributors will fall between the average and one standard deviation up.  From the figure above, it is estimated that approximately 34 of the NASDAQ-100 companies will have a gross margin percent between 59.9% and 82.6%.  The actual distribution can be visualized as such:

Figure 2 Distribution of NASDAQ-100 Gross Margin %

dont-fear-statistics-part-2-figure-2

The NASDAQ-100 companies do not perfectly follow the distribution; there is a fatter spread into the Negative and Positive buckets (two standard deviations down and up). Other, more advanced statistical methods can be used to redefine the ranges, but they are beyond the scope of this post.
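The same banding is easy to reproduce outside of a BI tool. Below is a minimal Python sketch, using made-up sample margins and band names that approximate the figures above, that classifies each company by how many standard deviations it sits from the mean:

from statistics import mean, stdev

# Hypothetical Q4 2014 gross margin % values for a handful of companies.
margins = {"AAPL": 0.38, "GOOG": 0.61, "BBBY": 0.39, "QCOM": 0.58, "ROST": 0.28}

avg = mean(margins.values())
sd = stdev(margins.values())

def band(x):
    """Label a value by its distance from the mean in standard deviations."""
    z = (x - avg) / sd
    if z >= 2:  return "Extremely Positive"
    if z >= 1:  return "Positive"
    if z >= 0:  return "Moderately Positive"
    if z >= -1: return "Moderately Negative"
    if z >= -2: return "Negative"
    return "Extremely Negative"

for ticker, gm in sorted(margins.items()):
    print(ticker, band(gm))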

Of course, this visualization simply confirms statistical theories that were proven over a hundred years ago. The true value of analytics is to take statistical theories and turn them into informative visuals.  One method of visualizing the ranking of companies using the standard distribution in OBIEE 12c is through a Treemap:

Figure 3 NASDAQ-100 Distribution Treemap Visualization

dont-fear-statistics-part-2-figure-3

The size of the box represents the Gross Margin % while the color aligns with the distribution ranking from Figures 1 and 2. This visualization allows the viewer to understand both the rankings and relative performance at a glance.  It is easy to discern the delineation between above and below average (border between yellow and light green) as well as which companies are herding together.

One of the most powerful and essential aspects of business analytics is the ability to dimensionalize data so it can be sliced and diced. One (of many) reasons this is done is to ensure an “apples to apples” comparison.  For instance, comparing the gross margin percent of Qualcomm (QCOM), a semiconductor and telecommunications company, against Ross Stores (ROST), a discount department store, can create misconstrued distributions.  Filtering the visualization in Figure 3 by the NASDAQ industry classification for Technology companies results in the following Treemap:

Figure 4 NASDAQ-100 Technologies Companies Treemap

dont-fear-statistics-part-2-figure-4

Notice that Qualcomm has slipped from “Moderately Positive” to “Moderately Negative.” Averages and standard deviations can change dramatically when looking at the components of the whole.  To demonstrate this, consider the following visualization comparing the average and deviation spread of the three largest categories (by number of companies) of the NASDAQ-100:

Figure 5 Average and Standard Deviation by Categories

dont-fear-statistics-part-2-figure-5

The border between yellow and light green represents the average, while each band represents one standard deviation. Notice that both the average gross margin % and the standard deviation are higher for Healthcare than for Technology.  Healthcare companies are going to skew the performance perspective of Technology companies.  This skew worsens when comparing against companies classified as Consumer Services.

As a general rule, a single point is not the best indicator of long term performance. Although the average and standard deviation for a single quarter was calculated through the agglomeration of one hundred companies, it should be considered a single data point.  Consider the following visualizations that show a comparative trend for four different companies for the entire date range downloaded:

Figure 6 Gross Margin % Trend for Adobe, Amazon, Electronic Arts, and Priceline

dont-fear-statistics-part-2-figure-6

At a glance, viewers can see that Adobe (upper left) consistently beats the average performance while consumer goods and technology giant Amazon (upper right) has been performing below average until recently. Electronic Arts (lower left), a video game developer, seems to have erratic gross margin % returns; however, looking past the noise, the company is nearly always between moderately positive and moderately negative when compared against other NASDAQ-100 companies.  Finally, Priceline (lower right) has been increasing gross margin % consistently and steadily pulling ahead of other NASDAQ-100 companies.  If Priceline’s gross margin % trend continues and the performance of the other companies remains constant, Priceline will move into the “Extremely Positive” gross margin % ranking in Q4 2016 or Q1 2017.

Returning to the questions posed at the beginning of this post:

  • The average cost of a gallon of milk is $3.20 with a standard deviation of $0.08. The corner convenience store selling milk for $3.45 is more than three standard deviations above the average!
  • The average NFL base salary is $1.86 million with a standard deviation of $2.80 million. Comparatively, Marcus Mariota’s $5.50 million salary is about one standard deviation above average. However, with the average quarterback base salary being $5.69 million with a standard deviation of $7.17 million, he is actually slightly undercompensated.

For the final question, we ask the reader to evaluate his enterprise:

  • Calculate the average gross margin percent for your company’s business units for the quarter and find the business unit that is approximately 10% less than that average. Are they truly underperforming? Are you able to properly classify these business units to gain the greatest insight into relative performance?

Average and standard deviation can be applied to any metric by which a company wishes to evaluate itself. It can be used in combination with external data to create industry benchmarks.  For instance, if you were to plot your company’s gross margin % performance against the trends above, how would it look?

We want to close this post with the same idea that we closed Part 1 of the “Don’t Fear the Statistics” post: statistical analytics is part science/technology and part art.  Reducing statistical calculations to consumable visualizations is the key.  In the visualizations above, references to “standard deviation” were diligently omitted in favor of familiar terms such as “Moderately Negative.”  Approaches such as this help with change management, adoption, and the acceleration from simple reporting to true analytical insight into business process improvement based on data.

Don’t Fear the Statistics – Using OBI for Statistical Analysis Part 1

Recently, Ranzal has been working with a client in the healthcare space implementing Oracle Business Intelligence (OBI), and a requirement surfaced to translate a scorecard report into an OBI dashboard. One of the data elements was simply captioned “Trend” and colored red, yellow, or green.  It was discovered that this Trend was the slope of a linear regression plot (more on what that means in a moment) and that the color was based on an arbitrarily chosen number.  This immediately raised some concerns from the Ranzal team, who then made some suggestions for more pertinent statistical analysis.

To set the stage, this healthcare client’s summarized (and greatly simplified) income statement divides Revenue into Inpatient and Outpatient and Expenses into Total Labor and Non Labor. Revenue and expenses are the primary focus of much of the analytics at an aggregate level.  A single (seemingly arbitrarily chosen) number was used to determine the colored flags for each of these measures.  This was despite Inpatient Revenue and Non Labor Expenses comprising the majority of the revenue and expense amounts (respectively).  If we were to plot out these categories for the first five months of a fiscal year, we see the following (all data have been altered to preserve client confidentiality without overly affecting the overall analytic output):

figure-1

Figure 1 Revenue and Expense Trend Plot

The trouble with plotting a trend of numbers is that it is sometimes difficult to understand, at a glance, how the organization is performing. In the plots above, clear downward and upward trends can be seen for Inpatient Revenue and Total Labor Expense (respectively).  However, upon closer examination of Outpatient Revenue and Non Labor Expense, there are two upward trending months and two downward trending months.  The overall trend is difficult to discern.

With Oracle Business Intelligence Enterprise Edition (OBIEE) 12c, a Trendline function was introduced that allows the creation of a linear regression trendline. Once this is applied, the above trend plots can be augmented to get a clearer picture of performance:

figure-2

Figure 2 Revenue and Expense Linear Regression

This trendline uses a simple linear regression formula that is comprised of the slope (commonly represented by the letter m) and the intercept (commonly represented by the letter b) in the following formula:

y = mx + b

In our trend plots, the letter y represents the revenue and expense categories and x represents the fiscal periods.

The intercept is where the trendline crosses the y-axis when x is equal to zero. For most statistical analyses, the intercept is unimportant.  The slope can be thought of as the average change in y per unit change in x.  Using OBI, the slope of each revenue and expense category can be calculated and the dashboard updated:

figure-3

Figure 3 Linear Regression Slope

In the example above, the slope of the Inpatient Revenue can be thought of as decreasing an average of $291,000 a month.
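For readers who want to reproduce the slope calculation outside of OBI, a minimal sketch using NumPy’s least-squares fit with made-up monthly figures:

import numpy as np

# x: fiscal periods 1..5; y: hypothetical Inpatient Revenue ($ thousands)
x = np.array([1, 2, 3, 4, 5])
y = np.array([14200, 13800, 13950, 13400, 13050])

m, b = np.polyfit(x, y, 1)  # degree-1 fit returns slope and intercept
print(f"slope = {m:.1f} per month, intercept = {b:.1f}")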

One issue with using the slope is that it is subjective.  As mentioned, our healthcare client had chosen a single arbitrary slope threshold for each of the revenue and expense categories.  The slopes in the example above range from 29 thousand to -291 thousand.  Complicating matters, the client wanted the ability to run these analyses for individual hospitals, which can dramatically affect the slope.  For instance, a hospital operating in Kansas City will probably not have the same revenue growth (or shrinkage) as a hospital operating in New York City.  To use the slope properly as a quantifiable objective, a target slope would have to be determined for the enterprise and for each granular level expected to be benchmarked (hospital, department, etc.).  This creates some obvious maintenance issues.

A more objective approach is to use the correlation coefficient, a number that ranges from negative one to positive one. A ranking of one indicates a positive correlation while a ranking of negative one indicates a negative correlation.  For instance, for most companies, the number of units sold often has a high degree of positive correlation to revenue; this corresponds to a correlation coefficient close to one.  For many companies working in the commodities market, the more a competitor’s revenue increases, the lower the possible market share; this negative correlation results in a correlation coefficient close to negative one.  A correlation coefficient of zero indicates a lack of any correlation.  For instance, the number of broken arms set in a New York hospital is probably uncorrelated to the number of bowls of soup served by Panera Bread in Kansas City.
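Computing the correlation coefficient takes a single function call in most environments. A minimal sketch with hypothetical monthly figures:

import numpy as np

# Hypothetical monthly figures: units sold vs. revenue ($ thousands)
units = np.array([120, 135, 150, 160, 180])
revenue = np.array([240, 268, 305, 318, 362])

r = np.corrcoef(units, revenue)[0, 1]  # Pearson correlation coefficient
print(f"correlation coefficient = {r:.2f}")  # close to +1: strong positive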

It is worth noting that correlation does not mean causation. For example, consider the number of pirate attacks and the number of Microsoft Internet Explorer (IE) users:

figure-4

Figure 4 IE Usage and Pirate Attacks

The number of pirate attacks and IE users have both been in decline since 2009. As can be seen by the scatter graph on the right, the more pirate attacks, the greater the use of IE.  Regardless, naval security experts are probably not asking for adoption rate reports from Microsoft.

Returning to the client’s use case, adding the correlation coefficient to the dashboard provides a greater understanding of how the company is objectively performing:

figure-5

Figure 5 Month and Revenue / Expense Category Figure Correlation

Inpatient Revenue has a correlation of -0.69, which is moderately significant for a metric most businesses want to increase. Meanwhile, Outpatient Revenue has a slightly negative correlation of -0.36.  While this should be a cause for concern, a “wait and see” approach (or a deeper dive into Outpatient Revenue categories) might be more prudent.  Because the range of the correlation coefficient is negative one to one, filtering this analysis down to a more granular level, such as a hospital or department, will return an objective number that can be subjected to independent interpretation.

There are cases in which the subjectivity of the slope is particularly useful. In the case of our client, a full-year budget was prepared at the beginning of the fiscal year and periodically updated as the year progressed. The slope of this budget could be used to generate the average dollar change desired per month.  The advantage of this is that it reduces the possible volatility of a particular month into a single number that can be compared to the benchmark.  As a final addition to the dashboard, a full-year budget slope was added:

figure-6

Figure 6 Full Year Budget Slope

With the exception of Non Labor Expenses, this organization is missing the mark on all of their budgetary goals, and the trend indicated by the actual slope and correlation coefficient means this situation is likely to get worse.

A word of warning about statistics in general and the use of slope and correlation coefficient in particular: micro and macro trends should both be considered, and extreme outliers can mask actual trends.

For an example of micro and macro trends, consider JCPenney, a retailer that has been struggling since 2010. The following visualization (created using Oracle Data Visualization Desktop) charts the quarterly revenue from 2004 Q3 to 2016 Q4 along with the trendline for the entire period.  The bars represent the correlation coefficient through that particular quarter (i.e. the first bar is the correlation between 2004 Q3 and 2004 Q4, while the second bar is the correlation among 2004 Q3, 2004 Q4, and 2005 Q1, etc.):

figure-7

Figure 7 JCPenney Revenue Trend and Correlation

Notice that the first correlation bar is equal to one. When there are only two data points, the correlation coefficient will be equal to one, negative one, or zero.  The next data point and correlation for 2005 Q1 (JCPenney recognizes holiday revenue in Q1 of each year) continue the high correlation streak; however, the following quarter drops the correlation down to 0.35.  The correlation fluctuates quarterly until about 2012 Q2, when the definite downward trend is established.
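The expanding-window correlation shown by the bars can be reproduced in a few lines. A minimal sketch with hypothetical quarterly revenue ($ millions):

import numpy as np

revenue = np.array([18.4, 26.2, 18.9, 19.1, 18.6, 27.0, 19.3, 18.8])  # hypothetical
quarters = np.arange(1, len(revenue) + 1)

# Recompute the quarter-vs-revenue correlation as each new quarter is added.
for n in range(2, len(revenue) + 1):
    r = np.corrcoef(quarters[:n], revenue[:n])[0, 1]
    print(f"through quarter {n}: r = {r:.2f}")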

A savvy analyst will break JCPenney’s performance during this time range into three distinct trends: upward trending from 2004 to 2008 Q1, a diminished upward trend from 2008 Q2 to 2012 Q1, and then flat, but greatly reduced, revenue from there:

figure-8

Figure 8 JCPenney Distinct Trends

As an example of how an extreme outlier can affect statistical analysis, consider GTx Incorporated, a pharmaceutical drug developer. In December 2010, GTx recognized $49.9 million in revenue from a partnership with Merck & Co., Inc., which spiked GTx’s revenue (previously averaging $2 million a quarter) to $56.7 million:

figure-93

Figure 9 GTx Incorporated Revenue Trend

In the visualization above, the orange projected trendline was calculated using revenue from 2004 Q1 through 2009 Q4. The purple trendline is the projection calculated through 2010 Q1, which includes the huge revenue spike.  Obviously, the orange trendline is the more accurate due to its exclusion of the extreme data point.
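The effect of such an outlier is easy to demonstrate: fit the trendline with and without the spike and compare the slopes. A minimal sketch with hypothetical quarterly revenue ($ millions):

import numpy as np

quarters = np.arange(1, 9)
revenue = np.array([2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0, 56.7])  # spike in the last quarter

slope_with, _ = np.polyfit(quarters, revenue, 1)
slope_without, _ = np.polyfit(quarters[:-1], revenue[:-1], 1)
print(f"slope with outlier: {slope_with:.2f}; without: {slope_without:.2f}")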

Statistical analytics is part science/technology and part art. As with any data and visualizations, a certain degree of intelligent interpretation is needed to determine what it all really means.  Functional users should be focused on what the various statistical interpretations mean and not be distracted on the complexity of the underlying mathematical functions.  Trend visualizations can aid users in understanding how to interpret these statistical calculations.  Many organizations miss opportunities because of individuals unwilling to embrace statistical methods due to the lack of solid education and guidance about what these numbers really mean.  Training, change management, and the creation of rich visualizations can help enterprises harness the capabilities of statistical analysis and extend the role of their business intelligence systems.

Accelerate Your Ride to the Cloud: Extending ERP with Oracle Profitability & Cost Management Cloud Service (PCMCS) for Standard Cost Rate Development

A common need among manufacturing organizations is improvement in the process of developing annual labor and overhead standards to use as input into standard cost rates for product cost and inventory valuation. In spite of the investments that have been made in ERP solutions, an offline, Excel-based exercise is typically required to take historical data from the ERP and determine the updated direct labor rate and overhead rate components of a product standard cost for an upcoming fiscal year.  The release of Oracle Profitability and Cost Management Cloud Service (PCMCS) in October 2016 provides a unique opportunity for manufacturers to ease, streamline, and document the process of generating the cost-per-direct-labor-hour or cost-per-machine-hour rates that are requisite in standard costing.

Background

Generally accepted accounting principles (GAAP) allow for one of multiple methods for the valuation of inventory to a manufacturer: Last-In, First-Out (LIFO); First-In, First-Out (FIFO); or a Weighted Average.

Because prices for labor and materials fluctuate throughout a year and inventory is built or drawn, it is difficult to track inventory on an ongoing basis using these methods. Further, from a management perspective, it is more meaningful to separate the effects of price changes and inventory builds/draws from values associated with normal business.  Pricing decisions, incentive compensation, and matching expenses to the physical flow of goods would all be adversely impacted by trying to constantly manage to these methods.

A common approach to achieve meaningful inventory and cost of goods sold values is to establish a “standard cost” for every product and then adjust the value of inventory on a separate line at year-end, to bring it to the GAAP basis.

This standard cost requires direct labor, direct material, and an amount representing the “absorption” of certain plant-related overhead costs into the inventory value.

There are two forms of overhead that must be included in the inventory value from a GAAP perspective: 1) Labor overhead and 2) Manufacturing overhead, sometimes called Indirect Overhead.

  1. Labor overhead represents the costs of direct labor resources above and beyond their direct hourly wage rate. This amount includes payroll taxes, retirement and health care benefits, workers’ compensation, life insurance and other fringe benefits.
  2. Manufacturing overhead includes a grouping of costs that are related to the sustainment of the manufacturing process, but are not directly consumed or incurred with each unit of production. Examples of these costs include:
  • Materials handling
  • Equipment Set-up
  • Inspection and Quality Assurance
  • Production Equipment Maintenance and Repair
  • Depreciation on manufacturing equipment and facilities
  • Insurance and property taxes on manufacturing facilities
  • Utilities such as electricity, natural gas, water, and sewer required for operating the manufacturing facilities
  • The factory management team

The most common first step for determining the value of overheads in inventory is to use a predetermined rate that represents a cost charge per direct labor hour or per machine hour. From product bills of material and routings, the total number of hours of labor or machine usage for a unit volume of production is known. Multiplying the overhead cost rate per direct labor hour (or machine hour) by the number of hours required per unit of production yields the overhead cost per unit. In the example below, the ERP will calculate the cost per work center, but it is reliant on the direct labor and overhead rates to complete this process.

dp-image-1jpg
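The arithmetic behind the rate itself is straightforward once the cost pools are built. A minimal sketch with made-up figures for a single work center:

# Illustrative figures for one work center (all values hypothetical)
overhead_cost_pool = 1_200_000.0  # annual manufacturing overhead assigned to the work center
direct_labor_hours = 48_000.0     # annual direct labor hours for the work center

overhead_rate = overhead_cost_pool / direct_labor_hours  # $ per direct labor hour
hours_per_unit = 0.25             # direct labor hours per unit, from the product routing

unit_overhead_cost = overhead_rate * hours_per_unit
print(f"overhead rate = ${overhead_rate:.2f}/DLH; overhead per unit = ${unit_overhead_cost:.2f}")

The hard part is not this multiplication but deriving the applicable rate for each cost or work center, which is where PCMCS helps.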

The challenge comes in calculating the applicable predetermined overhead rate per direct labor hour or machine hour for each cost center or work center. PCMCS can assist with automating and updating this process.

A Better Solution: The Ranzal PCMCS Standard Cost Solution

PCMCS provides the ability to quickly and flexibly put the creation of multi-step allocation processes into the hands of business users. It also provides for the management of hierarchies without the need for external dimension management applications as well as standard file templates for data upload.  Further, a series of standard dashboard and report visuals augment the viewing and monitoring of results.  These capabilities allow organizations to quickly load and allocate expenses to applicable overhead cost pools and then merge those cost pools with applicable labor or machine hour values to obtain the relevant overhead rates.

PCMCS allows users to quickly select the cost centers or work centers that are applicable as sources to be included in the overhead rate:

dp-image-2jpg

Users then can easily select the targets for collecting these costs into relevant pools,

dp-image-3

as well as the operational metric to use to assign these overhead costs to their applicable pools.

dp-image-4


dp-image-5

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). Following the release of PCMCS, Ranzal will be announcing a Cloud service offering that leverages the power of the Cloud to provide an accelerated method of producing the required inputs for overhead allocation in standard costing.

More than just Standard Costing

Additionally, while PCMCS provides an excellent way to develop overhead rates for standard costing, it can simultaneously be utilized to determine allocations and costing valuations that leverage other methodologies for product and customer costing and profitability. Much has been written about the potential for inaccuracies if the standard cost basis of overhead allocation in product costing were used universally or exclusively for management analysis.  Overhead has become such a large portion of total cost that, in many cases, overhead rates can be three or four times higher than their respective direct labor rates.  This suggests a general lack of causality between overhead and direct labor hours, and it has led to the evolution of other methods for costing.  Activity Based Costing is one such example, while simply allocating manufacturing variances to product lines is another.

PCMCS can be used to meet the requirements for both the externally reported methods and the management methods of product costing.

All of the Results in One Place

Determining the method by which overhead should be captured in the cost of different products in inventory is an important process because it represents a step by which a large dollar amount is moved from expense to asset – usually temporarily, but sometimes permanently – and this can impact profitability and share price.

For the purpose of valuing inventory for statutory reporting, the overhead rate method is considered acceptable, and it is widely used. It is therefore important that organizations find a way to develop and manage these cost valuations in a manner that is well-documented, has a transparent methodology, and reduces the amount of time spent on the process.  However, it is not the only method that should be used for considering overhead in product and customer costing and profitability analysis.  Further, selling, general and administrative expense (SG&A) represents another layer of cost that, while not part of standard inventory cost, should be considered in overall product costs from a management perspective.

To this end, the Edgewater Ranzal PCMCS Standard Cost solution will provide an opportunity to fulfill multiple needs in costing and profitability and will do so in a manner that will be faster and more user-friendly than what has previously been experienced.

A Comparison of Oracle Business Intelligence, Data Visualization, and Visual Analyzer

We recently authored The Role of Oracle Data Visualizer in the Modern Enterprise, in which we referred to both Data Visualization (DV) and Visual Analyzer (VA) as Data Visualizer.  This post addresses readers’ inquiries about the differences between DV and VA, as well as how they compare to Oracle Business Intelligence (OBI).  The following sections provide details of the OBI and DV/VA products along with a matrix comparing each solution’s capabilities.  Finally, some use cases for DV/VA projects versus OBI are outlined.

For the purposes of this post, OBI will be considered the parent solution for both on premise Oracle Business Intelligence solutions (including Enterprise Edition (OBIEE), Foundation Services (BIFS), and Standard Edition (OBSE)) as well as Business Intelligence Cloud Service (BICS). OBI is the platform thousands of Oracle customers have become familiar with to provide robust visualizations and dashboard solutions from nearly any data source.  While the on premise solutions are currently the most mature products, at some point in the future, BICS is expected to become the flagship product for Oracle at which time all features are expected to be available.

Likewise, DV/VA will be used to refer collectively to Visual Analyzer packaged with BICS (VA BICS), Visual Analyzer packaged with OBI 12c (VA 12c), Data Visualization Desktop (DVD), and Data Visualization Cloud Service (DVCS). VA was initially introduced as part of the BICS package, but has since become available as part of OBIEE 12c (the latest on premise version).  DVD was released early in 2016 as a stand-alone product that can be downloaded and installed on a local machine.  Recently, DVCS has been released as the cloud-based version of DVD.  All of these products offer similar data visualization capabilities as OBI but feature significant enhancements to the manner in which users interact with their data.  Compared to OBI, the interface is even more simplified and intuitive to use which is an accomplishment for Oracle considering how easy OBI is to use.  Reusable and business process-centric dashboards are available in DV/VA but are referred to as DV or VA Projects.  Perhaps the most powerful feature is the ability for users to mash up data from different sources (including Excel) to quickly gain insight they might have spent days or weeks manually assembling in Excel or Access.  These mashups can be used to create reusable DV/VA Projects that can be refreshed through new data loads in the source system and by uploading updated Excel spreadsheets into DV/VA.

While the six products mentioned can be grouped nicely into two categories, the following matrix outlines the differences between each product. The sections below provide commentary on some of these features.

Table 1

Table 1:  Product Capability Matrix

Advanced Analytics provides integrated statistical capabilities based on the R programming language and includes the following functions:

  • Trendline – This function provides a linear or exponential plot through noisy data to indicate a general pattern or direction for time series data. For instance, while there is a noisy fluctuation of revenue over these three years, a slowly increasing general trend can be detected by the Trendline plot:
Figure 1

Figure 1:  Trendline Analysis

 

  • Clusters – This function attempts to classify scattered data into related groups. Users are able to determine the number of clusters and other grouping attributes. For instance, these clusters were generated using Revenue versus Billed Quantity by Month:
Figure 2

Figure 2:  Cluster Analysis

 

  • Outliers – This function detects exceptions in the sample data. For instance, given the previous scatter plot, four outliers can be detected:
Figure 3:  Outlier Analysis

  • Regression – This function is similar to the Trendline function but models the relationship between two measures and does not require a time series. It is often used to help create forecasts. Using the previous Revenue versus Billed Quantity example, the following Regression series can be detected:
Figure 4:  Regression Analysis

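DV/VA executes these functions in-product through its R integration, so no code is exposed to the user. Purely as a hedged illustration of the underlying statistics, the following Python sketch approximates all four functions on invented revenue data; the numbers, libraries (numpy and scikit-learn), and thresholds are assumptions for illustration, not DV/VA internals:

```python
# Illustrative only: DV/VA runs these functions internally through its R
# integration. This Python approximation (assuming numpy and scikit-learn
# are installed) sketches the statistics behind Trendline, Clusters,
# Outliers, and Regression on invented revenue data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=42)

# Invented stand-ins for Revenue versus Billed Quantity by Month.
months = np.arange(36)                                # three years of months
revenue = 100 + 2.5 * months + rng.normal(0, 15, 36)  # noisy upward series
billed_qty = 40 + 0.3 * revenue + rng.normal(0, 5, 36)

# Trendline: fit a straight line through the noisy time series.
slope, intercept = np.polyfit(months, revenue, deg=1)
print(f"Trendline: revenue grows ~{slope:.1f} per month")

# Clusters: group (revenue, quantity) points; the user picks the count.
points = np.column_stack([revenue, billed_qty])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)

# Regression: relationship between two measures, no time series required.
reg = LinearRegression().fit(revenue.reshape(-1, 1), billed_qty)
print(f"Regression: qty ~ {reg.coef_[0]:.2f} * revenue + {reg.intercept_:.1f}")

# Outliers: flag points far from the regression line (simple 2-sigma rule).
residuals = billed_qty - reg.predict(revenue.reshape(-1, 1))
outliers = np.abs(residuals) > 2 * residuals.std()
print(f"Outliers detected: {int(outliers.sum())}")
```

In DV/VA itself, these functions are applied through the interface, typically by dragging the desired Advanced Analytics function onto a visualization rather than writing any code.
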
Insights provide users the ability to embed commentary within DV/VA projects (except in VA 12c). Users take a “snapshot” of their data at a certain intersection and attach an Insight comment.  These Insights can then be associated with one another to tell a story about the data and shared with others or assembled into a presentation.  For readers familiar with Hyperion Planning, Insights are analogous to Cell Comments.  OBI 12c (as well as 11g) offers the ability to write comments back to a relational table; however, this capability is not as flexible or robust as Insights and requires intervention by the BI support team to implement.

Figure 5:  Insights Assembled into a Story

Direct connections to a Relational Database Management System (RDBMS) such as an enterprise data warehouse are now possible using some of the DV/VA products. (For the purpose of this post, inserting a semantic or logical layer between the database and user is not considered a direct connection.)  For the cloud-based versions (VA BICS and DVCS), only connections to other cloud databases are available, while DVD allows users to connect to an on premise or cloud database.  This capability is typically configured either by the IT support team or by analysts familiar with the data model of the target data source as well as SQL concepts such as creating joins between relational tables.  (Direct connections using OBI are technically possible; however, they require users to manually write the SQL to extract the data for their analysis.)  Once these connections are created and the correct joins are configured between tables, users can further augment their data with data mashups.  VA 12c currently requires a Subject Area connected to an RDBMS to create projects.
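
To make the join concept concrete, here is a minimal, hypothetical sketch using Python’s built-in sqlite3 module; the fact and dimension tables are invented for illustration, and a real deployment would configure the equivalent joins against the warehouse through the DVD or OBI tooling rather than code:

```python
# Hypothetical sketch of the kind of join an IT team would configure between
# relational tables; the table and column names are invented examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, product_name TEXT);
    CREATE TABLE fact_revenue (product_id INTEGER, month TEXT, revenue REAL);
    INSERT INTO dim_product VALUES (1, 'Widgets'), (2, 'Gadgets');
    INSERT INTO fact_revenue VALUES (1, '2016-01', 1200.0), (2, '2016-01', 800.0);
""")

# The kind of SQL an OBI user would otherwise write by hand for a
# direct connection: join the fact table to its dimension and aggregate.
rows = conn.execute("""
    SELECT p.product_name, f.month, SUM(f.revenue) AS revenue
    FROM fact_revenue AS f
    JOIN dim_product AS p ON p.product_id = f.product_id
    GROUP BY p.product_name, f.month
""").fetchall()
print(rows)  # one (product, month, revenue) row per group
```
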

Leveraging OLAP data sources such as Essbase is currently only available in OBI 12c (as well as 11g) and VA 12c. These data sources require that the OLAP cube be exposed as a Subject Area in the Presentation layer (in other words, there is no direct connection to OLAP data sources).  OBI is very mature here and offers robust mechanisms for interacting with the cube, including drillable hierarchical columns in Analysis.  VA 12c currently exposes a flattened list of hierarchical columns without drill capability.  As with direct connections, users are able to mash up their data with the cubes to create custom data models.

While the capabilities of the DV/VA product set are impressive, the solution currently lacks some key capabilities of OBI Analysis and Dashboards. A few of the most noticeable gaps between the capabilities of DV/VA and OBI Dashboards are the inability to:

  • Create the functional equivalent of Action Links, which allow users to drill down or across from an Analysis
  • Schedule and/or deliver reports
  • Customize graphs, charts, and other data visualizations to the extent offered by OBI
  • Create Alerts which can perform conditionally-based actions such as pushing information to users
  • Use drillable hierarchical columns

At this time, OBI should continue to be used as the centerpiece for enterprise-wide analytical solutions that require complex dashboards and other advanced capabilities. DV/VA is better suited for analysts who need to unify discrete data sources in a repeatable and presentation-friendly format using DV/VA Projects.  As mentioned, DV/VA is even easier to use than OBI, which makes it ideal for users who want an analytics tool that lets them rapidly pull together ad hoc analysis.  As discussed in The Role of Oracle Data Visualizer in the Modern Enterprise, enterprises reaching for new game-changing analytic capabilities should give the DV/VA product set a thorough evaluation.  Oracle releases regular upgrades to the entire DV/VA product set, and we anticipate many of the noted gaps will be closed in the future.

Oracle Business Intelligence EPM and Relational Federation – A Strategic Approach

The federation of EPM and relational data sources through Oracle Business Intelligence (OBI) seems straightforward: import the cube, federate and rename, expose it all, and create dashboards and analysis. Because the federation itself is technically simple, many organizations underestimate the amount of effort needed to implement an OBI solution that properly leverages and extends the capabilities of the EPM and relational data sources.  The OBI implementation process should not be an afterthought, especially if OBI is to be the primary method by which users consume organizational data.  We have assembled ten “Dos and Don’ts” covering the full implementation lifecycle to help organizations get the most out of their OBI solution.

Do – Design and develop the data sources with input from the OBI implementation team

Especially in implementations where OBI is to be the primary method of consuming data, the OBI implementation team should be heavily involved in Dashboard and Analysis requirements and design. This team knows what data structures are needed to support an efficient, easy-to-use analytic solution.  Asking the OBI implementation team to come in after the data model has been set and create Dashboards and Analysis will often result in workarounds that are error prone, difficult to maintain, and challenging or impossible to scale.

Don’t – View OBI as a one-size-fits-all analytics and reporting tool for the organization

OBI is a powerful and versatile tool capable of addressing a slew of needs; however, it is not a magic bullet. Depending on the application and needs of the organization, Smart View, Financial Reporting, and even BI Publisher have their places in the organization.  Attempting to replicate the capabilities of other analytic and reporting tools through OBI may provide the illusion of capability, but will fall short of user expectations and possibly harm adoption by the rest of the organization.

Do – Have a metadata management process in place before federating data sources

We discussed the rationale for this best practice thoroughly in the post Oracle Business Intelligence – Synchronizing Hierarchical Structures to Enable Federation. To summarize, unsynchronized hierarchical structures between data sources can produce analyses whose outcomes are irreconcilable, that seemingly reorganize when drilling down or up, that erroneously display shared members, or that simply error out in OBI.  A centralized process for managing this metadata, and for ensuring that all relevant data sources are updated simultaneously, is imperative when federating data sources; a simple illustration of such a synchronization check appears below.
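
As a simple, hypothetical illustration of the kind of check a metadata management process might automate, the Python sketch below compares member-to-parent mappings extracted from two sources and flags disagreements; the member names and data are invented stand-ins for, say, an EPM outline export and a warehouse dimension extract:

```python
# Hypothetical sketch: detect hierarchy drift between two data sources by
# comparing member -> parent mappings (e.g., an EPM outline vs. a warehouse
# dimension extract). Names and data are invented for illustration.
epm_hierarchy = {"NetRevenue": "Income", "COGS": "Expenses", "Income": "P&L"}
dw_hierarchy = {"NetRevenue": "Income", "COGS": "Income", "Income": "P&L"}

all_members = set(epm_hierarchy) | set(dw_hierarchy)
for member in sorted(all_members):
    epm_parent = epm_hierarchy.get(member)
    dw_parent = dw_hierarchy.get(member)
    if epm_parent != dw_parent:
        # A mismatch like this is what causes irreconcilable drill paths in OBI.
        print(f"{member}: EPM parent={epm_parent!r}, warehouse parent={dw_parent!r}")
```
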

Don’t – Treat OBI as a metadata or master data management tool

This is typically a symptom of not having the OBI implementation team involved during the design of the data models. As a result of this misalignment, clients attempt to shoehorn analysis into the solution by using the BI Administration tool (RPD) to excessively manipulate the data model.  Properly leveraged, the BI Administration tool can help create an agile analytics solution; however, relying on it to fill large gaps between the data model and the analytics will result in performance and maintenance issues.

Do – Define a use case, user community, and requirements for all implementations

From proof of concept to full implementations, having the right people involved is imperative. Within your organization:  Who understands the reporting and analytic needs and gaps?  Who understands where the data is coming from?  Who understands what capabilities are needed?  Who is positioned to help user adoption?  Who is asking questions that the organization is struggling to answer?  Any technology implementation that is done with the intent to “throw it against a wall and hope something sticks” is destined to fail; OBI is no different.

Don’t – Expect that users will flock to OBI if EPM is the only data available

We find that when there are both EPM and relational data sources, EPM is often the first to be implemented and exposed through OBI. During these implementations, users are extensively exposed to Smart View, and finance users in particular become enamored with that tool and struggle to immediately see value in OBI.  A Pavlovian response is to simply federate the EPM cube’s relational data source, which typically provides a lower level of detail (or granularity).  While this is sometimes useful, it still does not provide insight users cannot readily get elsewhere.  Federating additional data sources with EPM cubes should instead provide additional attributes or measures, or offer a simple path to jump from one organizational view of the data to another.  For instance, a financial consolidation EPM cube federated with an operational relational data source provides an easy-to-use analytical solution for managers whose responsibilities straddle both worlds.  These users will quickly adopt OBI and help with future user adoption.

Do – Empower the users

Guided analysis through Dashboards, Analysis, Alerts, and Scorecards is a powerful tool; however, an organization will never address every scenario through this method alone. Guided analysis should be an introduction to OBI that quickly develops into self-service.  Within a few months of rolling out the OBI solution, power users should be assembling ad hoc analysis and putting together their own dashboards.  Within a year, most users should be answering basic questions on their own.  Organizations that empower users not only improve the ROI on OBI, but are also more agile in addressing changing business landscapes, accelerating user adoption, and reducing the load on (often) overburdened IT organizations.

Don’t – Neglect the performance of any data sources

The demand for data is the epitome of just-in-time logistics. Especially when users are empowered, many organizations find that their data sources and caching strategies are not sufficient for how users actually leverage the data.  EPM and relational data sources both offer performance monitoring capabilities that should be evaluated frequently during the months after initial rollout and periodically thereafter, with any deficiencies addressed promptly.  Failing to address performance issues will result in users abandoning or circumventing the analytic tool, resulting in lost productivity and data quality issues.

Do – Pivot to using OBI as an analytics tool instead of simply another reporting tool

Tabular reporting is typically (and should be) the first use clients turn to OBI for, but it should be viewed as an insertion point, not the final rally point. With capabilities such as graphs, heat matrices, treemaps, gauges, alerts, and trellis charts, pivoting from reporting to analytics should be the goal.  Answering business-critical questions, quickly understanding the business landscape, and gaining insight is where the true value of OBI lies.  Simply leveraging OBI as another reporting solution severely handicaps the tool’s return on investment.

Don’t – Let OBI data sources become static

Analytics is one of the few tools that changes a business in both a deliberate and a serendipitous manner. A well-led, strategically executed analytics program can make a lasting contribution to an organization’s goals.  At the same time, users will develop new skills and capabilities as they become familiar with both the tool and the data and begin to ask new questions.  As the competitive landscape changes and organizational capabilities expand, data models should evolve to address these new needs.  OBI makes it easy to expose, slice and dice, and visualize data to answer such questions; the challenge is not to become complacent in providing new data resources to users.

If OBI is to play a role in your organization’s analytic strategy, it should not be an afterthought. Involving implementation team members who know OBI’s capabilities from the start can ease implementation during the later phases, accelerate user adoption, and increase long-term ROI.  Edgewater Ranzal has both the technical and functional implementation experience with OBI to help you evaluate, adjust, and execute your analytic strategy according to these ten “Dos and Don’ts.”