Worry No More! Say Goodbye to Pain and Frustration when Submitting Service or Enhancement Requests with Oracle for PCMCS

While nobody likes submitting Service Requests (SR) on the Oracle support site, this is a necessary task that we must get comfortable with, whether our applications are on-premise or in the Cloud.  After 12 years of consulting, I can say that I have seen or pursued many wrong ways of submitting an SR which, in turn, yield results along similar lines – a lot of back-and-forth emailing with Oracle’s support staff, personal frustration, misinformation, and, most importantly, time wasted on all sides.

Worry no more!  Here is a list of things you can do to avoid further pain and frustration when submitting Service Requests or Enhancement Requests with Oracle for Profitability and Cost Management Cloud Service (PCMCS).

  1. Where do I start when submitting SRs and ERs for PCMCS?

You can still use the generic Oracle Support website to open either an SR or an Enhancement Request (ER) with Oracle for Cloud applications, but the right way to do this is to first gain access to the Oracle Cloud Support website, which looks slightly different and has a couple of new fields to complete. The email associated with your Oracle account should be the same email that has access to the specific Cloud subscriptions.

Standard Oracle Support website

PCMCS Image 1

Cloud Support website

PCMCS Image 2

  2. Provide feedback

Log in to the Cloud application for which you want to create the SR or ER, and once you are logged in to PCMCS, navigate to your user name (top right) and select “Provide Feedback.” A new screen will appear enabling you to highlight the area of concern to provide context for the reason you are submitting the SR or ER.

Provide details around the area of concern. This gives context to the issue at hand and creates a reference for future troubleshooting. For example, if the issue is related to one specific Rule, ensure that the last screen open before you click on Provide Feedback is the rule itself, or the job library listing the execution of the rule. You will only be able to highlight areas on the last screen open before launching the “Provide Feedback” screen.  The details you provide here will not automatically be copied into your SR. If you want to reuse the same detailed description within the SR itself, save the text locally before submitting the feedback.

  3. Options for your feedback

After you submit your feedback, a new panel will appear containing the following three sections:

  1. Environment: a listing of your Browser, Platform, Version, Locale, Resolution, Time zone, Cookies enabled (Y/N), URL of the instance, and the User Agent. You do not have to fill in anything in this section. All information is filled in for you.
  2. Plugins: a listing of enabled plugins, if any. You do not have to fill in anything in this section. All information is filled in for you.
  3. Confirm Application snapshot submission: this is the only section where you must provide input.

PCMCS Image 5

You have a choice of Yes / No – depending on how comfortable you feel about Oracle using your daily maintenance snapshot for regression testing in upcoming releases. Giving Oracle access to your maintenance snapshots means you are agreeing to them using the model and any related data for their testing going forward. If your hierarchy structures and data are not sensitive, then you may choose to select “Yes.”  My personal preference is to select “No” and provide a static, point-in-time archived snapshot within the SR. When the SR is closed, the contents of said snapshot will be archived and not used for further regression testing.

  4. Generate a Diagnostic Report (UDR) ID

When you click the “Submit” button on this screen, a unique alphanumeric reference is generated. This reference will be required when submitting an SR or ER on the Oracle Cloud Support website. Write down or, preferably, copy and save this UDR string of characters on your workstation in a txt file.

  5. Log in to the Oracle Cloud Support website and proceed with opening a new Cloud SR/ER

Select the “Create Service Request” button on the lower left-hand side of your screen.

PCMCS Image 6

Select “Service Type” from the drop-down list of available Cloud services to which your user has access.

PCMCS Image 7

Once you have selected “Oracle Hyperion Profitability and Cost Management Cloud Service,” a listing of all available instances will be displayed in the new “Service Name” section:

PCMCS Image 8

Make sure you select the appropriate “Service Name” for the instance where you generated the related UDR (see previous steps).

Add “Problem Type” by selecting the type closest to your issue:

PCMCS Image 9

The above choices will not trigger related content or a list of options – this is merely to ensure that the ticket goes to the appropriate team during the investigation process.

In the “Problem Summary” section, reference the Cloud product for which you are creating the SR or ER. This will be the subject of your ticket, and it will help you administer and keep track of multiple tickets at the same time.

  6. Attach all System Reports available for your PCMCS app

To avoid multiple back-and-forth email exchanges with the Oracle Support staff, provide them with all the available information. Here is a current list of all available reports for troubleshooting PCMCS applications.

  1. Execution statistics for the last model / allocation execution connected to the SR – if the SR is related to calc performance, calc troubleshooting, or rule setup (PDF or XLS format preferable)
  2. Program Documentation (with details; not with aliases) (XLS or PDF format preferable)
  3. Dimension Statistics (PDF format preferable)
  4. POV Statistics (PDF or XLS format preferable)

All these reports can be generated from PCMCS – Navigator menu – System reports.

PCMCS Image 10

  7. Attach the Diagnostic report

From the “Navigator” Menu, select “Application,” click on the drop-down in “Actions” and select “Export Supplemental Diagnostics.” This report is very useful to the development team troubleshooting your issue.

PCMCS Image 11

When you select this report, a new job is launched that can take anywhere from a couple of minutes to 20+ minutes, depending on the size of your application and the amount of logging involved.

An archive of the diagnostics reports will be generated in the File Explorer within the Application menu.  Some of the reports in this archive will be a repeat of the other reports mentioned in the previous step, but if you provide all this information simultaneously, the redundancy should not cause any issues. If you are not open to launching such a process in your environment during business hours, and yet you still want to submit the SR in a timely fashion, you can skip this step and provide this report only upon request from Oracle Support staff.

  8. Error description

If you can replicate the error, capture each step via screenshots and save them in a Word doc. The earlier the support staff understands what you are dealing with, the faster the entire troubleshooting process will be completed.

Refer to menu options precisely as they are named within PCMCS.

For example, to submit an SR or ER related to the Calculation Rules menu, refer to it as Calculation Rules – Rules Express Editing, as both names appear in the PCMCS menu.

PCMCS Image 12

  9. Establish the SR level appropriately

There are four options to choose from; base your choice on urgency as well as level of importance.

PCMCS Image 13

Choose severity 1 and 2 only when applicable. You may be inclined to select such severity options so that your issue is resolved quickly, but use your own criteria to distinguish between something that is really a show-stopper and something that is not. Time is of the essence for both you and the Oracle Development team.

When choosing severity 1, be prepared to open your calendar to potential phone calls that can occur at any time, regardless of your time zone.

  10. My request is really an ER, not an SR

If your SR is an Enhancement Request, provide plenty of supporting detail in the “Business Justification” section. Not doing so can delay the Enhancement Request submission by up to 2 weeks. If further business justification is requested, respond promptly to keep things moving along and ensure that your request makes it into the next patch release sooner rather than later.

Once an Enhancement Request is recorded, your SR will be updated with the ER ID (which will differ from the SR ID originally assigned the moment you submitted the ticket).  The original SR will be closed, and you can open a new SR quoting the ER ID 48 hours after your request is accepted. The Support staff will confirm whether the ER will make it into the next monthly patch release.

  11. Bedside manners for SR/ER submitters

Try to reduce the number of communications within the SR. Taking the above steps will get you closer to achieving a near-perfect SR submission. Be mindful of how you communicate. The more back-and-forth communication there is, the more difficult it will be for the development team to follow the conversation trail and troubleshoot efficiently.

Whether you are a service provider or a PCMCS administrator who inherited an application at the end of a project implementation, we all tap the same Oracle Support resources which are, as are most things, finite. The more efficient your SR/ER submission is, the faster these resources can respond with accurate and detailed troubleshooting steps. For any time-sensitive issues or further escalation, leverage your Oracle representative and your implementation partner. Their existing relationship with Oracle Product Management will help direct your query to the right resources, ensure your SR is not stuck because of a lack of clarity regarding which team should own it, and get it fast-tracked with the right level of attention. For any critical issues you encounter with PCMCS or other Cloud subscriptions where there is no solution in sight, reach out to Alithya at infosolutions@alithya.com so that our team can provide a fast and effective assessment.

Oracle Business Intelligence Cloud Service (BICS) September Update

The latest upgrade for BICS happened last week and, while there are no new end-user features, it is now easier to integrate data. New to this version is the ability to connect to JDBC data sources through the Data Sync tool.  This allows customers to set up automated data pulls from Salesforce, Redshift, and Hive, among others.  In addition to these connections, Oracle RightNow CRM customers can pull directly from RightNow reports using Oracle Data Sync.  Finally, connections between on-premise databases and BICS can be secured using Secure Sockets Layer (SSL) certificates.

After developing a custom script using API calls to pull data from Salesforce, I am excited about the ability to connect directly to Salesforce with Data Sync. Direct connections to the Salesforce database allow you to search and browse for relevant tables and import the definitions with ease:

BICS Image 1

Once the definitions have been imported, standard query clauses make it possible to include only relevant data, perform incremental ETL loads, and further manipulate the data.

While there are no new features for end users, this is a powerful update when it comes to data integration. Using APIs to extract data from Salesforce meant that each extraction query had to be written by hand, which was time-consuming and prone to error.  With these new data extraction processes, BICS implementations and data integration become much faster, furthering the promise of Oracle Cloud technologies.
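
To appreciate what Data Sync now handles for you, here is a minimal sketch of the kind of hand-written extraction script it replaces, using the open-source simple_salesforce Python library; the credentials, object, and field names are illustrative only:

```python
# A hand-coded incremental Salesforce pull; every such query had to be
# written and maintained by hand before Data Sync offered a direct connection.
# Credentials and field names below are placeholders, not real values.
from datetime import datetime, timedelta, timezone
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="secret",
                security_token="token")

# Build the SOQL filter for an incremental ETL: only rows changed in the last day.
since = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
soql = f"SELECT Id, Name, AnnualRevenue FROM Account WHERE LastModifiedDate > {since}"

for record in sf.query_all(soql)["records"]:
    print(record["Id"], record["Name"], record["AnnualRevenue"])
```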

A Comparison of Oracle Business Intelligence, Data Visualization, and Visual Analyzer

We recently authored The Role of Oracle Data Visualizer in the Modern Enterprise in which we referred to both Data Visualization (DV) and Visual Analyzer (VA) as Data Visualizer.  This post addresses readers’ inquiries about the differences between DV and VA as well as how both compare to Oracle Business Intelligence (OBI).  The following sections provide details of the OBI and DV/VA products as well as a matrix to compare each solution’s capabilities.  Finally, some use cases for DV/VA projects versus OBI will be outlined.

For the purposes of this post, OBI will be considered the parent solution for both on premise Oracle Business Intelligence solutions (including Enterprise Edition (OBIEE), Foundation Services (BIFS), and Standard Edition (OBSE)) as well as Business Intelligence Cloud Service (BICS). OBI is the platform thousands of Oracle customers have become familiar with to provide robust visualizations and dashboard solutions from nearly any data source.  While the on premise solutions are currently the most mature products, at some point in the future, BICS is expected to become the flagship product for Oracle at which time all features are expected to be available.

Likewise, DV/VA will be used to refer collectively to Visual Analyzer packaged with BICS (VA BICS), Visual Analyzer packaged with OBI 12c (VA 12c), Data Visualization Desktop (DVD), and Data Visualization Cloud Service (DVCS). VA was initially introduced as part of the BICS package but has since become available as part of OBIEE 12c (the latest on premise version).  DVD was released early in 2016 as a stand-alone product that can be downloaded and installed on a local machine.  Recently, DVCS has been released as the cloud-based version of DVD.  All of these products offer data visualization capabilities similar to OBI but feature significant enhancements to the manner in which users interact with their data.  Compared to OBI, the interface is even more simplified and intuitive to use, which is an accomplishment for Oracle considering how easy OBI is to use.  Reusable and business process-centric dashboards are available in DV/VA but are referred to as DV or VA Projects.  Perhaps the most powerful feature is the ability for users to mash up data from different sources (including Excel) to quickly gain insight they might have spent days or weeks manually assembling in Excel or Access.  These mashups can be used to create reusable DV/VA Projects that can be refreshed through new data loads in the source system and by uploading updated Excel spreadsheets into DV/VA.

While the six products mentioned can be grouped nicely into two categories, the following matrix outlines the differences between each product. The sections that follow provide commentary on some of these features.

Table 1

Table 1:  Product Capability Matrix

Advanced Analytics provides integrated statistical capabilities based on the R programming language and includes the following functions (a rough Python approximation of these computations is sketched after the list):

  • Trendline – This function provides a linear or exponential plot through noisy data to indicate a general pattern or direction for time series data. For instance, while there is a noisy fluctuation of revenue over these three years, a slowly increasing general trend can be detected by the Trendline plot:
Figure 1

Figure 1:  Trendline Analysis

 

  • Clusters – This function attempts to classify scattered data into related groups. Users are able to determine the number of clusters and other grouping attributes. For instance, these clusters were generated using Revenue versus Billed Quantity by Month:
Figure 2

Figure 2:  Cluster Analysis

 

  • Outliers – This function detects exceptions in the sample data. For instance, given the previous scatter plot, four outliers can be detected:
Figure 3

Figure 3:  Outlier Analysis

 

  • Regression – This function is similar to the Trendline function but correlates relationships between two measures and does not require a time series. This is often used to help create or determine forecasts. Using the previous Revenue versus Billed Quantity, the following Regression series can be detected:
Figure 4

Figure 4:  Regression Analysis
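
For intuition, here is a rough Python sketch of what these four functions compute. Oracle’s implementations are backed by R inside the product, so this is only an approximation on made-up data, not the product’s actual code:

```python
# Approximations of Trendline, Clusters, Outliers, and Regression on fake data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
months = np.arange(36)                              # three years of periods
revenue = 100 + 2 * months + rng.normal(0, 15, 36)  # noisy upward series
quantity = revenue / 3 + rng.normal(0, 5, 36)

# Trendline: a linear fit through noisy time-series data.
slope, intercept = np.polyfit(months, revenue, 1)

# Clusters: group Revenue vs. Billed Quantity points into related sets.
points = np.column_stack([quantity, revenue])
labels = KMeans(n_clusters=3, n_init=10).fit_predict(points)

# Outliers: flag points more than two standard deviations from the mean.
z = np.abs((revenue - revenue.mean()) / revenue.std())
outlier_months = months[z > 2]

# Regression: correlate two measures without requiring a time series.
model = LinearRegression().fit(quantity.reshape(-1, 1), revenue)
predicted_revenue = model.predict(quantity.reshape(-1, 1))
```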

 

Insights provide users the ability to embed commentary within DV/VA projects (except for VA 12c). Users take a “snapshot” of their data at a certain intersection and make an Insight comment.  These Insights can then be associated with each other to tell a story about the data and then shared with others or assembled into a presentation.  For those readers familiar with the Hyperion Planning capabilities, Insights are analogous to Cell Comments.  OBI 12c (as well as 11g) offers the ability to write comments back to a relational table; however, this capability is not as flexible or robust as Insights and requires intervention by the BI support team to implement.

Figure 5

Figure 5:  Insights Assembled into a Story

 

Direct connections to a Relational Database Management System (RDBMS) such as an enterprise data warehouse are now possible using some of the DV/VA products. (For the purpose of this post, inserting a semantic or logical layer between the database and user is not considered a direct connection).  For the cloud-based versions (VA BICS and DVCS), only connections to other cloud databases are available while DVD allows users to connect to an on premise or cloud database.  This capability will typically be created and configured either by the IT support team or analysts familiar with the data model of the target data source as well as SQL concepts such as creating joins between relational tables.  (Direct connections using OBI are technically possible; however, they require the users to manually write the SQL to extract the data for their analysis).  Once these connections are created and the correct joins are configured between tables, users can further augment their data with data mashups.  VA 12c currently requires a Subject Area connected to a RDBMS to create projects.
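
For contrast, here is a minimal sketch of the kind of SQL a user would otherwise have to hand-write for a direct OBI connection; the cx_Oracle connection details and warehouse table names are hypothetical:

```python
# Hand-written extraction against a warehouse; DV/VA replaces this by letting
# users configure the connection and table joins once in the interface.
import cx_Oracle

conn = cx_Oracle.connect("analyst", "secret", "dwhost:1521/DWPDB")  # placeholder DSN
cursor = conn.cursor()
cursor.execute("""
    SELECT g.region, g.state, SUM(f.revenue)
    FROM   sales_fact f
    JOIN   geography_dim g ON g.geo_key = f.geo_key
    GROUP  BY g.region, g.state
""")
for region, state, total_revenue in cursor:
    print(region, state, total_revenue)
conn.close()
```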

Leveraging OLAP data sources such as Essbase is currently only available in OBI 12c (as well as 11g) and VA 12c. These data sources require that the OLAP cube be exposed as a Subject Area in the Presentation layer (in other words, no direct connection to OLAP data sources).  OBI is considered very mature and offers robust mechanisms for interacting with the cube, including the ability to use drillable hierarchical columns in Analysis.  VA 12c currently exposes a flattened list of hierarchical columns without a drillable hierarchical column.  As with direct connections, users are able to mash up their data with the cubes to create custom data models.

While the capabilities of the DV/VA product set are impressive, the solution currently lacks some key capabilities of OBI Analysis and Dashboards. A few of the most noticeable gaps between the capabilities of DV/VA and OBI Dashboards are the inability to:

  • Create the functional equivalent of Action Links which allows users to drill down or across from an Analysis
  • Schedule and/or deliver reports
  • Customize graphs, charts, and other data visualizations to the extent offered by OBI
  • Create Alerts which can perform conditionally-based actions such as pushing information to users
  • Use drillable hierarchical columns

At this time, OBI should continue to be used as the centerpiece for enterprise-wide analytical solutions that require complex dashboards and other capabilities. DV/VA will be more suited for analysts who need to unify discrete data sources in a repeatable and presentation-friendly format using DV/VA Projects.  As mentioned, DV/VA is even easier to use than OBI which makes it ideal for users who wish to have an analytics tool that rapidly allows them to pull together ad hoc analysis.  As was discussed in The Role of Oracle Data Visualizer in the Modern Enterprise, enterprises that are reaching for new game-changing analytic capabilities should give the DV/VA product set a thorough evaluation.  Oracle releases regular upgrades to the entire DV/VA product set, and we anticipate many of the noted gaps will be closed at some point in the future.

The Role of Oracle Data Visualizer in the Modern Enterprise

Chess as a metaphor for strategic competition is not a novel concept, and it remains one of the most respected due to the intellectual and strategic demand it places on competitors. The sheer number of possible move combinations in a chess game (estimated to be more than the number of atoms in the universe) means that it is entirely possible that no two people have unintentionally played the same game.  Of course, many of these combinations result in a draw, and many more set a player down the path of an inevitable loss after only a few moves.  It is no surprise that chess has pushed the limits of computational analytics, which in turn has pushed the limits of players.  Claude Shannon, the father of information theory, was the first to state the respective advantages of the human and the computer competitor attempting to wrest control of opposing kings from each other:

The computer is:

  1. Very fast at making calculations;
  2. Unable to make mistakes (unless the mistakes are part of the programmatic DNA);
  3. Diligent in fully analyzing a position or all possible moves;
  4. Unemotional in assessing current conditions and unencumbered by prior wins or losses.

The human, on the other hand, is:

  1. Flexible and able to deviate from a given pattern (or code);
  2. Imaginative;
  3. Able to reason;
  4. Able to learn [1].

The application of business analytics is the perfect convergence of this chess metaphor, powerful computations, and the people involved. Of course, the chess metaphor breaks down a bit since we have human and machine working together against competing partnerships of humans and machines (rather than human against machine).

Oracle Business Intelligence (along with implementation partners such as Edgewater Ranzal) has long provided enterprises with the ability to balance this convergence. Regardless of the robustness of the tool, the excellence of the implementation, the expertise of the users, and the responsiveness of the technical support team, there has been one weakness:  No organization can resolve data integration logic mistakes or incorporate new data as quickly as users request changes.  As a result, the second and third computer advantages above are hindered.  Computers making mistakes due to their programmatic DNA will continue to make these mistakes until corrective action can be implemented (which can take days, weeks, or months).  Likewise, all possible positions or moves cannot be analyzed due to missing data elements.  Exacerbating the problem, all of the human advantages stated previously can be handicapped; increasingly so depending on the variability, robustness, and depth of the missing or wrongly calculated data set.

With the introduction of Visual Analyzer (VA) and Data Visualization (DV), Oracle has made enormous strides in overcoming this weakness. Users now have the ability to perform data mashups between local data and centralized repositories of data such as data warehouses/marts and cubes.  No longer does the computer have to perform data analysis without the availability of all possible data.  No longer does the user have to make educated guesses about how centralized and localized data sets correlate and how they will affect overall trends or predictions.  Used properly, users and enterprises can leverage VA/DV to iteratively refine and redefine the analytical component that contributes to their strategic goals.  Of course, all new technologies and capabilities come with their own challenges.

The first challenge is how an organization can present these new views of data and compare and contrast them with the organizational “one version of the truth”. Enterprise data repositories are a popular and useful asset because they enable organizations to slice, dice, pivot, and drill down into this centralized data while minimizing subjectivity.  Allowing users to introduce their own data creates a situation where they can increase data subjectivity.  If VA/DV is to be part of your organization’s analytics strategy, processes must be in place to validate the result of these new data models.  The level of effort that should be applied to this validation should increase according to the following factors:

  • The amount of manual manipulation the user performed on the data before performing the mashup with existing data models;
  • The reputability of the data source. Combining data from an internal ERP or CRM system is different from downloading and aligning outside data (e.g. US Census Bureau or Google results);
  • The depth and width of data. In layman’s terms, this corresponds to how many rows and columns (respectively) the data set has;
  • The expertise and experience of the individual performing the data mashup.

If you have an existing centralized data repository, you have probably already gone through data validation exercises. Reexamine and apply the data and metadata governance processes you went through when the data repository was created (and hopefully maintained and updated).

The next challenge is integrating the data into the data repository. Fortunately, users may have already defined the process of extracting and transforming data when they assembled the VA/DV project.  Evaluating and leveraging the process the user has already defined can shorten the development cycle for enhancing existing data models and the Extract, Transform, and Load (ETL) process.  The data validation factors above can also provide a rough order of magnitude of the level of effort needed to incorporate this data.  The more difficult task may be determining how to prioritize data integration projects within an (often) overburdened IT department.  Time, scope, and cost are familiar benchmarks when determining prioritization, but it is important to take revenue into account.  Organizations that have become analytics savvy and have users demanding VA/DV data mashup capabilities have often moved beyond simple reporting and onto leveraging data to create opportunities.  Are salespeople asking to incorporate external data to gain customer insight?  Are product managers pulling in data from a system the organization never got around to integrating?  Are functional managers manipulating and re-integrating data to cut costs and boost margins?

To round out this chess metaphor, a game that seems to be nearly a draw or a loss can gain new life by promoting a pawn to replace a lost queen. Many of your competitors already have a business intelligence solution; your organization can only find data differentiation through the type of data you have and how quickly it can be incorporated at an enterprise level.  Providing VA/DV to the individuals within your organization with a deep knowledge of the data they need, how to get it, and how to deploy it can be the queen that checkmates the king.

[1] Shannon, C. E. (1950). XXII. Programming a computer for playing chess. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 41(314), 256-275. doi:10.1080/14786445008521796

Upgrading Oracle Business Intelligence to 12c

Ranzal was recently invited to participate in a number of chalk talks for the Healthcare Industry User Group (HIUG) in San Antonio, TX. One of these chalk talks covered how an organization should prepare for and execute an upgrade to Oracle Business Intelligence (OBI) 12c.  Since the technical steps are already covered in numerous blog posts as well as Oracle documentation, our conversation focused on a strategic approach to the upgrade.  Our conversation essentially came down to four topics:

  1. An overview of the simplicity of the upgrade from 11g to 12c
  2. Organizational mindset while preparing for the upgrade
  3. The new technical infrastructure and the implications for the organization
  4. How to introduce users to the new features and how this might impact governance

We wanted to formally document this lively and fast-paced discussion to help other organizations as well as the HIUG chalk talk participants who were furiously scribbling notes and may not have had the opportunity to take it all in.

We started our discussion with how relatively easy Oracle has made this upgrade. For those of you who experienced the difficult and buggy process of upgrading from OBI 10g to 11g and may be dreading the upgrade to 12c, we have some advice:  relax.  First, the upgrade to 12c is an “in place upgrade,” which means your 11g environment remains intact while the metadata and configuration get “lifted and shifted” into 12c.  Speaking of “lift and shift,” 12c comes with a tool that extracts the metadata and much of the configuration from 11g into one tidy package that is then pushed into 12c.  There is a small amount of manual configuration that has to occur; however, this will only slow down customers with highly customized environments.  Once this lift and shift has occurred, an Oracle validation tool checks that your Dashboards and Analysis are working and alerts you to potential issues.  While there are bugs with 12c (what new or old software does not have bugs?), we have not found any major issues that will cause a full stop for an organization.

So how does an organization prepare for this upgrade? First, we encourage clients to view this upgrade as an opportunity to “clean house,” especially for customers that have been building OBI assets for years.  Used properly, analytic tools lend themselves to experimentation and evolution.  Experimentation can result in partially formed or broken logical objects such as presentation columns and facts, Analysis, or entire Dashboards.  The developers of these objects have the best intention of coming back to either fix or complete development on these objects or, at the very least, delete them.  Developers are busy people though, and eventually the existence of these objects is forgotten.  Evolution of both the tool and the focus on analytics results in objects becoming stale and/or obsolete.  Take this opportunity to clean house on your OBI environment.  Run the consistency check in the BI Administration tool and resolve those lingering issues.  Evaluate your usage statistics and determine if unused Analysis and Dashboards are still needed.  Fix or discard broken Analysis and Dashboards.  As with any technology tool, have a process in place to document, communicate, archive, backup, and restore, if necessary.

The most substantial change to come with OBI 12c is the underlying technical architecture. Fusion Middleware and Enterprise Manager have a new look and feel.  Additionally, some actions are no longer performed within these tools.  For instance, deploying the RPD is no longer done through Enterprise Manager (in fact, the RPD is now the BAR file).  The directory structure has significantly changed, which is a bit unfortunate for those of us who like to go directly to the log files in the directory rather than relying on Enterprise Manager and Session Manager.  Finally, many of the RPD . . . that is . . . BAR functions, such as deploying and copying, are done through a new command line interface.  So, while most of your users will be able to log into 12c and quickly adapt to the new look and feel, your OBI support team will have some learning to do.  Again, the goal of this post is to provide an overarching upgrade strategy, so we will not delve any deeper into these changes.  There is plenty of quality content online regarding these changes, and you should always review Oracle documentation before performing implementations or upgrades.

End users will initially feel that, with the exception of a new look and feel, nothing much has changed. However, with new graphing options, statistical analytics capabilities based on R, as well as Visual Analyzer, users have an opportunity to expand their analytical capacity.  The organizational challenge is to go back to the change management playbook that was used when OBI was initially introduced, re-evaluate, and update so that end users can get the most out of this upgrade.  Evaluate how to train users on where to properly use the new graphs and charts.  Determine (or re-determine) who your power users are who need or want the new statistical capabilities.  Review existing Dashboards and Analysis and make appropriate upgrades.

Potentially the biggest challenge will be evaluating and understanding the capabilities of the new Visual Analyzer tool which, among other features, allows you to perform data mashups. This new tool will require that your organization determine some use cases and user groups as well as some additional training.  While users uploading data into the OBI system and combining it with existing data models opens up entirely new possibilities for insight, it also creates a governance challenge.  How do you separate and maintain the organizational “one version of the truth” while encouraging and properly promoting new analytic insight?  How will security be handled and users trained to adhere to this model?  How will you handle the archiving and deletion of potentially huge numbers of Excel spreadsheets uploaded onto the OBI server?  While this all sounds intimidating, keep in mind that your organization has already been through these exercises once during the original OBI implementation.  Adapt your existing knowledge.

Thus far, Ranzal has had a positive experience with the 12c upgrade. The underlying technical architecture has resulted in some real gains in performance, especially when leveraging EPM as a data source.  The upgrade is well thought out and simple, especially if you go through a system checkup and resolve issues.  While your technical BI support team will have some homework and learning to do to continue to fill that role, your users will be able to jump right into using 12c.  Despite this ease of user adoption, be sure to have a change management plan in place and take advantage of the new features and capabilities of 12c.

If you are thinking of doing an upgrade and have questions, feel free to reach out to us. Also, keep an eye out for an upcoming webcast on upgrading to 12c with an interactive question and answer session.

Oracle Business Intelligence – Synchronizing Hierarchical Structures to Enable Federation

More and more Oracle customers are finding value in federating their EPM cubes with existing relational data stores such as data marts and data warehouses (for brevity, data warehouse will refer to all relational data stores). This post explains the concept of federation, explores the consequences of allowing hierarchical structures to get out of synchronization, and shares options to enable this synchronization.

In OBI, federation is the integration of distinct data sources to allow end users to perform analytical tasks without having to consider where the data is coming from. There are two types of federation to consider when using EPM and data warehouse sources:  vertical and horizontal.  Vertical federation allows users to drill down a hierarchy and switch data sources when moving from an aggregate data source to a more detailed one.  Most often, this occurs in the Time dimension whereby the EPM cube stores data for year, quarter, and month, and the relational data sources have details on daily transactions.  Horizontal federation allows users to combine different measures from the distinct data sources naturally in an OBI analysis, rather than extracting the data and building a unified report in another tool.

Federation makes it imperative that the common hierarchical structures are kept in sync. To demonstrate issues that can occur during vertical federation when the data sources are not synchronized, take the following hierarchies in an EPM application and a data warehouse:

Figure 1: Unsynchronized Hierarchies

Jason Hodson Blog Figure 1.jpg

Notice that Colorado falls under the Western region in the EPM application, but under the Southwestern region in the data warehouse. Also notice that the data warehouse contains an additional level (or granularity) in the form of cities for each region.  Assume that both data sources contain revenue data.  An OBI analysis such as this would route the query to the EPM cube and return these results:

Figure 2: EPM Analysis – Vertical Federation

Jason Hodson Blog Figure 2

However, if the user were to expand the state of Washington to see the results for each city, OBI would route the query to the data warehouse. When the results return, the user would be confronted with different revenue figures for the Southwest and West regions:

Figure 3: Data Warehouse – Vertical Federation

Jason Hodson Blog Figure 3

When the hierarchical structures are not aligned between the two data sources, irreconcilable differences can occur when switching between the sources. Many times, end users are not aware that they are switching between EPM and a data warehouse, and will simply experience a confusing reorganization in their analysis.

To demonstrate issues that occur in horizontal federation, assume the same hierarchies as in Figure 1 above, but the EPM application contains data on budget revenue while the data warehouse contains details on actual revenue. An analysis such as this could be created to query each source simultaneously and combine the budget and actual data along the common dimension:

Figure 4: Horizontal Federation

Jason Hodson Blog Figure 4

However, drilling into the West and Southwest regions will result in Colorado becoming an erroneously “shared” member:

Figure 5: Colorado as a “Shared” Member

Jason Hodson Blog Figure 5

In actuality, the mocked-up analysis above would more than likely result in an error since OBI would not be able to match the hierarchical structures during query generation.

There are a number of options to enable the synchronization of hierarchical structures across EPM applications and data warehouses. Many organizations manually maintain their hierarchical structures in spreadsheets and text files, often located on an individual’s desktop.  It is possible to continue this manual maintenance; however, these dispersed files should be centralized, a governance process defined, and the EPM metadata management and data warehouse ETL processes redesigned to pick up these centralized files.  This method is still subject to errors and is inherently difficult to properly govern and audit.  For organizations that are already using Enterprise Performance Management Architect (EPMA), a scripting process can be implemented that extracts the hierarchical structures to flat files.  A follow-on ETL process to move these hierarchies into the data warehouse will also have to be implemented.
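
Whichever mechanism produces the extracts, a lightweight automated comparison can catch structures drifting out of sync before users run into the problems shown earlier. Here is a minimal sketch, assuming each source can be exported to a two-column parent,child flat file; the file names and layout are assumptions:

```python
# Compare two parent-child hierarchy extracts and report mismatches.
import csv

def load_parents(path):
    """Map each child member to its parent from a parent,child CSV extract."""
    with open(path, newline="") as f:
        return {child: parent for parent, child in csv.reader(f)}

epm = load_parents("epm_geography.csv")   # hypothetical EPM extract
dw = load_parents("dw_geography.csv")     # hypothetical warehouse extract

# Members whose parent differs between sources (the "Colorado" case in Figure 1).
for member in sorted(set(epm) & set(dw)):
    if epm[member] != dw[member]:
        print(f"{member}: EPM parent={epm[member]}, DW parent={dw[member]}")

# Members present in only one source.
for member in sorted(set(epm) ^ set(dw)):
    print(f"{member}: present in only one source")
```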

The best practices solution is to use Hyperion Data Relationship Management (DRM) to manage these hierarchical structures. DRM boasts robust metadata management capabilities coupled with a system-agnostic approach to exporting this metadata.  DRM’s most valuable export method allows pushing directly to a relational database.  If a data warehouse is built in tandem with an EPM application, DRM can push directly to a dimensional table that can then be accessed by OBI.  If there is a data warehouse already in place, existing ETL processes may have to be modified or a dimensional table devoted to the dimension hierarchy created.  Ranzal has a DRM accelerator package to enable the synchronization of hierarchical structures between EPM and data warehouses that is designed to work with our existing EPM application DRM implementation accelerators.  Using these accelerators, Ranzal can perform an implementation in as little as six weeks that provides metadata management for the EPM application, establishes a process for maintaining hierarchical structure synchronization between EPM and the data warehouse, and enables federation of the data sources.

While the federation of EPM and data warehouse sources has been the primary focus, it is worth noting that two EPM cubes or two data warehouses could be federated in OBI. For many of the reasons discussed previously, data synchronization processes will have to be in place to enable this federation.  The solutions described previously for maintaining metadata synchronization can likely be adapted for this purpose.

The federation of EPM and data warehouse sources allows an enterprise to create a more tightly integrated analytical solution. This tight integration allows users to traverse the organization’s data, gain insight, and answer business-essential questions at the speed of thought.  As demonstrated, mismanaging hierarchical structures can result in an analytical solution that produces unexpected results that can harm user confidence.  Enterprise solutions often need enterprise approaches to governance; therefore, it is often imperative to understand and address shortcomings in hierarchical structure management.  Ranzal has a deep knowledge of EPM, DRM, and OBIEE, and how these systems can be implemented to work tightly together to address an organization’s analytical and reporting needs.

Using Data Visualization and Usability to enhance end user reporting – Part 4: Tying it all together

Now that the foundations have been set in my last three posts, in this final post I’ll share how we can create reports, leveraging:

• Standard definitions and metrics
• The understanding of how users will consume data and interact with the system

To effectively create reports, make sure to follow these key best practices:

1. Reduce the data presented by focusing on the important information. For example, rather than showing two lines for revenue actuals and revenue budget, try showing one for the difference. Users can identify trends much more quickly when there are fewer objects to focus on.

2. Concentrate on important data and consolidate it into chunks. If you have two charts, use the same color for revenue on both of them. This makes it easier to interpret and see trends between them.

3. Remove non-data items, especially images, unnecessary lines, and graphics. This helps the user focus on the actual data so they can see trends and information rather than clutter.

Here is an example of two reports with the same data. The first provides a table with various colors, bold fonts, and lines. The second report highlights only the important areas/regions; your eyes are immediately drawn to those areas needing attention. The second table allows the user to draw accurate conclusions more effectively and in a much shorter timeframe.
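
As a rough illustration of the first practice, the sketch below (Python with matplotlib, on made-up numbers) replaces two overlapping actual and budget lines with a single variance series:

```python
# Plot one variance series instead of separate actual and budget lines.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
actual = [110, 95, 120, 130, 118, 142]
budget = [100, 100, 115, 125, 125, 135]
variance = [a - b for a, b in zip(actual, budget)]

fig, ax = plt.subplots()
# A single series to scan: bars above the line are over budget.
ax.bar(months, variance,
       color=["seagreen" if v >= 0 else "firebrick" for v in variance])
ax.axhline(0, color="gray", linewidth=0.8)
ax.set_title("Revenue: Actual vs. Budget Variance")
plt.show()
```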

These are some general practices which can be applied in most cases and will give users a much more positive experience with your reporting system. If you need help making sense of your reporting requirements, creating a coherent reporting strategy or implementing enterprise reporting, please contact us at info@ranzal.com.

Using Data Visualization and Usability to Enhance End User Reporting – Part 3: The Balance between Data and Visual Appeal

In part three of my blog series, I’ll provide an overview of the important balance between data and visual appeal when creating reports, including some of the latest research and findings.

Many users believe that once you have the metrics in place and understand what data users want, the next step is to create the reports.

In reality, a lot of thought and a careful eye are required when making design considerations to create charts, grids, and tables that convey the details in the simplest terms for user understanding. The right design choices enable users to easily see trends, outliers, or items needing attention.

Many people think that the more data they can cram in, the better. However, studies have shown that the average person can only store six chunks of information at a time.  Depending on how flashy and distracting your graphics and marketing logos are, you may have already used up half of your brain’s capacity without getting to any reports or dashboards.

Graphic overload may make one consider removing all distracting graphics, highlights, bolding, and visual clutter to show only the data – a novel concept, right?

But this is not the solution. A century’s worth of visualization studies and research has shown that eliminating graphics altogether does not resolve this dilemma.

In fact, there are several leading experts on this topic, including three key people leading the charge against clutter and visual distraction while championing more measured and thoughtful chart and dashboard design. These individuals are:

  • Edward R. Tufte
  • Colin Ware
  • Stephen Few

All three have published several books explaining how we interpret visual data, including what makes our eyes drawn to color and form and what aids understanding. Their work also explores “chart junk” – a term first coined by Tufte in 1983. Tufte defines “chart junk” simply as:

“Conventional graphic paraphernalia routinely added to every display that passes by: over-busy grid lines and excess ticks, redundant representations of the simplest data, the debris of computer plotting, and many of the devices generating design variation.”

The key concept of “chart junk” leads into another of Tufte’s mantras: the “data-ink” ratio. The idea here is that by minimizing the non-data ink, you maximize the data ink.  In other words, you can achieve the ideal balance of data and design by removing borders, underlines, shading, and other ink elements that don’t convey any message.
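
As a simplified illustration of maximizing the data-ink ratio, the following matplotlib sketch strips the borders and tick marks that carry no information; the data is made up:

```python
# Strip non-data ink from a simple line chart: no box, no tick marks, no grid.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
revenue = [3.2, 3.4, 3.1, 3.6, 3.8, 4.0]

fig, ax = plt.subplots()
ax.plot(months, revenue, color="steelblue")

# Remove ink elements that convey no message (Tufte's non-data ink).
for side in ("top", "right", "left"):
    ax.spines[side].set_visible(False)
ax.tick_params(length=0)
ax.set_title("Revenue ($M)")
plt.show()
```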

There are a lot of available resources out there on this topic by these authors and others.

Stay tuned for my final blog post, in which I will demonstrate how to effectively put these concepts into practice when creating reports.

Using Data Visualization and Usability to Enhance End User Reporting – Part 2: Usability

In this second part of my blog series, I’ll be looking at usability and what it really means for report design.

Usability takes a step back and looks at the interactions users have with reports. This includes how users actually use the reports, what they do next, and where they go. If users refer to another report to compare values or look at trends, consider condensing those reports into a single report or even creating a dashboard report with key metrics. This way, users have a clear vision of what they need, or what Oracle calls “actionable insight.” From there, you can provide users with guided navigation paths based on where they actually go today.

With improved usability, users can review an initial report and easily pull up additional reports, possibly from a different system or by logging into the general ledger/order entry system to find the detail behind the values/volumes. With careful design, this functionality can be built into reporting and planning applications, to provide a single interface and simplify the user interactions.

Here is a real-world example of how improved usability can benefit users on a daily basis: often a user will open a web browser, and an item is highlighted as a clickable link. Normally, clicking the link opens it in the same window, causing you to lose the original site you visited; clicking the back button can also lose that first site. With improved usability, clicking on a link would open a new pop-up window, so when finished, users can choose which windows to close and return to the original window.

The challenge with achieving improved usability is that many organizations lack visibility into how users actually use reports, especially with users spread all over the world. One possible solution is for organizations to ask users about their daily activities. The issue here is that users are often uncomfortable discussing what they do and where they go online. Companies can overcome this challenge by facilitating sessions with leading questions, including why users feel uncomfortable sharing their daily activities. These types of sessions can help organizations uncover the root causes/issues, giving them the insight to delve deeper and understand what lies behind the report request.

One common scenario where you could apply this approach is when users ask for a full P&L for their business units so they can compare figures and call anyone who is over budget.  By holding a session to understand the users’ specific needs/daily activities, organizations can instead produce a dashboard that highlights the discrepancies by region. With this dashboard, there is no need to compare and analyze; users can open it and see the indicators with a click of a button. Users can drill down for more information while placing that call!

In conclusion, improved usability means helping users get to the answer more quickly, without a lot of unnecessary steps. The old adage is true – KISS – Keep It Simple, Stupid!

Using Data Visualization and Usability to Enhance End User Reporting – Part 1: Introduction

Throughout my experience on client engagements, I’ve encountered a common issue: reports. In Part 1 of my blog I will address the reporting challenge, highlight some key benefits of standardized reporting, and outline an approach for implementing a standard enterprise reporting system.

On some engagements, clients want to reproduce the same old reports they had in the previous system, assuming that if users were content before, they will be happy with the same reports going forward. This is not the case. On a recent project, the client was in the midst of replacing a 10-year-old system. Several business users were not happy at the prospect of continuing with reports that were created a decade ago!  Other clients think that because the finance team has been getting the same reports for the past few years, that’s all they need.  This is another incorrect assumption. In many cases these reports are not being utilized, and users may even be using Excel to manipulate and turn the reports into something more useful.  So why would you give users the same old reports when, with a bit of foresight and planning, you can give them reports that enhance the way they do business and actually make it easier for them?

Some of the key benefits of an enterprise reporting system are:

  • Single version of the truth – everyone has the same revenue/COGS/opex numbers
  • Analysts have more time to analyze data and trends rather than consolidate data to make reports

And how do you enhance reporting and deliver added value to users?

To provide users with the necessary reports, it actually takes a multi-disciplined approach focusing on usability and data visualization.  This assumes you have created the back-end databases with appropriate structures to support your reporting needs.  It’s also critical to have a single definition for accounts and key metrics; this makes a big difference in reporting and in getting everyone aligned to the single version of the truth you are about to create.

In Part 2 of my blog, I’ll look at usability and what it actually means for report design.