Oracle Business Intelligence Cloud Service (BICS) September Update

The latest upgrade for BICS happened last week and, while there are no new end user features, it is now easier to integrate data. New to this version is the ability to connect to JDBC data sources through the Data Sync tool.  This allows customers to set up automated data pulls from Salesforce, Redshift, and Hive, among others.  In addition to these connections, Oracle RightNow CRM customers can now pull directly from RightNow reports using Oracle Data Sync.  Finally, connections between on premise databases and BICS can be secured using Secure Sockets Layer (SSL) certificates.

After developing a custom script using API calls to pull data from Salesforce, I am excited about the ability to connect directly to Salesforce with Data Sync. Direct connections to the Salesforce database allow you to search and browse for relevant tables and import their definitions with ease.


Once the definitions have been imported, standard query clauses can be used to limit extracts to relevant data, perform incremental loads, and further manipulate the data.

While there are no new features for end users, this is a powerful update when it comes to data integration. Using APIs to extract data from Salesforce meant that each extraction query had to be written by hand, which was time consuming and prone to error.  With these new data extraction processes, BICS implementations and data integration become much faster, furthering the promise of Oracle Cloud technologies.

A Comparison of Oracle Business Intelligence, Data Visualization, and Visual Analyzer

We recently authored The Role of Oracle Data Visualizer in the Modern Enterprise, in which we referred to both Data Visualization (DV) and Visual Analyzer (VA) as Data Visualizer.  This post addresses readers’ inquiries about the differences between DV and VA as well as how they compare to Oracle Business Intelligence (OBI).  The following sections describe the OBI and DV/VA products and provide a matrix comparing each solution’s capabilities.  Finally, some use cases for DV/VA projects versus OBI are outlined.

For the purposes of this post, OBI will be considered the parent solution for both the on premise Oracle Business Intelligence solutions (including Enterprise Edition (OBIEE), Foundation Services (BIFS), and Standard Edition (OBSE)) and Business Intelligence Cloud Service (BICS). OBI is the platform with which thousands of Oracle customers have become familiar for delivering robust visualization and dashboard solutions from nearly any data source.  While the on premise solutions are currently the most mature products, BICS is expected to become Oracle’s flagship product at some point in the future, at which time all features are expected to be available.

Likewise, DV/VA will be used to refer collectively to Visual Analyzer packaged with BICS (VA BICS), Visual Analyzer packaged with OBI 12c (VA 12c), Data Visualization Desktop (DVD), and Data Visualization Cloud Service (DVCS). VA was initially introduced as part of the BICS package but has since become available as part of OBIEE 12c (the latest on premise version).  DVD was released early in 2016 as a stand-alone product that can be downloaded and installed on a local machine.  Recently, DVCS has been released as the cloud-based version of DVD.  All of these products offer data visualization capabilities similar to OBI’s but feature significant enhancements to the way users interact with their data.  Compared to OBI, the interface is even simpler and more intuitive, which is an accomplishment for Oracle considering how easy OBI already is to use.  Reusable and business process-centric dashboards are available in DV/VA but are referred to as DV or VA Projects.  Perhaps the most powerful feature is the ability for users to mash up data from different sources (including Excel) to quickly gain insight they might otherwise have spent days or weeks manually assembling in Excel or Access.  These mashups can be used to create reusable DV/VA Projects that can be refreshed through new data loads in the source system and by uploading updated Excel spreadsheets into DV/VA.

While the six products mentioned can be grouped nicely into two categories, the matrix below outlines the differences between each product. The sections that follow provide commentary on some of these features.


Table 1:  Product Capability Matrix

Advanced Analytics provides integrated statistical capabilities based on the R programming language and includes the following functions:

  • Trendline – This function provides a linear or exponential plot through noisy data to indicate a general pattern or direction for time series data. For instance, while there is a noisy fluctuation of revenue over these three years, a slowly increasing general trend can be detected by the Trendline plot:

Figure 1:  Trendline Analysis


  • Clusters – This function attempts to classify scattered data into related groups. Users are able to determine the number of clusters and other grouping attributes. For instance, these clusters were generated using Revenue versus Billed Quantity by Month:

Figure 2:  Cluster Analysis


  • Outliers – This function detects exceptions in the sample data. For instance, given the previous scatter plot, four outliers can be detected:

Figure 3:  Outlier Analysis


  • Regression – This function is similar to the Trendline function but correlates relationships between two measures and does not require a time series. This is often used to help create or determine forecasts. Using the previous Revenue versus Billed Quantity, the following Regression series can be detected:

Figure 4:  Regression Analysis


Insights provide users with the ability to embed commentary within DV/VA projects (except for VA 12c). Users take a “snapshot” of their data at a certain intersection and make an Insight comment.  These Insights can then be associated with each other to tell a story about the data and shared with others or assembled into a presentation.  For readers familiar with Hyperion Planning, Insights are analogous to Cell Comments.  OBI 12c (as well as 11g) offers the ability to write comments back to a relational table; however, this capability is not as flexible or robust as Insights and requires intervention by the BI support team to implement.


Figure 5:  Insights Assembled into a Story


Direct connections to a Relational Database Management System (RDBMS) such as an enterprise data warehouse are now possible using some of the DV/VA products. (For the purposes of this post, inserting a semantic or logical layer between the database and the user is not considered a direct connection).  For the cloud-based versions (VA BICS and DVCS), only connections to other cloud databases are available, while DVD allows users to connect to an on premise or cloud database.  These connections will typically be created and configured either by the IT support team or by analysts familiar with the data model of the target data source as well as SQL concepts such as creating joins between relational tables.  (Direct connections using OBI are technically possible; however, they require the users to manually write the SQL to extract the data for their analysis).  Once these connections are created and the correct joins are configured between tables, users can further augment their data with data mashups.  VA 12c currently requires a Subject Area connected to a RDBMS to create projects.

Leveraging OLAP data sources such as Essbase is currently only available in OBI 12c (as well as 11g) and VA 12c. These data sources require that the OLAP cube be exposed as a Subject Area in the Presentation layer (in other words, there is no direct connection to OLAP data sources).  OBI is considered very mature and offers robust mechanisms for interacting with the cube, including the ability to use drillable hierarchical columns in an Analysis.  VA 12c currently exposes a flattened list of hierarchical columns without a drillable hierarchical column.  As with direct connections, users are able to mash up their data with the cubes to create custom data models.

While the capabilities of the DV/VA product set are impressive, the solution currently lacks some key capabilities of OBI Analysis and Dashboards. A few of the most noticeable gaps between the capabilities of DV/VA and OBI Dashboards are the inability to:

  • Create the functional equivalent of Action Links, which allow users to drill down or across from an Analysis
  • Schedule and/or deliver reports
  • Customize graphs, charts, and other data visualizations to the extent offered by OBI
  • Create Alerts which can perform conditionally-based actions such as pushing information to users
  • Use drillable hierarchical columns

At this time, OBI should continue to be used as the centerpiece for enterprise-wide analytical solutions that require complex dashboards and other capabilities. DV/VA is better suited for analysts who need to unify discrete data sources in a repeatable and presentation-friendly format using DV/VA Projects.  As mentioned, DV/VA is even easier to use than OBI, which makes it ideal for users who want an analytics tool that allows them to rapidly pull together ad hoc analyses.  As was discussed in The Role of Oracle Data Visualizer in the Modern Enterprise, enterprises that are reaching for new game-changing analytic capabilities should give the DV/VA product set a thorough evaluation.  Oracle releases regular upgrades to the entire DV/VA product set, and we anticipate many of the noted gaps will be closed at some point in the future.

The Role of Oracle Data Visualizer in the Modern Enterprise

Chess as a metaphor for strategic competition is not a novel concept, and it remains one of the most respected due to the intellectual and strategic demands it places on competitors. The sheer number of possible move combinations in a chess game (estimated to be greater than the number of atoms in the universe) means that it is entirely possible that no two people have unintentionally played the same game.  Of course, many of these combinations result in a draw, and many more set a player down the path of an inevitable loss after only a few moves.  It is no surprise that chess has pushed the limits of computational analytics, which in turn has pushed the limits of players.  Claude Shannon, the father of information theory, was the first to lay out the respective advantages of the human and the computer competitor as they attempt to wrest control of the opposing king from each other:

The computer is:

  1. Very fast at making calculations;
  2. Unable to make mistakes (unless the mistakes are part of the programmatic DNA);
  3. Diligent in fully analyzing a position or all possible moves;
  4. Unemotional in assessing current conditions and unencumbered by prior wins or losses.

The human, on the other hand, is:

  1. Flexible and able to deviate from a given pattern (or code);
  2. Imaginative;
  3. Able to reason;
  4. Able to learn [1].

The application of business analytics is the perfect convergence of this chess metaphor, powerful computations, and the people involved. Of course, the chess metaphor breaks down a bit since we have human and machine working together against competing partnerships of humans and machines (rather than human against machine).

Oracle Business Intelligence (along with implementation partners such as Edgewater Ranzal) has long provided enterprises with the ability to balance this convergence. Regardless of the robustness of the tool, the excellence of the implementation, the expertise of the users, and the responsiveness of the technical support team, there has been one weakness:  no organization can resolve data integration logic mistakes or incorporate new data as quickly as users request changes.  As a result, the second and third computer advantages above are hindered.  Computers making mistakes due to their programmatic DNA will continue to make these mistakes until corrective action can be implemented (which can take days, weeks, or months).  Likewise, all possible positions or moves cannot be analyzed due to missing data elements.  Exacerbating the problem, all of the human advantages stated previously can be handicapped, increasingly so depending on the variability, robustness, and depth of the missing or wrongly calculated data set.

With the introduction of Visual Analyzer (VA) and Data Visualization (DV), Oracle has made enormous strides in overcoming this weakness. Users now have the ability to perform data mashups between local data and centralized repositories of data such as data warehouses/marts and cubes.  No longer does the computer have to perform data analysis without all possible data available.  No longer does the user have to make educated guesses about how centralized and localized data sets correlate and how that correlation will affect overall trends or predictions.  Used properly, users and enterprises can leverage VA/DV to iteratively refine and redefine the analytical component that contributes to their strategic goals.  Of course, all new technologies and capabilities come with their own challenges.

The first challenge is how an organization can present these new views of data and compare and contrast them with the organizational “one version of the truth”. Enterprise data repositories are a popular and useful asset because they enable organizations to slice, dice, pivot, and drill down into this centralized data while minimizing subjectivity.  Allowing users to introduce their own data creates a situation where they can increase data subjectivity.  If VA/DV is to be part of your organization’s analytics strategy, processes must be in place to validate the result of these new data models.  The level of effort that should be applied to this validation should increase according to the following factors:

  • The amount of manual manipulation the user performed on the data before performing the mashup with existing data models;
  • The reputability of the data source. Combining data from an internal ERP or CRM system is different from downloading and aligning outside data (e.g. US Census Bureau or Google results);
  • The depth and width of data. In layman’s terms, this corresponds to how many rows and columns (respectively) the data set has;
  • The expertise and experience of the individual performing the data mashup.

If you have an existing centralized data repository, you have probably already gone through data validation exercises. Reexamine and apply the data and metadata governance processes you went through when the data repository was created (and, hopefully, maintained and updated).

The next challenge is integrating the data into the data repository. Fortunately, users may have already defined the process of extracting and transforming data when they assembled the VA/DV project.  Evaluating and leveraging the process the user has already defined can shorten the development cycle for enhancing existing data models and the Extract, Transform, and Load (ETL) process.  The data validation factors above can also provide a rough order of magnitude of the level of effort needed to incorporate this data.  The more difficult task may be determining how to prioritize data integration projects within an (often) overburdened IT department.  Time, scope, and cost are familiar benchmarks when determining prioritization, but it is important to take revenue into account.  Organizations that have become analytics savvy and have users demanding VA/DV data mashup capabilities have often moved beyond simple reporting and onto leveraging data to create opportunities.  Are salespeople asking to incorporate external data to gain customer insight?  Are product managers pulling in data from a system the organization never got around to integrating?  Are functional managers manipulating and re-integrating data to cut costs and boost margins?

To round out this chess metaphor, a game that seems nearly a draw or a loss can be given new life by promoting a pawn to replace a lost queen. Many of your competitors already have a business intelligence solution; your organization can only find data differentiation through the type of data you have and how quickly it can be incorporated at an enterprise level.  Providing VA/DV to the individuals within your organization with a deep knowledge of the data they need, how to get it, and how to deploy it can be the queen that checkmates the king.

[1] Shannon, C. E. (1950). XXII. Programming a computer for playing chess. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 41(314), 256-275. doi:10.1080/14786445008521796

Oracle Business Intelligence – Synchronizing Hierarchical Structures to Enable Federation

More and more Oracle customers are finding value in federating their EPM cubes with existing relational data stores such as data marts and data warehouses (for brevity, data warehouse will refer to all relational data stores). This post explains the concept of federation, explores the consequences of allowing hierarchical structures to get out of synchronization, and shares options to enable this synchronization.

In OBI, federation is the integration of distinct data sources to allow end users to perform analytical tasks without having to consider where the data is coming from. There are two types of federation to consider when using EPM and data warehouse sources:  vertical and horizontal.  Vertical federation allows users to drill down a hierarchy and switch data sources when moving from an aggregate data source to a more detailed one.  Most often, this occurs in the Time dimension whereby the EPM cube stores data for year, quarter, and month, and the relational data sources have details on daily transactions.  Horizontal federation allows users to combine different measures from the distinct data sources naturally in an OBI analysis, rather than extracting the data and building a unified report in another tool.

Federation makes it imperative that the common hierarchical structures are kept in sync. To demonstrate issues that can occur during vertical federation when the data sources are not synchronized, take the following hierarchies in an EPM application and a data warehouse:

Figure 1: Unsynchronized Hierarchies


Notice that Colorado falls under the Western region in the EPM application, but under the Southwestern region in the data warehouse. Also notice that the data warehouse contains an additional level (or granularity) in the form of cities for each region.  Assume that both data sources contain revenue data.  An OBI analysis such as this would route the query to the EPM cube and return these results:

Figure 2: EPM Analysis – Vertical Federation


However, if the user were to expand the state of Washington to see the results for each city, OBI would route the query to the data warehouse. When the results return, the user would be confronted with different revenue figures for the Southwest and West regions:

Figure 3: Data Warehouse – Vertical Federation


When the hierarchical structures are not aligned between the two data sources, irreconcilable differences can occur when switching between the sources. Many times, end users are not aware that they are switching between EPM and a data warehouse, and will simply experience a confusing reorganization in their analysis.

To demonstrate issues that occur in horizontal federation, assume the same hierarchies as in Figure 1 above, but the EPM application contains data on budget revenue while the data warehouse contains details on actual revenue. An analysis such as this could be created to query each source simultaneously and combine the budget and actual data along the common dimension:

Figure 4: Horizontal Federation


However, drilling into the West and Southwest regions will result in Colorado becoming an erroneously “shared” member:

Figure 5: Colorado as a “Shared” Member


In actuality, the mocked-up analysis above would more than likely result in an error since OBI would not be able to match the hierarchical structures during query generation.

There are a number of options to enable the synchronization of hierarchical structures across EPM applications and data warehouses. Many organizations manually maintain their hierarchical structures in spreadsheets and text files, often located on an individual’s desktop.  It is possible to continue this manual maintenance; however, these dispersed files should be centralized, a governance process defined, and the EPM metadata management and data warehouse ETL processes redesigned to pick up these centralized files.  This method is still subject to errors and is inherently difficult to properly govern and audit.  For organizations that are already using Enterprise Performance Management Architect (EPMA), a scripting process can be implemented that extracts the hierarchical structures to flat files.  A follow-on ETL process to move these hierarchies into the data warehouse will also have to be implemented.

The best practices solution is to use Hyperion Data Relationship Management (DRM) to manage these hierarchical structures. DRM boasts robust metadata management capabilities coupled with a system-agnostic approach to exporting this metadata.  DRM’s most valuable export method allows pushing directly to a relational database.  If a data warehouse is built in tandem with an EPM application, DRM can push directly to a dimensional table that can then be accessed by OBI.  If there is a data warehouse already in place, existing ETL processes may have to be modified or a dimensional table devoted to the dimension hierarchy created.  Ranzal has a DRM accelerator package that enables the synchronization of hierarchical structures between EPM and data warehouses and is designed to work with our existing EPM application DRM implementation accelerators.  Using these accelerators, Ranzal can perform an implementation in as little as six weeks that provides metadata management for the EPM application, establishes a process for maintaining hierarchical structure synchronization between EPM and the data warehouse, and enables federation of the data sources.

While the federation of EPM and data warehouse sources has been the primary focus, it is worth noting that two EPM cubes or two data warehouses could also be federated in OBI. For many of the reasons discussed previously, data synchronization processes will have to be in place to enable this federation.  The solutions described previously for maintaining metadata synchronization can likely be adapted for this purpose.

The federation of EPM and data warehouse sources allows an enterprise to create a more tightly integrated analytical solution. This tight integration allows users to traverse the organization’s data, gain insight, and answer business-essential questions at the speed of thought.  As demonstrated, mismanaging hierarchical structures can result in an analytical solution that produces unexpected results and harms user confidence.  Enterprise solutions often need enterprise approaches to governance; therefore, it is imperative to understand and address shortcomings in hierarchical structure management.  Ranzal has deep knowledge of EPM, DRM, and OBIEE, and of how these systems can be implemented to work tightly together to address an organization’s analytical and reporting needs.

ORACLE HYPERION CALC MANAGER – Part 4 – Creating RuleSets

In Part 1 of this series, we introduced Calc Manager, providing a general overview and explanation of some new terms.  In the second post we walked through the development of a Planning rule that utilized a run time prompt.  Part 3 covered templates available with Calc Manager.

In this, the final post in this series, we’ll step through the creation of a ruleset.  Rulesets are equivalent to Business Rule Sequences in Hyperion Business Rules.

We’ll begin by logging on to Hyperion Workspace and navigating to Calc Manager.  Once in Workspace, the navigation path is:  Navigate -> Administer -> Calculation Manager.

Once in Calc Manager, you’ll land on the System View tab, which appears as follows:


Once again, I’ll use my EPMA enabled version of my Planning app based on Sample.Basic.

To create a new ruleset, right click on the “RuleSets” node under your Planning app and select New.  You’ll be prompted to give the ruleset a name.  I’ll name mine Process_Application.  Additionally, you can change the app/database for this ruleset in this dialog box.

After I click OK,  the following screen loads:

You can display the rules available for your rule set by expanding the tree until you see the rules for your database. 

To add rules to the ruleset, simply drag and drop them onto the Ruleset Designer on the right side of the screen.

By default, the rules will run sequentially.  If you wish for rules to execute in parallel, select the RuleSet name within the RuleSet designer.  Check “Enable Parallel Execution” on the Properties tab at the bottom of the screen.

In order to run the ruleset, save, validate, and deploy it to your Planning application.

The series of posts that we’ve put together this summer was designed to give a user a basic understanding of how to work with Calculation Manager.  With any new technology, it’s best to dive in and immerse yourself to speed through the learning curve – Calculation Manager is no different.  Take the opportunity to experiment with the tool.  I feel that you’ll find it easy to learn the basics, and before long you’ll be developing your own rules.

If you have any questions about Calc Manager, please leave a comment on any of the posts in this series, or reach out to me via email at jrichardson@ranzal.com.

ORACLE HYPERION CALC MANAGER – Part 3 – Working with Templates

In Part 1 of this series, we introduced Calc Manager, providing a general overview and explanation of some new terms.  In the second post in the series, we walked through the development of a Planning rule that utilized a run time prompt.  In this post, we’ll explore templates provided within Calc Manager.

As with the Rule Designer, which is a great tool to help less experienced developers build rules, templates provide a simple way to develop rules for basic tasks in Planning and Essbase…tasks such as copying, clearing, exporting, allocating, and aggregating data.  In addition, you can design your own templates.

We’ll begin by logging on to Hyperion Workspace and navigating to Calc Manager.  Once in Workspace, the navigation path is:  Navigate -> Administer -> Calculation Manager.

Once in Calc Manager, you’ll land on the System View tab, which appears as follows:

Once again, I’ll use my EPMA enabled version of my Planning app based on Sample.Basic.

To access the predefined templates, right click on "Rules" and select "New".  Once you give the rule a name, the graphical designer is launched.  In the "Existing Objects" window, you should find a list of the pre-existing templates.  A list of the system templates follows:

CLEAR DATA

In order to use the system template to Clear Data, drag and drop “Clear Data” from the System Templates to the Rule Designer.  This will then invoke a member selection window asking you to specify the data to clear.  Keep in mind that this template generates a calc script utilizing the CLEARBLOCK command as opposed to a CLEARDATA command.

In my sample app, I select “FY11” for the Years dimension and “Final” for the Version dimension.  The dropdown box for “Clearblock Option” can be used to define the blocks to be cleared…”All” is the default.  The code that is generated appears below.

FIX ("FY11","Final")
  CLEARBLOCK ALL;
ENDFIX
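
For comparison, clearing a similar slice with CLEARDATA would look roughly like the sketch below (my own illustration rather than template output, and the "Budget" member is just an example).  CLEARDATA sets the values to #Missing but leaves the existing blocks in place, while CLEARBLOCK removes the blocks themselves.

FIX ("FY11","Final")
  /* clear Budget values in the fixed slice without removing blocks */
  CLEARDATA "Budget";
ENDFIX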

COPY DATA

The Copy Data template helps to walk the calc developer through the process of copying data from one slice of the database to another.

In the remainder of the wizard, you select the “Copy From” member and the “Copy To” member.  The calc script generated follows:

FIX (@RELATIVE("Measures" , 0),@RELATIVE("Periods" ,0),@RELATIVE("Product" , 0),@RELATIVE("Market" , 0),@RELATIVE("Years" , 0),"Budget")
DATACOPY "Working" TO "Final";
ENDFIX

AMOUNT-UNIT-RATE

The Amount-Unit-Rate template allows the developer to build a calc script to solve for either an amount, unit, or rate, basically whichever is missing.  I’ve added a couple of measures to my application to facilitate the demo.  Using the member selection wizard, I’ve selected “Sales” as my amount, “Cases” as my unit, and “Revenue per Case”  as my rate.  The script generated by the template follows:

"Sales"(
  IF ("Sales" == #missing and "Cases" != #missing and "Revenue per Case" != #missing)
    "Sales" = "Cases" * "Revenue per Case";
  ELSEIF ("Sales" != #missing and "Cases" == #missing and "Revenue per Case" != #missing)
    "Cases" = "Sales" / "Revenue per Case";
  ELSEIF ("Sales" != #missing and "Cases" != #missing and "Revenue per Case" == #missing)
    "Revenue per Case" = "Sales" / "Cases";
  ELSE
    "Sales" = "Cases" * "Revenue per Case";
  ENDIF
)

ALLOCATIONS

Two types of allocation templates are provided within Calc Manager.  The first template, Allocate Level to Level,  allows you to allocate from one level to another.   In my example with my Planning app, you would use this template to allocate marketing expenses  from product family to product using a driver like revenue.  This approach utilizes @ANCESTVAL to build the script.

The second template, Allocate Simple, allocates values based on a predefined relationship, such as Marketing->Market * Cases/Cases->Market.

Both templates walk the developer through the setup of the allocations, selecting members that are fixed throughout the process, offset members (if any), etc.
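
To give a feel for the output, an Allocate Level to Level script for the marketing example might look something like this sketch.  It is hand-written rather than actual template output, and it assumes the marketing expense is loaded at the product family level (generation 2 of the Product dimension) with Sales as the driver:

FIX ("Budget","Working",@RELATIVE("Market",0),@RELATIVE("Product",0))
  /* spread the family-level marketing expense to each product based on its share of family sales */
  "Marketing" = @ANCESTVAL("Product",2,"Marketing") * ("Sales" / @ANCESTVAL("Product",2,"Sales"));
ENDFIX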

AGGREGATION

The aggregation template aids the developer in creating a script to aggregate the application.  The first screen of the wizard, pictured below, allows you to select members for the FIX statement in the aggregation – here you would limit the calc to a particular version, scenario, or your non aggregating sparse dimension members.

The next screen prompts for dense dimensions to aggregate.  However, if dynamic calcs are properly utilized, this should not be necessary.

The third screen asks for sparse dimensions for the aggregation.  You should exclude any non aggregating sparse dimensions from this selection.

Next, you’re prompted for partial aggregations of dense dimensions.  Again – if dynamic calcs are used properly, this should not be an issue.

In the final screen of the wizard, the developer selects settings for the script…

The code generated by Calc Manager follows:

SET AGGMISSG ON;
SET FRMLBOTTOMUP ON;
SET CACHE HIGH;
FIX (@RELATIVE("Years" , 0),"Working","Budget")
CALC DIM ("Product");
CALC DIM ("Market");
ENDFIX

Please note that this code is not optimized.  In this example, I would use the following:

AGG ("Product","Market");

The code as generated by Calc Manager will result in an extra pass through the database – the calc can be accomplished with a single pass.  Additionally, AGG can be used in place of CALC DIM if there are no formulas on the dimensions being calculated.  Generally speaking, stored formulas on sparse dimensions should be avoided due to performance issues.

SET Commands

The next template walks the user through setting various SET commands for the calc.  This is a fairly straightforward exercise.

EXPORT DATA

This is another straightforward template that helps create a data export calc script.  You need to define the fixed members for the export,  delimiter, #MISSING value, export type (flat file, relational), etc.
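
To give a sense of the result, a flat-file export built with this template ends up as a calc script along the following lines.  This is my own sketch, not actual template output; the members, options, and file path shown are placeholders:

SET DATAEXPORTOPTIONS
  {
  DataExportLevel "LEVEL0";
  DataExportColFormat ON;
  DataExportOverwriteFile ON;
  };
FIX ("Budget","Working","FY11")
  /* export the fixed slice to a comma-delimited flat file, writing #MI for missing values */
  DATAEXPORT "File" "," "/tmp/budget_fy11.txt" "#MI";
ENDFIX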

In the final part of this series, due for posting on August 13, we’ll walk through the creation of a ruleset.  If you have any questions before the next post, please leave a comment!

ORACLE HYPERION CALC MANAGER – Part 2 – Creating a Planning Rule

In Part 1 of this series we introduced Calc Manager, providing a general overview and explanation of some new terms.  In this post, we will walk through the development of a rule for Hyperion Planning using the graphical interface within Calc Manager.

Again, in order to access Calc Manager, log on to Hyperion Workspace.  Once in Workspace, the navigation path is:

Navigate->Administer->Calculation Manager.

Once in Calc Manager, you’ll land on the System View tab, which appears as follows:


For purposes of this demonstration, I have created an EPMA enabled Planning application from the Sample.Basic application that we all know and love.  When the Planning node is expanded, this is what I see:

First, to help illustrate functionality available in Calc Manager, I’m going to create a script component that contains my standard SET commands for the rule.  In order to create the script component, right click on “Scripts” and click on “New”.  Give your script a name and click on “OK”.  This will launch the Component Designer.

From here, you have two options.  If you know what your SET commands need to be, you’re free to type them in directly.  If you wish to be prompted through the process, click on the button at the top left corner of the Component Designer window.  This will launch a window with all of the calc functions and SET commands.  The following shot displays the function selection interface for SET commands.

For my purposes, I’m going to directly type my SET commands into the Component Designer.  Once complete, save and validate.
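
As an illustration, a reusable SET-command component might contain something along these lines (a sketch only – pick the settings that make sense for your own application and calculations):

SET UPDATECALC OFF;    /* turn off intelligent calculation for batch rules */
SET AGGMISSG ON;       /* consolidate #Missing values during aggregation */
SET FRMLBOTTOMUP ON;   /* calculate sparse formulas bottom-up */
SET CACHE HIGH;        /* use the high calculator cache setting */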

Think of script components as an easy way to reuse code…SET commands, standard cube aggregations and the like. 

Once we have saved the script component with our SET commands, it’s time to develop our rule.  To begin, right click on “Rules” under the database node and select “New”.    Give your rule a name and click on “OK”.  This will launch you into the Rule Designer window.

In this example, I’ll create a rule that aggregates the cube, using a run time prompt for the Version dimension.

We can now begin to develop our rule.  First, we’ll select the script component for our SET commands that we developed earlier.  Simply drag this into the rule designer to the right of “Start”.  The Rule Designer window now looks like this:

Let’s take this opportunity to create our variable for the Version dimension run time prompt.  Go to the “Tools” menu and select “Variables”.  Once the Variable Navigator launches, expand the Planning, application, and database nodes.  I’m going to create a run time prompt variable for the Version dimension.  Right click on your rule name and select “New”.  Once I populate the fields on the “Replacement” tab, my screen looks like this:

When complete, save the variable.  Now, back to our rule…

We’re going to specify members for our “Fix” statement.  To do this, select “Member Range” in the New Objects portion of the Rule Palette.  

To add to the rule, drag and drop to the right of the SET command script.  My screen looks like this:

Next, we’ll populate the members for our Fix statement.  I’ll start with Measures.  For my rule, I want to select all of the level 0 measures.  Once I click on the Value field for the Measures dimension, an Actions box appears. 

I select a Function.  This invokes the function selection window that we observed earlier.  I choose @LEVMBRS from the list, which will then prompt for the dimension and level number.

I select Measures from the drop down box and enter “0” for the level name.  I’m going to repeat this process for all of my dense and non aggregating sparse dimensions, with the exception of the Version dimension.  This will be handled via the run time prompt.  For the Version dimension, select “Variable” in the Actions box.  Change the Category selection to “Rule” and this is what we see.

Highlight the variable and click OK.  My member range box looks like this:

Now, we’ll develop the script component to aggregate the Product and Market dimensions.  I’m going to drag a script from the “New Objects” portion of the Rule Palette into my member range.  The graphical display looks like:

Again, I’m going to select a function (AGG in this case).  I then select Product and Market from the dimension selector.

Now, save and validate.  To deploy the rule to Planning, select Quick Deploy:

Once deployed, the rule can be run from Planning.

In this post, we’ve provided a walk through on developing a new rule using the graphical designer.  More experienced developers can directly code the calc in script mode.  To convert to script mode, select “Edit” and “Script” from the menu. 
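
For reference, a rule like the one built above comes out looking roughly like this in script mode.  Treat it as a sketch: the member list and the {Version} run-time prompt reference are my assumptions based on the walkthrough, and the reusable SET-command component is represented by a comment.

/* reusable SET-command script component goes here */
FIX (@LEVMBRS("Measures",0),@LEVMBRS("Periods",0),@LEVMBRS("Years",0),{Version})
  /* aggregate the sparse Product and Market dimensions for the prompted Version */
  AGG ("Product","Market");
ENDFIX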

In the next post, due by July 31, we’ll explore templates and ruleset creation.  In the meantime, please leave a comment if you have any questions!

The Workflow Revolution – Changes to Financial Reporting

Thomas Friedman first talked about how globalization impacts business life in The World is Flat.  In this book, he describes the ‘flattening of the world’ as the idea that workers from around the globe could collaborate and work across systems and wide spans of geography.  One specific part of this flattening is a change he refers to as the “quiet revolution in software, transmission protocols” that he calls “the ‘workflow revolution’ because of how it made everyone’s computer and software interoperable.”

I see this amazing transformation offered within financial software today, but many companies don’t completely understand the value or the concepts to implement this approach.

New financial systems today allow for the immediate submission of data.  The best practice applications of these systems allow for the validation, translation and commentary of this submission to be owned by the end users.

When I discuss the applied concept with clients, I speak of this ‘changing conversation.’  Before this workflow revolution, legal entities in remote parts of the globe would prepare financials and fax them, or teletype them, to a corporate office, a process that was manual, slow, and disconnected.

The end users owning the process changes the communication of the business.  The typical conversation before might have been a submission of some financial data followed by a response that the data is incorrect or incomplete, and then a resubmission – all taking days to complete.  The process was also flawed in that it relied completely on the receiving member being proactive and finding the errors.  Surprisingly, many companies still use this approach.

The technology exists to solve this problem and provide two major benefits.  First, products today make the validation systematic, and hence reliable.  The end user knows immediately if the data is wrong and can resolve the issues.  The system provides consistency and reliability that cannot be accomplished with people alone.  Second, the end users can be made aware of potential problems and begin researching proactively.  This proactive approach cuts days from the process and improves data quality.

In my next blog post, I will discuss many of the controls I am seeing in systems like SAP’s BPC and Oracle’s HFM, and how they improve data quality and speed of reporting.


Welcome to Ranzal & Associates’ Blog!

Ranzal specializes in Business Intelligence and Business Performance Management with a concentration in Oracle/Hyperion’s toolkit. Ranzal works closely with corporate executives, line-of-business management, end users, and information systems departments alike to address the business issues and challenges inherent in data gathering, management, and dissemination. Organizations from various industries have engaged Ranzal with outstanding results.

Topics of discussion: