A Comparison of Oracle Business Intelligence, Data Visualization, and Visual Analyzer

We recently authored The Role of Oracle Data Visualizer in the Modern Enterprise, in which we referred to both Data Visualization (DV) and Visual Analyzer (VA) as Data Visualizer.  This post addresses readers’ inquiries about the differences between DV and VA as well as how they compare to Oracle Business Intelligence (OBI).  The following sections describe the OBI and DV/VA products and provide a matrix comparing each solution’s capabilities.  Finally, some use cases for DV/VA projects versus OBI will be outlined.

For the purposes of this post, OBI will be considered the parent solution for both on premise Oracle Business Intelligence solutions (including Enterprise Edition (OBIEE), Foundation Services (BIFS), and Standard Edition (OBSE)) as well as Business Intelligence Cloud Service (BICS). OBI is the platform thousands of Oracle customers have become familiar with to provide robust visualizations and dashboard solutions from nearly any data source.  While the on premise solutions are currently the most mature products, BICS is expected to become Oracle’s flagship product at some point in the future, at which time it should offer the full feature set.

Likewise, DV/VA will be used to refer collectively to Visual Analyzer packaged with BICS (VA BICS), Visual Analyzer packaged with OBI 12c (VA 12c), Data Visualization Desktop (DVD), and Data Visualization Cloud Service (DVCS). VA was initially introduced as part of the BICS package, but has since become available as part of OBIEE 12c (the latest on premise version).  DVD was released early in 2016 as a stand-alone product that can be downloaded and installed on a local machine.  Recently, DVCS has been released as the cloud-based version of DVD.  All of these products offer data visualization capabilities similar to OBI but feature significant enhancements to the manner in which users interact with their data.  Compared to OBI, the interface is even simpler and more intuitive, which is an accomplishment for Oracle considering how easy OBI is to use.  Reusable and business process-centric dashboards are available in DV/VA but are referred to as DV or VA Projects.  Perhaps the most powerful feature is the ability for users to mash up data from different sources (including Excel) to quickly gain insight they might otherwise have spent days or weeks manually assembling in Excel or Access.  These mashups can be used to create reusable DV/VA Projects that can be refreshed through new data loads in the source system and by uploading updated Excel spreadsheets into DV/VA.

While the six products mentioned can be grouped nicely into two categories, the following matrix outlines the differences between each product.  The sections that follow provide commentary on some of the features.

Table 1:  Product Capability Matrix

Advanced Analytics provides integrated statistical capabilities based on the R programming language and includes the following functions:

  • Trendline – This function provides a linear or exponential plot through noisy data to indicate a general pattern or direction for time series data. For instance, while there is a noisy fluctuation of revenue over these three years, a slowly increasing general trend can be detected by the Trendline plot:
Figure 1:  Trendline Analysis

  • Clusters – This function attempts to classify scattered data into related groups. Users are able to determine the number of clusters and other grouping attributes. For instance, these clusters were generated using Revenue versus Billed Quantity by Month:
Figure 2:  Cluster Analysis

  • Outliers – This function detects exceptions in the sample data. For instance, given the previous scatter plot, four outliers can be detected:
Figure 3:  Outlier Analysis

  • Regression – This function is similar to the Trendline function but correlates relationships between two measures and does not require a time series. This is often used to help create or determine forecasts. Using the previous Revenue versus Billed Quantity, the following Regression series can be detected:
Figure 4:  Regression Analysis

Insights provide users the ability to embed commentary within DV/VA projects (except for VA 12c). Users take a “snapshot” of their data at a certain intersection and make an Insight comment.  These Insights can then be associated with each other to tell a story about the data and then shared with others or assembled into a presentation.  For those readers familiar with the Hyperion Planning capabilities, Insights are analogous to Cell Comments.  OBI 12c (as well as 11g) offers the ability to write comments back to a relational table; however, this capability is not as flexible or robust as Insights and requires intervention by the BI support team to implement.

Figure 5:  Insights Assembled into a Story

Direct connections to a Relational Database Management System (RDBMS) such as an enterprise data warehouse are now possible using some of the DV/VA products. (For the purpose of this post, inserting a semantic or logical layer between the database and user is not considered a direct connection).  For the cloud-based versions (VA BICS and DVCS), only connections to other cloud databases are available while DVD allows users to connect to an on premise or cloud database.  This capability will typically be created and configured either by the IT support team or analysts familiar with the data model of the target data source as well as SQL concepts such as creating joins between relational tables.  (Direct connections using OBI are technically possible; however, they require the users to manually write the SQL to extract the data for their analysis).  Once these connections are created and the correct joins are configured between tables, users can further augment their data with data mashups.  VA 12c currently requires a Subject Area connected to a RDBMS to create projects.

Leveraging OLAP data sources such as Essbase is currently only available in OBI 12c (as well as 11g) and VA 12c. These data sources require that the OLAP cube be exposed as a Subject Area in the Presentation layer (in other words, no direct connection to OLAP data sources).  OBI is considered very mature and offers robust mechanisms for interacting with the cube, including the ability to use drillable hierarchical columns in Analysis.  VA 12c currently exposes a flattened list of hierarchical columns without a drillable hierarchical column.  As with direct connections, users are able to mash up their data with the cubes to create custom data models.

While the capabilities of the DV/VA product set are impressive, the solution currently lacks some key capabilities of OBI Analysis and Dashboards. A few of the most noticeable gaps between the capabilities of DV/VA and OBI Dashboards are the inability to:

  • Create the functional equivalent of Action Links which allow users to drill down or across from an Analysis
  • Schedule and/or deliver reports
  • Customize graphs, charts, and other data visualizations to the extent offered by OBI
  • Create Alerts which can perform conditionally-based actions such as pushing information to users
  • Use drillable hierarchical columns

At this time, OBI should continue to be used as the centerpiece for enterprise-wide analytical solutions that require complex dashboards and other capabilities. DV/VA is better suited for analysts who need to unify discrete data sources in a repeatable and presentation-friendly format using DV/VA Projects.  As mentioned, DV/VA is even easier to use than OBI, which makes it ideal for users who want an analytics tool that lets them rapidly pull together ad hoc analyses.  As was discussed in The Role of Oracle Data Visualizer in the Modern Enterprise, enterprises that are reaching for new game-changing analytic capabilities should give the DV/VA product set a thorough evaluation.  Oracle releases regular upgrades to the entire DV/VA product set, and we anticipate many of the noted gaps will be closed at some point in the future.

Security configuration in TaskFlows (EPMA 11.1.2) – Cookbook style

TaskFlows in Enterprise Performance Management Architect (EPMA) can be used to sequence any number of EPMA operations such as dimension updates, application deployments, and data synchronisations.  They are also able to execute batch jobs and send emails, all of which makes them potentially useful for automation and integration of EPMA applications.

TaskFlows can be scheduled by the built-in scheduler in TaskFlow Management, but there is also the option to run TaskFlows ‘interactively’ (i.e. on demand). For one particular client application, this was an attractive way to deliver the capability to kick off processes in-day on an ad-hoc basis, but we needed to ensure that some kind of security could be applied to the different TaskFlows so that ‘run’ permissions could be assigned to the correct people for the correct TaskFlow.

Our client system was based on 11.1.2, and the procedure below is based on 11.1.2.1.  I could not find much documentation around doing this on 11.1.2, and it appears that the controls around TaskFlow security have changed since 11.1.1.3.  Therefore I decided to set up a simple PoC as described below.

The summary approach is as follows:

  • Set up several simple TaskFlows
  • Set up security roles (Create the aggregated HSS roles)
  • Modify the Access Control to the TaskFlows
  • Set up several users & provision them to have access to different roles
  • Demonstrate the effect of these different levels of access

Set up some simple TaskFlows

TaskFlows can be created to run many different EPM processes such as cube deployment and data import/export, but for our example we are simply going to have each TaskFlow execute a basic batch file that writes to an output file, indicating that the TaskFlow has run successfully.
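
A minimal sketch of such a stage batch file might look like the following (the file name and output path are hypothetical; adjust them to your environment):

    @echo off
    REM TF_Stage1.bat - hypothetical batch file executed by TaskFlow TF_1, Stage1
    REM Appends a timestamped line to an output file as evidence that the stage ran
    echo TF_1 Stage1 completed at %DATE% %TIME% >> C:\Temp\TF_1_Stage1_output.txt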

The administrator should have, as a minimum, the following roles provisioned in Shared Services, to be able to create TaskFlows:

TaskFlow administration is then accessed in EPMA: go to ‘Navigate’ > ‘Application Library’, then choose ‘Administration’ > ‘Manage Task Flows’, which will bring up the following screen:

From here, new TaskFlows can be created, and existing ones can be edited, deleted, scheduled and executed.

Below I have set up TaskFlow TF_1 to execute a batch job in Stage1.

This is done by selecting the processing tab & choosing ‘Hub’ from the ‘Application’ drop-down, ‘Execute’ from the ‘Action’ drop-down, and specifying the name of the batch file to execute.

(By default, batch files are located in %HYPERION_HOME%\Common\Utilities, and output is routed to the Oracle\Middleware\user_projects\domains\EPMSystem folder.)

(Note: the example includes Stage2. TaskFlows can contain any number of stages; this example has two stages purely as a result of experimentation, and each stage simply executes a basic batch file to create an output file.)

For the purposes of testing access control, I created the following 4 TaskFlows (by using the ‘Save As’ control):

Now that we have set up several TaskFlows, we will set up some users to demonstrate access control.  The first step is to create ‘aggregated roles’ in Shared Services (HSS) to allow each TaskFlow to have a different level of access, by associating it with different roles.

Set up security roles (Create the aggregated HSS roles)

The pre-defined HSS Administrator roles ‘Manage TaskFlows’ & ‘Run TaskFlows’ will, by default, give a user access to manage / run a new TaskFlow, until the access control of that TaskFlow is edited.

To satisfy our requirement for differing levels of access, we create different ‘aggregated’ roles in HSS with different access levels.  There are 2 access levels for TaskFlows:

  • ‘Manage’ access allows the user to create, delete, schedule & run TaskFlows
  • ‘Run’ access allows the user to only run TaskFlows.

(Having no access simply means that a user will only be able to ‘View’ TaskFlow status.)

For our example, we want to achieve the following access to TaskFlows TF_1 -> TF_4:

This requires us to create the following aggregated roles, which will eventually be associated with the specific TaskFlows:

Creation of aggregated  roles is simple.

1.   Log into HSS as admin, expand the ‘User Directories’ > ‘Native Directory’ tree in the explorer, right-mouse-click on ‘Roles’, and choose ‘New’:

2.   This brings up the ‘Create Role’ dialog – we enter a Role name, and choose ‘HUB-11.1.2’ as the product group (each product group has its own list of relevant ‘Available Roles’, but TaskFlow related roles exist in the ‘HUB…’ group):

3.   Select ‘Next’, then from the left-hand list select ‘Manage TaskFlows’, move it to the right-hand list, and ‘Save’:

4.   For the Roles which will have Run access, we simply choose the ‘Run TaskFlows’ role from the left-hand list instead.

5.   When we have finished creating all the new aggregated  roles we should have a list in HSS like this:

The next task is to edit our TaskFlows to utilise these new roles.

Modify the Access Control to the TaskFlows

As admin, log into EPMA & navigate to the ‘Manage TaskFlows’ screen.

Select the TaskFlow TF_1 & choose ‘Access Control’:

Each TaskFlow has the option to set the role allowed for ‘Manage’ access and the role allowed for ‘Run’ access:

The drop-down for the ‘Manage Permission Role’ displays all roles (pre-defined and aggregated) that have ‘Manage TaskFlow’ level access:

The drop-down for the ‘Execute Permission Role’ displays all roles (predefined and aggregated) that have ‘Run TaskFlow’ level access:

You can see the Aggregated roles that we created earlier are available.  The ‘Administrator…….Taskflow’ roles correspond to the predefined ‘Manage..’ and ‘Run..’ roles directly under the Administrator node in HSS:

So we can now associate our task flows with the different aggregated roles available, for both the ‘Manage’ & the ‘Run’ access level.

So for TaskFlow TF_1, we set the roles as follows:

The TaskFlows TF_2, 3 & 4 are configured as per the following table:

Now we need to provision our users to give them the correct level of access.

Set up several users & provision them to have access to different roles

Set up the example users (User1 -> User5) in HSS & provision them in the usual way as follows:

I have found that a user must be provisioned with the ‘Create Integrations’ role as well as the ‘Manage’ or ‘Run’ roles in order to get access to the ‘Manage TaskFlows’ screen.

(These are the roles required for each user in order to achieve the required access to the TaskFlows – it will be necessary to assign users to additional roles to achieve access rights to other parts of the product!)

Running a ‘Provisioning Report’ from HSS on the 5 users for Shared Services applications, we can see what their provisioning is:

Now we can log in to EPMA as each of the 5 users and see the combined effect of the role creation, user provisioning & ‘Access Control’ configuration:

User1:

User2:

User3:

User4:

User5:

Additional Information

The pre-defined Shared Services roles ‘Manage TaskFlows’ and ‘Run TaskFlows’ apply by default to any new TaskFlows that are created, because new TaskFlows have the access control roles ‘Administrator Manage Taskflow’ and ‘Administrator Execute Taskflow’ set by default.  Any new TaskFlows with these default access control settings will only be available to users provisioned with the pre-defined Shared Services roles ‘Manage TaskFlows’ and ‘Run TaskFlows’.  Conversely, any users provisioned only with these pre-defined Shared Services roles would have access only to TaskFlows with these default access control settings.

The ‘Create Integrations’ role that is required to give any user access to the ‘Manage Taskflows’ screen can be embedded into the aggregated roles, rather than provision the role to the user separately:

I found that if a user has roles with ‘Manage TaskFlow’ access to TaskFlow A and ‘Run TaskFlow’ access to TaskFlow B, when the user logs in they appear to have ‘Manage TaskFlow’ access to both TaskFlow A & B. This means that the user can delete TaskFlows that they were not intended to have ‘Manage TaskFlow’ rights to, so it would be best to avoid this scenario.

Finally, I found that logging out of one user session and logging back in as another user, without closing down the browser session, had the effect that the second user’s privileges appeared to be the same as the first, even if their access was lower.  I worked around this by exiting all browser sessions & clearing the browser cache between logins.  This could have been a local environment issue.

References

The Oracle library has the following guidance on TaskFlows and security relating to them:

http://docs.oracle.com/cd/E17236_01/epm.1112/hss_admin_1112200.pdf – Chapter 6, Managing Roles; Chapter 8, Managing TaskFlows

http://docs.oracle.com/cd/E17236_01/epm.1112/epma_admin.pdf – Part IV, Using Task Automation

ORACLE HYPERION CALC MANAGER – Part 1

With the continued investment in the Hyperion tool set by Oracle, there was a desire to centralize the development of calculations for HFM, Essbase, and Planning.  As a result, Oracle Hyperion Calculation Manager was born.  Calc Manager is a powerful tool for developing and administering rules for Planning and Essbase.  An intuitive graphical interface is available to help in the development process, expediting the learning curve for people just beginning to dip their toes into the world of Oracle Hyperion Planning and Oracle Essbase.

Over the course of several posts this summer, I’ll explore Calc Manager functionality from the Essbase and Planning points of view.  For EPMA-enabled Planning applications, use of Calc Manager is required.   With version 11.1.1.3, Calc Manager can be used with Classic Planning apps as well.  However, the focus of my blog posts will be EPMA-enabled apps, as Classic Planning rides off into the sunset.

Calc Manager, a component of EPM Architect, is integrated into EPM Workspace, the standard entry point for many Hyperion applications.  In order to access Calc Manager, log into Workspace and select Navigate->Administer->Calculation Manager (see the screen shot below for the navigation path).  However, before we get too far into actually navigating the tool, we’ll need to get comfortable with the terminology within Calc Manager.

There are three types of objects within Calc Manager:  components, rules, and rulesets.  Components are smaller pieces of a larger rule.  Things like SET commands, FIX statements, formulas, etc. are examples of components.  I’ll explore this in much greater detail in a future post, but think of the standard SET commands that you use in all of your scripts – these can be saved separately as a script component and pulled into a new rule very easily.  Included below is a shot of the Component Designer with a sample of some standard SET commands.

Essentially, rules are the finished calc script, similar to Business Rules in the past.  Rules are used for modeling/allocations/aggregations and the like.  Rules can be built using system templates.  Oracle has provided standardized templates for tasks such as clearing, copying, allocating, aggregating, and exporting data.   Again, these templates will be explored in additional detail in a future post.

Rulesets are similar to Business Rule Sequences under Hyperion Business Rules.  Rulesets can be used to launch rules sequentially or simultaneously depending on your logic requirements.

Now that we’ve covered the basic terminology related to Calc Manager, my next post, which should be online by July 4, will walk you through creating a rule for an EPMA-enabled Planning app.  In the meantime, if you have any questions, leave a comment!

Special uses for Life Cycle Management (LCM)

In my previous post, I showed how to use LCM to back up or copy an entire planning application environment.  Here I’ll expand on that subject a bit by showing some other uses you may find handy.  This is by no means meant to be an exhaustive collection – just a few suggestions you may find useful and which may provoke ideas for other uses.

Copy single dimension from one app to another

This can be done for any dimension, including the standard Planning dimensions.  Here, to expand on the subject, we are also going to export from the “Organization” dimension in one Planning app & import to the “Entity” dimension in another.

Select the artifacts to export (no harm in copying everything).

Click thru the next screen to this one.

Since we need to change the dimension name, we must export to files, not directly to the other app.

Then click thru the remaining screens to execute the migration.

After the export finishes, go to the \Hyperion\Common\Import_export directory. Under the Username@Directory folder find the files you exported.

In the “info” directory, edit “listing.xml” changing all instances of “Organization” to “Entity”.

Now find the XML file for the dimension to be migrated with name change.

Rename to the target dimension name.

Now edit the file to change “Organization” to “Entity”.
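
As a rough command-prompt sketch of the rename step plus a quick sanity check (the file names follow this example, but your extract’s folder layout may differ; the string replacements themselves are done in a text editor):

    REM Run from the folder that contains the exported dimension file (hypothetical location)
    ren Organization.xml Entity.xml
    REM After editing, confirm no stray references to the old dimension name remain
    findstr /s /i "Organization" *.xml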

In Shared Services->Application Groups->File System, open the extract and select the (newly renamed) Entity dimension.

Define Migration…

…and click thru the remaining screens to execute the migration.

Lights-out Operation

In Shared Services select the artifacts to be backed up and define migration.

We need to back it up to files so type in a folder name…

…and click thru the remaining screens until you get here.

Now, instead of clicking the Execute button, click “Save Migration Definition.”

You will get this screen…

…click “Save.”

Shared Services wants to save “MigrationDefinition.xml” where you tell it to.

You can name the file any name you want (I suggest using naming conventions to differentiate the operation being saved) and anywhere you want.

After saving the file you will get this…

…click “Close” and the backup definition will be saved.

Now look in the Automation folder where the xml file was saved.

The file has everything Shared Services needs to run the backup from the command line utility except the USERID and PASSWORD.

Edit in TextPad or other text editor and type in a Userid and password.

After running the job the password is automatically encrypted.

The job is run from an Oracle-supplied process, “utility.bat”…

…and you pass it the path to the migration definition file you created above.
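
For example, a lights-out run might look like the following sketch (the utility location and automation folder are assumptions; check the actual path of utility.bat in your installation):

    REM Hypothetical paths - adjust to your installation and automation folder
    cd /d D:\Hyperion\common\utilities\LCM\9.5.0.0\bin
    utility.bat D:\Automation\MigrationDefinition.xml > D:\Automation\LCM_backup.log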

You should channel the output to a log file so you will have a record of success or failure.  The following message is an excerpt from that log, which in turn lists the detailed log location & name, whether the process was a success or failure, and exactly where any failure occurred in the process.

I hope I’ve shown you enough to get you started using LCM.  It can certainly be a valuable tool, whether you want to do one-time tasks or perform lights-out operations such as regular backups.  The important thing to remember is to test it and see what, if any, problems you will have and either fix those or work around them.

Using Oracle’s Hyperion® Life Cycle Management

What is LCM?

LCM (Life Cycle Management) is a tool which can be used to migrate Hyperion applications, cubes, repositories, or artifacts across product environments and operating systems. It is accessed through the Shared Services Console.

Does it work?

After using LCM at a few clients I think the answer is a definite YES, but there needs to be a realistic setting of expectations:  Yes, LCM has some very good and handy uses; but NO, it is not necessarily going to be a painless, simple answer to your migration and/or backup needs.

What can I do with it?

You can use it for migrations:

  • One environment to another
  • One app to another (same SS environment)
  • Selected dimensions or other artifacts

And for backups/restores, including keeping two separate environments synchronized:

  • Selected artifacts
  • Lights-out

Products which can be migrated are:

  • Shared Services
  • Essbase
  • Planning
  • Reporting
  • HFM
  • The dimensions housed in EPMA

This blog is going to concentrate on using LCM for Planning application migrations although, as you can see from the list above, it can be used for other products as well.

First I’ll show how a migration is done, using screen shots, to give a detailed look.  Then I’ll point out things to look out for including things which will cause the migration to fail — with work-arounds where possible.

To migrate an entire Planning application, you will need to copy four areas:

  1. Shared Services
  2. Essbase (for Planning, only the Essbase global variables are needed; all app/DB-specific variables are migrated with the Planning application)
  3. Planning Application
  4. Reporting and Analysis (if applicable)

The order in which you export these is not important, but when doing the import the order is very important.

Some important considerations:

  • Target app can have different name from source
  • Source and destination plan types must match
    • Can be changed by editing the files
    • Target plan types must be in same order as source
  • Start year must be the same
    • Number of years doesn’t need to match
  • Base time period must be the same
  • Target app’s Currency settings must match Source
  • Standard Dimension names must match
    • Can be changed by editing the files

When exporting any application it is advisable to just export everything.  If necessary you can be selective on the import side.

Start the process by opening the Shared Services console and going to Application Groups –> Application (in this case, Shared Services under Foundation).

In the lower part of the screen, click “Select All” and then “Define Migration”

Now go through the screens:

Leave each field with an * and choose “Next.”

Type in a file name for the export.  It is advisable that you use a naming convention for this since you will end up with (possibly multiple) files for each application.

Review the destination options & click “Next.”

Finally, review the Migration summary and click “Execute Migration.”

NOTE:  If this process is going to be re-run in a lights-out environment you should instead choose the “Save Migration Definition” button.  I’ll discuss this more fully later on.

You will get this information screen.  Click Launch Migration Status Report to actually see the migration progress.

As long as the migration is running you will get a status of In Progress.

Click Refresh to keep checking status (if desired) until you get a status of Completed or Failed.

All of the other applications can be exported this same way, each with slightly different screen sequences but generally the same process.

The primary differences will be for Planning and Essbase where, if there are other applications in the same Shared Services environment, they will be offered as possible targets for the export, in addition to the File System.  Selecting one of these will cause a direct migration from the source application to the selected target application.

After the exports are finished the LCM export files can be copied to the target server environment, if needed.  These export files can be found on the Shared Services server under \Hyperion\common\import_export\username@directory.

Copy the entire directory (in this example, Essadmin@Native Directory) to the Hyperion\common\import_export directory on the target server.
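
A sketch of that copy from a command prompt (the drive letters and target server name are hypothetical):

    REM Hypothetical source and target paths - adjust to your servers
    robocopy "D:\Hyperion\common\import_export\Essadmin@Native Directory" "\\TARGETSERVER\D$\Hyperion\common\import_export\Essadmin@Native Directory" /E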

The import side is where things are more likely to be tricky.  Here you will reverse the process, selecting the export files in proper order (Shared Services, Essbase, Planning & Reporting) and importing them to whatever target is appropriate.

Start the process by logging in to the Shared Services console as the same username you used in the export process.  Under Application Groups–>File System, find the appropriate export files and click “Define Migration.”

Click through the screens, including the Shared Services screen where you select the target application to import to.

On the destination option screen select Create/Update and increase the Max errors if desired (default = 100)…

…and run the migration.

For the Planning import select all to begin.

Click through the screens and select the planning application to import to.

And click through the remaining screens to execute the migration.

The Reporting migration is similar.  Select all the artifacts you want to import.

And go through the remaining screens to execute the migration.

In many cases, especially where you are keeping two identical environments in sync, these migrations should go smoothly and complete without error.  However, at other times, especially when doing an initial migration or one where the security will be much different from one to another, you may have to make several passes at the migration.  When even one item fails to migrate successfully, LCM will send back a status of “Failed”.  Click on that link in the status report and LCM will tell you what items failed to migrate.  All other items will usually have migrated successfully.   You will then have to figure out why the item failed and either fix the problem, work around the problem or ignore it and migrate the item another way.

Here are some things I’ve found which will cause you problems in using LCM:

  • In exporting a planning application with many substitution variables, the EXPORT failed – refusing to export all of the variables.  This was worked around by exporting only the variables and then exporting everything except the variables.
  • Alternatively, you can adjust the group count/size settings, as well as the report and log file locations, within the migration.properties file.  Default settings usually are:
    • grouping.size=100[mb]
    • grouping.size_unknown_artifact_count=10000
  • Using “All Locations” in HBR will cause failure for those forms.
  • Essbase server names—if not same in source & target, you will have to modify the import files for target name.
  • Report name length is limited to 131 characters, less the folder name.
  • Dim members “Marked for Delete” won’t migrate.  You will have to delete them using a SQL query if you want them migrated.
  • Form folders may get re-ordered on migration.  You can work around this by manually adding the folders to the target application in the proper order.  LCM will not reorder existing folders.
  • Doesn’t support parentheses ( ) in form names.  You won’t get an error indication in the export/import – the forms just won’t be there in the imported app.  You’ll have to rename the forms to get them to come over.
  • Member formulas need to be in planning – if just in Essbase they don’t come over.  If this is a one-time migration you can use the EAS migration utility to bring the outline over after the LCM migration.
  • You must manually delete Shared Services groups in the target app if you deleted them in the source app (or they will remain).
  • Reports – you must manually update the data source in the target.
  • Members don’t come over with certain special characters.
  • Doesn’t support Clusters; must use the outline as HBR location.
  • Global Variables with limits in their definition don’t work.

Well, now you should be able to use LCM and judge for yourself whether it is right for your application.  In another BLOG I’ll show how to run LCM in a lights-out mode and also how to do some modifications to the export files so you can do things like sharing dimension detail between planning applications.

Enjoy!

Why not EPMA…who needs DRM?

Should I use EPMA or DRM?

Over the past several months, and quite possibly the past year or two, there have been numerous discussions regarding the need for a separate master data management (MDM) tool such as Hyperion / Oracle Data Relationship Management (DRM) to manage Hyperion metadata outside of the Enterprise Performance Management Architect (EPMA) tool that comes with Hyperion System 9 and Oracle Fusion 11.

Recently at a users’ conference, I heard comments like “EPMA is DRM ‘Light’” and “EPMA is DRM with a Web interface”.

Hyperion, and obviously now Oracle, has invested deeply in EPMA, and it is difficult to identify how and where it might differ from the DRM product. Oracle has even used portions of the DRM base code and underlying architecture in EPMA, and when looking at vaporware demos, you might draw conclusions similar to the quotes above. In reality, EPMA, in its current state, is a pumped-up version of the old Hyperion HUB as it relates to metadata management. Granted, EPMA has updated the user interface, leveraging the glyphs (icons) and nomenclature from DRM, while completely missing the intellectual aptitude that a master data tool provides.

Below are the key uses provided in a recent Oracle presentation, which also shows the differences between EPMA and DRM:
EPMA
  • Unifies and aligns application administration processes across the Hyperion EPM system
  • Imports and shares business dimensions in the Dimension Library
  • Builds, validates, and deploys applications in the Application Library
  • Designs and maintains business rules in Calculation Manager
  • Loads and synchronizes transaction data into and between EPM applications

DRM

  • Manages change of business master data across enterprise applications
  • Consolidates and rationalizes structures across source systems
  • Conforms dimensions and validates the integrity of attributes and relationships
  • Synchronizes alternate business views with corporate hierarchies
  • Key Features include:

    i.   Versioning and Modeling
    ii.  Custom rules and validation
    iii. Configurable exports
    iv.  Granular security
    v.   Change tracking

*Oracle Hyperion Data Relationship Management, Fusion Edition 11.1.1 – Robin Peel