Demystify the Balance Dimension in Profitability and Cost Management

Management Ledger models, whether Hyperion Profitability and Cost Management (HPCM) or Profitability and Cost Management Cloud Service (PCMCS), have been around for a few years, but I still receive emails asking for help with figuring out where the results are coming from. This request is often related to a lack of understanding of the Balance dimension. Here are some key pieces of information regarding this system dimension, how it works, how it should be used when defining allocations and integration jobs, and how to leverage it to troubleshoot your allocations.

Before we have a look at each member within this dimension, let’s go over some basic rules that govern the creation of an HPCM or PCMCS Management Ledger (ML) application:

  1. All HPCM or PCMCS ML applications must contain one, and only one, dimension named Balance.
  2. Its members and their properties cannot be edited or removed.
  3. You don’t need to import a file in order to load/setup the Balance dimension; members are created automatically when deploying an application for the first time.
  4. You can choose to rename the Balance dimension (translate it into another language, for example) when you first set up the application in PCMCS.

For the most part, the Balance dimension members are quite easy to follow and understand, but familiarity with usage guidelines helps to avoid issues during development and supports troubleshooting.

Demystifying the Balance Dimension in PCM - Image 1

  • Input — Used to store data input/pre-allocated data sets, whether these are pool or driver data sets. Data is generally loaded against this member in combination with the NoRule member. Input can be populated through custom calculations, but it is generally advised to keep it dedicated to valid data loads/input rather than for storing calculated or allocated results.
  • Adjustment In — Used for manual adjustments to the Input data prior to running allocations. In this case, the Adjustment In data is loaded against the NoRule member. Any data manually submitted to this member against a Rule ID member may be wiped out during subsequent data loads and calculations. Adjustment In can also be used in custom calculations to store intermediary values or calculated driver data.
  • Adjustment Out — Same usage as Adjustment In, but with a negative data value.
  • Allocation In — This member will be populated against the Destination (Target) intersection of the allocation rule.
  • Allocation Out — This member will be populated against the Source intersection of the allocation rule and the corresponding Rule ID member, or against a predefined "Offset" intersection that is custom defined for a given rule.
  • Allocation Offset Amount — Displays an amount that further reduces an Allocation In member, if one was used in addition to Allocation Out. An example of how this member is populated and used appears later in this post.
  • Net Change — Represents the total change for a given intersection, regardless of alternate offset actions.
  • Net Balance — Sum of Input (the initial data loaded) and any Net Change made to the same intersection.
  • Remainder — Displays the difference between Allocation In and Allocation Out plus Allocation Offset Amount, if any.
  • Balance — The amount resulting when adjustments, allocations, and offsets are considered.

Rules assign funds to destinations based on the way you have defined the allocation logic (member selections, sequencing, concurrency, etc.). Allocation In and Allocation Out amounts are generated when the Profitability model calculations are executed. Each pair of adjustments and allocations (the "in" and the "out") should result in a zero sum in order to balance the transaction. Each adjustment and allocation draws on the Input data set, and the difference between what was taken from Input and what remains at the end of an allocation is accounted for in the Remainder.

The Remainder member, not the Net Balance member as most would assume, is the source of your allocations. Remainder takes alternate offsets into consideration and ensures we do not double book or double allocate the same data source, regardless of where the offset was applied.
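
As a simple illustration with made-up figures (assuming a single rule that fully allocates a loaded pool of 100):

Input (source intersection) = 100
Allocation Out (source intersection) = -100
Allocation In (spread across the target intersections) = +100
Remainder (source intersection) = 100 - 100 = 0

A Remainder of 0 on the source confirms the pool was fully consumed; had only 80 been allocated, the Remainder would show the 20 still available for allocation.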

To further explain the Balance dimension usage, I have used an example from the Bikes default application BksML30, which can be deployed into PCMCS through a few clicks.

The original application had only one adjustment rule, populating the Adjustment In member. I copied that rule and reused it to demonstrate the same usage for the Adjustment Out member. Remember that the Adjustment Out aggregation operator is still +, so if you want to offset data sets, you must load your data with the appropriate sign; in other words, negate the result either by multiplying by -1 or by simply adding a minus sign to the formula.

The new ruleset contents will look like this:

Demystifying the Balance Dimension in PCM - Image 2

Our initial data set is loaded on the Input/NoRule combination for the two accounts – Rent and Utilities – on the intersection with the Corporate Entity.

The data adjustments are stored against Adjustment In and Adjustment Out.

Demystifying the Balance Dimension in PCM - Image 3

In order to further illustrate how to correctly follow the allocation process, I split the original Reassignment rule into 2 rules, each dedicated to its own account. I also updated the metadata by adding two new Account siblings to Rent and Utilities as offsets for each account.

Alternate offsets are simply intersections of members where you would like to store the offset data point, if it should differ from the source of the allocation.

The Remainder member demonstration is connected to the usage of alternate offsets, and before we go into the details of the numerical example, I would like to list out a few rules for setting up alternate offsets:

  • Alternate offsets are available for selection only in standard allocation rules. For Custom calculations, your Offset custom calc would have to be pointed to the appropriate “alternate” target.
  • All dimensions, including the ones predefined in the rule context, are repeated in the Offset screen as soon as you select “Alternate Offset Location.” You must select a single base level member for at least one dimension.
  • There is no "Same As Source" (SAS) option for offsets. Dimensions for which the offset should remain on the Source intersection can simply be left blank in the Offset screen selections.
  • If each source member selection has its own offset, you will have to split the rule up into as many granular rules as needed in order to cover the individual offset selection. For example, if you have 6 accounts, each with its own offset account equivalent, you will have to create 6 standard allocation rules to create the individual offset selection for each account.

Going back to the numerical example and the usage of the Offset tab, in the updated rule I have selected the member intersections below:

Demystifying the Balance Dimension in PCM - Image 4

The Source account was Rent, target is “Same as Source” (SAS), and the alternate offset account is FACOffset_Rent.

After the rules are executed, we see the results below; focus on the Allocation Offset Amount member and the Allocation Out member.

Even though the offset was applied to an alternate account for both Rent and Utilities, the allocation engine correctly identifies the Remainder of these two accounts as being 0.

  1. The first step behind the scenes is for the allocation to correctly distribute the data to the target intersections.
  2. The second step is to perform the offset on the intersection specified by the user, if different from the source intersection.
  3. The third step is to copy the Allocation Out value onto the Source intersection, against the Allocation Offset Amount member. This final step is performed via a custom calculation embedded in the PCMCS-generated scripts, which ensures there will be no double counting of pool data.

So even though we "moved" data from the Rent account on the Corporate Entity to other Entities, on the same target Account, the offset was performed on an alternate member. This allows us to create a report with Rent (Input), Rent (Allocation In), and FACOffset_Rent (Allocation Out).

This is not a typical example of how alternate offsets are used from a functional standpoint, but it helps explain the mechanics behind the scenes. This alternate offset option is mostly used in cases where a Bill Out account and a Chargeback account will differ and allows users to trace which portion of a chargeback account is coming from different source accounts.

The final goal of an allocation is to generate a Remainder member with a value of 0. This ensures the total allocation of a pool data set, whether it was loaded or received from prior allocation steps. If the Remainder member has a positive value, you have not fully utilized your pool data. If the Remainder member has a negative value, you have over-utilized your pool data, which may, in some cases, be intentional.

Demystifying the Balance Dimension in PCM - Image 5

In situations where, due to licensing costs or other considerations, you will not give access to the PCMCS ML application to users who need to understand the various components of a data point flowing through the allocation steps, the usage of alternate offsets throughout your allocation flow might be helpful.

When talking about reporting out of PCMCS ML, our clients always emphasize simplicity, and we often get requests to remove the Rule and Balance dimensions from final reporting solutions to cut the noise and give finance users only the core information. In such situations, the usage of alternate offsets has proved beneficial because finance users can still follow the flow and components of a cost without having to deal with rule-by-rule detail. If further investigation is necessary, it can be pursued within the PCMCS ML model itself rather than in the external reporting solution.

If you need further help with figuring out the purpose and usage of the Balance dimension within PCMCS, email us at infosolutions@alithya.com. Our PCM Center of Excellence team is ready to share leading practices and industry-specific solutions that accelerate your ROI and expand the capabilities of your chosen profitability software.

A Cloud vs. On-Premise Comparison for Profitability: All You Need to Know

In a previous blog post, the history of Hyperion Profitability and Cost Management (HPCM) was discussed along with which modules made it to the Cloud. If you are after a more clear-cut comparison between Cloud and on-premise, the table below should fit the bill. Tables generally cannot provide all the needed context, yet they are, at times, the best starting point for understanding the benefits and capabilities of one solution compared to another. The PCMCS vs. HPCM table below is not exhaustive, and if you have questions on any of the items covered, email us and we will provide further details.

PCMCS 12-11 Image 1

Choosing between on-premise and Cloud depends on which factors are most significant to your organization, setting aside the overall licensing cost.

Allocations and data assignments cannot have “If” statements attached to them in the on-premise version of the software – a feature fundamental to supporting Tax transfer pricing capabilities.

Cross-dimension mapping is functionality that is not available in HPCM. This mapping ensures the assignment of data sets to the same ID/name across multiple dimensions by using the "Same as Source but Different Dimension" option within PCMCS, supporting intercompany activities. This feature alone, or the lack of it, may significantly impact the design of an application and the overall complexity of allocation flows.

Features available in the Cloud but not yet released in the on-premise solution could tip the scale in favor of the Cloud option when all other aspects surrounding a Cloud implementation no longer appear to be as pressing. Out-of-the-box content such as overnight backups and full application and data restores at business users' fingertips – not to mention the reporting and dashboarding included in the Cloud version – are all differentiators of a product that enables business users to control their allocation process and methodology from its inception.

While there may be exceptions where on-premise solutions have advantages (modules not available in the Cloud, for example) and therefore represent the best option at a given moment in time, the reality is that the future is being developed in the Cloud and for the Cloud; at some point the shift will most likely no longer be an option, but a necessity.

If you need help making a decision about an existing implementation, or if you would like more details on HPCM vs. PCMCS to make a better-informed choice, email us at infosolutions@alithya.com. Our PCM Center of Excellence team is ready to share leading practices and industry-specific solutions that accelerate ROI and expand the capabilities of your chosen software.

Full Circle Planning, Cost Management, & Profitability in the Manufacturing Industry

This post corresponds to the webinar “Full Circle Planning, Cost Management & Profitability in the Manufacturing Industry.” You can access the recording here.

As we are all aware, today’s manufacturing industry faces multiple ongoing challenges, including:

  • Changing customer/consumer demands
  • Shrinking operating margins
  • Ever-changing compliance and regulatory pressures
  • An increasingly globalized economy
  • Lowered availability and visibility of detailed information

Now more than ever, manufacturers’ focus is not just on growth, but, more specifically, on profitable growth.

 

Managing Profitable Growth

When it comes to profitable growth and insight into profitability, the first place to start is the consolidated P&L.

But while the P&L offers information on profitable growth, it does not help manage profitable growth. The financial P&L provides limited insight into costs, profits, and their underlying drivers from the perspective of lines of business, products, customers, markets, and channels. Cost bases are imperfect and limited to legacy standard costing and unstructured cost extracts, and results lack the matching of costs to revenue needed to manage margins at the same strategic view as revenue.

 

The Need to Focus on Strategic P&Ls

To address and contend with these challenges, we recommend a greater focus on more strategic P&Ls for the manufacturing industry.

Strategic P&Ls provide insight into both direct costs and indirect costs.

  • Direct Costs include costs directly associated with:
    • The making of a product or delivery of a service
    • Parts for the product
    • Labor for Service Delivery
    • Costs directly attributed to the selling to a customer or client
    • Shipping and handling expenses
    • Customer processing expenses
  • Indirect Costs include costs that are not directly attributable to the making of a product, delivery of a service, or the selling to a customer:
    • Operating costs (e.g., Call Center, Distribution)
    • Selling costs (e.g., Sales & Marketing)
    • Investment costs (e.g., R&D, Initiatives)
    • G&A costs (e.g., IT, HR, Finance, Admin)
    • Finance charges for Cost of Capital Employed

Measurement of indirect costs in particular can be difficult.

 

What Would A Solution for the Manufacturing Industry Look Like?

With all of this in mind, it's important to look at the big picture when determining what manufacturers can do to attain strategic P&Ls and overcome their challenges.

The ideal solution for the manufacturing industry would:

  • Design, support and evolve to an integrated financial process
  • Leverage operating metrics and key assumptions to:
    • Link business drivers behind financial performance
    • Modify drivers and assumptions to plan future performance and attain strategic P&Ls
    • Drive accountability to Lines of Business
  • Offer a consistent and transparent framework to support indirect cost attribution
  • Use integrated applications and tools to support and adapt to changing business processes
  • Provide robust reporting to business for transparency into causal factors

A true full-circle planning, costing and reporting solution that aligns and adapts to an integrated financial process includes the following:

  • Driver-based planning of revenue and departmental expenses leveraging actual financial data and operational metrics
  • Integrated costing capabilities that can allocate indirect expenses to lines of business by leveraging the same actuals, plans and drivers used in the planning process
  • Robust and real-time reporting to surface strategic P&Ls by Customer, Product and other Lines of Business

 

Some Solutions are Ineffective and Unsustainable

Our team at Ranzal has seen many manufacturers attempt to piece together a solution using various combinations of spreadsheets, ERP, custom and packaged applications.

Typically, spreadsheets are the most common ingredient given their flexibility and accessibility, but they tend to be error-prone, highly manual and labor-intensive, and risky from a controls and governance standpoint. We've also seen customizing the ERP as a common solution-oriented approach, but this can be too expensive, overly IT-centric, and somewhat of a "black box." And lastly, custom applications are slow to adapt, carry high effort and cost, and also function like a "black box."

 

Oracle’s EPM as the Foundation for Full-Circle Planning

We recommend Oracle EPM's packaged applications as the foundation for configuring the right full-circle planning, costing, and reporting solution – one that avoids the constraints and risks the other avenues bring.

The specific Oracle EPM offerings that support a full-circle planning, costing and reporting solution involve:

  • Planning & Budgeting Cloud Service (PBCS)
    • Best-in class solution for financial planning, budgeting and forecasting
    • Align top-down and bottom-up processes
    • Consistency of assumptions, calculations and methodologies
    • And many more features here
  • Profitability & Cost Management Cloud Service (PCMCS)
    • Computes Profitability for Units, Segments and Services
    • Pre-Built Framework for profitability modeling: Dimensions, Support for Multiple Cost Allocation methodologies, Validation reporting
    • Graphical Interactive Traceability Maps & Dashboards
    • Measures, Allocates and Assigns Cost and Revenues via User-defined Rules
    • And many more features here
  • Tightly integrated with the Oracle EPM Cloud
    • Consistent Administration with EPM Cloud Offerings
    • Shared Reporting Tools like Financial Reports & Smart View for Office
    • Proven Technology Stack

We believe in a comprehensive solution focused on a "Technology Trio" of Integrated Business Analytics: the convergence of EPM, BI, and BD solutions. Experience and results have shown us that this combination provides the tools and answers needed for improved business performance, increased innovation, better vision, and increased business value.

For more information or to request a demo, email us. Be sure to ask about our complimentary one-day Profitability and Cost Management assessment and how the newly-released Oracle Profitability and Cost Management Cloud Service (PCMCS) can help modernize your solution.

Process Simplification – Migrating from HPCM Standard Profitability to Management Ledger

With the introduction of Hyperion Profitability and Cost Management (HPCM), many organizations have recognized the power of this breakthrough solution to build sophisticated and powerful cost models. As such, HPCM has been successfully in use for several years, and in numerous cases, its use has been expanded.

Since the initial release of HPCM, Oracle has developed additional variations of HPCM to provide a full suite of costing and profitability capabilities that can more specifically provide the right tool for the right job (RTRJ). These additional offerings include HPCM-Detailed Profitability and HPCM-Management Ledger, the latter of which is available either in the on-premise version (HPCM-ML) or in the cloud version, Profitability and Cost Management Cloud Service (PCMCS). The original HPCM solution is now referred to as HPCM-Standard Profitability (HPCM-Standard).

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). This experience has prompted the notion that, given the multiple offerings now available, it is worthwhile to evaluate the applicability of the new solutions to an organization's existing use cases and to consider making a change where appropriate. In particular, Management Ledger offers enough flexibility and process simplification to warrant considering the conversion of an HPCM-Standard model to HPCM-ML or PCMCS. This article discusses that process.

Background

Since HPCM's introduction, it has become clear that there is not necessarily a one-size-fits-all solution for the full set of needs in cost allocations and profitability. All allocations fundamentally follow the basic formula A = S x F x D / Sum(D), where A = the target Allocated amount, S = the Source amount, F = the Factor (the percent of the source amount to be allocated, often 100%), D = the Driver quantity of a given target, and Sum(D) = the sum of Driver quantities across all targets.
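
As a quick worked example with made-up numbers: a source pool of 1,000 allocated at 100% across three targets whose driver quantities are 50, 30, and 20 (so Sum(D) = 100) yields

A (target 1) = 1,000 x 100% x 50/100 = 500
A (target 2) = 1,000 x 100% x 30/100 = 300
A (target 3) = 1,000 x 100% x 20/100 = 200

The three allocated amounts total 1,000, so the source pool is fully distributed.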

However, this fundamental formula is where similarities end and distinctions begin. The original solution, HPCM-Standard, is well suited for cases where highly complex allocation models are utilized.  It is also well positioned where adherence to a highly-structured framework is sought, and it provides capability for highly detailed graphical tracing of allocations in the user interface.

Alternatively, Detailed Profitability, which can be deemed as the “heavy-lifter” of the offerings, requires that users define relatively simple allocation rules through a single allocation stage. However, in exchange for this concession, the solution can apply those rules across a wide range of dimensions and is able to do so at a very granular level of detail.  Also referred to as “Microcosting,” this solution leverages source pools and rates applied to a high volume of transactions or near-transactions.  Firms within industries such as consumer goods, transportation and distribution, retail banking, and healthcare are among those that may want to leverage this capability.  This solution enables capture of variation in cost at the shipment, order, transaction, or encounter level of detail, and then aggregates those values to higher levels such as product, service, or customer for analysis.

The third offering, Management Ledger, combines aspects of both of the other two solutions, such as some of the metadata granularity of Detailed Profitability, along with the logic complexity of Standard. This enables users to define custom models with fewer restrictions on the framework and fewer limits on the level of detail required for reporting.  Management Ledger is also flexible to accommodate future changes through its Rule Set/Rule sequencing construct.  Subsequent allocation logic changes can be of a substantial nature, potentially up to a near redesign.  Also, the rules building process itself is simplified in Management Ledger and it is one that aligns well with the intuition of finance users.  Further, Oracle’s current strategic direction is with Management Ledger, most notably seen in the recent release of PCMCS.

What is the benefit of conversion?

Management Ledger offers several key capabilities that can improve, streamline, or otherwise address existing challenges in a Standard Profitability environment.

  1. Management Ledger does not rely on a back-end staging table paradigm for data loading as does HPCM-Standard. Such reliance requires the availability of resources with the database skills required to support SQL interfaces to automate model processes, as well as to perform maintenance when metadata updates are made. For some user sites, the availability of these skills is limited.
  2. Management Ledger is an ASO application. It is not subject to the metadata restriction faced when deploying the HPCM-Standard calculation cube, which is BSO and can reach the maximum number of potential blocks due to metadata duplication. Since Management Ledger does not duplicate the dimensions, it makes reporting easier for end-users and can eliminate the need for the "simplified" HPCM reporting application that is often created in an implementation of HPCM-Standard.
  3. Management Ledger does not require the use of pre-defined stages and the associated limit of three dimensions per stage as utilized in HPCM-Standard. That framework has driven design decisions and influenced future changes at certain user sites.
  4. Management Ledger is flexible to accommodate new methods of allocating data. The presence of a dimension in an application allows for its selection and filtering without the need for re-design.
  5. Management Ledger provides an interface that can be quickly learned by business users. Its set-up and maintenance simply require the identification of the sources, destinations, and driver bases of allocations. Because it does not rely upon or require use of any specific methodology, existing Planning and HFM users can quickly learn the navigation and logic of Management Ledger. As shown below, the process for rules building in Management Ledger is straightforward.
    1-16-1

    Management Ledger Rules Building Interface

    1-16-2

  6. Management Ledger offers a multitude of standard reports for model documentation, rules validation, rule balance summaries of the results, and graphic traceability. PCMCS adds Business Intelligence visualizations such as scatter plots, cumulative profitability “whale curves,” and KPIs.

 

PCMCS Visuals

1-16-3
1-16-4
1-16-5

With all of these potential benefits, there are also offsetting considerations. Management Ledger may require more maintenance than HPCM-Standard due to a higher number of allocation rules, which is required in order to enable parallel processing. Further, the graphical built-in traceability screen in Management Ledger may be considered by some as less intuitive than the screen provided with HPCM-Standard. Therefore, not in every case where Management Ledger is seen as a useful fit will the advantages over Standard Profitability be sufficient to justify the time and effort of a conversion.

What are the criteria for undertaking a Management Ledger Conversion?

To help evaluate whether it is worthwhile to pursue migrating a Standard Profitability model to Management Ledger, the following questions can be asked:

  1. Is there a major re-organization pending that is prompting a re-evaluation of the overall stages framework?
  2. Will there be future changes in which new allocation processes are added, such as moving beyond organizational allocations to ones that include other dimensions such as product or customer?
  3. Do changes in allocation methodologies occur often? Will business users be required to make these updates/changes and without the support of IT staff?
  4. Are new scenarios such as What-If or Ad-Hoc planned and is there an interest in testing different allocation methodologies versus the existing live production models?
  5. Are the theoretical limits associated with the Block Storage Option (BSO) outline being approached?
  6. Is the process for updating the Standard Profitability staging tables considered to be time consuming and/or is the automation for populating the staging tables viewed as complex or poorly understood?
  7. Are there currently other Management Ledger models in the organization, and is there a need or desire to standardize on a common platform?
  8. Is there an objective to move applications to the Cloud?

 

What are the steps to migrate?

If the answer to any of the above questions is yes, then there is a potential opportunity to convert a Standard Profitability model to Management Ledger. In such a case, a prototype to test the concept should be created.  This prototype should be loaded with a sample of data and rules, typically for at least one POV, and calculated and validated.  Though each situation will have unique requirements, the overall steps are as follows:

Prototype Build -> Rules Creation -> Testing -> Validation -> Adjustment -> Migration

General Steps to Migrating to Management Ledger

  1. Migrate the Standard model to the same environment where the Management Ledger test will be built.
  2. Run a calculation of the Standard model to obtain a benchmark performance time.
  3. Create a new cube and database. A new Master application should be created and the dimensionality copied from the existing Standard Profitability Master application (not from the calculation cube) in order to avoid duplicate dimensions.
  4. Copy the dimensions from the old to the new cube. Make Cube Outline Updates.
    • Change the NoMember dimension member in each dimension to NoDimensionName.
    • Determine the dimension for the Drivers, usually the DataType or Account dimension.
    • Add the drivers from the Measures dimension to the Account or a DataType dimension.
    • Delete Measures and AllocationType dimensions (used with Standard model).
    • Add the Rule and Balance dimensions (used with Management Ledger models).
    • Add UDAs for potential rule filtering requirements.
    • Should both Source and Target allocation details be required for reporting, dimensions may need to be duplicated or split, such as in a case with Initial Cost Pool and Final Cost Pool.
  5. Create a new Management Ledger Profitability application that references the new cube.
  6. Deploy the Management Ledger Essbase Calculation engine.
  7. Choose and create a single POV to start.
  8. Import data from the existing cube to the new one utilizing the various methods available such as free form loading without rules, structured loading with rules, spreadsheet add-ins such as SmartView or other tools such as FDM/FDMEE. Note: For PCMCS, flat files of dimensions and data are employed.
  9. Document the allocation rules in a template.
  10. Enter the allocation rules through the ML user interface.
  11. Run Model Validation to check the new Rule Sets and Rules for errors before calculating.
  12. Launch a calculation. Start with running a single rule.
  13. Validate the Results. Progressively select more rules for successive calculation as rules are validated.
  14. Adjust methods iteratively.
  15. Create and update a report to demonstrate the validations to end-users as well as how the results are consumed.
  16. Migrate, once validation is complete including acceptability of both the results values and the processing times.

 

Some thoughts on building allocation rules

Upon having a Management Ledger outline, the allocation rules from Standard should be constructed through the user interface. There should be an association between the Stages in a Standard model and the Rule Sets in Management Ledger. As a starting point, the Rule Set sequence flow should match the stages, though it may prove necessary to break the stages into multiple rule sets.

1-16.6.png

Once the rule sets are determined, the rules themselves should be documented in a template (Excel, Word, etc.) that is easy to manage and understand. The example that follows shows the dimensionality of the Source, Destination, Driver Basis, and Source Offset.
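
As an illustration only (the rule, rule set, and member names below are hypothetical rather than taken from a delivered model), one row of such a template might capture:

  • Rule Set: 10_Support_Allocations
  • Rule: Allocate_IT_Costs
  • Source: Entity "Total_IT", Account "Total_IT_Expense"
  • Destination: Entity "Total_Lines_of_Business" (remaining dimensions same as source)
  • Driver Basis: Headcount
  • Source Offset: Default (offset against the source intersection)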

This template becomes part of the documentation of the prototype. Upon completion of the template, a user should build the rule sets and rules in the Management Ledger interface.  One of the key benefits of Management Ledger is the ability to reference parent-level values in the assignment rules.  This provides the ability to create many-to-many source-destination associations with few keystrokes.  This not only saves time in initial set-up, but also makes the entire process data driven such that when new dimension members such as new accounts, cost centers, products, or customers are added, the allocation rules automatically accommodate them without the need for editing or updating.  The ability to select at the parent level also reduces the need for automation routines of the types that are frequently created in Standard Profitability implementations, such as those used to update staging tables (Management Ledger does not have staging tables).

Users should start by referencing the highest-level parents to make the process as automated as possible. If performance becomes an issue, it may be necessary to reference mid- or lower-level parents.  Rules should be tested iteratively, i.e. run individually and then in groups, to validate the answers and to track processing time.

If calculation times exceed requirements or expectations, then start moving references to lower level parents. Avoid going to children as that will increase maintenance in the future.

Validation Concepts

Use the Rule Balancing Report to validate the cost flow and confirm that allocations in and out match expectations. Users should also generate a set of SmartView queries from the control HPCM-Standard Model and compare those to a set of SmartView queries from the HPCM-ML prototype.  Input and Stage amounts from HPCM-Standard should compare to Rule Set amounts in HPCM-ML, including checks that rule sets are using drivers correctly.  Calculation time and performance should also be tracked and benchmarked.

1-16-7

Conclusion

The advent of HPCM Management Ledger in both the on premise and cloud-based versions provides organizations with an opportunity to consider their existing solution and whether a migration to Management Ledger is warranted. Multiple considerations must be evaluated in this decision, and a prototype-based assessment is recommended as part of the process.  Edgewater Ranzal provides an Assessment service offering to assist organizations with this evaluation, as well as a subsequent implementation.  With over twenty experienced full-time consultants across the Americas and EMEA, and with more than twenty-five successful HPCM projects delivered since 2009, Edgewater Ranzal is the leading Oracle partner in delivering all versions of HPCM. Its comprehensive multi-product delivery approach can incorporate other tools such as Planning, DRM, FDMEE, & OBIEE.  These qualifications, along with its close relationship with Oracle Development, make Edgewater Ranzal the premier partner for client success.

 

Techniques for Creating, Loading, and Optimizing a Simple Essbase ASO Application

A couple of recent projects have required us to build an Essbase database to provide a subset of upstream system data for downstream consumer systems such as Hyperion Profitability and Cost Management (HPCM).  The process included dimension updates, data loads, and custom calculations. Essbase Aggregate Storage Option (ASO) was the chosen Essbase technology because we were potentially dealing with large data volumes, relatively simple hierarchy structures, and only a small number of custom calculations that could be easily modeled in MDX with minimal performance impact.

The principle was that an overnight batch would be used to completely rebuild the ASO cube each night, including any metadata restructures that were necessary, followed by a full reload of data.

The high level process is as follows:

The starting point was to use a 'stub' application as a template for the metadata rebuild.  This is an ASO Essbase application with all dimension headers present, all POV dimensions present (Years, Periods, Scenarios, etc.), and all volatile hierarchies represented by their hierarchy headers only.  This ASO application serves as a "poor man's MDM" which allows us to have application, dimension, and hierarchy properties all pre-set.  The main advantage of the stub outline is that it creates a natural defragmentation of the target ASO application, which improves query performance and reduces dimension build times to the minimum. This is analogous to a relational database where you want to 'truncate' tables and/or compress them, as opposed to deleting and re-adding rows all the time, which leads to gradual growth.  A good tip is to build dimensions in order from smallest to largest in terms of volume.

A sample ‘Stub.otl’ outline looks something like the following. In this case, the stub outline is modeled after the new embedded Fusion G/L Essbase cube:

As can be seen, the volatile dimensions (Budget Centre, Balancing Entity, Accounts, etc.) are each populated with a single hierarchy header (e.g. BE_dummy), whereas the static dimensions (AccountingPeriod, Balance Amount, etc.) are complete and will not be the subject of a dimension load in the MaxL.  Static dimensions which contain members with MDX member formulas will persist (although the formulas will not necessarily validate at this stage, as they may depend on members that have not yet been rebuilt).

The first part of the batch process is to use this Stub outline to replace the outline in the 'user' ASO cube (i.e. the cube that will be restructured and loaded with data).  The MaxL will clear the data and replace the .otl file in the user application with the .otl file from the 'stub' application.

A simplified version of the MaxL is as follows (normally passwords would be encrypted):
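
A minimal sketch of such a script is shown below; the application and database names (UserASO.UserDB, Stub) are hypothetical, and the Stub.otl file copy itself is treated as an operating-system step in the batch rather than a native MaxL statement:

login admin identified by password on essbase_server;
spool on to 'rebuild_useraso.log';

/* stop the target application so its outline file can be swapped */
alter system unload application UserASO;

/* at this point the batch copies .../Stub/Stub/Stub.otl over
   .../UserASO/UserDB/UserDB.otl at the operating-system level */

/* restart the application and clear out the previous data set */
alter system load application UserASO;
alter database UserASO.UserDB reset;

spool off;
logout;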

This simply copies the Stub.otl file into the 'user' ASO cube database folder and names it with the target database name – it will be available as soon as the application is reloaded.

The next section in the MaxL would be a standard dimension build of those volatile dimensions – the primary consideration when building the hierarchies is that the ASO restrictions on hierarchies are met, otherwise the outline will not verify.  That validation is not covered here – we assume that incoming master data is pre-validated to meet these requirements – but a summary of the dimension rules for ASO is as follows:

  • ASO dimensions can contain hierarchies of two types – 'Stored' or 'Dynamic'.
  • A dimension must be tagged as 'Multiple Hierarchies Enabled' or 'Dynamic' if it contains two or more hierarchies.
  • The first hierarchy in a dimension where 'Multiple Hierarchies Enabled' is specified must be defined as a 'Stored' hierarchy.
  • Stored hierarchies are generally only additive, as they only allow the + or ~ consolidation operators.
  • Dynamic hierarchies can contain any consolidation operators, and their members can contain formulas.
  • For alternate hierarchies, where shared members may be required, a Stored hierarchy can only contain one instance of a member (to avoid double counting), but subsequent Stored hierarchies can contain members defined in previous stored hierarchies.
Once metadata has been loaded, the data load can be carried out.
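
A minimal sketch of the dimension-build and data-load imports follows; the cube, rules file, and flat file names are hypothetical, and there would normally be one dimension-build import per volatile dimension:

/* rebuild one of the volatile dimensions from a pre-validated flat file */
import database UserASO.UserDB dimensions
    from server text data_file 'BudCtr.txt'
    using server rules_file 'dBudCtr'
    on error write to 'dimbuild_budctr.err';

/* ...repeat for the remaining volatile dimensions, smallest to largest... */

/* full data reload once the outline has been rebuilt */
import database UserASO.UserDB data
    from server text data_file 'GLExtract.txt'
    using server rules_file 'ldGLData'
    on error write to 'dataload.err';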

Once this is complete, we have a fully loaded ASO cube, which we can retrieve data against using either SmartView or an Essbase report script (for example, when we are supplying filtered data to our downstream systems).

The example Smart View retrieve template below is a straightforward report with periods as columns and 550 rows of level 0 Budget Centres, with all other dimensions set as filters.

The Essbase application log shows that the above SmartView query took over 16 seconds to execute.  This report layout may or may not be representative of real-world queries/reports, but the object of the exercise here is to speed this up for in-day usage.

ASO databases do not use calculation scripts to consolidate the data so the traditional BSO approach to consolidation cannot be used.  Instead, ASO will attempt to dynamically calculate upper level intersections, which, while resulting in much faster batch processing times, may result in longer than necessary retrieval times.

What we can do to improve this situation is use the ‘Query Tracking’ facility in ASO to capture the nature of queries run against the ASO cube, and build retrieval statistics against it. These statistics can then be used to build aggregation views tailored to retrieval patterns in the business.

This relies on us having some predefined definitions of the kinds of queries that are likely to be run – SmartView report templates, Web Analysis pages & Financial Reports definitions will all be suitable.

In this example, we use the above SmartView template as a basis for creating an Essbase Report script as follows:

This report mimics the SmartView template, and we use it during the overnight batch to capture the query characteristics using Query Tracking. One of the reasons to use report scripts is that if you use the query designer (or the Spreadsheet Retrieval Wizard if you are using a REALLY old version of the Excel Add-In), it can save a report script output. MDX queries will have a similar effect.

The sequence of MaxL steps is as follows:

  • Switch on Query Tracking
  • Run one or more Essbase Report Script(s)
  • Run ‘execute aggregate process’ command to create aggregate views

The MaxL to accomplish this is as follows:
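
A minimal sketch, again with hypothetical object names and an illustrative total_size ceiling:

/* 1. switch on query tracking so Essbase records retrieval patterns */
alter database UserASO.UserDB enable query_tracking;

/* 2. run the representative report script(s); the queries they issue are tracked */
export database UserASO.UserDB using server report_file 'QryTrk1'
    to data_file 'qrytrk1.txt';

/* 3. build aggregate views driven by the tracked query patterns, capped at
      roughly 1.2 times the size of the input-level data */
execute aggregate process on database UserASO.UserDB
    stopping when total_size exceeds 1.2
    based on query_data;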

The 'execute aggregate process' command is issued with the 'based on query_data' option to tell Essbase to use the query patterns picked up by Query Tracking to build the aggregation views.  Essbase will build as many views as necessary until the 'total_size' limit is reached.  This limit may need tweaking so as to give the desired improvement in performance whilst also conserving disk space (which may get swallowed up with larger ASO cubes).  This particular example runs in a matter of seconds, but the addition of more sample reports needs to be managed to ensure that the batch run time does not exceed its window.  It should be noted that the aggregate process can be run without query tracking, but there are restrictions on which alternate hierarchies get processed, and query tracking is a very good technique when you are trying to improve performance on "alternate rollups".

When this has been executed, users should see an improvement on query performance.

Our SmartView query was rerun, and the log file demonstrates the reduction in query time to less than 1 second:

This approach lends itself to situations where the ASO outline is likely to change frequently.  Changes in metadata mean that aggregation views created and saved in EAS cannot necessarily be reused – new level 0 members will not necessarily invalidate the aggregate views, but new upper-level members or restructured hierarchies definitely will invalidate them. The rationale for this is that the ASO aggregation engine constructs multiple "jump" points based on the levels of the most recent hierarchy – to oversimplify in BSO terms, imagine level zero stored, level one as dynamic calc, level two stored, level three as dynamic calc, and level four stored; in any instance, there would never be more than one level of dynamic calc. I don't know if this is still the case, but this may be why ASO cubes seem to handle symmetrical hierarchies a bit more easily than ragged ones – it makes the derivation of what should be pre-aggregated vs. dynamically calculated easier.