Implementing Zero-Based Budgeting: Setting Up Your Environment

The previous post – Implementing Zero-Based Budgeting: The Requirements – outlined two key components of a successful zero-based budgeting program: a culture change and a centralized system. We recommended creating a centralized system with Oracle Planning and Budgeting Cloud Service (PBCS)/Enterprise Planning and Budgeting Cloud Service (EPBCS) because of the many advantages it provides, such as an environment with data depth.

Even with a zero-based budgeting blueprint, many companies are still hesitant to go “all in,” thinking that implementing a zero-based budgeting program requires too much time and too many resources. The introduction of Cloud services such as Oracle PBCS/EPBCS makes the implementation of a centralized financial system easier than ever, greatly reducing the barrier to entry.

This final post in the series shows how the power of a PBCS/EPBCS environment can be harnessed to achieve the greatest success with a newly implemented zero-based budgeting program.

How Can PBCS/EPBCS Environments Enhance the ZBB Experience?

There are four key ways to gain the most from a PBCS or EPBCS environment, including the setup of targets and accountability metrics that offer more meaningful data and greater transparency when making budgeting decisions.

Clients are often given target-setting goals in management meetings or over the phone, but we demonstrate how to integrate those targets into their budgeting systems. On numerous occasions, Alithya has been contracted to implement target setting where leadership sets growth targets and the system flows revenue down by service, product line, etc. In turn, analysts match the underlying details.

Not surprisingly, this is a common request because target setting has long been a tradition of the budget process. By setting up the target-setting process in PBCS/EPBCS, a formerly offline process is brought online and molded into the overall budgeting system process. Combining that with the zero-based budgeting mantra allows targets to be set while providing analysts with their needed baseline. Moreover, analysis can be done on departments that take the typical “reduce expenses by 10%” approach to achieve the target number instead of the more insightful zero-based budget journey. Yes, target setting in a centralized system is easier, but the real benefit of a centralized system is the ability to see how teams react to the new target. Did they take the traditional approach of reducing budget percentages to fit the numbers, or did they look at their budget as a whole, analyze each line item, and question the numbers organically?

After targets are set and the budget is approved, we watch the promised cost savings come to fruition. A centralized system allows capital projects or initiatives to be tracked to help systematically measure the expenditures of cost-savings activities found during the zero-based budget discovery. This provides a clear picture of what each department is doing and holds it more accountable for project decisions. It is an achievement to complete a zero-based budget “diet,” but holding teams accountable brings them to the next level: the zero-based budget “lifestyle.”

In essence, this new budgeting environment provides better insight into data – insight that ultimately allows savings to be found more effectively. For example, if you want to see the cost of direct materials, this centralized system can be set up to capture the costs in order to analyze and keep track of the different KPIs that reduce or increase overall costs.

Another example is segmenting employee costs such as travel. Instead of assuming a run rate of 10% of direct labor for travel costs, determine which jobs or tasks required that travel and use this KPI to negotiate travel expenses and further drive down costs. Essentially, use PBCS/EPBCS as a tool to capture KPIs (e.g. travel costs by job), determine the best use of travel dollars, and – more importantly – negotiate with vendors on key travel.

Lastly, a budgeting environment provides clarity to help teams make better informed decisions about future initiatives. With the ability to see all of the underlying data points in a single location, it is possible to identify past sales and marketing campaigns and expenditures that led to profitable customers. Therefore, zero-based budgeting teams that took the initiative to build a cost-to-benefit analysis of sales and marketing from the ground up are able to dedicate more resources (e.g. dollars, people, etc.) to winning strategies. This is in contrast to the traditional budgeting approach of a “10% rate of marketing spend year-over-year” that often masks the winning and, more importantly, the losing marketing initiatives. Moreover, such planning and the availability of different data points help draw key inferences that allow sales and marketing teams to be more successful.

Summary 

Utilizing a Cloud service such as Oracle PBCS/EPBCS makes it easier for companies to implement a centralized system and achieve success with a zero-based budgeting program. PBCS/EPBCS environments can and should be set up in a way that enhances the zero-based budgeting experience. This is achieved by integrating target setting goals and establishing accountability metrics that allow a deeper dive into budget data while providing greater transparency to make better informed decisions.

To learn more about zero-based budgeting best practices and to get professional help with your Oracle PBCS/EPBCS environments, feel free to contact our team of experts.

Implementing Zero-Based Budgeting: Benefits, Myths, and Goals

If you are in the finance world, then you probably have heard of zero-based budgeting. Investopedia defines zero-based budgeting as “a method of budgeting in which all expenses must be justified for each new period. The process…starts from a “zero base,” and every function within an organization is analyzed for its needs and costs.”

There are many reasons that financial professionals decide to use zero-based budgeting. For one thing, it goes hand-in-hand with a centralized system where information can be shared – something at which Excel spreadsheets are terrible. Furthermore, developing a centralized system enables you to scale to your needs as your company grows. Lastly, it enables financial analysts to spend more of their work week analyzing data instead of curating a financial system and worrying if the numbers match.

At Alithya, we have found with our past clients that a successful zero-based budgeting implementation resolves numerous problems. The two main things clients hope to achieve are growth across multiple business units and sustained cost reduction. With zero-based budgeting, you can earn long-term savings that translate directly into sustainable growth.

Earning Long-Term Cost Savings

Zero-based budgeting becomes a daily exercise in cost savings for your financial teams. One method of achieving cost savings is renegotiating costs. For example, instead of taking the run-rate of 3% from last year’s numbers, perhaps you can contact your vendors to bargain for a better deal or switch to a different vendor with a more competitive price. Or how about having your analysts ask the IT department why it costs $38.03 per phone? What makes up that entire $38.03? Don’t assume that there aren’t any negotiable components of a cost.

The reason zero-based budgeting is so effective at long-term savings is that it is not a one-off fix. Many teams tend to implement one-off fixes, and then find that those fixes do not provide sustainable cost savings. A common example is offshoring your call center, which might get you an immediate win in the cost column. However, this strategy typically reduces customer service quality while also limiting your ability to evolve with your business as it grows.

When enacting this type of program, you will analyze the costs of your business at every level. This may seem tedious, but what you will find is a clearer understanding of where your money is going. This can mean acquiring a greater understanding of contract labor costs as well as improving purchasing and procurement procedures, just to name a few. Moreover, “when properly implemented, zero-based budgeting can reduce SG&A costs by 10 to 25 percent, often within as little as six months,” according to McKinsey & Company.

Debunking Myths Surrounding Zero-Based Budgeting

There are many myths surrounding zero-based budgeting that have sadly created an artificial barrier that CFOs and their teams do not want to cross. Many financial professionals think that it means cutting the budget down to the bare bones; in reality, a zero-based budgeting program analyzes costs from the top down. Moreover, it is the CFO’s duty to outline cost-cutting targets so that the team’s efforts are focused.

Another misconception is that zero-based budgeting only helps with cutting SG&A costs. Actually, it can do much more, such as breaking down the Cost of Goods Sold (COGS) and helping teams make investment choices on the capital expenditures with the greatest ROI.

Just because your business is not in decline or stagnating doesn’t mean that you can’t adopt a zero-based budgeting program. If you are already achieving growth, you can use this type of budgeting method to keep the overall business leaner so that you can provide more runway for growing business units.

Do you really start from zero? This is a common question that we are asked, and many people assume from the name that you always start from zero. Technically, this is true, and this is the core component that drives the cost management culture change that will be introduced in the next post in this series.

However, not all things have to start from zero. At Alithya, we have been through many implementations where parts of the P&L are driver-based or zero-based. This can be achieved with a detailed, structured, and interactive system (like Oracle PBCS/EPBCS) that gives you real-time feedback.

How Do Oracle PBCS and EPBCS Help Achieve ZBB Goals?

The main feature you acquire when you implement an Oracle PBCS or EPBCS system with your zero-based budgeting program is deeper analytics. This data enables you to dig into the “why and how” of your P&L.

For example, you could pose the question: What driver did they use? Did they simply take last year’s actuals and add 3%? Did they take a cost-per-head and budget it manually, or did they take the easy way out? All are important questions that force finance teams to be more accountable for everyday decisions.

Recapping the Benefits of ZBB

By implementing a zero-based budgeting program with a centralized system, you can hold your analysts more accountable to cost figures while making them own how those costs are managed. It allows you to recognize unwanted costs that can be diverted into growth areas, and it breeds a culture of cost reduction and visibility. The latter requires that you start a culture change within your team. It is an essential part of having success with a zero-based budgeting program, which is why we will cover it in greater detail in the next post.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing

Oracle’s Profitability and Cost Management Cloud Service (PCMCS) provides a powerful service for allocating General Ledger profits and costs.  Recently, we worked with a banking industry client to provide a model that calculates profitability at a Product/Channel level while maintaining Account level detail.  We accomplished this through a framework we refer to as Micro-Costing where detailed profits and costs are calculated in a database using rates developed at the summary level in PCMCS.  Alithya began development of this framework in 2016 to meet a functional gap in PCMCS and provide a common framework that can be used either on-premise or in the Cloud.

To highlight the capabilities of Micro-Costing, I will use the solution deployed at our banking client as a specific example.  The following table describes the two layers where profits and costs are provided:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 1

 Definitions:

  • Product – a loan or deposit offering. Examples of a loan are an auto loan or credit card; examples of a deposit are a savings account or a checking account.
  • Origination Channel – where the account was originated.
  • Service Channel – where the financial or transactional cost or profit occurs or is assigned.
  • Customer – a legal entity responsible for accounts; for example, a person with both a home loan and a savings account.
  • Customer Account – a product that is assigned to a customer.
  • Financial Costs and Profits – the cost or profit of servicing a loan or deposit for a customer; for example, interest paid on a savings account.
  • Transactional Costs and Profits – the cost or profit of interacting with a customer; for example, the cost of an ATM transaction.

A simple way of thinking about the client’s business model:

  • Origination channels offer Products
  • Products are assigned to Customers as Customer Accounts
  • Customer Accounts are used by Customers through Service Channels

The generation of an Account level profit or cost is a C = A*B calculation where

  • A is the driver
  • B is the rate of a driven value
  • C is the driven value (profit or cost)

An example is:

ATM Expense = ATM Transaction Count * ATM Expense Rate
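To make the mechanics concrete, a minimal SQL sketch of this calculation is shown below. The table and column names are illustrative only – not the client’s actual schema – but the pattern holds: join the Customer Account driver quantities (A) to the rates downloaded from PCMCS (B) to produce the driven values (C).

    -- Illustrative schema: fact_account_driver holds A; rate_pcm holds B (downloaded from PCMCS).
    -- lkp_driven_value maps each driven value to the driver used in its A*B calculation.
    INSERT INTO fact_account_profit_cost (pov_id, customer_account_id, driven_value_cd, amount)
    SELECT d.pov_id,
           d.customer_account_id,
           dv.driven_value_cd,
           d.driver_qty * r.rate AS amount          -- C = A * B
    FROM fact_account_driver AS d                   -- A: drivers at Customer Account level
    JOIN lkp_driven_value AS dv
      ON dv.driver_cd = d.driver_cd
    JOIN rate_pcm AS r                              -- B: summary-level rates from PCMCS
      ON r.pov_id = d.pov_id
     AND r.driven_value_cd = dv.driven_value_cd;

For the ATM example, d.driver_qty would be the account’s ATM Transaction Count and r.rate the ATM Expense Rate.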

Micro-Costing Diagrams

Data Model

This summarizes the data model deployed.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 2

STAGING – Contains transient data.

OPERATIONAL DATA STORE (ODS) – Persists the operational data with minimal transformation.  Dimensional integrity is not enforced, but validation jobs are available for validating stored data regarding rules and dimensional integrity.

WAREHOUSE-STAR – Persists the drivers, the rates, and the calculated profits and costs at the Customer Account level.  The Driver Lookup and Driven Value Lookup functions are used to define the drivers and driven values so that the addition of a driver or driven value is a configuration activity for an administrator rather than a coding activity.

Data Integration

A high-level summary of the data flows as deployed:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 3

The source data is broken down into 3 types:

  1. General Ledger
  2. Operational Data
  3. Metadata

Data Integration uses interim flat files to maintain flexibility regarding the source data by establishing an API via the flat files without requiring knowledge of the source systems.  This allows for the introduction of source data that comes from 3rd parties not available for automated extraction from the source.

The operational data includes both Customer Account financial information and transactional activities or fees.  Product and Channel references are provided along with this information:

  • 1 million+ Customer Accounts
  • Approximately 6 million transactions per month

Some transactional drivers represent an activity that cannot be associated with a specific Customer Account; for example, a new loan application.  Proxy Customer Accounts for each product are generated to provide a place for these activities.

Additionally, although not graphically displayed in the above diagram, Branch level drivers are directly fed into the PCM Model, examples of which are Branch square footage and number of branch employees.  These drivers were used for non-Customer Account PCM costs and profits.

All batch processing is built using SQL Server Integration Services, based upon an agreement with the client regarding the preferred tool sets, with SQL Server as the selected database. The framework is transferable to other integration tools and databases, including the Hadoop framework, and Alithya has performed in-house solutioning in preparation for using Micro-Costing with larger clients.

The data integration is as follows:

  1. Set POV
  2. Update metadata and stage
  3. Stage financial and transactional information
  4. Validate staged data and reprocess as necessary
  5. Load staged data to ODS and then to Star
  6. Upload PCMCS with GL and drivers
  7. Process allocations in PCMCS
  8. Download rates
  9. Run A*B calculations for each Customer Account and populate profit and cost table

Key Design Principles

The following design principles were focused on during development of the Micro-Costing framework.  These principles facilitate an easy-to-use and easy-to-maintain solution as deployed for our client.

  • Dimensional synchronization between the Micro-Costing warehouse and PCM
  • Validation checks as close to the original data as possible
  • Configurable drivers and driven values

Dimensional Synchronization

All dimensional mapping must occur prior to the warehouse star schema; it is not possible to perform the Micro-Costing A*B calculations to derive profit and cost detail otherwise. This has an impact on any deployment that uses FDMEE or Cloud Data Manager, as they cannot perform additional mappings during upload to the cube.

Dimensional Synchronization includes a Point of View: Year, Period, Scenario, and Version to allow for loading multiple sets of drivers during a month, and for transfer of ‘what-if’ rates back to the Customer Account level, if desired.

Validation Checks

Validation kick-outs and checks occur as early in the data integration process as possible, with a “simple” validation during staging and a “complex” validation during generation of the fact information in the warehouse. This allows the administrator to catch quality issues while adding a minimum of overall process duration. The data integration process is broken into a series of steps that allows for validation review, and a step can be re-run before moving on to the next one. This principle held up in deployment, ensuring that time wasn’t wasted running later processes with invalid data; the result was an improved overall process and a significant reduction in the number of days required to produce profit and cost analysis for a given month. A lesson learned during the initial roll-out was that our client had not previously required rigorous validation of the drivers at the Customer Account level and had to develop new techniques for validating the source information to ensure accuracy.

Configurable Driver and Driven Values

A key feature of Oracle’s PCM applications is configurability, and the Micro-Costing framework is built to match: it provides an easy-to-maintain solution that allows for rapid addition of drivers and driven values without the administrator having to manually update the tables and views required to manage the transformation and persistence of data. This is accomplished by defining the drivers and driven values in tables and providing stored procedures for maintaining the dependent tables and views.

The process for adding a new driver and driven value is very straightforward (an illustrative SQL sketch of steps 3 and 4 follows this list):

  1. Backup the database and the PCM cube.
  2. Update the source feeds to include the new activity or fee.
  3. Update the Activity-to-Driver Lookup and Driven Value Lookup tables with the new values. Note: The driven value record references the driver for the A*B calculation.
  4. Execute the “Update Costing Tables and Views” stored procedure. Note: Removing a driver or driven value does not modify the tables.
  5. Update HPCM Account dimension for the new driver and driven value.
  6. Update HPCM rules to use the new driver and allocate expenses to the new driven value, and calculate the rate for the new driven value.
  7. Run the entire data integration process for the POV, and review results.
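As an illustration of steps 3 and 4, the lookup updates and regeneration call might look like the following in SQL Server. All object names here are hypothetical stand-ins for the deployed ones:

    -- Step 3: register the new activity-to-driver mapping and its driven value.
    INSERT INTO lkp_driver (driver_cd, driver_name, activity_cd)
    VALUES ('DRV_LOAN_APP', 'Loan Application Count', 'ACT_LOAN_APP');

    INSERT INTO lkp_driven_value (driven_value_cd, driven_value_name, driver_cd)
    VALUES ('DV_LOAN_APP_EXP', 'Loan Application Expense', 'DRV_LOAN_APP'); -- references its driver for A*B

    -- Step 4: regenerate the dependent costing tables and views from the lookup definitions.
    EXEC dbo.usp_UpdateCostingTablesAndViews;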

Key Benefits

The successful deployment of the solution provides the following key business benefits:

  • An improved ability to provide Product/Channel level costs and profits.
  • Reduced monthly cycle time and effort. The prior data integration process was disjointed and required a large amount of effort to produce results.
  • Drill-through capability to Customer Account level drivers, profits, and costs allows for root cause analysis of Channel and Product Costs.
  • Aggregation along other dimensional paths. Starting at the Customer Account level allows for aggregation along Customer attributes such as zip-code or credit score, providing new insights and enhanced executive decision making.  A follow-on project to use the Customer Account level data in OAC is currently being assessed.

Additionally, the following benefits to the administrative team are realized:

  • Model flexibility. The configuration of an additional driver and driven value in Micro-Costing takes fewer than 15 minutes.
  • Operational Data Store (ODS) and Warehouse. This allows future projects to use a common, curated source of information. This was a pot sweetener for our client, which was dissatisfied with its prior warehouse but needed a business reason to refresh it. The prior warehouse lacked the following items that were addressed in the new ODS and warehouse:
    • Explicit mappings such as Activity Code to Driver Code that are controlled by the business
    • 3rd party data from partners and industry sources
    • Consolidation of financial and transactional information into Customer Account level facts
    • Hashing of Personally Identifiable Information (PII) for account security
  • Easy troubleshooting, validation, and auditing capabilities with PCM. Errors or mismatches in profit or cost at the Product/Channel level can be reduced to either rule definition mistakes or driver data entry mistakes. Finding out where the issue is and correcting it with a few clicks has a positive impact on the overall analysis and maintenance effort.

Final Thoughts

Alithya has developed a Micro-Costing framework that allows an integrated view of profits and costs at both a summary and detailed level.  This framework is successfully deployed at a banking industry client to provide a superior solution.

The framework is deployable either on-premise or in the Cloud and is available for other industries such as:

  • Patient encounters in Healthcare
  • Claims in Insurance
  • SKUs in Retail
  • Subcomponents in Manufacturing

…or anywhere the allocations occur at a summary level with drivers aggregated from a detail level.

 

Process Simplification – Migrating from HPCM Standard Profitability to Management Ledger

With the introduction of Hyperion Profitability and Cost Management (HPCM), many organizations have recognized the power of this breakthrough solution to build sophisticated and powerful cost models. As such, HPCM has been successfully in use for several years, and in numerous cases, its use has been expanded.

Since the initial release of HPCM, Oracle has developed additional variations of HPCM to provide a full suite of costing and profitability capabilities that can more specifically provide the right tool for the right job (RTRJ). These additional offerings include HPCM-Detailed Profitability and HPCM-Management Ledger, the latter of which is available either in the on-premise version (HPCM-ML) or the cloud version – Profitability and Cost Management Cloud Service (PCMCS). The original solution of HPCM is now referred to as HPCM-Standard Profitability (HPCM-Standard).

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). This experience suggests that, given the multiple offerings now available, it is worthwhile to evaluate the applicability of the new solutions to an organization’s existing use cases and to consider making a change where appropriate. In particular, Management Ledger offers enough flexibility and process simplification to warrant considering the conversion of an HPCM-Standard model to HPCM-ML or PCMCS. This article discusses that process.

Background

Since HPCM’s introduction, it has become clear that there is not necessarily a one-size-fits-all solution for the set of needs in cost allocations and profitability. All allocations fundamentally follow the basic formula:

A = S x F x D / Sum(D)

where:

  • A = the target Allocated amount
  • S = the Source amount
  • F = the Factor, i.e. the percent of the source amount to be allocated (often 100%)
  • D = the Driver quantity on the target
  • Sum(D) = the sum of Driver quantities across all targets

For example, allocating a $100 source amount at a 100% factor to a target holding 20 of 80 total driver units yields 100 x 1.00 x 20/80 = $25.

However, this fundamental formula is where similarities end and distinctions begin. The original solution, HPCM-Standard, is well suited for cases where highly complex allocation models are utilized.  It is also well positioned where adherence to a highly-structured framework is sought, and it provides capability for highly detailed graphical tracing of allocations in the user interface.

Alternatively, Detailed Profitability, which can be deemed the “heavy lifter” of the offerings, requires that users define relatively simple allocation rules through a single allocation stage. However, in exchange for this concession, the solution can apply those rules across a wide range of dimensions and is able to do so at a very granular level of detail.  Also referred to as “Microcosting,” this solution leverages source pools and rates applied to a high volume of transactions or near-transactions.  Firms within industries such as consumer goods, transportation and distribution, retail banking, and healthcare are among those that may want to leverage this capability.  This solution enables capture of variation in cost at the shipment, order, transaction, or encounter level of detail, and then aggregates those values to higher levels such as product, service, or customer for analysis.

The third offering, Management Ledger, combines aspects of the other two solutions, such as some of the metadata granularity of Detailed Profitability along with the logic complexity of Standard. This enables users to define custom models with fewer restrictions on the framework and fewer limits on the level of detail required for reporting. Management Ledger is also flexible in accommodating future changes through its Rule Set/Rule sequencing construct; subsequent allocation logic changes can be substantial, potentially approaching a full redesign. Also, the rule-building process itself is simplified in Management Ledger and aligns well with the intuition of finance users. Further, Oracle’s current strategic direction is toward Management Ledger, most notably seen in the recent release of PCMCS.

What is the benefit of conversion?

Management Ledger offers several key capabilities that can improve, streamline, or otherwise address existing challenges in a Standard Profitability environment.

  1. Management Ledger does not rely on a back-end staging table paradigm for data loading as HPCM-Standard does. That reliance requires resources with the database skills needed to support SQL interfaces, automate model processes, and perform maintenance when metadata updates are made. For some user sites, the availability of these skills is limited.
  2. Management Ledger is an ASO application. It is not subject to the metadata restriction faced when deploying the HPCM-Standard calculation cube, which is BSO and can reach the maximum number of potential blocks due to metadata duplication. Since Management Ledger does not duplicate the dimensions, it makes reporting easier for end-users and can eliminate the need for the “simplified” HPCM reporting application that is often created in an implementation of HPCM-Standard.
  3. Management Ledger does not require the use of pre-defined stages and the associated limit of three dimensions per stage as utilized in HPCM-Standard. That framework has driven design decisions and influenced future changes at certain user sites.
  4. Management Ledger is flexible to accommodate new methods of allocating data. The presence of a dimension in an application allows for its selection and filtering without the need for re-design.
  5. Management Ledger provides an interface that can be quickly learned by business users. Its set-up and maintenance simply require the identification of sources, destinations, and the driver bases of allocations. Because it does not rely upon or require use of any specific methodology, existing Planning and HFM users can quickly learn the navigation and logic of Management Ledger. As shown below, the process for rules building in Management Ledger is straightforward.
    Management Ledger Rules Building Interface

  6. Management Ledger offers a multitude of standard reports for model documentation, rules validation, rule balance summaries of the results, and graphic traceability. PCMCS adds Business Intelligence visualizations such as scatter plots, cumulative profitability “whale curves,” and KPIs.

 

PCMCS Visuals


With all of these potential benefits, there are also offsetting considerations. Management Ledger may require more maintenance than HPCM-Standard due to a higher number of allocation rules, which are required in order to enable parallel processing.  Further, the built-in graphical traceability screen in Management Ledger may be considered by some as less intuitive than the screen provided with HPCM-Standard.  Therefore, not every case where Management Ledger is seen as a useful fit will offer advantages over Standard Profitability sufficient to justify the time and effort of a conversion.

What are the criteria for undertaking a Management Ledger Conversion?

To help evaluate whether it is worthwhile to pursue migrating a Standard Profitability model to Management Ledger, the following questions can be asked:

  1. Is there a major re-organization pending that is prompting a re-evaluation of the overall stages framework?
  2. Will there be future changes in which new allocation processes are added, such as moving beyond organizational allocations to ones that include other dimensions such as product or customer?
  3. Do changes in allocation methodologies occur often? Will business users be required to make these updates/changes and without the support of IT staff?
  4. Are new scenarios such as What-If or Ad-Hoc planned and is there an interest in testing different allocation methodologies versus the existing live production models?
  5. Are the theoretical limits associated with the Block Storage Option (BSO) being approached?
  6. Is the process for updating the Standard Profitability staging tables considered to be time consuming and/or is the automation for populating the staging tables viewed as complex or poorly understood?
  7. Are there currently other Management Ledger models in the organization, and is there a need or desire to standardize on a common platform?
  8. Is there an objective to move applications to the Cloud?

 

What are the steps to migrate?

If the answer to any of the above questions is yes, then there is a potential opportunity to convert a Standard Profitability model to Management Ledger. In such a case, a prototype to test the concept should be created.  This prototype should be loaded with a sample of data and rules, typically for at least one POV, and calculated and validated.  Though each situation will have unique requirements, the overall steps are as follows:

Prototype Build -> Rules Creation -> Testing -> Validation -> Adjustment -> Migration

General Steps to Migrating to Management Ledger

  1. Migrate the Standard model to the same environment where the Management Ledger test will be built.
  2. Run a calculation of the Standard model to obtain a benchmark performance time.
  3. Create a new cube and database. A new Master application should be created and the dimensionality copied from the existing Standard Profitability Master application (rather than from the calculation cube) to avoid duplicate dimensions.
  4. Copy the dimensions from the old cube to the new one, then make the following outline updates:
    • Change the NoMember dimension member in each dimension to NoDimensionName.
    • Determine the dimension for the Drivers, usually the DataType or Account dimension.
    • Add the drivers from the Measures dimension to the Account or a DataType dimension.
    • Delete Measures and AllocationType dimensions (used with Standard model).
    • Add the Rule and Balance dimensions (used with Management Ledger models).
    • Add UDAs for potential rule filtering requirements.
    • Should both Source and Target allocation details be required for reporting, dimensions may need to be duplicated or split, such as in a case with Initial Cost Pool and Final Cost Pool.
  5. Create a new Management Ledger Profitability application that references the new cube.
  6. Deploy the Management Ledger Essbase Calculation engine.
  7. Choose and create a single POV to start.
  8. Import data from the existing cube to the new one utilizing the various methods available, such as free-form loading without rules, structured loading with rules, spreadsheet add-ins such as SmartView, or other tools such as FDM/FDMEE (an illustrative MaxL sketch follows this list). Note: For PCMCS, flat files of dimensions and data are employed.
  9. Document the allocation rules in a template.
  10. Enter the allocation rules through the ML user interface.
  11. Run Model Validation to check the new Rule Sets and Rules for errors before calculating.
  12. Launch a calculation. Start with running a single rule.
  13. Validate the Results. Progressively select more rules for successive calculation as rules are validated.
  14. Adjust methods iteratively.
  15. Create and update a report to demonstrate the validations to end-users as well as how the results are consumed.
  16. Migrate once validation is complete, including acceptance of both the result values and the processing times.
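For step 8, a minimal MaxL sketch of a rules-file data load is shown below; the application, file, and rules names are placeholders rather than a prescribed convention:

    /* Illustrative only: load one POV of data exported from the Standard model */
    import database 'PCM_ML'.'PCM_ML' data
        from data_file 'pov_fy18_jan.txt'
        using server rules_file 'LoadPOV'
        on error write to 'ml_load.err';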

 

Some thoughts on building allocation rules

With a Management Ledger outline in hand, the allocation rules from Standard should be constructed through the user interface. There is a natural association between the Stages in a Standard model and the Rule Sets in Management Ledger.  As a starting point, the Rule Set sequence flow should match the stages, though it may prove necessary to break some stages into multiple rule sets.


Once the rule sets are determined, the rules themselves should be documented in a template (Excel, Word, etc.) that is easy to manage and understand. The example that follows shows the dimensionality of the Source, Destination, Driver Basis, and Source Offset.

This template becomes part of the documentation of the prototype. Upon completion of the template, a user should build the rule sets and rules in the Management Ledger interface.  One of the key benefits of Management Ledger is the ability to reference parent-level values in the assignment rules.  This provides the ability to create many-to-many source-destination associations with few keystrokes.  This not only saves time in initial set-up, but also makes the entire process data driven, such that when new dimension members such as new accounts, cost centers, products, or customers are added, the allocation rules automatically accommodate them without the need for editing or updating.  The ability to select at the parent level also reduces the need for automation routines of the types frequently created in Standard Profitability implementations, such as those used to update staging tables (Management Ledger does not have staging tables).

Users should start by referencing the highest-level parents to make the process as automated as possible. If performance becomes an issue, it may be necessary to reference mid- or lower-level parents.  Rules should be tested iteratively, i.e. run individually and then in groups, to validate the answers and to track processing time.

If calculation times exceed requirements or expectations, then start moving references to lower level parents. Avoid going to children as that will increase maintenance in the future.

Validation Concepts

Use the Rule Balancing Report to validate the cost flow and confirm that allocations in and out match expectations. Users should also generate a set of SmartView queries from the control HPCM-Standard Model and compare those to a set of SmartView queries from the HPCM-ML prototype.  Input and Stage amounts from HPCM-Standard should compare to Rule Set amounts in HPCM-ML, including checks that rule sets are using drivers correctly.  Calculation time and performance should also be tracked and benchmarked.


Conclusion

The advent of HPCM Management Ledger in both the on-premise and cloud-based versions provides organizations with an opportunity to reconsider their existing solution and whether a migration to Management Ledger is warranted. Multiple considerations must be evaluated in this decision, and a prototype-based assessment is recommended as part of the process.  Edgewater Ranzal provides an Assessment service offering to assist organizations with this evaluation, as well as a subsequent implementation.  With over twenty experienced full-time consultants across the Americas and EMEA, and with more than twenty-five successful HPCM projects delivered since 2009, Edgewater Ranzal is the leading Oracle partner in delivering all versions of HPCM. Its comprehensive multi-product delivery approach can incorporate other tools such as Planning, DRM, FDMEE, & OBIEE.  These qualifications, along with its close relationship with Oracle Development, make Edgewater Ranzal the premier partner for client success.

 

Techniques for Creating, Loading, and Optimizing a Simple Essbase ASO Application

A couple of recent projects have required us to build an Essbase database to provide a subset of upstream system data for downstream consumer systems such as Hyperion Profitability and Cost Management (HPCM).  The process included dimension updates, data loads, and custom calculations. Essbase Aggregate Storage Option (ASO) was the chosen Essbase technology because we were potentially dealing with large data volumes, relatively simple hierarchy structures, and only a small number of custom calculations that could be easily modeled in MDX with minimal performance impact.

The principle was that an overnight batch would be used to completely rebuild the ASO cube each night, including any metadata restructures that were necessary, followed by a full reload of data.

The high-level process is as follows:

The starting point was to use a ‘stub’ application as a template for the metadata rebuild.  This is an ASO Essbase application with all dimension headers present, all POV dimensions present (Years, Periods, Scenarios, etc.), and all volatile hierarchies represented by their hierarchy headers only.  This ASO application serves as a “poor man’s MDM” which allows us to have application, dimension, and hierarchy properties all pre-set.  The main advantage of the stub outline is that it creates a natural defragmentation of the target ASO application, which improves query performance and reduces dimension build times to the minimum. This is analogous to a relational database where you want to ‘truncate’ tables and/or compress them, as opposed to deleting and re-adding rows all the time, which causes gradual growth.  A good tip is to build dimensions in order from smallest to largest in terms of volume.

A sample ‘Stub.otl’ outline looks something like the following. In this case, the stub outline is modeled after the new embedded Fusion G/L Essbase cube:

As can be seen, the volatile dimensions (Budget centre, Balancing Entity, Accounts, etc) are each populated with a single hierarchy header (e.g. BE_dummy) whereas the static dimensions (AccountingPeriod, Balance Amount etc) are complete, and will not be the subject of a dimension load in the MaxL.  Static dimensions which contain members with MDX member formula will persist (although the formula will not necessarily validate at this stage as they may depend on members that have not yet been rebuilt).

The first part of the batch process is to use this Stub outline to replace the outline in the ‘user’ ASO cube (i.e. the cube that will be restructured and loaded with data).  The MaxL will clear data and replace the .otl file in the user application with the .otl file from the ‘stub’ application.

A simplified version of the MaxL is as follows (normally passwords would be encrypted):
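The script itself was captured as a screenshot, but a minimal sketch of the idea is shown below, assuming a Unix server, default paths, and illustrative application/database names:

    login 'admin' 'password' on 'essbase_server';

    /* clear out the prior day's data */
    alter database 'UserASO'.'UserASO' reset data;

    /* stop the application so the outline file can be swapped */
    alter system unload application 'UserASO';

    /* OS-level copy of Stub.otl over the user cube's outline, renamed to the target database name */
    shell "cp $ARBORPATH/app/Stub/Stub/Stub.otl $ARBORPATH/app/UserASO/UserASO/UserASO.otl";

    /* restart the application; the new outline is picked up on load */
    alter system load application 'UserASO';

    logout;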

This simply copies the Stub.otl file into the ‘user’ ASO cube database folder & names it with the target database name – it will be available as soon as the application is reloaded.

The next section in the MaxL would be a standard dimension build of those volatile dimensions. The primary consideration when building the hierarchies is that the ASO restrictions on hierarchies are met; otherwise, the outline will not verify.  This is not covered in detail here – we assume that incoming master data is pre-validated to meet these requirements – but the summary of dimension rules for ASO is as follows:

  • ASO dimensions can contain hierarchies of two types – ‘Stored’ or ‘Dynamic’.
  • A dimension must be tagged as ‘Multiple Hierarchies Enabled’ or ‘Dynamic’ if it contains two or more hierarchies.
  • The first hierarchy in a dimension tagged ‘Multiple Hierarchies Enabled’ must be defined as a ‘Stored’ hierarchy.
  • Stored hierarchies are generally only additive, as they only allow the + or ~ consolidation operators.
  • Dynamic hierarchies can contain any consolidation operators, and their members can contain formulas.
  • For alternate hierarchies, where shared members may be required, a Stored hierarchy can only contain one instance of a member (to avoid double counting), but subsequent Stored hierarchies can contain members previously defined in earlier stored hierarchies.

Once metadata has been loaded, the data load can be carried out.
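Both the dimension build and the data load are standard MaxL imports; a hedged sketch follows, with rules files and extract files named purely for illustration:

    /* rebuild one volatile dimension from the nightly extract */
    import database 'UserASO'.'UserASO' dimensions
        from data_file 'budget_centre.txt'
        using server rules_file 'DimBC'
        on error write to 'dimbuild.err';

    /* full data reload once all volatile dimensions are rebuilt */
    import database 'UserASO'.'UserASO' data
        from data_file 'gl_extract.txt'
        using server rules_file 'LoadGL'
        on error write to 'dataload.err';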

Once this is complete, we have a fully loaded ASO cube, which we can retrieve data against using either SmartView or an Essbase report script (for example, when we are supplying filtered data to our downstream systems).

The example Smart View retrieve template below is a straightforward report with periods as columns and 550 rows of level 0 Budget Centres, with all other dimensions set as filters.

The Essbase application log shows that the above SmartView query took over 16 seconds to execute.  This report layout may or may not be representative of real-world queries/reports, but the object of the exercise here is to speed this up for in-day usage.

ASO databases do not use calculation scripts to consolidate the data, so the traditional BSO approach to consolidation cannot be used.  Instead, ASO dynamically calculates upper-level intersections, which, while resulting in much faster batch processing times, may result in longer than necessary retrieval times.

What we can do to improve this situation is use the ‘Query Tracking’ facility in ASO to capture the nature of queries run against the ASO cube, and build retrieval statistics against it. These statistics can then be used to build aggregation views tailored to retrieval patterns in the business.

This relies on us having some predefined definitions of the kinds of queries that are likely to be run – SmartView report templates, Web Analysis pages & Financial Reports definitions will all be suitable.

In this example, we use the above SmartView template as a basis for creating an Essbase Report script as follows:
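The script itself was shown as a screenshot; a simplified sketch of such a report script appears below. The member and dimension names echo the stub outline and are illustrative, not the real template:

    // Page dimensions mirror the Smart View filters
    <PAGE ("Scenario", "Balance Amount")
    "Actual" "Period Activity"

    // Periods across the columns
    <COLUMN ("AccountingPeriod")
    <CHILDREN "AccountingPeriod"

    // Level-0 Budget Centres down the rows
    <ROW ("Budget Centre")
    <DIMBOTTOM "Budget Centre"
    !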

This report mimics the SmartView template, and we use it during the overnight batch to capture the query characteristics using Query Tracking. One reason to use report scripts is that the query designer (or the Spreadsheet Retrieval Wizard, if you are using a REALLY old version of the Excel Add-In) can save its queries as report script output. MDX queries will have a similar effect.

The sequence of MaxL steps is as follows:

  • Switch on Query Tracking
  • Run one or more Essbase Report Script(s)
  • Run ‘execute aggregate process’ command to create aggregate views

The MaxL to accomplish this is as follows:
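A minimal sketch of that sequence is below; the database name, report name, and size threshold are illustrative and would be tuned per site:

    /* 1. capture query patterns from this point forward */
    alter database 'UserASO'.'UserASO' enable query_tracking;

    /* 2. run the representative report so its retrieval pattern is tracked */
    export database 'UserASO'.'UserASO'
        using server report_file 'QryTrk'
        to data_file 'qrytrk.txt';

    /* 3. build aggregate views from the tracked queries, capping growth at 1.5x the input-level size */
    execute aggregate process on database 'UserASO'.'UserASO'
        stopping when total_size exceeds 1.5
        based on query_data;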

The ‘execute aggregate process’ command is issued with the ‘based on query_data’ option to tell Essbase to use the query patterns picked up by Query Tracking to build the aggregation views.  Essbase will build as many views as necessary until the ‘total_size’ limit is reached.  This limit may need tweaking so as to give the desired improvement in performance whilst also conserving disk space (which may get swallowed up with larger ASO cubes).  This particular example runs in a matter of seconds, but the addition of more sample reports needs to be managed to ensure that the batch run time does not exceed its window.  It should be noted that aggregate views can be processed without query tracking, but there are restrictions on which alternate hierarchies get processed, and query tracking is a very good technique when you are trying to improve performance on “alternate rollups”.

When this has been executed, users should see an improvement on query performance.

Our SmartView query was rerun, and the log file demonstrates the reduction in query time to less than 1 second.

This approach lends itself to situations where the ASO outline is likely to change frequently.  Changes in metadata mean that aggregation views created and saved in EAS cannot necessarily be reused – new level 0 members will not necessarily invalidate the aggregate views, but new upper-level members or restructured hierarchies definitely will.  The rationale is that the ASO aggregation engine constructs multiple “jump” points based on the hierarchy levels – to oversimplify in BSO terms, imagine level zero stored, level one as dynamic calc, level two stored, level three as dynamic calc, and level four stored; in any instance, there would never be more than one level of dynamic calculation between stored levels. I don’t know if this is still the case, but this may be why ASO cubes seem to handle symmetrical hierarchies a bit more easily than ragged ones – it makes the derivation of which intersections should be stored versus calculated dynamically easier.