Oracle’s ARCS Patch 1812 and Patch 1811 Review: Gazing into the Crystal Ball

Peruse the Account Reconciliation Cloud Service (ARCS) forums on Oracle’s Cloud Customer Connect and you’ll notice a theme: Transaction Matching. Questions, comments, and critiques have been flooding in from across companies and industries, clients and consultants alike. Combine this with Oracle’s game-changing announcement of the EPM Cloud price simplification plan teased for 2019 – that is, the strategic move to strictly sell bundled EPM Cloud products in the near future (more on this another time – it’s a doozy) – and the changes released for ARCS in Patches 1811 and 1812 could not have come at a more opportune time. Furthermore, these changes provide a sneak peek into Oracle’s crystal ball of what’s to come.

The WHAT has come: Changes for ARCS at the end of 2018

The most important change to ARCS from the 2018 season finale, Patch 1812, is the shift from having separate reconciliations between Transaction Matching and Reconciliation Compliance to one standardized use of Profiles. This is configured through the new reconciliation methods provided in Formats (Balance Comparison with Transaction Matching, Account Analysis with Transaction Matching, and Transaction Matching only).

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 1

The implication is that Transaction Matching reconciliations receive all the benefits that previously only Reconciliation Compliance enjoyed, including but not limited to: bulk uploads/updates to Profiles and reconciliations, access to new Workflow options such as Reviewers and Teams, and detailed filtering options including the more hidden statistical metrics (such as attributes related to count, etc.). It is important to note, though, that these new features will almost exclusively relate to new reconciliations using one of the two ‘*with Transaction Matching’ format options, as seen below. Still, the opportunity for clever design is there.

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 2

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 3

Furthermore, to support this change, Period will now be shared between the two feature sets. Additionally, reconciliations that are performed in Transaction Matching will now utilize their period-end Balances loaded to Reconciliation Compliance. While historically there have been business processes put in place to ensure that the balance loaded to Transaction Matching equaled the balance loaded for the month-end reconciliation in Reconciliation Compliance, Patch 1812 ensures that a system process governs the data’s integrity – certainly a more reassuring thought.

Two additional under-the-radar features introduced in Patch 1812 are (1) the ability to have Workflow that includes multiple members without requiring an order of precedence for the work and (2) the option for end-users to approve their own re-assignments, reducing the administrative bottleneck. These changes provide value-add functionality that demonstrates Oracle’s willingness to listen to customer feedback even during these more “stuffed” patches.

The last item to mention was actually included in Patch 1811. In Transaction Matching, a text file can now be generated with the transactions or adjustments from the tool, which can then be uploaded to the ERP source systems as a journal adjustment. This has been an ongoing request, and I am happy to see it finally actualized.

The WHAT does it mean: Implications and Expectations for ARCS in 2019

Transaction Matching’s relative strength compared to its competitors is becoming increasingly apparent, as Oracle continues to shore up areas in need of support while also providing updates that show a sensitivity to market demand. The move to unify Transaction Matching and Reconciliation Compliance is not a new idea, as Patch 1805 made apparent with the uniting of the two UIs (and much more – see Oracle Product Management’s webinar update here), but it is nonetheless a bold one that I anticipate will pay dividends. The automatic conversion of Transaction Matching reconciliations to Profiles is a nice touch too, making the transition an easier pill to swallow for skeptical clients who I am sure were not eager to pay expensive consulting fees for this. Even smaller changes, such as providing a space for strictly manual matching (i.e., without Auto Match rules – a Patch 1811 change), demonstrate ARCS’ commitment to being an approachable and modular product that grows with your company – a benefit I have consistently touted in the past and expect to continue touting in the future. More details about the benefits of ARCS are shared in the posts A Safe Step into the Cloud: The Argument for Account Reconciliation Cloud Service (ARCS) and Modularity in Account Reconciliation Cloud Service (ARCS): No Mistakes from “Day 1” to “Day 100”.

Changes continue to come to ARCS that trickle down only slowly, if at all, to Account Reconciliation Manager (ARM). This was true for the Variance Analysis reconciliation method, which arrived in May 2017 for ARCS but not until December 2017 for ARM, and it is a fair guess that this will also be true for the aforementioned “All Preparers” and “All Reviewers” workflow options and the end-user re-assignment configuration setting. Combine this with more and more dollars being invested in Transaction Matching compared to Reconciliation Compliance (from where I’m looking, anyway), and the message is clear on who the favorite is in the Oracle product family. While ARM contains strong functionality as an on-premise option, expect the functionality gap with its Cloud counterpart to widen.

Lastly, the inclusion of a journal adjustment export out of Transaction Matching is a combo solution: a “we can do that too” answer to product competitor Blackline’s existing functionality as well as a demonstration of Oracle’s willingness to think outside of the product. This highlights ARCS’ flexibility as a tool capable of being used within other processes. In fact, the Oracle EPM Cloud ecosystem is one of ARCS’ biggest strengths over its competitors. I would love to see this journaling ability out of Reconciliation Compliance as well, which would extend the functionality to most ARCS clients. Regardless, this is a step in the right direction.

This post has been cross-posted on the #DataRestless blog site – read it here and other Oracle-related posts as well.

Implementing Zero-Based Budgeting: The Requirements

A Culture Change and a Centralized System

The first post in this 3-post series – Implementing Zero-Based Budgeting: Benefits, Myths, and Goals – covers the benefits of zero-based budgeting. To summarize, it enables you to achieve long-term savings that result in sustainable growth, and it holds your financial analysts accountable for the cost figures they approve and how they manage the overall budget. This allows more effective recognition of any unwanted costs and of how that money can be shifted into other growth areas within the company.

However, to reap the benefits of a zero-based budgeting program, a culture change is needed first at certain levels within the company. The goal is to eventually have the entire company complete this culture shift, but it is best to start small. Along with a change in culture, a centralized reporting system needs to be created as well to provide teams the ability to share real-time numbers with each other to achieve the goals of this new budgeting program.

Better Than a Quick Fix

What exactly is meant by a culture change? It means starting small, beginning with Finance, and then fostering this culture change in other departments. To be successful with this new program, other departments will eventually have to jump on board with this new budgeting approach. These departments will need to step up in analyzing their own costs and determining how they can save more without diminishing their capabilities.

For example, while financial analysts talk to the shop floor to see where costs can be reduced, the HR department should work with Finance to determine how it can become leaner. Moreover, the IT department should take the lead on negotiating with its vendors to find any areas where money can be saved. These are just a few examples of how different departments can step up to the plate; implementing a successful zero-based budgeting program requires a team effort.

Changing the culture doesn’t happen overnight. Senior leaders should take the lead in fostering this change. To ensure that everyone is on the same page, managers need to advocate the new approach within their respective departments.

Incentives also help teams buy into this new budgeting approach. Although incentives for growth metrics may already exist, additional incentives can effectively encourage staff to find ways to reduce costs for the metrics they manage.

Some examples of incentive metrics are the realized ROI based on the requested capital expenditure and the total cost saving dollars resulting from a zero-based budgeting program. For the former, this can mean moving to the Cloud to save money or reducing redundant tasks by introducing centralized software. For the latter, it can be exemplified by achieving a 10% cost reduction per phone.

Best Practice to Achieve Success

A crucial component of the success of a zero-based budgeting program is an officer who governs the entire process from start to finish. This individual (or team) should have deep knowledge of the budgeting process. Naturally, s/he will not know the ins and outs of each department, which is why s/he needs to be an ambassador to department leaders. The officer will also provide oversight to ensure that past bad budgeting habits do not return to plague this new program. And lastly, s/he must be dedicated to the craft of continuous improvement, which means seeking outside counsel when needed.

As mentioned earlier in the post, a culture change needs to be accompanied by a centralized reporting system. Alithya has helped clients implement Oracle Planning and Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS) and overcome the deficiencies of Excel-based models. These models lose sight of what the true cost numbers are because past budgets are simple anchors of history rather than detailed breakdowns of cost. Moreover, these numbers become siloed within the vast library of Excel models. With Oracle PBCS or EPBCS, budgets can be highly surgical and help leaders in the company pinpoint reductions.

A centralized system allows the capture of all changes in a single location in real-time, and it provides insight into how effectively managers seek cost savings. This can be used as a key indicator to determine if their actions are in line with this new methodology.

Furthermore, centralization not only holds managers more accountable, but it also empowers them to create innovative cost-saving solutions. Driven by incentives, staff will burn with a clear purpose to find new ways to achieve sustainable growth for the company and be rewarded for hard work.

Recapping What It Takes to Achieve ZBB Success

The goal is to create a cost savings culture that allows more capital to be invested into growing parts of the company. To be successful, follow the best practices outlined here, starting with a culture change within the company and giving your teams a centralized PBCS or EPBCS system to more clearly see all data points. The hard work does not stop here, though! The next post delves into setting up a zero-based budgeting system.

Implementing Zero-Based Budgeting: Benefits, Myths, and Goals

If you are in the finance world, then you probably have heard of zero-based budgeting. Investopedia defines zero-based budgeting as “a method of budgeting in which all expenses must be justified for each new period. The process…starts from a ‘zero base,’ and every function within an organization is analyzed for its needs and costs.”

There are many reasons that financial professionals decide to use zero-based budgeting. For one thing, it goes hand-in-hand with a centralized system where information can be shared – something at which Excel spreadsheets are terrible. Furthermore, developing a centralized system enables you to scale to your needs as your company grows. Lastly, it enables financial analysts to spend more of their work week analyzing data instead of curating a financial system and worrying if the numbers match.

At Alithya, we have found with our past clients that a successful zero-based budgeting implementation resolves numerous problems. The two main things clients hope to achieve are growth across multiple business units and sustained cost reduction. With zero-based budgeting, you can earn long-term savings that translate directly into sustainable growth.

Earning Long-Term Cost Savings

Zero-based budgeting becomes a daily exercise in cost savings for your financial teams. One method of achieving cost savings is renegotiating costs. For example, instead of taking the run-rate of 3% from last year’s numbers, perhaps you can contact your vendors to bargain for a better deal or switch to a different vendor with a more competitive price. Or how about having your analysts ask the IT department why it costs $38.03 per phone? What makes up that entire $38.03? Don’t assume that there aren’t any negotiable components of a cost.

The reason zero-based budgeting is so effective at long-term savings is that it is not a one-off fix. Many teams tend to implement one-off fixes, and then find that those fixes do not provide sustainable cost savings. A common example is offshoring your call center which might get you an immediate win in the cost column. However, this strategy typically reduces customer service quality while also limiting your ability to evolve with your business as it grows.

When enacting this type of program, you will analyze the costs of your business at every level. This may seem tedious, but what you will find is a clearer understanding of where your money is going. This can mean acquiring a greater understanding of contract labor costs as well as improving purchasing and procurement procedures, just to name a few. Moreover, “when properly implemented, zero-based budgeting can reduce SG&A costs by 10 to 25 percent, often within as little as six months,” according to McKinsey & Company.

Debunking Myths Surrounding Zero-Based Budgeting

There are many myths surrounding zero-based budgeting that have sadly created an artificial barrier that CFOs and their teams do not want to cross. Many financial professionals think that it means cutting the budget down to the bare bones when, in reality, a zero-based budgeting program analyzes costs from the top down. Moreover, it is the CFOs’ duty to outline cost-cut targets so that their teams’ efforts are focused.

Another misconception is that zero-based budgeting only helps with cutting SG&A costs. Actually, it can do much more, such as breaking down the Cost of Goods Sold (COGS) and helping teams make investment choices on the capital expenditures with the greatest ROI.

Just because your business is not in decline or stagnating doesn’t mean that you can’t adopt a zero-based budgeting program. If you are already achieving growth, you can use this type of budgeting method to keep the overall business leaner so that you can provide more runway for growing business units.

Do you really start from zero? This is a common question that we are asked, and many people assume, because of the name, that you always start from zero. Technically, this is true, but it is also the core component that drives the cost management culture change that will be introduced in the next post in this series.

However, not all things have to start from zero. At Alithya, we have been through many implementations where parts of the P&L are driver-based or zero-based. This can be achieved with a detailed, structured, and interactive system (like Oracle PBCS/EPBCS) that gives you real-time feedback.

How Do Oracle PBCS and EPBCS Help Achieve ZBB Goals?

The main feature you acquire when you implement an Oracle PBCS or EPBCS system with your zero-based budgeting program is deeper analytics. This data enables you to dig into the “why and how” of your P&L.

For example, you could pose the question: What driver did they use? Did they simply take last year’s actuals and add 3%? Did they take a cost-per-head and budget it manually, or did they take the easy way out? All are important questions that force finance teams to be more accountable when it comes to everyday decisions.

Recapping the Benefits of ZBB

By implementing a zero-based budgeting program with a centralized system, you can hold your analysts more accountable for cost figures while making them own up to how the costs are managed. It allows you to recognize any unwanted costs that can be diverted into certain growth areas as well as breed a culture of cost reduction and visibility. The latter requires that you start a culture change within your team. It is an essential part of having success with a zero-based budgeting program, which is why we will cover it in greater detail in the next post.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing

Oracle’s Profitability and Cost Management Cloud Service (PCMCS) provides a powerful service for allocating General Ledger profits and costs.  Recently, we worked with a banking industry client to provide a model that calculates profitability at a Product/Channel level while maintaining Account level detail.  We accomplished this through a framework we refer to as Micro-Costing where detailed profits and costs are calculated in a database using rates developed at the summary level in PCMCS.  Alithya began development of this framework in 2016 to meet a functional gap in PCMCS and provide a common framework that can be used either on-premise or in the Cloud.

To highlight the capabilities of Micro-Costing, I will use the solution deployed at our banking client as a specific example.  The following table describes the two layers where profits and costs are provided:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 1

 Definitions:

  • Product – a loan or deposit offering. Examples of a loan are an auto loan or credit card; examples of a deposit are a savings account or a checking account.
  • Origination Channel – where the account was originated.
  • Service Channel – where the financial or transactional cost or profit occurs or is assigned.
  • Customer – a legal entity responsible for accounts; for example, a person with both a home loan and a savings account.
  • Customer Account – a product that is assigned to a customer.
  • Financial Costs and Profits – the cost or profit of servicing a loan or deposit for a customer; for example, interest paid on a savings account.
  • Transactional Costs and Profits – the cost or profit of interacting with a customer; for example, the cost of an ATM transaction.

A simple way of thinking about the client’s business model:

  • Origination channels offer Products
  • Products are assigned to Customers as Customer Accounts
  • Customer Accounts are used by Customers through Service Channels

The generation of an Account level profit or cost is a C = A*B calculation where

  • A is the driver
  • B is the rate of a driven value
  • C is the driven value (profit or cost)

An example is:

ATM Expense = ATM Transaction Count * ATM Expense Rate
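
Expressed as a minimal Python sketch, the same calculation is just a lookup and a multiplication. The driver code, rate, and transaction count below are illustrative assumptions, not client values:

```python
# C = A * B at the Customer Account level.
# B: rates developed at the summary (Product/Channel) level in PCMCS,
# keyed by driver code after they are downloaded.
rates = {"ATM_TXN_COUNT": 1.25}  # illustrative cost per ATM transaction


def driven_value(driver_code: str, driver_amount: float) -> float:
    """Return C, the profit or cost for one driver on one Customer Account."""
    return driver_amount * rates[driver_code]


# A: a Customer Account generated 12 ATM transactions this period.
print(driven_value("ATM_TXN_COUNT", 12))  # 15.0
```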

Micro-Costing Diagrams

Data Model

This summarizes the data model deployed.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 2

STAGING – Contains transient data.

OPERATIONAL DATA STORE (ODS) – Persists the operational data with minimal transformation. Dimensional integrity is not enforced, but validation jobs are available to check stored data for rule compliance and dimensional integrity.

WAREHOUSE-STAR – Persists the drivers, the rates, and the calculated profits and costs at the Customer Account level.  The Driver Lookup and Driven Value Lookup functions are used to define the drivers and driven values so that the addition of a driver or driven value is a configuration activity for an administrator rather than a coding activity.

Data Integration

A high-level summary of the data flows as deployed:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 3

The source data is broken down into 3 types:

  1. General Ledger
  2. Operational Data
  3. Metadata

Data Integration uses interim flat files to maintain flexibility regarding the source data, establishing an API via the flat files without requiring knowledge of the source systems. This also allows for the introduction of source data from 3rd parties that is not available for automated extraction.

The operational data includes both Customer Account financial information and transactional activities or fees.  Product and Channel references are provided along with this information:

  • 1 million+ Customer Accounts
  • Approximately 6 million transactions per month

Some transactional drivers represent an activity that cannot be associated with a specific Customer Account; for example, a new loan application.  Proxy Customer Accounts for each product are generated to provide a place for these activities.

Additionally, although not graphically displayed in the above diagram, Branch level drivers are directly fed into the PCM Model, examples of which are Branch square footage and number of branch employees.  These drivers were used for non-Customer Account PCM costs and profits.

All batch processing is built using SQL Server Integration Services, based upon an agreement with the client regarding the preferred tool sets; the selected database is SQL Server. The framework is transferable to other integration tools and databases, including the Hadoop framework, and Alithya has performed in-house solutioning in preparation for use of the Micro-Costing framework with larger clients.

The data integration is as follows:

  1. Set POV
  2. Update metadata and stage
  3. Stage financial and transactional information
  4. Validate staged data and reprocess as necessary
  5. Load staged data to ODS and then to Star
  6. Upload PCMCS with GL and drivers
  7. Process allocations in PCMCS
  8. Download rates
  9. Run A*B calculations for each Customer Account and populate the profit and cost table (sketched below)
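
As a rough illustration of step 9 (a sketch only; the table and column names are hypothetical), the rates downloaded from PCMCS are joined to the Customer Account level drivers and multiplied row by row:

```python
# Hedged sketch of step 9: join downloaded PCMCS rates to Customer
# Account drivers and compute C = A * B for every row.
import pandas as pd

# A: drivers captured per Customer Account in the warehouse star.
drivers = pd.DataFrame({
    "account_id":   ["ACCT-001", "ACCT-002"],
    "driver_code":  ["ATM_TXN_COUNT", "ATM_TXN_COUNT"],
    "driver_value": [12.0, 30.0],
})

# B: rates downloaded from PCMCS after the allocations are processed.
rates = pd.DataFrame({
    "driver_code": ["ATM_TXN_COUNT"],
    "rate":        [1.25],
})

# C: driven values that populate the profit and cost table.
facts = drivers.merge(rates, on="driver_code", how="left")
facts["driven_value"] = facts["driver_value"] * facts["rate"]
print(facts[["account_id", "driver_code", "driven_value"]])
```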

Key Design Principles

The following design principles were focused on during development of the Micro-Costing framework.  These principles facilitate an easy-to-use and easy-to-maintain solution as deployed for our client.

  • Dimensional synchronization between the Micro-Costing warehouse and PCM
  • Validation checks as close to the original data as possible
  • Configurable drivers and driven values

Dimensional Synchronization

All dimensional mapping must occur prior to the warehouse star schema; it is not possible to perform the Micro-Costing A*B calculations to derive profit and cost detail otherwise. This has an impact on any deployment that uses FDMEE or Cloud Data Manager, as they cannot perform additional mappings during upload to the cube.

Dimensional Synchronization includes a Point of View: Year, Period, Scenario, and Version to allow for loading multiple sets of drivers during a month, and for transfer of ‘what-if’ rates back to the Customer Account level, if desired.

Validation Checks

Validation kick-outs and checks occur as early in the data integration process as possible, with a “simple” validation during staging and a “complex” validation during generation of the fact information in the warehouse. This allows the administrator to catch quality issues with minimal impact on overall process duration. The data integration process is broken into a series of steps that allows for validation review and then re-running a step before moving on to the next one. This principle held up in deployment, ensuring that time wasn’t wasted running later processes with invalid data; the result was an improved overall process and a significant reduction in the number of days required to produce profit and cost analysis for a given month. A lesson learned during the initial roll-out was that our client had not previously required rigorous validation of the drivers at the Customer Account level and had to develop new techniques for validating the source information to ensure accuracy.
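
A “simple” staging validation can be as plain as checking each staged row against the known dimension members before anything is loaded downstream. The member lists and row layout below are assumptions for illustration:

```python
# Hedged sketch of a "simple" staging validation: kick out rows whose
# dimension members are unknown before later, more expensive steps run.
valid_accounts = {"ACCT-001", "ACCT-002"}           # from dimension tables
valid_drivers  = {"ATM_TXN_COUNT", "NEW_LOAN_APP"}  # from the Driver Lookup

staged_rows = [
    {"account_id": "ACCT-001", "driver_code": "ATM_TXN_COUNT", "value": 12.0},
    {"account_id": "ACCT-999", "driver_code": "ATM_TXN_COUNT", "value": 7.0},
]

kickouts = [
    row for row in staged_rows
    if row["account_id"] not in valid_accounts
    or row["driver_code"] not in valid_drivers
]

if kickouts:
    # The administrator reviews the kick-outs and re-runs this step
    # before loading to the ODS and the warehouse star.
    print(f"{len(kickouts)} staged row(s) failed validation:", kickouts)
```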

Configurable Driver and Driven Values

A key feature of Oracle’s PCM applications is configurability, and the Micro-Costing framework is built to provide an easy-to-maintain solution that allows for rapid addition of drivers and driven values without the administrator having to manually update the tables and views required to manage the transformation and persistence of data.  This was accomplished by defining the drivers and driven values in tables and providing stored procedures for maintaining the tables and views.

The process for adding a new driver and driven value is very straightforward (steps 3 and 4 are sketched after the list):

  1. Backup the database and the PCM cube.
  2. Update the source feeds to include the new activity or fee.
  3. Update the Activity-to-Driver Lookup and Driven Value Lookup tables with the new values.  *Note: The driven value record references the driver for the A*B calculation.
  4. Execute the “Update Costing Tables and Views” stored procedure. *Note: removing a driver or driven value does not modify the tables.
  5. Update HPCM Account dimension for the new driver and driven value.
  6. Update HPCM rules to use the new driver and allocate expenses to the new driven value, and calculate the rate for the new driven value.
  7. Run the entire data integration process for the POV, and review results.
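
For illustration, steps 3 and 4 might look like the following when scripted from Python against SQL Server. Every object name here (the DSN, tables, columns, and stored procedure) is a hypothetical stand-in for the deployed objects, not the actual schema:

```python
# Hedged sketch of steps 3 and 4; all object names are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=MicroCosting")  # illustrative connection only
cur = conn.cursor()

# Step 3: register the new driver and its driven value. The driven value
# record references the driver used in the A*B calculation.
cur.execute(
    "INSERT INTO ActivityDriverLookup (ActivityCode, DriverCode) VALUES (?, ?)",
    ("ACT_WIRE_FEE", "WIRE_FEE_COUNT"),
)
cur.execute(
    "INSERT INTO DrivenValueLookup (DrivenValueCode, DriverCode) VALUES (?, ?)",
    ("WIRE_FEE_EXPENSE", "WIRE_FEE_COUNT"),
)

# Step 4: regenerate the costing tables and views from the lookup
# definitions, so the addition stays a configuration activity.
cur.execute("EXEC UpdateCostingTablesAndViews")
conn.commit()
```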

Key Benefits

The successful deployment of the solution provides the following key business benefits:

  • An improved ability to provide Product/Channel level costs and profits.
  • Reduced monthly cycle time and effort. The prior data integration process was disjointed and required a large amount of effort to produce results.
  • Drill-through capability to Customer Account level drivers, profits, and costs allows for root cause analysis of Channel and Product Costs.
  • Aggregation along other dimensional paths. Starting at the Customer Account level allows for aggregation along Customer attributes such as zip-code or credit score, providing new insights and enhanced executive decision making.  A follow-on project to use the Customer Account level data in OAC is currently being assessed.

Additionally, the following benefits to the administrative team are realized:

  • Model flexibility. The configuration of an additional driver and driven value in Micro-Costing takes fewer than 15 minutes.
  • Operational Data Store (ODS) and Warehouse. This allows for future projects to use a common curated source of information.  This was a pot sweetener for our client who was dissatisfied with its prior warehouse, but needed a business reason to refresh.  The prior warehouse lacked the following items that were addressed in the new ODS and warehouse:
    • Explicit mappings such as Activity Code to Driver Code that are controlled by the business
    • 3rd party data from partners and industry sources
    • Consolidation of financial and transactional information into Customer Account level facts
    • Hashing of Personally Identifiable Information (PII) for account security
  • Easy troubleshooting, validation, and auditing capabilities with PCM. Errors or mismatches in profit or cost at the Product/Channel level can be reduced to either rule definition mistakes or driver data entry mistakes. Finding out where the issue is and correcting it with a few clicks has a positive impact on the overall analysis and maintenance effort.

Final Thoughts

Alithya has developed a Micro-Costing framework that allows an integrated view of profits and costs at both a summary and detailed level.  This framework is successfully deployed at a banking industry client to provide a superior solution.

The framework is deployable either on-premise or in the Cloud and is available for other industries such as:

  • Patient encounters in Healthcare
  • Claims in Insurance
  • SKUs in Retail
  • Subcomponents in Manufacturing

…or anywhere the allocations occur at a summary level with drivers aggregated from a detail level.

 

Demystify the Balance Dimension in Profitability and Cost Management

Management Ledger models, whether Hyperion Profitability and Cost Management (HPCM) or Profitability and Cost Management Cloud Service (PCMCS), have been around for a few years, but I still receive emails asking for help with figuring out where the results are coming from. This request is often related to a lack of understanding of the Balance dimension. Here are some key pieces of information regarding this system dimension, how it works, how it should be used when defining allocations and integration jobs, and how to leverage it to troubleshoot your allocations.

Before we have a look at each member within this dimension, let’s go over some basic rules that govern the creation of an HPCM or PCMCS Management Ledger (ML) application:

  1. All HPCM or PCMCS ML applications must contain just one dimension named Balance.
  2. Members and their properties cannot be edited or removed.
  3. You don’t need to import a file in order to load/setup the Balance dimension; members are created automatically when deploying an application for the first time.
  4. You can choose to rename the Balance dimension (translate it into another language, for example) when you first set up the application in PCMCS.

For the most part, the Balance dimension members are quite easy to follow and understand, but familiarity with usage guidelines helps to avoid issues during development and supports troubleshooting.

Demystifying the Balance Dimension in PCM - Image 1

  • Input — Used to store data input/pre-allocated data sets, whether these are pool or driver data sets. Data is generally loaded against this member in combination with the NoRule member. Input can be populated through custom calculations, but it is generally advised to keep it dedicated to valid data loads/input rather than to storing calculated or allocated results.
  • Adjustment In — Can be used for manual adjustments to the Input data prior to running allocations; in this case, the Adjustment In data will be loaded against the NoRule member. Any manually submitted data on the Adjustment member against a Rule ID member may be eliminated during subsequent data loads and calculations. Adjustment In can also be used during custom calculations to store intermediary values or calculated driver data.
  • Adjustment Out — Same usage as Adjustment In, but with a negative data value.
  • Allocation In — Populated against the Destination or Target intersection of the allocation rule.
  • Allocation Out — Populated against the Source intersection of the allocation rule and the corresponding Rule ID member, or against a predefined “Offset” intersection that is custom defined for a given rule.
  • Allocation Offset Amount — Displays an amount that further reduces an Allocation In member, if one was used in addition to the Allocation Out. An example of how this member is populated and used appears later in this post.
  • Net Change — Represents the total change for a given intersection, regardless of alternate offset actions.
  • Net Balance — The sum of Input (the initial data loaded) and any Net Change made to the same intersection.
  • Remainder — Displays the difference between Allocation In and Allocation Out plus Allocation Offset Amount, if any.
  • Balance — The amount resulting when adjustments, allocations, and offsets are considered.

Rules assign funds to destinations based on the way you have defined the allocation logic (member selections, sequencing, concurrency, etc.). Allocation In and Allocation Out values are generated when the calculations of the Profitability model are executed. Each pair of adjustments and allocations (the “in” and the “out”) should result in a zero sum in order to balance the transaction. The Input member is affected by each adjustment and allocation. The difference between what was taken from Input and what remains at the end of an allocation is accounted for in the Remainder.

The Remainder member is the source of your allocations, not the Net Balance member, as most would think.  Remainder takes into consideration alternate offsets and ensures we do not perform a double booking or a double allocation of the same data source, regardless of where the offset was applied.

To further explain the Balance dimension usage, I have used an example from the Bikes default application BksML30, which can be deployed into PCMCS through a few clicks.

The original application had only one adjustment rule populating the Adjustment In member. I have copied that rule and reused it to demonstrate the same usage for the Adjustment Out member. Remember that the Adjustment Out aggregation operator is still +, so if you want to offset data sets, you must use the appropriate sign for your data; in other words, negate the result either via multiplication by -1 or by simply adding a minus sign to the formula.

The new ruleset contents will look like this:

Demystifying the Balance Dimension in PCM - Image 2

Our initial data set is loaded on the Input/No Rule combination for the two accounts – Rent and Utilities – on the intersection with Corporate Entity.

The data adjustments are stored against Adjustment In and Adjustment Out.

Demystifying the Balance Dimension in PCM - Image 3

In order to further illustrate how to correctly follow the allocation process, I split the original Reassignment rule into 2 rules, each dedicated to its own account. I also updated the metadata by adding two new Account siblings to Rent and Utilities as offsets for each account.

Alternate offsets are simply intersections of members where you would like to store the offset data point, if it should differ from the source of the allocation.

The Remainder member demonstration is connected to the usage of alternate offsets, and before we go into the details of the numerical example, I would like to list out a few rules for setting up alternate offsets:

  • Alternate offsets are available for selection only in standard allocation rules. For Custom calculations, your Offset custom calc would have to be pointed to the appropriate “alternate” target.
  • All dimensions, including the ones predefined in the rule context, are repeated in the Offset screen as soon as you select “Alternate Offset Location.” You must select a single base level member for at least one dimension.
  • There is no “Same As Source” (SAS) option for offsets. The dimensions that must be offset on the Source intersections can be left blank in the Offset screen selections.
  • If each source member selection has its own offset, you will have to split the rule up into as many granular rules as needed in order to cover the individual offset selection. For example, if you have 6 accounts, each with its own offset account equivalent, you will have to create 6 standard allocation rules to create the individual offset selection for each account.

Going back to the numerical example and the usage of the Offset tab, in the updated rule I have selected the below member intersections:

Demystifying the Balance Dimension in PCM - Image 4

The Source account was Rent, target is “Same as Source” (SAS), and the alternate offset account is FACOffset_Rent.

After the rules are executed, we will see the results below; focus on the Allocation Offset Amount member and the Allocation Out member.

Even though the offset was applied to an alternate account for both Rent and Utilities, the allocation engine correctly identifies the Remainder of these two accounts as being 0.

  1. The first step behind the scenes is for the allocation to correctly distribute the data to the target intersections.
  2. The second step is to perform the offset on the intersection specified by the user, if different from the source intersection.
  3. The third step is to copy the Allocation Out value onto the Source Intersection members, on the Allocation Offset Amount member. This final step is performed via a custom calculation embedded in the PCMCS-generated scripts, which ensures there will be no double counting of pool data (see the sketch after this list).
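
To make these three steps concrete, here is a small simulation with made-up numbers – 600 of Rent at Corporate allocated evenly to two entities – showing how the “in” and “out” legs net to zero. It sketches the mechanics only, not the actual PCMCS-generated script:

```python
# Hedged simulation of the three steps above with illustrative numbers.
pool = 600.0
targets = {"Entity A": 0.5, "Entity B": 0.5}  # made-up driver percentages

cells = []  # (account, entity, balance_member, amount)

# Step 1: distribute the data to the target intersections (Allocation In).
for entity, pct in targets.items():
    cells.append(("Rent", entity, "Allocation In", pool * pct))

# Step 2: offset on the user-specified alternate intersection.
cells.append(("FACOffset_Rent", "Corporate", "Allocation Out", -pool))

# Step 3: copy the Allocation Out value onto the source intersection's
# Allocation Offset Amount member so the pool is not double counted.
cells.append(("Rent", "Corporate", "Allocation Offset Amount", -pool))

# The "in" and "out" legs net to zero, balancing the transaction.
in_out = sum(amount for (_, _, member, amount) in cells
             if member in ("Allocation In", "Allocation Out"))
assert in_out == 0.0
```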

So even though we “moved” data from the Rent account, Corporate Entity, to other Entities, on the same target Account, the offset was performed on an alternate member. This allows us to create a report with Rent (Input), Rent (Allocation In), and FACOffset_Rent (Allocation Out).

This is not a typical example of how alternate offsets are used from a functional standpoint, but it helps explain the mechanics behind the scenes. This alternate offset option is mostly used in cases where a Bill Out account and a Chargeback account will differ and allows users to trace which portion of a chargeback account is coming from different source accounts.

The final goal of an allocation is to generate a Remainder member with a value of 0. This ensures the total allocation of a pool data set, whether it was loaded or received from prior allocation steps. If the Remainder member has a positive value, you have not fully utilized your pool data. If the Remainder member has a negative value, you have overutilized your pool data, which may in some cases be intentional.

Demystifying the Balance Dimension in PCM - Image 5

In situations where, due to licensing costs or other considerations, you will not give access to the PCMCS ML application to users who need to understand the various components of a data point flowing through the allocation steps, the usage of alternate offsets throughout your allocation flow might be helpful.

When talking about reporting out of PCMCS ML, our clients always emphasize simplicity, and we often get requests to remove the Rule and Balance dimensions from final reporting solutions to cut the noise and give finance users solely the core information. In such situations, the usage of alternate offsets has proved beneficial, as these finance users can still follow the flow and components of a cost without having to deal with rule-by-rule detail. If further investigation is necessary, it can be pursued within the PCMCS ML model itself rather than in the external reporting solution.

If you need further help with figuring out the purpose and usage of the Balance dimension within PCMCS, email us at infosolutions@alithya.com. Our PCM Center of Excellence team is ready to share leading practices and industry-specific solutions that accelerate your ROI and expand the capabilities of your chosen profitability software.

Worry No More! Say Goodbye to Pain and Frustration when Submitting Service or Enhancement Requests with Oracle for PCMCS

While nobody likes submitting Service Requests (SR) on the Oracle support site, this is a necessary task that we must get comfortable with, whether our applications are on-premise or in the Cloud. After 12 years of consulting, I can say that I have seen or pursued many wrong ways of submitting an SR, which, in turn, yield results along similar lines: a lot of back-and-forth emailing with Oracle’s support staff, personal frustration, misinformation, and, most importantly, time wasted on all sides.

Worry no more!  Here is a list of things you can do to avoid further pain and frustration when submitting Service Requests or Enhancement Requests with Oracle for Profitability and Cost Management Cloud Service (PCMCS).

  1. Where do I start when submitting SRs and ERs for PCMCS?

You can still use the generic Oracle Support website to open either an SR or an Enhancement Request (ER) with Oracle for Cloud applications, but the right way to do this is to first gain access to the Oracle Cloud Support website which looks slightly different and has a couple of new fields to complete. The email associated with the Oracle account should be the same email that has access to specific Cloud subscriptions.

Standard Oracle Support website

PCMCS Image 1

Cloud Support website

PCMCS Image 2

  2. Provide feedback

Log in to the Cloud application for which you want to create the SR or ER, and once you are logged in to PCMCS, navigate to your user name (top right) and select “Provide Feedback.” A new screen will appear enabling you to highlight the area of concern to provide context for the reason you are submitting the SR or ER.

Provide details around the area of concern. This gives context to the issue at hand and creates a reference for future troubleshooting. For example, if the issue is related to one specific Rule, ensure that the last screen open before you click on Provide Feedback is on the rule itself, or open to the job library listing the execution of the rule. You will only be able to highlight areas on the last screen open before launching the “Provide Feedback” screen.  The details you provide here will not automatically be copied into your SR. If you want to describe the issue in detail within this section, you can copy the same text within the SR itself – save it locally before submitting the feedback.

  3. Options for your feedback.

After you submit your feedback, a new panel will come up and will contain the following 3 sections:

  1. Environment: a listing of your Browser, Platform, Version, Locale, Resolution, Time zone, Cookies enabled (Y/N), URL of the instance, and the User Agent. You do not have to fill in anything in this section. All information is filled in for you.
  2. Plugins: a listing of enabled plugins, if any. You do not have to fill in anything in this section. All information is filled in for you.
  3. Confirm Application snapshot submission: this is the only section where you must provide input.

PCMCS Image 5

You have a choice of Yes / No – depending on how comfortable you feel about Oracle using your daily maintenance snapshot for regression testing in upcoming releases. Giving Oracle access to your maintenance snapshots means you are agreeing to them using the model and any related data for their testing going forward. If your hierarchy structures and data are not sensitive, then you may choose to select “Yes.” My personal preference is to select “No” and provide the static, current moment-in-time archived snapshot within the SR. When the SR is closed, the contents of said snapshot will be archived and not used for further regression testing.

  4. Generate a Diagnostic Report (UDR) ID

When clicking the “Submit” button on this screen, a unique alphanumeric reference is generated. This reference will be required when submitting an SR or ER on the Oracle Cloud Support website. Write down or, preferably, copy and save this UDR string of characters on your workstation in a txt file.

  5. Log in to the Oracle Cloud Support website and proceed with opening a new Cloud SR/ER.

Select the “Create Service Request” button on the lower left-hand side of your screen.

PCMCS Image 6

Select “Service Type” from the drop-down list of available Cloud services to which your user has access.

PCMCS Image 7

Once you have selected “Oracle Hyperion Profitability and Cost Management Cloud Service,” a listing of all available instances will be displayed in the new “Service Name” section:

PCMCS Image 8

Make sure you select the appropriate “Service Name” with the instance where you generated the related UDR (see previous steps).

Add “Problem Type” and select based on the type closest to your issue:

PCMCS Image 9

The above choices will not trigger related content or a list of options – this is merely to ensure that the ticket goes to the appropriate team during the investigation process.

In the “Problem Summary” section, reference the Cloud product for which you are creating the SR or ER. This will be the subject of your ticket, and it will help you administer and keep track of multiple tickets at the same time.

  6. Attach all System Reports available for your PCMCS app.

To avoid multiple back and forth email exchanges with the Oracle Support staff, provide them with all the available information. Here is a current list of all available reports for troubleshooting PCMCS applications.

  1. Execution statistics for the last model/allocation execution connected to the SR – if the SR is related to calc performance, calc troubleshooting, or rule setup (PDF or XLS format preferable)
  2. Program Documentation (with details; not with aliases) (XLS or PDF format preferable)
  3. Dimension Statistics (PDF format preferable)
  4. POV Statistics (PDF or XLS format preferable)

All these reports can be generated from PCMCS – Navigator menu – System reports.

PCMCS Image 10

  7. Attach the Diagnostic report

From the “Navigator” Menu, select “Application,” click on the drop-down in “Actions” and select “Export Supplemental Diagnostics.” This report is very useful to the development team troubleshooting your issue.

PCMCS Image 11

When selecting this report, a new job will be launched that can take anywhere from a couple of minutes to 20+ minutes, depending on the size of your application and the amount of logging involved.

An archive of the diagnostics reports will be generated in the File Explorer within the Application menu. Some of the reports in this archive will repeat the reports mentioned in the previous step, but if you provide all this information simultaneously, the redundancy should not cause any issues. If you are not open to launching such a process in your environment during business hours, and yet you still want to submit the SR in a timely fashion, you can skip this step and provide this report only upon request from Oracle Support staff.

  8. Error description

If you can replicate the error, capture each step via screenshots and save them in a Word doc. The earlier the support staff understands what you are dealing with, the faster the entire troubleshooting process will be completed.

Refer to menu options precisely as they are named within PCMCS.

For example, to submit an SR or ER related to the Calculation Rules menu, refer to it as Calculation Rules – Rules Express Editing, as both names appear in the PCMCS menu.

PCMCS Image 12

  9. Establish the SR level appropriately.

There are 4 options to choose from, and you should choose based on urgency as well as level of importance.

PCMCS Image 13

Choose Severity 1 or 2 only when applicable. You may be inclined to select such severity options so that your issue is resolved quickly, but use your own criteria to distinguish between something that is really a show stopper and something that is not. Time is of the essence for both you and the Oracle Development team.

When choosing severity 1, you will open your calendar for potential phone calls that can occur at any time, regardless of your time zone.

  10. My request is really an ER, not an SR.

If your SR is an Enhancement Request, provide a lot of supporting detail in the “Business Justification” section. Not doing so will delay the Enhancement Request submission by up to 2 weeks. If further business justification is requested, respond promptly to make things move along and ensure that your request makes it to the next patch release sooner rather than later.

Once an Enhancement Request is recorded, your SR will be updated with the ER ID (which will differ from the SR ID originally assigned the moment you submitted the ticket).  The original SR will be closed, and you can open a new SR quoting the ER ID 48 hours after the moment your request was accepted. The Support staff will confirm whether the ER will make it in the next monthly patch release.

  11. Bedside manners for SR/ER submitters.

Try to reduce the number of communications within the SR. Taking the above steps will get you closer to achieving a near-perfect SR submission. Be mindful about how to communicate efficiently. The higher the amount of back-and-forth communication, the more difficult it will be for the development team to follow the conversation trail and ensure efficient troubleshooting.

Whether you are a service provider or a PCMCS administrator who inherited an application at the end of a project implementation, we all tap the same Oracle Support resources which are, as are most things, finite. The more efficient your SR/ER submission is, the faster these resources can provide a response with accurate and detailed troubleshooting steps. For any time-sensitive issues or further escalation, leverage your Oracle representative and your implementation partner. Their existing relationship with Oracle Product Management will help direct your query to the right resources and ensure your SR is not stuck because of a lack of clarity regarding which team should own it. This will ensure that your SR/ER is fast-tracked to the appropriate team and given the right level of attention. For any critical issues you encounter with PCMCS or other Cloud subscriptions where there is no solution in sight, reach out to Alithya at infosolutions@alithya.com so that our team can provide a fast and effective assessment.

Using Subscriptions with EDMCS

As an earlier blog mentioned, the 18.07 release of Enterprise Data Management Cloud Service (EDMCS) delivered one eagerly anticipated piece of functionality: Subscriptions! And do not fear – these subscriptions are useful and do not involve a 1-year subscription to the Fruit of the Month Club (not that there’s anything wrong with that).

This blog post dives deeper into this new functionality, describes how it works, and highlights some lessons learned from utilizing Subscriptions with a current project involving multiple EDMCS Custom applications supporting multiple Profitability and Cost Management Cloud Service (PCMCS) applications.

Why Are Subscriptions Important?

Subscriptions are a huge step towards true “mastering” of enterprise data assets within a single master data Cloud platform. With EDMCS, it is important to build deployment-specific applications configured to the dimensionality requirements of the target applications to most effectively use the packaged adapters, validations, and integration capabilities. But in many cases, you also need to share common hierarchies across applications and avoid duplicative (that’s my big word for today) maintenance. After all, why have a master data management tool if you still must perform maintenance in multiple places? That’s just silly.

The answer to this dilemma is Subscriptions. By implementing Subscriptions, requests submitted to a primary viewpoint will automatically generate parallel subscription requests to subscribing viewpoints to automatically synchronize your hierarchy changes across EDMCS applications.

Note

This comment is important: “automatically generate parallel subscription requests.” EDMCS will not update a target, or subscribing, viewpoint behind the scenes with no visibility or audit trail to what has occurred. A parallel Subscription request will be generated along with the Interactive request that will be visible in the Requests window, along with the full audit trail and details that you find in an Interactive request. Even better, the Subscription request will generate an email and attach a Request File of the changes.

Nerdy Details

Views and Viewpoints

The first thing to really think about is the View and Viewpoint design of your EDMCS applications. Subscriptions are defined at the Viewpoint level, so you need to identify the source and target viewpoint for your business situation. With my current project, I have multiple EDMCS applications supporting multiple PCMCS applications. While the dimensionality is similar across the applications, the hierarchies vary, especially with the alternate hierarchies. So, it has been important to isolate the “common” or shared structures that should be synchronized across applications into their own viewpoint so that a subscription mechanism can be created.

Node Type Converters

You will likely need to create a node type converter. If the source and target viewpoints do not share a common node type, you must create a node type converter for subscriptions to work. In my situation, I had already created node type converters since I wanted to compare common structures across EDMCS applications, so the foundation was there to readily implement subscriptions.

Permissions

To create a Subscription, the creator must have (at a minimum) View Browser permission to the source view, View Owner permission to the target view, and Data Manager permission to the target application.

The Subscription assignee (this is the user who will “submit” the subscription request) must have (at a minimum) View Browser permission to the source view and Data Manager permission to the target application.

Creating a Subscription

Once the foundation is in place in terms of viewpoints, node type converters, and permissions, the actual creation of a subscription is easy.

Inspect the target viewpoint (the viewpoint that is to receive the changes from a source viewpoint via subscription), navigate to the Subscriptions tab, and click Edit. From there you can select the source viewpoint, the request assignee, and enable Auto Submit if needed. Save the subscription and you are all set.

  • Currently, there is no capability to edit an existing subscription. You must delete and add a new subscription to effect a change.
  • Any validation errors for your subscription will appear on this dialog as well. These are documented nicely in the Oracle EDMCS administration guide.

Using Subscriptions with EDMCS Image 1

Auto-Submit and Email Notifications

Emails will be generated and sent to the Request Assignee, whether Auto-Submit is enabled or not. The email will include details such as the original request #, the subscription request #, and how many request items were processed or skipped.

Using Subscriptions with EDMCS Image 2

Note

  • Remember, the subscription request will have a Request File attached to it. View the request file attachment to see details on why specific request items were skipped.
  • The request file is not attached to the email itself, only to the request in EDMCS.

Lessons Learned

As I mentioned earlier, the foundation is important to making subscriptions work, and it all boils down to design and ensuring the building blocks of that foundation are in place:

Design, Design, Design!

  • The importance of dimension, view, and viewpoint design cannot be overstated. For each dimension, evaluate the primary and alternate hierarchy content and identify what will be shared across dimensions or applications and what will be unique to each dimension and application.
  • Based on that analysis, carefully design your viewpoints to enable subscriptions across EDMCS applications for hierarchies that truly need “mastering.”
  • As early as possible, identify the EDMCS user population along with permission levels for applications and views. This is important to identify the appropriate “Request Assignee” for your Subscriptions. I recommend creating a security matrix identifying each user and the permissions each will have.
  • Without a clear and well thought out design, you will find yourself constantly re-doing your views and viewpoints which, in turn, will cause constant rework of your subscriptions. The “measure twice, cut once” adage certainly applies here!
  • I am a big proponent of standard, consistent naming conventions to improve the usability and end user experience. The same holds true for Subscriptions. Consider using a standard naming convention for your viewpoints so it is clear which viewpoints have a subscription. It’s not obvious – unless you Inspect the viewpoint – that a subscription exists.
    • One approach I’ve been using is to name my source and target viewpoints identically with a special tag or symbol at the end of the target viewpoint name to indicate a subscription is present. I’m sure there are other and probably better ideas, but I find the visual cue to be helpful.
    • Perhaps in the future, Oracle will display subscription details when you hover over a viewpoint name (hint hint).

Node Type Converters

  • Ensure you have node type converters in place.
  • Make sure your node type converters are mapping all required properties.
    • I ran into an issue where updates to one property in my source viewpoint were not being applied to my target viewpoint via subscription requests, but all other property updates worked fine. The reason? I had recently modified my App Registration and added this property to a dimension’s node type. But my node type converter had already been created and wasn’t mapping or recognizing the new property. Once I updated my node type converter, the problem was solved.

Troubleshooting

  • The request files attached to subscription requests are a valuable troubleshooting tool. These Excel files include status codes and error messages that are extremely helpful for determining why a request was not auto-submitted.
  • Inspect the Subscriptions on your viewpoints. Any validation issues will be displayed and are easily addressed. Typical Subscription validation errors include:
    • The request assignee no longer has the correct permission levels
    • The viewpoint is no longer active
    • A node type converter is missing

Conclusion

I have been looking forward to the subscription functionality in EDMCS and am pleased with it so far. Subscriptions are easy to configure, can auto-submit if desired, and generate emails that notify the request assignee a request has occurred and prompt action if the request was not submitted or request items were skipped. EDMCS Subscriptions are a big step forward in enabling true mastering of your enterprise data management assets!

Retro Reboot #1: Set It & Forget It – Scheduling FDMEE Tasks

As with most nostalgic items, reboots are the next best thing. From video game consoles to television shows, they are all getting a modern facelift and a new prime-time seat on television. I have jumped on that bandwagon to revitalize a previous post authored by Tony Scalese: Set It & Forget It – Scheduling FDM Tasks.

As with most reboots, there must be flair and alluring content to capture old and new audiences. Since Oracle Financial Data Quality Management Enterprise Edition (FDMEE) has been in the Enterprise Performance Management (EPM) space for a while and has moved into the Cloud, this is a great time for its reboot!

Oh Great…A Reboot. Now What?

Scheduling tasks in FDMEE has never been easier, and Oracle provides several ways to do this for a variety of activities. Is there a report that you want to run and email every hour? Or how about a script that needs to run hourly? Or maybe batch automation every 15 minutes? No worries! FDMEE can handle all of that with out-of-the-box functionality.

Let us pause for a moment and determine what is needed to make this happen:

  1. Is there a business case and justification for what is about to be scheduled?
  2. Who benefits and how will they be notified of the results?
  3. Is there a defined frequency for which the activity must take place?

Getting Started

First, understand that scheduling in FDMEE is built directly into the Graphical User Interface (GUI) anywhere you see the “SCHEDULE” button. Unlike its FDM predecessor, which required an independent utility to be installed and configured, having scheduling available via the web removes that complexity.

A word of caution: while this screen allows items to be scheduled, there isn’t a screen that shows what has already been scheduled. For that, access to Oracle Data Integrator (ODI) is needed, but more on this later.

Initially, the screen shows the types of schedules that can be created and their relevant inputs.

Retro Reboot Screen Shot 1

Below is a reference guide to outline FDMEE’s scheduling capabilities.

Schedule Type | Inputs | Notes / Examples
Simple | TimeZone, Date, HH:MM:SS, AM/PM | Single run based on the specified inputs. Example: Run 08/02/2018 @ 11 AM.
Hourly | TimeZone, MM:SS | Repeatable run at the specified MM:SS mark. Example: Run every hour at the 22-minute mark.
Daily | TimeZone, HH:MM:SS, AM/PM | Every day at the specified time. Example: Run every day at 11 AM.
Weekly | TimeZone, Day of the Week, HH:MM:SS, AM/PM | Every specified day at the specified time. Example: Run every Monday through Friday at 11 AM.
Monthly (day of month) | TimeZone, Date, HH:MM:SS, AM/PM | Specified day of the month at the specified time. Example: Run on the 2nd day of every month at 11 AM.
Monthly (week day) | TimeZone, Iteration, Weekday, HH:MM:SS, AM/PM | Specified interval and weekday at the specified time. Example: Run every third Tuesday at 11 AM.

Why Does the Job Run Under My UserID?

That is because the system assigns the credentials of the user who created the schedule. What can go wrong with that, right?! Well, if that user no longer exists or the password changes, the existing jobs will no longer run.

The following considerations should be observed:

  1. Dedicate a service account for server/automation actions that is not tied to an individual employee.
  2. This account can be a “native” user; since the account is only used internally for EPM products, having a domain account is not needed.
  3. A non-expiring password is best.

It is Scheduled…Now What?

After the item is scheduled, what really happens? The action executes at the scheduled time!  Actions can easily be monitored via the FDMEE Process Details screen.  Now all the possibilities of scheduling the following can be explored:

  1. Data Load Rules
  2. Script Executions
  3. Batch Executions
  4. Report Executions

Also, as mentioned earlier, there is no way to see scheduled tasks inside of FDMEE. That information can be retrieved in a few ways; the easiest is to use ODI Studio.

ODI Studio provides details as seen in the screenshot below:

Retro Reboot Screen Shot 2

Any scheduled tasks will be listed under “All Schedules.” Simply double-click one to obtain details related to that task.

Retro Reboot Screen Shot 3

Another effective option is to write a custom report that displays the information in a user-friendly format. My previous blog post, Easy Value with FDMEE Reports, provides further details on FDMEE report options and their value.

Seriously … What Now?

By now, you may have noticed from the previous blog post Scheduling FDM Tasks – A Second Option by Tony Scalese that the upsShell process is quite handy. It allows other tools, such as a corporate scheduler, to control FDM jobs. Since most organizations now have a corporate scheduler, the equivalent FDMEE commands below are worth learning:

Command | Purpose
Executescript.bat / .sh | Executes an FDMEE custom script
Importmapping.bat / .sh | Imports maps from a text file
Loaddata.bat / .sh | Executes a Data Load Rule
Loadhrdata.bat / .sh | Executes an HR Data Load Rule
Loadmetadata.bat / .sh | Executes a Metadata Load Rule
Runbatch.bat / .sh | Executes a defined Batch
Runreport.bat / .sh | Executes a defined Report

*All files are stored under EPM_ORACLE_HOME\products\FinancialDataQuality\bin\

In the example below, the command, when launched, executes a Data Load Rule for Jan-2012 through Mar-2012:

Retro Reboot Screen Shot 4
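If a corporate scheduler drives these utilities, a thin script wrapper makes it easy to capture the return code for alerting. Below is a minimal sketch in Python under stated assumptions: the install path, service account, and Data Load Rule name are placeholders, and the loaddata argument list is illustrative only – the exact parameters and their order vary by FDMEE version, so verify them against the administration guide.

    import subprocess

    # Assumption: a Windows server with FDMEE installed at this EPM_ORACLE_HOME;
    # use the .sh equivalent on Linux
    LOADDATA = r"E:\Oracle\Middleware\EPMSystem11R1\products\FinancialDataQuality\bin\loaddata.bat"

    # Illustrative arguments only -- check your version's admin guide for the
    # exact parameter list and ordering before scheduling this
    args = [
        LOADDATA,
        "svc_fdmee",        # dedicated, non-expiring service account (see above)
        "-f:password.txt",  # password file instead of a clear-text password
        "DLR_ACTUALS",      # hypothetical Data Load Rule name
        "Y", "Y",           # import from source / export to target
        "Jan-2012",         # start period
        "Mar-2012",         # end period
    ]

    result = subprocess.run(args, capture_output=True, text=True)
    print(result.stdout)

    if result.returncode != 0:
        # A non-zero exit code lets the corporate scheduler alert on failure
        raise SystemExit(f"loaddata failed with return code {result.returncode}")

The same wrapper pattern applies to runbatch, runreport, and the other utilities; only the argument list changes.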

There still must be a better solution…right? Things to overcome:

  1. What happens if the scheduler is Windows-based and the server is Linux?
  2. How does a separate scheduling server communicate with EPM? Does it have to be installed on each EPM Server?
  3. How can we monitor and get details of a job once it is kicked off?

What Happens if You Don’t Want to Run the .BAT/.SH Files?

You’re in luck! With the introduction of new functionality to FDMEE, RESTful APIs are now available as well. With the RESTful APIs, not only can you execute a job, but you can also loop and monitor for the results. This enhances the previous .bat/.sh routines and provides a cleaner, more elegant solution.

Command | Purpose
Running Data Rules | Execute a Data Load Rule
Running Batch Rules | Execute a Batch Definition
Import Data Mapping | Import Maps
Export Data Mapping | Export Maps
Execute Reports | Execute a Report

*URL construct: https://<SERVICE_NAME>/aif/rest/V1

The example below simply queries for a process:

Retro Reboot Screen Shot 5
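To make the execute-then-monitor pattern concrete, here is a minimal sketch in Python, assuming basic authentication against the jobs resource of the REST API described above. The host, credentials, and rule name are placeholders, and the JSON field names (jobType, jobName, jobId, jobStatus, and so on) follow Oracle’s documented Data Management REST API but should be verified against the REST documentation for your release.

    import time

    import requests  # third-party HTTP library: pip install requests
    from requests.auth import HTTPBasicAuth

    BASE = "https://SERVICE_NAME/aif/rest/V1"      # URL construct from above
    AUTH = HTTPBasicAuth("svc_fdmee", "password")  # dedicated service account

    # Kick off a Data Load Rule; the rule name and periods are hypothetical
    payload = {
        "jobType": "DATARULE",
        "jobName": "DLR_ACTUALS",
        "startPeriod": "Jan-2012",
        "endPeriod": "Mar-2012",
        "importMode": "REPLACE",
        "exportMode": "STORE_DATA",
    }
    job = requests.post(f"{BASE}/jobs", json=payload, auth=AUTH).json()
    job_id = job["jobId"]

    # Loop and monitor: poll the job until it leaves the RUNNING state
    while True:
        status = requests.get(f"{BASE}/jobs/{job_id}", auth=AUTH).json()
        if status.get("jobStatus") != "RUNNING":
            break
        time.sleep(30)  # Process Details in the GUI shows the same progress

    print(f"Job {job_id} finished with status: {status.get('jobStatus')}")

Because the final status comes back to the caller, the scheduler can chain downstream steps or raise alerts without screen-scraping, which is exactly the gap the .bat/.sh files left open.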

The Future…

As Oracle moves forward to enhance the RESTful APIs, many doors continue to open for FDMEE and tool scheduling. At Edgewater Ranzal, we fully embrace the RESTful concept and evolve our solutions to utilize this functionality. The result is improved support and flexibility for FDMEE today and for Oracle Cloud products in the future.

Contact us at info@ranzal.com with questions about this product or its capabilities.

The Oracle Profitability and Cost Management Solution: An Introduction and Differentiators

What is Oracle Profitability and Cost Management?

Organizations with world-class finance operations can generally close in a minimal number of days (2-3 in an ideal organization) and run frequent, efficient budget and forecast cycles while also visiting different ‘what if’ scenario analyses along the way. These organizations often deliver in-depth profitability and cost management analysis reports at the fund, project, product, and/or customer level, completing the picture of an accurate close cycle.

Oracle offers packaged options in support of all these finance processes, but the focus of this post will be Profitability and Cost Management (PCM).

One of the most painful and time-consuming processes for any business entity is PCM analysis. The reasons why cost allocation processes are time-consuming are too many to count – from model complexity to data granularity, driver metric availability, rigidity of allocation rules, delays in implementing allocation changes, and almost impossible-to-justify results. Instead of focusing on the negative, let’s focus on what can be done to alleviate such pain and energize the cost accounting department by giving it access to meaningful, accurate data and empowering users with the flexibility to perform virtually unlimited “what if” analysis.

The PCM Journey

The initial Profitability and Cost Management product, like almost all Oracle EPM offerings, was released on-premise in July 2008 and is known as Oracle Hyperion Profitability and Cost Management (HPCM). Ten years later, HPCM continues to deliver an easier way to design, maintain, and enhance allocation processes with little to no IT involvement, as it has since it was initially launched, but now with a greater focus on flexibility and transparency. The intent for HPCM was to be a user-driven application in which finance teams would be involved from the definition of the methodology all the way to the steps needed to execute day-to-day processing. Any cost or revenue allocation methodology is supported in HPCM, while graphical traceability and allocation balancing reports support any query from top-level analysis all the way down to the most granular detail available in the application.

There are three HPCM modules available on-premise today. Each was designed and developed for a different type of allocation methodology or complexity need:

  1. Simple allocations – Detailed Profitability (a.k.a. single-step allocations. Example: from Accounts and Departments, allocate data to the same Accounts, new target Departments, and granular Products/SKUs based on driver metric data. This module allows for a very high degree of granularity, with dimensions exceeding 100k members, but it does not cater to complex driver calculations or to allocations requiring more than one stage).
  2. Average- to high-complexity allocations – Standard Profitability (a.k.a. multi-step allocations of up to 9 iterations/stages, allowing for reciprocal allocations. Example: allocations from Accounts and Departments to Channels, Funds, and other Departments; results from previous steps are then redistributed onto Products, Customers, etc. Driver metric complexity is achievable with this module, and custom-generated drivers are available as well, but there are limitations regarding driver data granularity, granularity of allocated data, and overall hierarchy sizing).
  3. High-complexity allocations – Management Ledger (unlimited number of steps, high number of complex drivers, custom driver calculations, custom allocations, more granularity, and increased flexibility in defining and expanding allocation methodology). This is the last module added to the HPCM family and the only one available as a SaaS Cloud offering.

The Cloud is Your Oyster

In 2016, Oracle introduced the Cloud version of HPCM: Profitability and Cost Management Cloud Service (PCMCS). PCMCS is a Software as a Service (SaaS) offering and, as with many of Oracle’s Cloud products, it includes key improvements that are not available in the on-premise version, with enhancements arriving at a much faster pace.

There is currently no indication that the other two HPCM modules – Detailed and Standard Profitability – will make their way to the Cloud, since the increased allocation complexity and hierarchy sizing supported by the Management Ledger module cater to most, if not all, potential requirements.

The Management Ledger module included with the PCMCS SaaS subscription has a core strength in its ease of use and flexibility to change, enabling finance users to define and update allocation rules and methodologies via a point-and-click interface. While it is advisable to perform the initial setup with support from an experienced service provider, the maintenance and expansion of PCMCS (Management Ledger) models can, in most cases, be achieved by leveraging solely functional resources. “What-if” scenario creation and analysis has never been easier. Not only can users copy data and allocation methodologies between scenarios, but they can also update data sets and allocation steps independently from a standard scenario, generating as many simulation models as they need and gaining increased insight for decision making.

Standard Profitability models perform allocations in Block Storage Option (BSO) databases. While BSO applications are great for complex calculations and reciprocal allocation methodologies, they are limited in terms of structure and hierarchy sizing. This hierarchy restriction is not as pressing in Aggregate Storage Option (ASO) applications, which is the technology used by Management Ledger. The design considerations for a Standard Profitability model are also significantly more rigid compared with the Management Ledger module, which has no limitations regarding allocation stages, allocation sequencing, or a maximum number of dimensions per allocation step.

Detailed Profitability models heavily leverage a database repository, while any connected Essbase applications are used solely for reporting purposes. Initial setup and future changes, beyond simply adding new hierarchy members, will require specialized database management skills, and the usage of a single-step allocation model is not as pervasive. Complex allocation methodologies may require using Detailed Profitability models in conjunction with Management Ledger, but these situations are the exception rather than the rule.

Why Should You Choose Oracle Profitability and Cost Management?

One of the key strengths of HPCM, available since it was first released and now included in PCMCS, is transparency – the ability to identify and explain any value resulting from the allocation process with minimal effort. Each allocation rule or allocation step is uniquely identified, enabling users to easily navigate via the embedded, out-of-the-box balancing report to the desired member intersection, opened through a point-and-click action in Excel (using Smart View) for further analysis and investigation. The out-of-the-box program documentation reports identify the setup of each rule and can be leveraged for quick searches by account, department, segment code, or any other dimension available in the application. The execution statistics reports delivered as part of the PCMCS offering enable users to quickly understand which allocation process is taking longer than expected and identify opportunities for overall process improvement, or simply to monitor performance over time. These two out-of-the-box reports – execution statistics and program documentation – are the most heavily used reports during application development, troubleshooting, and particularly when new methodologies are developed. Users can quickly search through these documents, leverage them to keep track of methodology changes, and use them as documentation for training new team members.

Performing mass updates to existing allocation rules has never been faster. PCMCS contains a menu that allows end users to find and replace specific member name references in their allocations for each individual data slice, allocation step, or an entire scenario. A quick turnaround of such maintenance tasks results in an increased number of iterations through different data sets, giving the cost accounting team more time to perform in-depth analysis rather than waiting for system updates.

PCMCS-embedded analytics and dashboarding functionality is also a significant differentiator, enabling end users to create and share dashboards with the rest of the application’s users through the common web interface and without the need for IT support. Reports created in PCMCS are available immediately, without time-consuming initial setup or migrations between environments followed by further security setup tasks.

A comparison of On-Prem vs Cloud will be available in a future post, so please subscribe below to receive notifications for PCMCS-related blog updates.

Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest

In the previous post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, I showed you how to rebuild ARCS down to the Profile Segments to speed things up. This time we’re slowing everything down…

So grab a glass of wine and throw on your Marvin Gaye vinyl because we’re getting it on with automation. Oh yeahhh…

The sexiest topic of account reconciliations (didn’t think you’d ever see that sentence, did ya?) consistently revolves around automation. Yes, ARCS provides a central repository. Yes, ARCS is auditable. YES, ARCS shows a traceable workflow throughout the reconciliation cycle. All of these features are highly useful and absolutely a prerequisite for an enterprise-worthy solution, but if you want to really grab people’s attention in a design session, start talking about the things they won’t have to do. ARCS provides both out-of-the-box functionality and customizable tools that help preparers focus on high-importance reconciliations rather than spending time on low value-add or monotonous items.

Automation occurs in two areas: outside of ARCS (e.g. data feeds) and within ARCS (e.g. auto reconciliations and rules). Setting up the former enhances the latter. Either Cloud Data Management (CDM) or Financial Data Quality Management Enterprise Edition (FDMEE) can be used to load data to ARCS, albeit in different manners, but how this is accomplished is beyond the scope of this post. This data can be sourced from a variety of general ledgers and subledgers/subsystems including Financials Cloud, E-Business Suite (EBS), PeopleSoft, JD Edwards, and even *gasp* Excel (…if we have to…). By automating these data feeds directly from the source, management can be confident in the validity of the data (e.g. accuracy, no manual intervention or “massaging,” live, etc.) and, with scheduling, administrators have one less task to worry about. The latest application data is up-to-date by the time the office doors open. Additionally, data refreshes can occur multiple times throughout the reconciliation cycle without concern for loss of work. ARCS will only update reconciliations with differences from the last data load and will change the workflow status if data has been modified and needs to be looked at again.

Within ARCS, the “bread and butter” for gaining efficiencies in the reconciliation cycle is utilizing the out-of-the-box auto reconciliation method property on Profiles. This sets the conditions under which a reconciliation will automatically change its workflow status to “closed,” allowing preparers to focus on the remaining “open” reconciliations that require attention. Which conditions are available for selection depends on the Format type. Furthermore, this field can easily be updated after the fact: using the Actions pane, the property can be updated across a mass of Profiles based on custom filtering.

Automation in ARCS 1

[Screenshot 10a: The “Set Attribute…” functionality from the Actions pane is a powerful tool that can be used to make mass updates from the user interface.]


Automation in ARCS 2

[Screenshot 10b: In this example, the “Set Attribute…” functionality can be used to make updates to the Auto Reconciliation Method property for all Profiles, selected Profiles, or Profiles that fit customized criteria.]

The “Set Attribute” functionality is a powerful tool for making changes across multiple Profiles within the ARCS user interface. In many instances, this is a preferable alternative to extracting the Profiles to a text file to modify offline. Screenshots 10a – 10b show how it can be used to update the Auto Reconciliation Method attribute specifically, but a plethora of other attributes can be updated in this manner.

The last puzzle piece in the trinity of automation is customized rules. Similar to custom attributes, rules can be added in a variety of places within your reconciliations to further enhance and streamline the process for both end users and application administrators. Attributes, formats, profiles, and even specific transaction types (e.g. on Subsystem Adjustments, but not on Source System Adjustments) can contain separate sets of rules.

Automation in ARCS 3

[Screenshot 11a: Rules can be added at a Format level.]


Automation in ARCS 4

[Screenshot 11b: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a Format level.]


Automation in ARCS 5

[Screenshot 11c: Rules can be added at a specific transaction type. In this screenshot, any rules created here would only affect Subsystem Adjustments and would not affect System Adjustments.]


Automation in ARCS 6

[Screenshot 11d: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a specific transaction type level.]

Thus, rules can be used for anything from sweeping, application-wide changes down to transaction-by-transaction differences, as seen in Screenshots 11a – 11d. If ARCS is a suit, then rules are custom tailoring; they are made to fit your company’s specific needs.

The most common rule I see relates to Auto-Submission (as opposed to Auto Reconciliation). The out-of-the-box auto reconciliation methods previously discussed are set on Profiles and can be used to “close” a reconciliation for the period if the criteria are met. However, sometimes a reconciliation still needs reviewing, such as when it is considered higher risk or only during certain periods in the fiscal year. Customized rules can dynamically determine which reconciliations can skip the preparer and be assigned directly to the reviewer, and which are clear to be automatically “closed” for the month (e.g. without approval by a preparer or reviewer). Tailoring rules in this manner still helps preparers reduce their workload while giving management the confidence that higher-priority reconciliations are being reviewed – the best of both worlds!

No Mistakes with Modularity from “Day 1” to “Day 100”

So, there you have it: the four main manifestations of ARCS’ modularity. While nothing will replace proper planning, ARCS does not permanently punish any application decisions you (or your partner) have made in the past. The tool is able to grow with your company and accommodate your needs as they arise. There’s no reason to pick “today” or “tomorrow” – have them both.

Am I right? Am I off my rocker? You tell me! Answer in the comments below: has ARCS (or ARM! We haven’t forgotten you…) been able to accommodate the changes that come with your company’s growth?

If you like what you’ve read, please consider sharing this article through social media. And let me know in the comments what topic(s) you would like to see covered in future posts.

*Screenshots taken from the patch 1806 release.