OPA! The Future of Cloud Integration – Important Updates Are Coming

Much to the chagrin of Product Management, I often abbreviate Cloud Data Management to CDM.  Why do they not like that I do this?  Well, there is a master data management tool for Customer data that, as you can guess, uses the same acronym.  While I understand the potential confusion, since I’m telling you up front, there should be no confusion when I use CDM throughout this post.

I recently had the opportunity to meet with Oracle Product Management and Development for FDMEE/CDM to get a preview of what’s coming to the product and offer feedback for additional functionality that would benefit the user community.  We generally get together about once a year; however, it’s been a bit longer than that since our last meeting, so I was excited to hear what interesting things Oracle’s been working on and what we may see in the product in the future.

Now any good Oracle roadmap update would not be complete without a safe harbor reminder.  What you read here is based on functionality that does not yet exist.  The planned features described may or may not ever be available in the application – at the sole discretion of Oracle. No buying decisions should be made based on the information contained in this post.

Ok, now that we have that out of the way, let’s get into the fun stuff.  There are a number of enhancements coming and planned, but today I am going to focus on two significant ones:  performance and ground to cloud integration.

Performance Enhancements

We’re all friends here, so we can be honest with each other.  CDM (and FDMEE) isn’t an ETL tool in the truest sense of the term. It is not designed to handle the massive data volumes that more traditional ETL tools can and do.  You might think to yourself, “Thanks for the info there, Tony, but we all know that,” and you wouldn’t be wrong, but I like to set the stage a bit.

If you know the history of FDMEE, you know that it was originally designed to integrate with Hyperion Enterprise and then HFM.  Essbase and Planning became targets later.  Integrating G/L data is far different from integrating the more operational data that is often needed by targets like EPBCS and PCMCS.  While CDM (and FDMEE) can technically handle the volume of this more granular data, the performance of those integrations is sometimes less than optimal.  This dynamic has plagued users of CDM for years.  It has only been exacerbated when integrations are built without a deep understanding of how to tune CDM (and FDMEE) processes to achieve the highest level of performance within the constructs of the application. As CDM has grown in popularity (owing to the growth of Oracle EPM Cloud), the problem of performance has become more visible.

To address performance concerns, Oracle is planning to support 3 workflow methods:

  • Full – No change from legacy process
  • Full, No Archive – Same workflow as today, but data is deleted from the data table (tDataSeg) after a successful export.  This means the data table will contain fewer rows, which should allow new rows to be added faster (inserts during the workflow process).  The downside of this method is that drill through is not available.
  • Simple – Same workflow as today, but data is never moved from the staging/processing table (tDataSeg_T) to the data storage table (tDataSeg).  This is the most expensive (in terms of time) action in the workflow process, so eliminating it will certainly improve performance. The downside is that data can never be viewed in the Workbench and drill through is not available.

Oracle has begun testing and has seen performance improvements in the range of 50% on data sets as large as 2 million rows.  Achieving that metric required using the full complement of the new Data Integrations features (i.e., Expressions). That said, this opens up a world of possibility for how CDM can potentially be used.

If you have integrations that are currently less than optimal in terms of performance, continue monitoring for this enhancement.  If you need assistance, feel free to reach out to us to connect with our team of data integration experts.

On-Premise Agent

Ground to cloud integration is one of the most important capabilities to consider when implementing Oracle EPM Cloud.  As the Oracle EPM Cloud has evolved, so too has the complexity of the solutions deployed within it, which has steadily increased the complexity of the integrations needed to support them.  While integration with on-premises systems has always been supported through EPM Automate, that approach requires a flat file to be generated by the system from which data will be sourced. The file is then loaded to the cloud and processed by CDM.  This is very much a push approach to data integration.
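To make that push pattern concrete, here is a minimal sketch of the kind of script that typically drives it, written in Python around EPM Automate. The commands used (login, uploadfile, rundatarule, logout) are standard EPM Automate commands, but the service URL, credential file, inbox folder, data load rule name, and periods shown are placeholder assumptions for illustration only.

# Minimal sketch of a file-based "push" to CDM via EPM Automate.
# Assumes epmautomate is on the PATH and the source system has already
# produced gl_extract.csv; the URL, credentials, folder, rule name, and
# periods below are illustrative placeholders, not values from this post.
import subprocess

def epm(*args):
    # Run one EPM Automate command and stop if it fails
    cmd = ["epmautomate"] + list(args)
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

epm("login", "svc_integration", "password.epw", "https://example-pbcs.oraclecloud.com")
epm("uploadfile", "gl_extract.csv", "inbox")              # push the flat file to the cloud
epm("rundatarule", "GL_ACTUALS", "Jan-19", "Jan-19",      # have CDM import and export it
    "REPLACE", "STORE_DATA", "inbox/gl_extract.csv")
epm("logout")

In short, the on-premises side owns both the extract and the timing; the cloud simply receives what it is given, which is exactly the limitation the on-premises agent is designed to address.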

The ability of the cloud to pull data from on-premises systems simply did not exist. For integrations with this requirement, FDMEE (or some other application) was needed. Well as the old saying goes, the only thing constant is change.

Opa! is a common Greek exclamation frequently used during celebrations.  Well, it’s time to celebrate because Oracle will soon (CY19) be introducing an on-premises agent (OPA) for CDM!

This agent will allow a workflow to be initiated from CDM, communicate back to the on-premises systems, initialize an extract, and then upload that extract to the cloud, where it will be natively imported by CDM.  This approach is similar to how the FDMEE SAP adaptor currently works.  From an end user’s perspective, you click Import on the Data Load Workbench and, after some time, data appears in the application. What’s happening in the background is that the adaptor is initializing an extract from SAP and writing the results to a flat file which is then imported by the application. OPA will function in an almost identical way.

OPA is a lightweight Java utility that is installed on local systems and requires no additional software other than Java. It will support both Microsoft Windows and Linux operating systems. Like all Oracle on-premises utilities (e.g., EPM Automate), password encryption will be supported. The only ports that need to be opened are 80 (HTTP) or 443 (HTTPS).  If a customer wants to run the agent on a different port without opening that port on the enterprise firewall, an externally facing web server can redirect requests from 80 or 443 to the internal port on which the agent listens.  If the customer runs the agent on port 80 or 443 and that port is already open, then no firewall action is required.

The on-premises agent will have native support for Oracle EBS and PeopleSoft GL – meaning the queries are prebuilt by Oracle.  Additionally, OPA will support connecting to on-premises relational data sources.  Currently, Oracle, SQL Server, and MySQL drivers are bundled natively, but additional drivers can be deployed as needed, meaning systems such as Teradata can also be leveraged as data sources.

OPA will also provide the ability to execute scripts (currently planned for Java, with discussions for Groovy and Jython in flight) before and after the on-premises extract process.  This is similar to how the BefImport and AftImport event scripts are currently used in FDMEE.  This will allow the agent to perform pre- and post-processing, such as running a stored procedure to populate a data view from which CDM will source data.
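Oracle has not published the agent’s scripting interface yet, so purely as a hypothetical sketch, a Jython-style pre-extract script that refreshes a staging view by calling a stored procedure might look something like the following. The entry point, JDBC URL, credentials, procedure name, and period value are all assumptions for illustration.

# Hypothetical OPA pre-extract script (Jython) -- the actual hooks and
# parameters are not public yet, so everything here is illustrative.
from java.sql import DriverManager

def bef_extract(period_name):
    # Placeholder connection details for an on-premises Oracle source
    conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@onprem-db:1521/ORCLPDB", "integration_user", "password")
    try:
        # Refresh the data view that CDM will subsequently source from
        stmt = conn.prepareCall("{call refresh_cdm_source_view(?)}")
        stmt.setString(1, period_name)
        stmt.execute()
        conn.commit()
    finally:
        conn.close()

bef_extract("Jan-2019")

An equivalent post-extract script could archive the extract or write row counts back to a control table – exactly the kind of pre- and post-processing described above.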

The pre and post events of OPA really open up a world of opportunity and lay the foundation for CDM to support scripting.  How, you might ask?  In v1.0, OPA is intended to provide a mechanism to load on-premises data to the cloud.  But in theory, CDM could make a call to OPA at the normal workflow events (of FDMEE) and, instead of waiting for a data file, simply wait for an on-premises script to return an execution code.  This construct would eliminate the security concerns that prevented scripting from being deployed in CDM because the scripts would execute locally instead of in the cloud.

The OPA framework is really a game changer and will greatly enhance the capability of CDM to provide Oracle EPM Cloud customers a true “all cloud” deployment.  I am thrilled and can’t wait to get my hands on OPA for beta testing.  I’ll share my updates once I get through testing over the next couple of months.  I’ll also be updating the white paper I authored back in December of 2017 once OPA is released to the general public.  Stay tuned folks and feel free to let out a little exclamation about these exciting coming enhancements…OPA!

EDMCS and Data Governance – Part 1

Ahh… February. An interesting month with a variety of happenings. From the significant – Black History Month and President’s Day, to the exciting – the Super Bowl…well, sometimes. From the romantic – Valentine’s Day, to the silly – that tenacious groundhog trying to find his shadow…AGAIN. Not to mention that Spring is just around the corner and brings us the glorious event known as “March Madness!”

Why am I babbling about February? <segue> Because it is also the month that introduced Data Governance and Collaborative Workflows with the release of Enterprise Data Management Cloud Service (EDMCS) v19.02. <segue>

As we continue this journey to Enterprise Performance Management (EPM) Cloud, the addition of Data Governance to EDMCS is a major step forward, especially for those of us who have worked with the classic on-premise solutions (Data Relationship Management (DRM) and Data Relationship Governance (DRG)) and who have been awaiting a similar offering in EDMCS to support our Cloud clients. From what I’ve seen so far, a major gap between DRM/DRG and EDMCS has been addressed with this release.

In this blog series, I’d like to further explore Data Governance in EDMCS. At a high level, this is how I see this series unfolding:

  • Part 1 will provide the foundation, background, and basic concepts for EDMCS and Data Governance
  • Part 2 will get more into the “techy” stuff and dive deeper into Approval Policies and Security
  • Part 3 will provide a recap and closing thoughts/lessons learned

So, with that said, onto Part 1…

Prerequisites

Before diving head first into configuring Data Governance and collaborative workflows in EDMCS, there are a few things to consider.

  • Don’t forget people and process. I’m a big believer that people and process are just as important as (and usually much more important than) the tool. Please refer to this blog post for a quick read on this: The Data Governance Triple Crown.

I believe the same tenets apply to EDMCS and that it’s important to start thinking about a formal data governance program that includes a charter, executive sponsorship, roles & responsibilities, metrics, and much more. Data Governance can be a challenging cultural shift for many organizations which requires strong change management to handle the inevitable resistance. This is where a formal data governance framework can help.

  • Establish the foundation. As with building a house, it’s important to lay a solid foundation before you install the wiring and plumbing. Build your EDMCS application(s) and dimensions, and populate your primary and alternate hierarchies first. Get the client comfortable with the tool and the content. Then you can start to layer in the workflows.
  • Start to identify the “who” (e.g., the people involved and the roles they will play): Who will be submitting requests? Who will be approving? Who will do both?
  • Start to think about the “what.” What applications/dimensions/hierarchies will be governed? What are the use cases and typical scenarios that require data governance? Start to collaboratively mock up and storyboard some typical workflows with the client to visualize how the workflows will function. And don’t try to build a workflow for every possible scenario. Start with the big hitters and low hanging fruit first. You can always add more workflows later.

What’s Included in EDMCS Workflows?

Are you wondering what EDMCS includes as far as data governance functionality? In summary, EDMCS supports:

  • Two types of roles – submitters and approvers
  • Separation of duties – workflows can be configured to prevent submitters from approving their own requests
  • The “four eyes” principle: EDMCS data governance adheres to the principle that requests must be approved by at least two people
  • Default application views and maintenance views: workflows can work with both types of views
  • Subscriptions: workflows can be triggered by Subscription requests
  • Email-based notifications
  • Serial and Parallel approvals:
    • Serial approval means a sequential order of approvals is required. For example, Approver #2 can’t approve until Approver #1 approves, Approver #3 can’t approve until Approver #2 approves, and so on.
    • Parallel approval means the approvals can occur in any order and at the same time.
    • With either method, all approvals must occur before the request is committed.
  • Configuration of Reminder and Escalation intervals
  • Multiple Workflow Stages:
    • Submit – initiate the request and add/edit/delete line items in the request. Note that with the 19.02 release, you can also attach documents and insert comments at the line item level. These enhancements are helpful to attach policies, supporting details, and other documentation related to the workflow request.
    • Approve – similar to DRG, an approver can approve, push back, or reject a request. Pushing back will send the request back to the submitter for additional changes. Rejecting will close the request and end the workflow.
    • Commit (implied) – once the request is fully approved, it is committed, hierarchies are updated, and the request history can be viewed like any other request.
  • Approval Policies – this is really the brains of how workflows are configured in EDMCS, and the next blog post covers this in greater detail. But here is a screenshot of the Approval Policy screen showing the available options:

Kevin Black - EDMCS and Data Governance - Part 1 - 3-8-19 Image 1

Conclusion

I hope you found this blog post helpful as an introduction to EDMCS and data governance, and that you will keep reading as the rest of the series is posted. Please contact me with any questions and comments!

And don’t forget to follow me on Twitter (@kblackEPM) and check out/subscribe to my blog (along with the blogs authored by my very talented colleagues at Alithya).

Read the next post in this EDMCS blog series:  EDMCS and Data Governance – Part 2

https://ranzal.blog/author/kblackranzal/

https://ranzal.blog/

Interested in better understanding EDMCS, the RESTful API, and Cloud Data Management? Be sure to check these excellent blog posts by Tony Scalese, aka FDM Guru: https://ranzal.blog/author/ascalese/

Looking for an outstanding resource for all things master data-related and more? Look no further!  https://datarestless.com/

Oracle Announces Removal of Support for Transport Layer Security Protocol 1.0 and 1.1; How Does that Affect Me?

Oracle has announced that as of May 3, 2019, the use of Transport Layer Security Protocols 1.0 and 1.1 will no longer be supported.  Communications to Cloud products will only be supported with TLS1.2.

The announcement was made in the following February What’s New communications from Oracle:

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 1

The WHAT has come; now WHO is affected?

There are many ways to connect to the Cloud, so let’s break down the more popular ways of connecting and the common technology they all use for their connections, HTTPS:

  1. EPMAutomate
  2. Web Browsers
  3. cURL / PowerShell
  4. Financial Data Quality Management, Enterprise Edition (FDMEE)

EPM Automate is pretty much a done deal.  If issues or fixes are needed, Oracle will release an update to go along with the Cloud deployment.  Keep an eye on the What’s New pages as well as the notification displayed when running EPM Automate itself.

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 2

Recent versions of both Internet Explorer and Firefox support TLS1.2 out-of-the-box.  It might not be enabled based on IT policies, but the functionality is present and easy to check.

Internet Explorer > Tools > Internet Options > Advanced
Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 3

Firefox > about:config > security.tls.version
Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 4a

In that Firefox setting, a value of 1 corresponds to TLS 1.0, 2 to TLS 1.1, 3 to TLS 1.2, and 4 to TLS 1.3.

Now, if you have ventured out into custom scripting (EPM Automate doesn’t count in this situation) and have fully embraced REST, then cURL and PowerShell might need some tweaks as well.  This is the real reason Oracle has started to outline and share this information with the end-user community.

As a result, these solutions will need to be updated and retested.  For this purpose, Oracle has stated that you can request early access, via Oracle Support, to a TLS1.2-only POD for testing.  I highly recommend this, as it has provided some great insight for Alithya.  We were also able to pass along our findings to Oracle early to help streamline the patching process of FDMEE; more on this later.

cURL scripts will need to be updated to include the “--tlsv1.2” option when invoked.

For PowerShell, you will need to add the following line in your scripts:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

The thing that really got me excited: FDMEE!

The last topic that Oracle mentions applies if you use FDMEE on-premises.  If you are like me, an FDMEE fanatic, then you’ll know that this caused all the triggers in my brain to start firing.  Everything I do in FDMEE will need to be tested to make sure it complies and works.  The things I use in my daily activities are:

  1. JSON-based RESTful API calls in Jython scripts (a quick way to check which TLS version an endpoint negotiates is sketched just after this list)
  2. Target Application Registrations to Cloud Applications
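Before testing either of those, it can be helpful to confirm which protocol version an endpoint will actually negotiate. A minimal check in modern Python 3 (3.7 or later, not the Jython bundled with FDMEE) might look like the following; the host name is a placeholder for your own pod.

# Quick check of the TLS version negotiated with a cloud endpoint.
# The host below is a placeholder -- substitute your own pod's host name.
import socket
import ssl

host = "example-test.epm.oraclecloud.com"

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older than TLS 1.2

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())   # e.g. "TLSv1.2"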

I quickly shot off an Oracle Support ticket to get myself a TLS1.2 POD.  Oracle responded in a relatively short time and stated that my POD was ready and that I had it for roughly two weeks of testing.  Without any changes to my virtual lab, I attempted to connect to see what would happen.  Sure enough, I received an error:

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 5

I also imported an LCM of a previous Cloud application to get around this error and to see what a set of custom Jython scripts with JSON/RESTful API calls would produce; I received similar errors:

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 6

…as well as the out-of-the-box Refresh Metadata & Refresh Members options:

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 7

I confirmed with my colleagues in Development that this is the expected result when TLS is not at the right level and the appropriate patches have not been applied and configured.  Knowing this, I also tested with the browser option disabled and received the same result.  Now that I knew I had a good starting point, I was off to the races to figure out how to continue.

Unfortunately, the links that Oracle provided in the What’s New announcement appear to be broken and not public.  As a result, I had to create an SR to gain access to the information.  After I received them and did some light reading, I was able to formulate a patch strategy, apply the necessary patches, apply the registry updates, and test again.

This time I was able to run successful tests of both FDMEE scripts and Oracle adaptor connections to the Cloud.

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 8

Great… Now what do we do?

Patching the environments was not always an easy task.  It took quite a bit of time to complete because multiple products needed updates.  Most of the components that needed updating weren’t the standard EPM products (HFM, Planning, etc.):  WebLogic, JRockit, the JDK, OHS, and so on all needed to be updated, and because these are the building blocks on which the EPM suite runs, they introduced update dependencies into the EPM products we used.

Oracle has stated that this goes into effect on May 3rd, which is right around the corner.  Alithya, an Oracle Platinum Partner, is here to help you assess your current EPM installation and build that patch plan.

Even if you don’t use the Cloud today but are thinking about moving to the Cloud at some point, it is important to make sure your environment is ready and that you have the necessary support.

For more information, contact us at infosolutions@alithya.com.

Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Analysis Views and Scatter Analysis

PCMCS Out-of-the-Box (OOTB) Features:  2. Intelligence and Dashboarding – Analysis Views and Scatter Analysis

Imagine two teams of consultants with similar experience and prestige, both guaranteeing that they can deliver an application implementation of the highest quality: one at a higher cost but in a shorter timeframe, the other at a lower cost but over a longer timeframe.  All other considerations being equal, should I save money, or should I save time?

A few days ago, I released my first blog post on PCMCS, covering Rule Balancing reports usage and customization. This post builds on that first post to cover intelligence capabilities, some of which are only available in the Cloud version of the PCM software.

There are 6 menu options when accessing the Intelligence menu within PCMCS.

  1.  Analysis Views
  2.  Scatter Analysis
  3.  Profit Curves
  4.  Traceability
  5.  Queries
  6.  Key Performance Indicators

This post covers the first two menu options to explain how to set up Analysis Views and how to use Scatter Analysis.

Analysis Views

Analysis Views are the first set of reports available to end users within the PCMCS user interface.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 7

These views represent a way to predefine and save intersections of members for future review.  The selections within Analysis Views are open to all dimensions within the PCMCS application at various levels within the hierarchies. This is the first step you need to take towards building or defining a dashboard for your PCMCS application.

If you cannot create or edit an analysis view, then you need to reach out to your PCMCS administrator in order to review and adjust your security settings.

The example Analysis Views for this post are based on the “Demo Bikes” application (BksML30) that can be deployed with a few clicks in your PCMCS instance.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 8

A data slice is a combination of rows and columns along with the page selection, which, in this case, is the Period dimension.

Any dimension that is not specified in any of the 3 areas (row, column, page) will be read at top level and will be displayed in the settings menu.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 9

The Add Filter section allows you to filter the columns based on specific numerical values. In this case, the columns are represented by the Product dimension selections.

To create an analysis view, click the plus (+) sign on the main menu. The three tabs displayed allow you to define a name and description as well as the setup for row and column dimensions. You cannot select more than one dimension for either rows or columns.

Within the Row dimension selection, you can leverage different formulas applicable to the hierarchies within PCM such as Children of member, Member and children, Level 0 descendants, etc.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 10

Columns do not have options for member formulas beyond the usage of User preferences.

The row dimension will allow you to display further information such as generation or level details. For example, for the Product dimension, we can display the generation 3 and 4 information alongside the level 0 members, allowing us to expand our analysis to different product categories, or types.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 11

Selecting new members within an Analysis View will not impact the original data definition. If you choose to display data for any month other than the one that was set up and saved in the Analysis View, you can do so because the Page parameter is open to end user modification. If, however, you want to update and store a selection change within the Analysis View, you must perform that update via the Edit menu instead of simply selecting a new parameter on the screen in view mode.

You may need to utilize the concept of period ranges when using Analysis Views in order to dynamically reference specific members of your Period dimension.

Defining a current period for the application is mandatory in order to be able to create formulas dependent on time. This action is available via the Application menu by selecting the Edit application option and navigating to the tab called Dimension settings. Here is where you can define the current Period and the Current Year for your PCMCS application.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 13

These settings will be applied when using the “Single…” or “Current” selection options within Analysis Views. Single (-1) Level 0 selection represents, in this case, the month of May, since the current Period selection for the PCMCS application is June. The Single (-1) Level 1 selections return Q1, since June is in Q2.

Scatter Analysis

Scatter Analysis graphs will compare one member’s values against another member’s values. The two members selected must be within the same dimension. Your PCMCS Demo application may not have any sample Scatter Analysis graphs. However, you can create one by leveraging the Analysis Views at your disposal.

You can launch Analysis Views from within Scatter graphs.

Note that saved Scatter Analysis cannot be reused or referenced in dashboards. You should use this section to create graphs for ad-hoc use outside of the dashboarding capability.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 14

If you need to include a Scatter Analysis within your dashboards, there is a corresponding item in the list of available components when creating a dashboard.

You can select an existing Analysis view, but you must reselect your X-axis and Y-axis dimension references.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 15

Conclusion:  PCMCS Intelligence – Analysis Views and Scatter Analysis

While there are many alternative reporting solutions to use in conjunction with PCMCS applications, assuming that both time and money are of the essence in any project implementation, it is safe to conclude that using the PCMCS OOTB reporting features is both cost effective and efficient. The Intelligence screens shared in this post are included in the PCMCS subscription cost, and any end user of a PCMCS application with the right level of access can take charge and build the desired reports, saving them in a location accessible to their peers, while spending no time on iterations of reporting requirements and data validations.

The PCMCS OOTB reporting features support not only troubleshooting, but also detailed analysis and reporting within one screen.  Such capabilities should not be ignored as they will surely add meaningful insight into finance teams’ day-to-day use of PCMCS.

If you need advice and guidance on how to leverage the PCMCS reporting capabilities for existing or future applications, reach out to our team of PCMCS experts at infosolutions@alithya.com.

The remaining intelligence menus will be covered in subsequent posts over the next few weeks. If you are interested in receiving notifications of such posts, subscribe to notifications.

Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Rule Balancing Reports

PCMCS Out-of-the-Box (OOTB) Features:  1. Rule Balancing Reports

The other day, I was thinking about the times I used to study Finance, and specifically about a course regarding Interest and how it represents the value of Time. What is the cost, or value, of one’s time? – is it high, resulting in a higher interest rate per period, or is it low, resulting in a low interest rate per period? How much time am I willing to spend working in order to get that new car? How much time do I have before that competitor will outrun me and snatch that market share from me?

This was how I started thinking about various out-of-the-box features (OOTB). Such features are often key in deciding whether to acquire a software/service/product because the one resource that we constantly complain about not having enough of is “time.”

You are now reading the first blog post on OOTB features in PCMCS covering one of the most used Reports for data analysis as well as troubleshooting profitability calculation results. At the end of this blog post, you should know what Balancing reports are, where to find them, how to use them, and also how to further expand them with minimal time and effort invested.

What are Rule Balancing Reports?

Rule Balancing reports provide quick insight into the validity of the application results. These reports are powerful OOTB artifacts that can be further configured to cater to any custom application requirements in order to support validation of calculation results as well as contribution analysis and traceability.

The PCMCS OOTB Rule Balancing report is initially based on a Default Model View with a standard selection of upper-level members for each dimension. Starting from this Default Model View, administrators or users of a PCMCS application can perform a deep dive analysis on more granular intersections and configure detailed reports for a ruleset or group of rulesets they choose to investigate.

The Default Rule Balancing report is available as soon as the application has been deployed, and it can be accessed via the Main Navigator menu found under the Manage section.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 1
I will be using the default BikesML30 application to demonstrate the capabilities of the Rule Balancing reports. If you have loaded your sample application and cannot see any results in the Rule Balancing reports, check that you ran your end-to-end calculations for any given POV from the Manage Calculation menu. The POV I have chosen for this demonstration is FY16, January, Actual Scenario.

As you open the Rule Balancing menu, the Default Model View is the only view available when you initially set up your application and your allocation rules. Any other Rule Validation reports that you see within the Demo application besides the Default Model View have been built and configured outside of the out-of-the-box list of features.

What are PCMCS Model Views?

A Model View represents a predefined data slice within the PCMCS application; consider the model views as a set of selections of members for each dimension that displays only the relevant data points for a required intersection.

Rule Balancing Report Example

After running the entire set of allocation rules within the Demo BksML30 application, the Rule Balancing report should look like this:

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 2

The description of each rule selected will be displayed along with the rule number. The rules will be displayed in the order that they were launched following the user-defined sequencing, regardless of the actual Rule Number/Rule ID that has been assigned.

  • The “Input” column enables users to confirm that what was loaded into the application matches the expected values received from the source system.
  • The Allocation In and Allocation Out columns validate the allocations performed by the application from both a balance perspective (Allocation In should be equal and opposite to Allocation Out) and a numeric one.  The balance aspect is particularly of interest when allocations are executed with custom calculation rules.  In these cases, two separate rules are typically required, one for the “credit out” and one for the “debit in.”  As such, there is a greater risk that the formulas for the outbound and inbound values will not produce amounts equal and opposite in total, thereby causing an undesired imbalance.  In these situations, the Allocation In and Allocation Out values are shown on two separate rows, and they quickly illustrate to the user the success of their calculations.

Rule Balancing and Smart View Ad Hoc Reports

Any highlighted data point/data value in the Balancing screen will allow you to further investigate the allocation step through a Smart View ad hoc report. These hyperlinks represent pre-built/pre-defined queries that point directly to the Essbase database, allowing you to further expand the analysis of a selected data point.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 3

When you click on the highlighted number, a Smart View link will be downloaded to your workstation.

As an example, you can see what the detail for Net Change looks like for the custom calculation rule R0001 – Utilities Expense Adjustment in a linked report in Smart View.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 4

The column headers for the Rule Balancing report will list the relevant Balance dimension members. If there are members that are not populated, these will be automatically filtered out of the view. You can choose to display them by selecting View -> Columns and tagging the members you would like to display on your report – whether they have data or not.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 5

For further information on what each of these Balance dimension members represent, check out my blog post on Demystifying the Balance dimension in PCMCS.

You can view and edit the model view definition in the collapsed area between the POV and the Balancing report.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 6

The Input data on this customized Model View is pertinent only to Operating Expenses rather than the entire pool of data. This is the reason that the total USD value may be different from data displayed on the Default Model View report.

You can perform ad hoc edits to the Model View as you are using it, but none of the newly made selections will be stored. If you want to apply permanent changes to a specific Model View selection, you will have to edit the Model View in the corresponding menu.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 7

Your Model Views can be defined in the same order of operations as your allocations, or you can choose to create Model Views that are more detailed and dive deeper into a custom grouping of rules, regardless of the ruleset to which they might belong. The only dimensions displayed in your Model View selection are the Business dimensions. POV, Balance, Rule, and Attribute dimensions are not represented and therefore are not open for selection. The data points you define in the Model view will apply to all relevant rules IDs that generated the new cells.

Enhancing and Customizing Your Rule Balancing Reports

In the Demo BikesML30 application, there are several standard Rule Balancing reports that are split by Ruleset while others are named “Trace.” The Trace Model views are built in order to support point troubleshooting of allocation areas that are either complex or open to high variation during each run.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 8

If you want to use the Rule Balancing report values outside of the ad hoc capacity, you can export the report to XLS, but remember that such an export will not be a Smart View report – it will simply be a listing of the information presented on the Rule Balancing screen, as some members displayed here do not have a direct equivalent in the application (Running Remainder, Running Balance). This export option can be found in the Actions menu under Export to Excel, or by selecting the button in the screen capture below.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 10

A new workbook called RuleBalance is downloaded, and the entire set of data displayed on your screen will be available in XLS format.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 9

PCMCS Rule Balancing Drawbacks

Rule Balancing does not allow filtering based on Attributes, UDAs, or Names.

Rule Balancing hyperlinks open Smart View tabs called Linked View, and any new selection of links within the Rule Balancing report will overwrite the contents of the existing tab. If you start developing a report by using Rule Balancing, remember to always rename the tab in case you want to kick off another report for a secondary data point within the same workbook.

Common Issues When Using Rule Balancing Reports

“Rule Balancing Report Links Don’t Work”

Your workstation must have Smart View installed before using the hyperlink feature within PCMCS. The latest Smart View version is available for download through the Navigator main menu under the Installations section.  For more guidance on generic EPM product patching, read the blog post Patch Today! Don’t Delay!

When selecting a hyperlink in the Rule Balancing report, you should be able to see that a download has started. As you click on the downloaded content, a new Excel tab will open, and you will be prompted to enter your Cloud credentials in order to have access to the requested data point intersections. If you do not have Excel open at the time you are accessing the downloaded content, the prompt to enter your Cloud credentials may not appear on the screen.

“I Can’t See Any Data in the PCMCS Rule Balancing Report.”

If data is not displayed on the screen, you are looking at one of the following situations:

  1. There is no data loaded and/or calculated for the POV at the intersections you have defined in the Rule Balancing report. Check your job console to see if such tasks have been triggered and completed successfully.
  2. Your security setup is restricting you from seeing any data values. Reach out to your administrator to adjust data grants or application access.
  3. (This used to happen occasionally during on-premise implementations) If your Business dimensions are tagged as Label Only, check that the first child contains values. You may be able to see data at base-level intersections within your application, yet the Rule Balancing report shows no values due to the Dimension Type, Member Storage, or Aggregation operators you have defined in the metadata.

“I Can’t Create a PCMCS Model View.”

This restriction is based on provisioning. Reach out to your PCMCS Administrator for assistance with your profile or settings.

Rule Balancing Wrap Up

Rule Balancing reports are easy to set up and use.  They retrieve data quickly, are accessible to all application users through the same menu, and they should be the first stop during a model run to quickly identify if there were any issues with data allocations.

Because Rule Balancing is a fast reporting tool with a predefined template OOTB, it is one of the commonly used troubleshooting reports for PCMCS, which can be leveraged for quick balance checks. It is also a mechanism for quick report building at detailed Rule level, a faster alternative to reading the Rule definition and manually replicating the intersections in a Smart View report.   Because these reports are system generated and their hyperlinks are based on application and rules set-up, there is no room for manual errors when building validations.

Save precious time by leveraging the PCMCS OOTB functionality. The next post in this series covers Intelligence screens – Analysis Views and Scatter Analysis.  If you have further questions on the usage of Balancing Reports within PCMCS, please reach out to our team of PCMCS experts at infosolutions@alithya.com.

Implementing Zero Based Budgeting: Setting Up Your Environment

The previous post – Implementing Zero-Based Budgeting: The Requirements – outlined two key components of a successful zero-based budgeting program:  a culture change and a centralized system. We recommended creating a centralized system with Oracle Planning and Budgeting Cloud Service (PBCS)/Enterprise Planning and Budgeting Cloud Service (EPBCS) because of the many advantages it provides, such as an environment with data depth.

Even with a zero-based budgeting blueprint, many companies are still hesitant to go “all in” thinking that a zero-based budgeting program implementation requires too much time and resources. The introduction of Cloud services such as Oracle PBCS/EPBCS makes the implementation of a centralized financial system easier than ever, greatly reducing the barrier to entry.

This final post in this series shares the power of a PBCS/EPBCS environment to achieve the greatest success with a newly implemented zero-based budgeting program.

How Can PBCS/EPBCS Environments Enhance the ZBB Experience?

There are four key ways to gain the most from a PBCS or EPBCS environment, including the setup of targets and accountability metrics that offer more meaningful data and greater transparency when making budgeting decisions.

Clients are often given target-setting goals in management meetings or over the phone, but we demonstrate for them how to integrate this into their budgeting systems. On numerous occasions, Alithya has been contracted to implement target setting where leadership sets growth targets and the system flows down the revenue by service, product line, etc. In turn, analysts match the underlying details.

Not surprisingly, this is a common request because target setting has been a long-time tradition during the budget process. By setting up this target-setting process in PBCS/EPBCS, an offline process is brought online and molded into the overall budgeting system process.  Combining that with the zero-based budgeting mantra allows targets to be set and provides analysts with their needed baseline.  Moreover, analysis can be done on departments that take the typical “reduce expenses by 10%” approach to achieve the target number instead of the more insightful zero-based budget journey.  Yes, target setting in a centralized system is easier, but the real benefit of a centralized system is the ability to see how teams react to the new target.  Did they take the traditional approach of reducing budget percentages to fit the numbers, or did they look at their budget as a whole, analyze each line item, and question the numbers organically?

After targets are set and the budget is approved, we look at whether the stated cost savings come to fruition.  A centralized system allows capital projects or initiatives to be tracked to help systematically measure the expenditures of cost-savings activities found during the zero-based budget discovery. This provides a clear picture of what each department is doing and holds them more accountable for project decisions. It is an achievement to complete a zero-based budget “diet,” but holding teams accountable brings them to the next level of the zero-based budget “lifestyle.”

In essence, this new budgeting environment provides better insight into data – insight that ultimately allows savings to be found more effectively. For example, if you want to see the cost of direct materials, this centralized system can be set up to capture the costs in order to analyze and keep track of the different KPIs that reduce or increase overall costs.

Another example of how this works is by segmenting down employee costs such as travel. Instead of having a run rate of 10% of direct labor or travel costs, determine what job or tasks required that travel and use this KPI to negotiate travel expenses to further drive down costs.  Essentially, use PBCS/EPBCS as a tool to capture KPIs (e.g. travel costs by job) and determine the best use of travel dollars and – more importantly – negotiate with vendors on key travel.

Lastly, a budgeting environment provides clarity to help teams make better informed decisions about future initiatives. With the ability to see all of the underlying data points in a single location, it is possible to identify past sales and marketing campaigns and expenditures that led to profitable customers. Therefore, zero-based budgeting teams that took the initiative to build their sales and marketing cost-benefit analysis from the ground up are able to dedicate more resources (e.g. dollars, people, etc.) to winning strategies.  This is in contrast to the traditional budgeting approach of a “10% rate of marketing spend year-over-year” that often masks the winning and, more importantly, the losing marketing initiatives. Moreover, such planning and the availability of different data points helps draw key inferences that allow sales and marketing teams to be more successful.

Summary 

Utilizing a Cloud service such as Oracle PBCS/EPBCS makes it easier for companies to implement a centralized system and achieve success with a zero-based budgeting program. PBCS/EPBCS environments can and should be set up in a way that enhances the zero-based budgeting experience. This is achieved by integrating target setting goals and establishing accountability metrics that allow a deeper dive into budget data while providing greater transparency to make better informed decisions.

To learn more about zero-based budgeting best practices and to get professional help with your Oracle PBCS/EPBCS environments, feel free to contact our team of experts.

Oracle’s ARCS Patch 1812 and Patch 1811 Review: Gazing into the Crystal Ball

Peruse the Account Reconciliation Cloud Service (ARCS) forums on Oracle’s Cloud Customer Connect and you’ll notice a theme: Transaction Matching. Questions, comments, and critiques have been flooding in from across companies and industries, clients and consultants alike. Combine this with Oracle’s game-changing announcement of the EPM Cloud price simplification plan teased for 2019 – that is, the strategic move to strictly sell bundled EPM Cloud products in the near future (more on this another time – it’s a doozy) – and the changes released for ARCS in Patches 1811 and 1812 could not have come at a more opportune time. Furthermore, these changes provide a sneak peek into Oracle’s crystal ball of what’s to come.

The WHAT has come: Changes for ARCS at the end of 2018

The most important change to ARCS from the 2018 season finale, Patch 1812, is the shift from having separate reconciliations between Transaction Matching and Reconciliation Compliance to one standardized use of Profiles. This is configured through the new reconciliation methods provided in Formats (Balance Comparison with Transaction Matching, Account Analysis with Transaction Matching, and Transaction Matching only).

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 1

The implication is that Transaction Matching reconciliations receive all the benefits that previously only Reconciliation Compliance enjoyed, including but not limited to: bulk uploads/updates to Profiles and reconciliations, access to new Workflow options such as Reviewers and Teams, and detailed filtering options including the more hidden statistical metrics (such as attributes related to count, etc.). It is important to note, though, that these new features will almost exclusively relate to new reconciliations using one of the two ‘*with Transaction Matching’ format options, as seen below. Still, the opportunity for clever design is there.

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 2

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 3

Furthermore, to support this change, Period will now be shared between the two feature sets. Additionally, reconciliations that are performed in Transaction Matching will now utilize their period-end Balances loaded to Reconciliation Compliance. While historically there have been business processes put in place to ensure that the balance loaded to Transaction Matching equaled the balance loaded for the month-end reconciliation in Reconciliation Compliance, patch 1812 ensures that a system process governs the data’s integrity – certainly a more reassuring thought.

Two additional under-the-radar features introduced in Patch 1812 are (1) the ability to have Workflow that includes multiple members while not requiring an order precedence to the work and (2) the option to now have end-users approve their own re-assignments, reducing the administrative bottleneck. These changes provide value-add functionality that demonstrate Oracle’s willingness to listen to customer feedback even during these more “stuffed” patches.

The last item to mention was actually included in Patch 1811. In Transaction Matching, a text file can now be generated with the transactions or adjustments from the tool which can then be uploaded to the ERP source systems as a journal adjustment. This has been an ongoing request, and I am happy to see it finally actualized.

The WHAT does it mean: Implications and Expectations for ARCS in 2019

Transaction Matching’s relative strength compared to its competitors is becoming increasingly apparent, as Oracle continues to shore up areas in need of support while also providing updates that show a sensitivity to market demand. The move to unify Transaction Matching and Reconciliation Compliance is not a new idea, as Patch 1805 made apparent with the uniting of the two UIs (and much more – see Oracle Product Management’s webinar update here), but it is nonetheless a bold one that I anticipate will pay dividends. The automatic conversion of Transaction Matching reconciliations to Profiles is a nice touch too, making the transition an easier pill to swallow for skeptical clients who, I am sure, were not eager to pay expensive consulting fees for this. Even smaller changes, such as providing a space for strictly manual matching (i.e., without Auto Match rules; a Patch 1811 change), demonstrate ARCS’ commitment to be an approachable and modular product that grows with your company – a benefit I have consistently touted in the past and expect to continue touting in the future.  More details about the benefits of ARCS are shared in the posts A Safe Step into the Cloud: The Argument for Account Reconciliation Cloud Service (ARCS) and Modularity in Account Reconciliation Cloud Service (ARCS): No Mistakes from “Day 1” to “Day 100”.

Changes continue to come to ARCS that only slowly trickle, if at all, down to Account Reconciliation Manager (ARM). This was true for the Variance Analysis reconciliation method which arrived in May 2017 for ARCS, but not until Dec 2017 for ARM, and it is a fair guess that this will be true for the aforementioned “All Preparers” and “All Reviewers” workflow options and end-user re-assignment configuration setting. Combine this with more and more dollars being invested in Transaction Matching compared to Reconciliation Compliance (from where I’m looking, anyway), and the message is clear on who the favorite is in the Oracle product family. While ARM contains strong functionality as an on-premise option, expect the functionality gap to increase compared to its Cloud counterpart.

Lastly, the inclusion of a journal adjustment export out of Transaction Matching is a combo solution: a “we can do that too” to product competitor Blackline’s existing functionality as well as a demonstration of Oracle’s willingness to think outside of the product. This highlights ARCS’ flexibility as a tool capable of being used within other processes. In fact, the Oracle EPM Cloud ecosystem is one of ARCS’ biggest strengths over its competitors.  I would love to see this journaling ability out of Reconciliation Compliance as well which would provide the functionality to most ARCS clients. Regardless, this is a step in the right direction.

This post has been cross-posted on the #DataRestless blog site – read it here and other Oracle-related posts as well.

Implementing Zero-Based Budgeting: The Requirements

A Culture Change and a Centralized System

The first post in this 3-post series – Implementing Zero-Based Budgeting: Benefits, Myths, and Goals – covers the benefits of zero-based budgeting. To summarize, it enables you to achieve long-term savings that result in sustainable growth, and it holds your financial analysts accountable for the cost figures they approve and how they manage the overall budget. This allows more effective recognition of any unwanted costs and of how that money can be shifted into other growth areas within the company.

However, to reap the benefits of a zero-based budgeting program, a culture change is needed first at certain levels within the company. The goal is to eventually have the entire company complete this culture shift, but it is best to start small. Along with a change in culture, a centralized reporting system needs to be created as well to provide teams the ability to share real-time numbers with each other to achieve the goals of this new budgeting program.

Better Than a Quick Fix

What exactly is meant by a culture change? It means starting small, beginning with Finance, and then fostering the change in other departments. To be successful with this new program, other departments will eventually have to jump on board with this new budgeting approach. These departments will need to step up in analyzing their own costs and determining how they can save more without diminishing their capabilities.

For example, while financial analysts talk to the shop floor to see where costs can be reduced, the HR department should work with Finance to determine how it can become leaner. Moreover, the IT department should take the lead on negotiating with its vendors to find any areas that can be saved. These are just a few examples of how different departments can step up to the plate; implementing a successful zero-based budgeting program will requires team effort.

Changing the culture doesn’t happen overnight. Senior leaders should take the lead in fostering this change. To ensure that everyone is on the same page, managers need to advocate the new approach within their respective departments.

Incentives also help teams to buy into this new budgeting approach.  Although incentives for growth metrics may already exist, additional incentives can effectively encourage staff to find ways to reduce costs for the metrics they manage.

Some examples of incentive metrics are the realized ROI based on the requested capital expenditure and the total cost saving dollars resulting from a zero-based budgeting program. For the former, this can mean moving to the Cloud to save money or reducing redundant tasks by introducing centralized software. For the latter, it can be exemplified by achieving a 10% cost reduction per phone.

Best Practice to Achieve Success

A crucial component of the success of a zero-based budgeting program is an officer who governs the entire process from start to finish. This individual (or team) should have deep knowledge of the budgeting process. Naturally, s/he will not know the ins and outs of each department, so that is why s/he needs to be an ambassador to department leaders. The officer will also provide oversight to ensure that past bad habits of budgeting do not return to plague this new program. And lastly, s/he must be dedicated to the craft of continuous improvement, which means seeking outside counsel when needed.

As mentioned earlier in the post, a culture change needs to be accompanied by a centralized reporting system. Alithya has helped clients implement Oracle Planning and Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS) and overcome the deficiencies of Excel-based models. These models lose sight of what the true cost numbers are because past budgets are simple anchors of history rather than detailed breakdowns of cost. Moreover, these numbers become siloed within the vast library of Excel models. With Oracle PBCS or EPBCS, budgets can be highly surgical and help leaders in the company pinpoint reductions.

A centralized system allows the capture of all changes in a single location in real-time, and it provides insight into how effectively managers seek cost savings. This can be used as a key indicator to determine if their actions are in line with this new methodology.

Furthermore, centralization not only holds managers more accountable, but it also empowers them to create innovative cost-saving solutions. Driven by incentives, staff will have a clear purpose: finding new ways to achieve sustainable growth for the company while being rewarded for their hard work.

Recapping What It Takes to Achieve ZBB Success

The goal is to create a cost-savings culture that allows more capital to be invested into growing parts of the company. To be successful, follow the best practices outlined here, starting with a culture change within the company and giving your teams a centralized PBCS or EPBCS system so they can see all data points more clearly. The hard work does not stop here, though! The next post delves into setting up a zero-based budgeting system.

Implementing Zero-Based Budgeting: Benefits, Myths, and Goals

If you are in the finance world, then you probably have heard of zero-based budgeting. Investopedia defines zero-based budgeting as “a method of budgeting in which all expenses must be justified for each new period. The process…starts from a “zero base,” and every function within an organization is analyzed for its needs and costs.”

There are many reasons that financial professionals decide to use zero-based budgeting. For one thing, it goes hand-in-hand with a centralized system where information can be shared – something at which Excel spreadsheets are terrible. Furthermore, developing a centralized system enables you to scale to your needs as your company grows. Lastly, it enables financial analysts to spend more of their work week analyzing data instead of curating a financial system and worrying if the numbers match.

At Alithya, we have found with our past clients that a successful zero-based budgeting implementation resolves numerous problems. The two main things clients hope to achieve are growth across multiple business units and sustained cost reduction. With zero-based budgeting, you can earn long-term savings that translate directly into sustainable growth.

Earning Long-Term Cost Savings

Zero-based budgeting becomes a daily exercise in cost savings for your financial teams. One method of achieving cost savings is renegotiating costs. For example, instead of taking the run-rate of 3% from last year’s numbers, perhaps you can contact your vendors to bargain for a better deal or switch to a different vendor with a more competitive price. Or how about having your analysts ask the IT department why it costs $38.03 per phone? What makes up that entire $38.03? Don’t assume that there aren’t any negotiable components of a cost.

The reason zero-based budgeting is so effective at long-term savings is that it is not a one-off fix. Many teams implement one-off fixes, only to find that those fixes do not provide sustainable cost savings. A common example is offshoring your call center, which might get you an immediate win in the cost column. However, this strategy typically reduces customer service quality while also limiting your ability to evolve with your business as it grows.

When enacting this type of program, you will analyze the costs of your business at every level. This may seem tedious, but what you will find is a clearer understanding of where your money is going. This can mean acquiring a greater understanding of contract labor costs as well as improving purchasing and procurement procedures, to name a few. Moreover, “when properly implemented, zero-based budgeting can reduce SG&A costs by 10 to 25 percent, often within as little as six months,” according to McKinsey & Company.

Debunking Myths Surrounding Zero-Based Budgeting

There are many myths surrounding zero-based budgeting that have sadly created an artificial barrier that CFOs and their teams do not want to cross. Many financial professionals think it means cutting the budget down to the bare bones; in reality, a zero-based budgeting program analyzes costs from the top down. Moreover, it is the CFOs’ duty to outline cost-cutting targets so that their teams’ efforts stay focused.

Another misconception is that zero-based budgeting only helps with cutting SG&A costs. Actually, it can do much more, such as breaking down the Cost of Goods Sold (COGS) and helping teams direct capital expenditure toward the investments with the greatest ROI.

Just because your business is not in decline or stagnating doesn’t mean that you can’t adopt a zero-based budgeting program. If you are already achieving growth, you can use this type of budgeting method to keep the overall business leaner so that you can provide more runway for growing business units.

Do you really start from zero? This is a common question we are asked; because of the name, many people assume that you always start from zero. Technically, this is true, and it is this core principle that drives the cost management culture change introduced in the next post in this series.

However, not all things have to start from zero. At Alithya, we have been through many implementations where parts of the P&L are driver-based or zero-based. This can be achieved with a detailed, structured, and interactive system (like Oracle PBCS/EPBCS) that gives you real-time feedback.

How Do Oracle PBCS and EPBCS Help Achieve ZBB Goals?

The main feature you acquire when you implement an Oracle PBCS or EPBCS system with your zero-based budgeting program is deeper analytics. This data enables you to dig into the “why and how” of your P&L.

For example, you could pose the question: What driver did they use? Did they simply take last year’s actuals and add 3%? Did they take a cost-per-head and budget it manually, or did they take the easy way out? All are important questions that force finance teams to be more accountable in their everyday decisions.
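To make that contrast concrete, here is a minimal sketch (in Python, with purely hypothetical figures) comparing a run-rate budget built from last year’s actuals plus 3% with a driver-based budget built from headcount and a justified cost-per-head:

```python
# Hypothetical comparison of a run-rate budget vs. a driver-based (ZBB-style) budget.
# All figures are illustrative and not taken from any client data.

last_year_actual = 1_200_000                 # prior-year spend for a cost center
run_rate_budget = last_year_actual * 1.03    # "last year's actuals plus 3%"

headcount = 250                              # planned headcount (the driver)
cost_per_head = 4_600                        # justified, negotiated unit cost
driver_based_budget = headcount * cost_per_head

print(f"Run-rate budget:     {run_rate_budget:,.0f}")      # 1,236,000
print(f"Driver-based budget: {driver_based_budget:,.0f}")  # 1,150,000
print(f"Potential savings:   {run_rate_budget - driver_based_budget:,.0f}")  # 86,000
```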

Recapping the Benefits of ZBB

By implementing a zero-based budgeting program with a centralized system, you can hold your analysts more accountable for cost figures while making them own how those costs are managed. It allows you to recognize unwanted costs that can be diverted into growth areas and to breed a culture of cost reduction and visibility. The latter requires that you start a culture change within your team. It is an essential part of succeeding with a zero-based budgeting program, which is why we will cover it in greater detail in the next post.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing

Oracle’s Profitability and Cost Management Cloud Service (PCMCS) is a powerful service for allocating General Ledger profits and costs.  Recently, we worked with a banking industry client to provide a model that calculates profitability at a Product/Channel level while maintaining Account level detail.  We accomplished this through a framework we refer to as Micro-Costing, in which detailed profits and costs are calculated in a database using rates developed at the summary level in PCMCS.  Alithya began development of this framework in 2016 to fill a functional gap in PCMCS and to provide a common framework that can be used either on-premise or in the Cloud.

To highlight the capabilities of Micro-Costing, I will use the solution deployed at our banking client as a specific example.  The following table describes the two layers where profits and costs are provided:

[Image 1: The two layers at which profits and costs are provided]

 Definitions:

  • Product – a loan or deposit offering. Examples of a loan are an auto loan or credit card; examples of a deposit are a savings account or a checking account.
  • Origination Channel – where the account was originated.
  • Service Channel – where the financial or transactional cost or profit occurs or is assigned.
  • Customer – a legal entity responsible for accounts; for example, a person with both a home loan and a savings account.
  • Customer Account – a product that is assigned to a customer.
  • Financial Costs and Profits – the cost or profit of servicing a loan or deposit for a customer; for example, interest paid on a savings account.
  • Transactional Costs and Profits – the cost or profit of interacting with a customer; for example, the cost of an ATM transaction.

A simple way of thinking about the client’s business model:

  • Origination channels offer Products
  • Products are assigned to Customers as Customer Accounts
  • Customer Accounts are used by Customers through Service Channels

The generation of an Account level profit or cost is a C = A*B calculation where

  • A is the driver
  • B is the rate of a driven value
  • C is the driven value (profit or cost)

An example is:

ATM Expense = ATM Transaction Count * ATM Expense Rate
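As a minimal sketch (Python, with illustrative figures only), the ATM example reduces to the following; in the deployed solution the driver comes from the warehouse and the rate is calculated in PCMCS:

```python
# C = A * B at the Customer Account level, using the ATM example.
# The count and rate below are illustrative placeholders.

atm_transaction_count = 12          # A: driver captured for one Customer Account
atm_expense_rate = 1.85             # B: rate of the driven value, developed in PCMCS
atm_expense = atm_transaction_count * atm_expense_rate   # C: driven value (cost)

print(f"ATM Expense: {atm_expense:.2f}")   # 22.20
```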

Micro-Costing Diagrams

Data Model

This summarizes the data model deployed.

[Image 2: Micro-Costing data model]

STAGING – Contains transient data.

OPERATIONAL DATA STORE (ODS) – Persists the operational data with minimal transformation.  Dimensional integrity is not enforced, but validation jobs are available to check stored data against rules and dimensional integrity.

WAREHOUSE-STAR – Persists the drivers, the rates, and the calculated profits and costs at the Customer Account level.  The Driver Lookup and Driven Value Lookup functions are used to define the drivers and driven values so that the addition of a driver or driven value is a configuration activity for an administrator rather than a coding activity.

Data Integration

A high-level summary of the data flows as deployed:

[Image 3: Data integration flow]

The source data is broken down into 3 types:

  1. General Ledger
  2. Operational Data
  3. Metadata

Data integration uses interim flat files to maintain flexibility regarding the source data: the flat files act as an API, so downstream processing requires no knowledge of the source systems.  This also allows the introduction of source data from 3rd parties that is not available for automated extraction from the source.
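As a rough sketch of this decoupling (Python; the pipe-delimited layout and column names are hypothetical, standing in for the agreed file specification), a staging loader only needs to understand the flat file, not the system that produced it:

```python
import csv

# Hypothetical interim flat file loader: downstream code depends only on the agreed
# file layout, never on the source system (G/L, core banking, or a 3rd party).

def stage_driver_file(path):
    staged = []
    with open(path, newline="") as f:
        for rec in csv.DictReader(f, delimiter="|"):
            staged.append({
                "customer_account": rec["CUSTOMER_ACCOUNT"],
                "activity_code": rec["ACTIVITY_CODE"],
                "driver_value": float(rec["DRIVER_VALUE"]),
            })
    return staged   # in the deployed solution, rows are bulk-inserted into STAGING
```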

The operational data includes both Customer Account financial information and transactional activities or fees.  Product and Channel references are provided along with this information:

  • 1 million+ Customer Accounts
  • Approximately 6 million transactions per month

Some transactional drivers represent an activity that cannot be associated with a specific Customer Account; for example, a new loan application.  Proxy Customer Accounts for each product are generated to provide a place for these activities.
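A simple way to picture the proxy convention (Python sketch; the naming pattern is hypothetical):

```python
# Activities that cannot be tied to a specific Customer Account (e.g. a new loan
# application) are assigned to a product-level proxy account instead.

def resolve_customer_account(activity):
    """Return the real Customer Account if present, otherwise the product proxy."""
    if activity.get("customer_account"):
        return activity["customer_account"]
    return f"PROXY_{activity['product_code']}"   # one proxy account per Product

print(resolve_customer_account({"product_code": "AUTO_LOAN", "customer_account": None}))
# -> PROXY_AUTO_LOAN
```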

Additionally, although not graphically displayed in the above diagram, Branch level drivers are directly fed into the PCM Model, examples of which are Branch square footage and number of branch employees.  These drivers were used for non-Customer Account PCM costs and profits.

All batch processing is built using SQL Server Integration Services.  This is based upon an agreement with the client regarding the preferred tool sets, with SQL Server selected as the database.  The framework is transferable to other integration tools and databases, including Hadoop, and Alithya has performed in-house solutioning in preparation for using the Micro-Costing framework with larger clients.

The data integration is as follows:

  1. Set POV
  2. Update metadata and stage
  3. Stage financial and transactional information
  4. Validate staged data and reprocess as necessary
  5. Load staged data to ODS and then to Star
  6. Upload PCMCS with GL and drivers
  7. Process allocations in PCMCS
  8. Download rates
  9. Run A*B calculations for each Customer Account and populate profit and cost table
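A high-level orchestration of these nine steps might look like the following Python sketch. In the deployed solution the steps are SSIS packages and PCMCS jobs; the function names here are placeholders, not actual package or API names.

```python
# Placeholder orchestration of the monthly Micro-Costing run. Each step returns
# True on success; a failure stops the run so the step can be reviewed and re-run.

STEPS = [
    "set_pov",
    "update_metadata_and_stage",
    "stage_financial_and_transactional_data",
    "validate_staged_data",
    "load_ods_then_star",
    "upload_gl_and_drivers_to_pcmcs",
    "run_pcmcs_allocations",
    "download_rates",
    "calculate_account_level_profit_and_cost",
]

def run_monthly_process(pov, tasks):
    """pov: e.g. {'year': 'FY24', 'period': 'Jan', 'scenario': 'Actual', 'version': 'Final'}
    tasks: dict mapping each step name to a callable implementing that step."""
    for name in STEPS:
        if not tasks[name](pov):
            raise RuntimeError(f"Step '{name}' failed for POV {pov}; review and re-run.")
```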

Key Design Principles

The following design principles guided development of the Micro-Costing framework.  They facilitate an easy-to-use, easy-to-maintain solution as deployed for our client.

  • Dimensional synchronization between the Micro-Costing warehouse and PCM
  • Validation checks as close to the original data as possible
  • Configurable drivers and driven values

Dimensional Synchronization

All dimensional mapping must occur prior to the warehouse star schema; otherwise, it is not possible to perform the Micro-Costing A*B calculations to derive profit and cost detail.  This has an impact on any deployment that uses FDMEE or Cloud Data Management, as they cannot perform additional mappings during upload to the cube.

Dimensional synchronization includes a Point of View (Year, Period, Scenario, and Version) to allow multiple sets of drivers to be loaded during a month and, if desired, ‘what-if’ rates to be transferred back to the Customer Account level.

Validation Checks

Validation kick-outs and checks occur as early in the data integration process as possible, with a “simple” validation during staging and a “complex” validation during generation of the fact information in the warehouse.  This allows the administrator to catch quality issues while adding as little as possible to the overall process duration.  The data integration process is broken into a series of steps that allows for validation review and for re-running a step before moving on to the next one.  This principle held up in deployment, ensuring that time wasn’t wasted running later processes with invalid data; the result was an improved overall process and a significant reduction in the number of days required to produce profit and cost analysis for a given month.  A lesson learned during the initial roll-out was that our client had not previously required rigorous validation of the drivers at the Customer Account level and had to develop new techniques for validating the source information to ensure accuracy.
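A “simple” staging validation can be as plain as checking each staged driver row against the known dimension members, as in this illustrative Python sketch (field names are hypothetical):

```python
# Simple validation at staging: kick out rows that reference unknown Customer
# Accounts or unmapped activity codes before any expensive downstream processing.

def validate_staged_drivers(staged_rows, valid_accounts, valid_activities):
    kick_outs = []
    for row in staged_rows:
        if row["customer_account"] not in valid_accounts:
            kick_outs.append((row, "Unknown Customer Account"))
        elif row["activity_code"] not in valid_activities:
            kick_outs.append((row, "Unmapped activity code"))
    return kick_outs   # reviewed by the administrator; the step is re-run once fixed
```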

Configurable Driver and Driven Values

A key feature of Oracle’s PCM applications is configurability, and the Micro-Costing framework is built to provide an easy-to-maintain solution that allows rapid addition of drivers and driven values without the administrator having to manually update the tables and views that manage the transformation and persistence of data.  This was accomplished by defining the drivers and driven values in lookup tables and providing stored procedures for maintaining the dependent tables and views.
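Conceptually, the configuration works like the following Python sketch: the driver-to-driven-value relationships live in a lookup structure, so adding a pair is a data change rather than a code change. The structures and names below are illustrative, not the framework’s actual tables, which reside in SQL Server and are maintained by stored procedures.

```python
# Illustrative configuration-driven A * B calculation.

DRIVEN_VALUE_LOOKUP = [
    # driven value (C)          driver (A)                 rate from PCM (B)
    {"driven": "ATM Expense",   "driver": "ATM Txn Count", "rate": "ATM Expense Rate"},
    # adding a new driver/driven value pair is just another row here
]

def calculate_driven_values(account_drivers, rates):
    """account_drivers: driver values for one Customer Account; rates: PCMCS rates."""
    return {
        cfg["driven"]: account_drivers.get(cfg["driver"], 0.0) * rates.get(cfg["rate"], 0.0)
        for cfg in DRIVEN_VALUE_LOOKUP
    }

print(calculate_driven_values({"ATM Txn Count": 12}, {"ATM Expense Rate": 1.85}))
# -> {'ATM Expense': 22.2}
```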

The process for adding a new driver and driven value is very straightforward:

  1. Backup the database and the PCM cube.
  2. Update the source feeds to include the new activity or fee.
  3. Update the activity to Driver Lookup and Driven Value Lookup tables with the new values.  *Note: The driven value record references the driver for the A*B calculation.
  4. Execute the “Update Costing Tables and Views” stored procedure. *Note: removing a driver or driven value does not modify the tables.
  5. Update HPCM Account dimension for the new driver and driven value.
  6. Update HPCM rules to use the new driver and allocate expenses to the new driven value, and calculate the rate for the new driven value.
  7. Run the entire data integration process for the POV, and review results.

Key Benefits

The successful deployment of the solution provides the following key business benefits:

  • An improved ability to provide Product/Channel level costs and profits.
  • Reduced monthly cycle time and effort. The prior data integration process was disjointed and required a large amount of effort to produce results.
  • Drill-through capability to Customer Account level drivers, profits, and costs allows for root cause analysis of Channel and Product Costs.
  • Aggregation along other dimensional paths. Starting at the Customer Account level allows for aggregation along Customer attributes such as zip-code or credit score, providing new insights and enhanced executive decision making.  A follow-on project to use the Customer Account level data in OAC is currently being assessed.

Additionally, the following benefits to the administrative team are realized:

  • Model flexibility. The configuration of an additional driver and driven value in Micro-Costing takes less than 15 minutes.
  • Operational Data Store (ODS) and Warehouse. This allows future projects to use a common, curated source of information.  It also sweetened the pot for our client, which was dissatisfied with its prior warehouse but needed a business reason to refresh it.  The prior warehouse lacked the following items, which are addressed in the new ODS and warehouse:
    • Explicit mappings such as Activity Code to Driver Code that are controlled by the business
    • 3rd party data from partners and industry sources
    • Consolidation of financial and transactional information into Customer Account level facts
    • Hashing of Personally Identifiable Information (PII) for account security
  • Easy troubleshooting, validation, and auditing capabilities with PCM. Errors or mismatches in profit or cost at the Product/Channel level can be traced to either rule definition mistakes or driver data entry mistakes. Finding the issue and correcting it with a few clicks has a positive impact on the overall analysis and maintenance effort.

Final Thoughts

Alithya has developed a Micro-Costing framework that allows an integrated view of profits and costs at both a summary and detailed level.  This framework is successfully deployed at a banking industry client to provide a superior solution.

The framework is deployable either on-premise or in the Cloud and is applicable to other industries, such as:

  • Patient encounters in Healthcare
  • Claims in Insurance
  • SKUs in Retail
  • Subcomponents in Manufacturing

…or anywhere the allocations occur at a summary level with drivers aggregated from a detail level.