Profitability and Cost Management v19.08: Updates, Insight, & Impact

Oracle Cloud Service subscribers are used to monthly updates being applied to their Test and Production instances on the first and third Friday of each month, respectively.

However, not all monthly updates are created equal. 

If you own a subscription to Profitability and Cost Management Cloud, you may have noticed a brand new menu with the latest update that occurred this past Friday, August 2nd. More importantly, the changes in the latest version of PCM go beyond the surface.

Here is a list of all the announced updates as well as some other insightful findings and their potential impact: 

  1. Custom Calculations Bug fixed
  2. New Model Menu
  3. Designer menu – 2 in 1!
  4. Increased transparency during rule build
  5. Integrated POV Manager
  6. Increased flexibility with Model and Data POV
  7. Launch multiple POV allocations with one click
  8. Embedded search capability in the Execution Control menu
  9. Easy access to Job Library
  10. Recreate instance with all file clear

1.  Custom Calculations Bug Fixed

If you were holding off on applying PCM patches because of Custom Calculation issues in prior updates, then this is the update you have been waiting for!  Early testing of PCM v19.08 indicates that bugs found in past versions in relation to complex custom scripting have been resolved.

*Caution:  Each Custom Calculation is unique, and thorough testing is crucial before the scheduled v19.08 update is pushed to the Production instance. 

All software updates should be tested using a data set that can be easily compared with results from a prior version of the software.

Due to several changes to the Essbase database, users may notice differences in the reported values in Execution Statistics reports, but the resulting data values should be the same as they were before the latest update was applied. When Test instance calculation results show a different number of updated cells than Production results for the same rule, users should take it one step further and validate the data values themselves.

2.  New Model Menu

Rule build and maintenance tasks can now be performed in the new “Models” menu.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 1

The new Models menu is aimed at simplifying the way we manage Profitability and Cost Management application Model data – Rulesets and Rules. All administrative tasks are grouped for a more streamlined interaction with PCM – from building rules to executing them and, finally, monitoring jobs – a simplified menu that reflects a real-life workflow. This GUI update enhances the user experience, as there is no longer a need to jump between different sections of the menu to perform end-to-end activities.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 2

3.  Designer Menu:  2 in 1!

Within the Designer tile there are two tabs covering functionality that was previously accessed via two separate menus:

  • the pre-19.08 Rules menu, now called Waterfall Setup in the Designer menu
  • the pre-19.08 Calculation Express Editor menu, now called Mass Edit in the Designer menu

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 3

Combining the previous two menus is a beneficial move as it groups logical actions within one location.

Users can set up new rules in the Waterfall Setup tab and can perform mass changes such as replace or add selections in multiple rules in the Mass Edit tab.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 4

4.1.  Increased Transparency During Rule Build

Prior to the 19.08 update, to check who authored a rule, when it was created, and when it was last updated, users had to exit the Rules menu and consult the Calculation Rules menu instead. That is no longer the case in the new 19.08 Designer menu: every rule now displays this information as soon as it is selected.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 5

There are also updates to the GUI for the two types of rules in PCM.

4.2.  Standard Allocation Rules

The pre-19.08 Management Ledger Rule editor menu for standard allocations is straightforward and easy to use.

There is a tab for each section, indicating from left to right the proper steps for setting up your model allocations.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 6

Who would have thought it could get better?

Well, in v19.08 it did.

The new display combines the Source and Destination tabs into one. This adds to both the ease of use and the transparency of rule setup.  At a glance, users can now check the core setup of each rule.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 7

The […] button on the right side of each row/dimension selection has three menu options:

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 8

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 9

  • Users can type in multiple member source selections. Member name validation is not dynamic; it is performed only when saving contents. The menu does not lend itself to bulk copy and paste from a text editor – each record must be pasted on its own row.
  • Calculation segmentation – a feature carried over from prior versions of PCM. The advice here is to use it only when requested to do so by Oracle Support when dealing with large applications. As per the Oracle Admin Guide for PCM, “it activates a way of calculating specific dimensions and levels to enhance scalability with very large models”.
  • Clear [Dimension] Selections – the previous menu’s “X” button (see below) enabled users to remove member selections only one at a time.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 10

The clear selections menu option with the latest PCM release enables users to remove all selections with one click.

4.3.  Custom Calculations

Custom calculations receive an enhancement similar to the one for standard allocation rules. The Rule menu in pre-19.08 versions of PCM had two separate tabs: one for the Target set of members and a second for the custom formula:

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 11

In the 19.08 version, the Target and Formula tabs are now combined into a single menu.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 12

The new displays for both standard allocation rules as well as custom calculations add clarity and ease of use at the same time, potentially decreasing development and troubleshooting time.

4.4.  Wishlist for Future Designer Menu

If I could choose one feature from the pre-19.08 Rules menu to layer on top of the 19.08 Designer menu, it would be the Text Editor.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 13

It is a simple feature that is used heavily not just during development, but also during troubleshooting and maintenance.  Unfortunately, it is not available in the On-Premises version, and it is not part of the new Designer menu screen either.

The benefit of this feature is that users can copy information from a text editor or .xls and apply mass updates to a rule for all the dimensions in the Source/Destination/Target screens in one action.

The format of the Text entry is restricted: each dimension member selection must be typed on a separate line and must include a reference to the dimension it belongs to.

"Dimension Name","Member Name"
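To make that concrete, here is a hypothetical entry typed into the Text Editor. The dimension and member names are made up for illustration; the only requirement, as described above, is one selection per line, prefixed with its dimension:

"Entity","Total Entity"
"Account","Operating Expenses"
"Product","All Products"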

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 14

The format restrictions have not stopped users from preferring Text Entry over any other type of rule editing menu, especially when they are familiar with the model and its metadata naming conventions.

One other feature that I found useful, especially during development or during initiatives involving structure reorgs, is the selection panel that displays the entire hierarchy for each dimension.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 15

This panel does not appear to exist in the new Designer menu, which means that the user must be familiar with member names or must know all the layers of a dimension. That type of familiarity with metadata comes with time, and because structures often change, users are forced to stay constantly up to date with every new modification to the allocation model.

The member selection search box that appears in 19.08 is an improvement, as it is fast and dynamic. However, when multiple selections are required and there is not much familiarity with the hierarchy naming convention, having the entire picture available in one panel is beneficial.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 16

5.  Integrated POV Manager

The POV Manager has been folded into the Execution Control menu, which makes its management and maintenance more streamlined.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 17

Users can now create a new POV via the plus sign button.

All other options remain as they were prior to the 19.08 update with one exception.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 18

Users were able to manage and update a global context in relation to a POV in the pre-19.08 Rules menu, but this information was displayed alongside the Rulesets and Rules for that POV.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 19

Changing the Global Context dimensions may impact the entire set of Rules for a POV. Therefore, configuring the Global Context alongside the POV Manager, as the 19.08 update does, is a logical menu association.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 20

6.  Increased Flexibility with Model and Data POV

Profitability and Cost Management applications have always had features that enabled fast spin-off of new “What-If” analysis as well as testing different allocation logic rules on the same POV. Oracle has now taken this existing capability to the next level, by enabling users to leverage any Model POV against any Data POV.

Model POV vs Data POVs:

  • The Model POV represents the reference POV that contains the allocation rulesets and rules.
  • The Data POV represents the POV that contains the data values which must be allocated.

In prior versions of PCM, users had to keep Model and Data POVs aligned. If there was a need to test a new set of rules on a data slice, users had to copy the desired rules in the Data POV intersection to be able to launch allocations.

With the 19.08 update that is no longer the case.

The control over which POV is the Model POV is available in the Run Express Calculation menu.

Users can now point to any Reference POV model and run the respective rules onto any other Data POV.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 21

This “run-time association” does not force a rule copy, as it would have in versions prior to 19.08.

Because of this new capability, we could potentially have a single set of rules referenced by all our POVs, without having to copy them into each Data POV every time we want the allocations to run.

There is one optional parameter that can be called via EPM Automate in order to leverage this Model POV/Data POV reference capability in automated jobs:

epmautomate runcalc APPLICATION_NAME POV_NAME [DATA_POV_NAME] PARAMETER=VALUE [comment="comment"] stringDelimiter="DELIMITER"

The [Data POV Name] parameter, when specified, enables this pivoting capability between Model and Data POV. If not specified, the automation will assume that the Data POV is the same as the Model POV.
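For illustration, here is a hedged example of what such a call might look like. The application name, POV names, credentials, and URL are placeholders, and the isClearCalculated/isExecuteCalculations parameters are listed from memory of the EPM Automate documentation – verify them against the current command reference for your release:

epmautomate login serviceAdmin MyPassword https://pcm-test-mycompany.pbcs.us2.oraclecloud.com
epmautomate runcalc BksML30 2019_Jan_Actual 2019_Aug_WhatIf isClearCalculated=true isExecuteCalculations=true comment="Reference Model POV rules run against a What-If Data POV" stringDelimiter="_"
epmautomate logout

In this sketch, 2019_Jan_Actual plays the role of the Model (reference) POV and 2019_Aug_WhatIf is the Data POV being calculated.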

7.  Launch Multiple POV Allocations with One Click

The new 19.08 menu enables users to launch multiple allocations for different POVs at the same time, while also leveraging the functionality of a reference POV described in the previous section of this blog. Through simple check boxes users can select one or many POVs to run allocations.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 22

The 19.08 Execution Control panel indicates whether a POV already contains its own rulesets and rules (Model data), so users can decide whether to leverage the existing Model associated with the Data POV or a distinct Model altogether.

If a reference Model POV is used instead of the Data POV corresponding rules, the Execution statistics report will record that point-in-time reference.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 23

If users select multiple Data POVs to run at the same time and a Model POV is selected as the reference, that Model will be applied to all of the selected Data POVs, whether or not those Data POVs have Model information (Rulesets and Rules) of their own.

8.  Embedded Search Capability in the Execution Control Menu

When launching a single allocation rule, either during development or troubleshooting, the Calculation menu in versions prior to 19.08 constrained users to scroll through the full list of rules until they found the one they needed to launch. In larger models, this quickly became frustrating.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 24

With the new 19.08 update, users can perform a fast search by simply typing a portion of the rule name. A dynamic filter is applied, and only the rules containing that string remain available in the drop-down selection.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 25

9.  Easy Access to Job Library

The Job Library is the location of most PCM-related logs (except for the Cloud Data Management and Migration logs). With the 19.08 Models menu, it is easier to access because it shares the same area as the Run Express Calculation menu. The Job Library details include the reference POV that was used when executing allocations.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 26

10.  Recreate Instance with All File Clear

The Recreate command within EPM Automate enables users to start from a fresh new environment by deleting the existing application and performing a reset on the instance. This month we have an additional parameter that can be leveraged in order to wipe out any existing artifacts, such as backups and other files, that may accumulate and take up significant space over time.
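As a hedged sketch of what that looks like from the command line – the removeAll parameter name is taken from the EPM Automate documentation as I recall it, and the credentials and URL are placeholders; confirm the syntax before running, because recreate deletes the existing application:

epmautomate login serviceAdmin MyPassword https://pcm-test-mycompany.pbcs.us2.oraclecloud.com
epmautomate recreate -f removeAll=true
epmautomate logout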

The data storage allowance for a Profitability and Cost Management Cloud subscription was communicated back in 2017 as 150 GB per instance. This is not a hard limit; exceeding it will not grind a service subscription to a halt, but over time Oracle may ask clients to update their subscription to reflect the increased storage requirement.

*Caution: If you cannot launch some of these EPM Automate commands, you may be running a prior month's release of the utility. Upgrade EPM Automate with the epmautomate upgrade command to align your version of the automation software with the latest PCM version and access all the latest features and parameters added to the library.
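Assuming you already have an active login session against the instance, the upgrade itself is a one-liner:

epmautomate upgrade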

Release Calendar for PCM Updates

On the first Friday of the month, all Test instances are updated to the current month's release level; on the third Friday, the same update occurs on the Production instances of both PCM and Enterprise Cloud subscriptions.

If testing of the 19.08 patch uncovers issues, there is a window during which administrators may request to postpone the rollout of the patch to their Production instance until the issues are fixed.

The deadline for such a request is the Wednesday of the week when the upgrade is scheduled to be applied to the Production environment; for the August Production instance update, that deadline is Wednesday, August 14th. The Service Request must contain the details of the POD as well as the business reason for delaying the patch update.

Conclusion

There are many exciting new updates in 19.08 – from new interfaces aimed at user experience enhancement to backend optimization and new functionality. The Profitability and Cost Management Cloud is being constantly refined based on client and partner feedback, which is why it is so important to become involved in the Oracle community via the Customer Cloud Connect website.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 27

Customer Cloud Connect has become the new space for engaging with Oracle Product Management as well as other members of the community, whether partners or clients.

Create an account today and rate existing Enhancement Ideas if you believe they are beneficial to your user base. The more positive votes, the faster that feature will make its way to your Cloud subscription, based on Oracle’s prioritization list criteria.

For comments, questions or suggestions for future topics, please reach out to us at infosolutions@alithya.com. Subscribe to receive notifications about new posts about Cloud updates and other Oracle Cloud Services such as Planning and Budgeting, Financial Consolidation, Account Reconciliation, and Enterprise Data Management.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  YouTube

We’ve Got You Covered: Producing Flat-File Extracts out of Cloud Data Management

As EPM Administrators or Implementation Specialists, we have all had that moment when someone comes to us and asks for the dreaded extract out of an Enterprise Performance Management (EPM) application.  Depending on the system combination (Hyperion Financial Management (HFM), Planning, etc.) and the file layout specifications, this can be tricky.  Layer in the concept of a Cloud application, and things have now gotten real!

In an on-premise installation of Financial Data Quality Management Enterprise Edition (FDMEE), we could use scripting within a “custom application” to build an end-to-end approach for delivering a flat-file extract for third party consumption.  With the release of version 19.06, Oracle has further enhanced this concept and brought it to Cloud Data Management (CDM). The Cloud application now provides the ability to design and produce a text file for downstream consumption in Oracle Cloud products (PCMCS, EPBCS, FCCS).  WHOA!

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 1

The Setup

I recently busted out the functionality and this is what I have discovered:  It’s crazy simple!

  1. Create a text file with your defined headers

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 2

  2. Create a target application and configure its settings

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 3

  3. Create an import format

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 4

  4. Create a Location & DLR

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 5

  5. Create the desired Maps
  6. Run the Data Load Rule

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 6

It is that simple!  CDM produces a file that looks similar to this:

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 7

It can PIVOT!?

As crazy as it sounds, it can even pivot the data!  I find this extremely helpful as it is a common request to have twelve months of data in column format.  CDM leverages the PIVOT command of the database for this process and creates the pivot file with ease and efficiency.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 8

What does it do behind the scenes?

Behind the scenes, CDM appears to run a standard import and validation of the data, but it leverages a different set of workflow instructions.  The process does not consider unmapped items which are left as blank fields in the output.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 9

It also does not permanently store any data in the CDM repository unless you want it to.  The documentation can be easily misinterpreted because you will see “fish,” but no data is stored (more on this later).  A quick review of the process details log shows that all the work is done in the “tDataSeg_T” table.  This is the “temporary working” table of Data Management, and it is cleared after/before each new run for optimal performance.  Since the data is never moved out of this table, it is never retained.  Even the export process that produces the output file pulls from the temporary table.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 10

A review of the Documentation shows that there are 3 main supported types for processing:

  • Simple (the option selected here) – Does all the work in tDataSeg_T and does not retain any data or archive maps. Be warned, though: it does retain the process details and “fish” status, which can look a bit strange.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 11

  • Full No Archive – Data is retained in tDataSeg only after the import step. Data is deleted after the export.
  • Full – All data is retained. Full process flows are supported (check rules, drill down, etc.).

That’s great, but my file is stuck in the Cloud!

Not really…let’s think this through in a workflow process.  When using Cloud applications, we might have an automation wrapper or a larger workflow process.  If not, we are using the general user interface (GUI), and we can access the file in two ways:

  1. Data File Explorer
  2. Process Details -> Download File option

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 12

If we are using a more automated approach, we just need to include additional steps (see the EPM Automate sketch after this list) to:

  1. Monitor the data load rule for completion
  2. Verify the status of that completion (do not proceed forward if it failed; do something different)
  3. Confirm that the file was created
  4. Download the file that was created
  5. Continue the automated routine
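A minimal EPM Automate sketch of those steps might look like the following. The data load rule name, periods, load modes, file path, credentials, and URL are all placeholders – adjust them to your own CDM setup – and a non-zero exit code from rundatarule is what signals that the rule failed and the routine should stop:

epmautomate login serviceAdmin MyPassword https://test-mycompany.pbcs.us2.oraclecloud.com
epmautomate rundatarule Extract_To_File_DLR Jan-19 Jan-19 REPLACE STORE_DATA
epmautomate downloadfile "outbox/EPM_Extract_Jan19.txt"
epmautomate logout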

In Summary…

It is simple to produce a file using Data Management in the EPM Cloud products.  This is a welcome change that further enhances the product lines by delivering on client needs.  It allows us to build a simplified Cloud solution that was previously possible only on-premises.

If you need more information or have questions about this topic, email us at infosolutions@alithya.com. Subscribe to receive notifications about new posts.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  Youtube

Enterprise Data Management: Version 19.07 – Top 3 Added Features

The latest 19.07 release of Enterprise Data Management (EDM) contains a boatload, a plethora – no, make that oodles of new features (time to put Merriam-Webster away). The full release notes are documented here: EDM July 2019 Update

In this blog, I’d like to dive into 3 of my favorite features (so far) from this release.

#1: Create Request Items from Compare Results

I’m already using this feature at one of my current clients. I’ve always touted the “Compare” feature of EDM. It’s been one of EDM’s best features from the beginning and provides the ability to compare Missing Nodes, Relationship Differences, and Property Differences. The biggest gap?  Previously, when compare results were returned, there was no way to download the results or use them in a request to resolve the differences. While you could always fix differences manually with drag-n-drop or direct property updates, when your compare results return dozens or hundreds of differences, that is not a reasonable option.

Well fear no more!  You can now create a request load file directly from the compare results. And it works as easily as you might have hoped it would:

  1. Run a Compare between two viewpoints.
  2. When the differences are returned, click “New Request.”
  3. Notice that new icon? Click that bad boy and a request load file will be automatically generated.

Kevin Black - EDM V19.07 - 7-24-19 - Image 1

4.  Now you can see that the request file has been attached to the request and the request items are added to your “shopping cart.” Make any additional changes, submit your request, and your viewpoints are synchronized! Easy peasy.

Kevin Black - EDM V19.07 - 7-24-19 - Image 2

NOTE

What is interesting is to analyze the request file that is generated from the compare results. You will notice it contains UPDATE and PROP_UPDATE actions. Why those? Well, PROP_UPDATE is used for property differences returned by the compare, and UPDATE is used for missing node and relationship differences. The UPDATE command is quite sneaky and powerful. Not only will it update node properties, but, depending on whether the node exists and whether the hierarchy set allows Shared Nodes, it will also perform an ADD, INSERT, or MOVE. Pretty cool.

#2: Property Editing

This enhancement not only provides property editing capabilities not available previously, but these edits can also be performed directly on the property without going through the App Registration wizard.

From the Properties card, Inspect the property you wish to edit. You’ll notice the Edit button is now enabled. From here, you can modify property default values, make the property editable or read-only, and modify the “Allowed Values” list if applicable.

No more stepping through the App Registration wizard to apply a simple property update!

Kevin Black - EDM V19.07 - 7-24-19 - Image 3

#3: Import/Export of Allowed Values

I’m so happy this feature is now available. My fingers, and my keyboard, thank you, Oracle!

From the same property editing Inspector dialog mentioned above, you can now modify the “Allowed Values” in a pick-list property without stepping through the “App Registration” wizard. Click the “+” sign to add new values manually. Not shown, but also available in the Actions menu, is the ability to delete or reorder existing list values.

But notice there is now an Import/Export capability. This utilizes a basic Excel file for mass upload/download of list values.

This will be a lifesaver for me at another project where I have Smart List and Attribute properties to build in EDM that contain dozens of list values.

Kevin Black - EDM V19.07 - 7-24-19 - Image 4

Below is a screenshot of the Excel file used to populate this property:

Kevin Black - EDM V19.07 - 7-24-19 - Image 5

Before I close, I’m going to cheat a bit and throw a shout out to one more enhancement in version 19.07…

Honorable Mention: Inspector Dialog Sizing

While custom resizing of the inspector dialog isn’t possible yet, the larger Inspector dialog size certainly makes it easier to view your data chain objects and reduces the amount of scrolling required. This is useful, especially when you’re viewing or reordering a bunch of properties in a viewpoint or node type!

That’s it for now. Be sure to check out EDM version 19.07 if you haven’t already. It contains additional helpful enhancements beyond the few I’ve highlighted here. Stay tuned for more upcoming EDM blog posts. If you need more information or have questions about this topic, email us at infosolutions@alithya.com. Subscribe to receive notifications about new posts.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  Youtube

EDMCS and Data Governance – Part 3

Welcome to Part 3 – the finale – of the blog series “EDMCS and Data Governance!”

Part 1 provides an introduction and primer for data governance workflows in Enterprise Data Management Cloud Service (EDMCS) which was introduced in the 19.02 release.

Part 2 discusses Workflow Stages in greater detail and dives into the brains of EDMCS workflows – the Approval Policy. Approval policies at different levels of the data chain are explained, and we conclude by building a sample workflow at the dimension level.

In Part 3, I’ll attempt to tie a bow around everything and offer some parting thoughts.

Recap

As I continue to explore and learn about collaborative workflows in EDMCS, these are the key points that come to mind:

  • Emphasize the Fundamentals – No matter what tool you are using, People and Process are extremely important in any data governance solution along with strong executive sponsorship and robust change management.
  • Build the Foundation – get the client comfortable with the tool and content before you introduce workflows. A strong foundation (your applications, dimensions, views, and viewpoints) is needed before you start the plumbing and wiring (workflows).
  • Brush up on Security – I haven’t discussed security extensively in this blog series, but the Oracle EDMCS User Guide does a nice job describing security requirements for assigning and approving workflow requests. Note that security enhancements have been introduced along with workflows. A new “Submitter” permission is now available to go along with Owner, Data Manager, and Browser. And permissions can be assigned at the Application, Dimension, Hierarchy Set, and Node Type levels.
  • Ponder the Approval Policy – this is the most interesting one to me. As we discussed in Part 2, approval policies can be defined at 4 points in the data chain (see Figure 1). With the inheritance and inter-dependencies of approval policies across the data chain along with the actions each policy can govern, it is critical to efficiently design your approval policies up front.

  • For example:
    • Suppose your client requires a final “audit” type of approval across the board for any type of request for any dimension. Or they always require an upfront “gatekeeper” type of approval to make sure the request is justified and complete before it continues down the approval chain. These would be good candidates for an approval policy at the Application level, and they would avoid having to define duplicative approval policies at lower levels in the data chain.
    • Will your application contain dimensions that do not need data governance workflows? Then Application level approval policies should be avoided.
    • Say you want to limit and govern the actions of a specific group so it can only work with existing nodes (insert, remove, update). An approval policy at the Hierarchy Set level is probably best.

  • Overall, I believe approval policies at the dimension level are a good place to start. Then, as the workflows evolve and requirements become clearer, you can determine if there are common factors across all dimension approval policies that can be consolidated at a higher level (an Application level approval policy), or if there are specific subsets of actions that need to be broken out to a lower level (a Node Type or Hierarchy Set level approval policy).

  • All of which brings up another interesting point: effective approval policy design directly ties into effective viewpoint design. Think about it – you can define the set of Allowed Actions (Add, Insert, Move, etc.) at a Viewpoint level. Which means what? Special-purpose maintenance views are likely required to support certain approval policies, especially those at the Node Type or Hierarchy Set levels.

Figure 1 – Approval Policies and Data Chain

EDMCS and Data Governance – Part 3 - Image 1

How do EDMCS Workflows Compare with DRM/DRG?

I was reluctant to include this section at first because, in general, I don’t like comparing Data Relationship Management (DRM) and EDMCS. Yes, they are both master data management tools and yes, they do share some common concepts and terminology. But overall, the two products are so different in terms of philosophy, deployment design, and underlying architecture that I think comparing the products is often less than helpful.

However, with data governance and collaborative workflows, I feel there is enough commonality that it is worth highlighting a few items. So here goes:

Topic-by-topic comparison – DRM/DRG vs. EDMCS:

Workflow Design
  DRM/DRG:
    • Based on workflow models and workflow tasks
    • Tasks linked to specific actions (Add Leaf, Add Limb, Insert, Move, etc.)
  EDMCS:
    • Based on Approval Policies
    • Approval policy level (Application, Dimension, Node Type, Hierarchy Set) determines context and scope of actions governed

Workflow Stages
  DRM/DRG:
    • Use a Submit stage, a Commit stage, and optionally, one or more Enrich and/or Approve stages
  EDMCS:
    • Use a Submit stage and an (implied) Commit stage
    • Approval policies determine approval stages (sequential vs parallel, # of approvers)
    • Requests can be re-assigned for collaboration prior to Submit

User Interface (UI)
  DRM/DRG:
    • Form-based design
  EDMCS:
    • No forms
    • Requesters and approvers interact directly with the viewpoints

Approval Options
  DRM/DRG and EDMCS:
    • Support Approve, Reject, and Push Back
    • Support comments, narrative, attachments

Escalations
  DRM/DRG and EDMCS:
    • Requests can be escalated based on defined intervals

Separation of Duties
  DRM/DRG and EDMCS:
    • Workflows can be configured to prevent a submitter from approving their own request

Email Notifications
  DRM/DRG and EDMCS:
    • Generate email notifications

Other
  DRM/DRG:
    • Supports conditional workflows
    • Supports splitting of requests based on pre-defined criteria
  EDMCS:
    • Not yet supported

I’m curious if Oracle will introduce a form-based UI for workflows. Part of me would very much like to see that so that you can present a clean user interface to the approvers, hide unnecessary details, and display special instructions and messages, but part of me does not. One of my favorite features of EDMCS is the visual highlighting of pending request changes and the “shopping cart” of request items that are displayed prior to submitting a request. I would hate to lose that by going with a forms-based workflow UI, but perhaps there is a solution that combines the best of both worlds. 

Conclusion

Well that’s it, an initial look at workflows and approval policies in EDMCS. I’m excited to see how this functionality evolves and expands over time. Talk to you next time!

And don’t forget to follow me on Twitter (@kblackEPM) and check out these links for more information:

EDMCS and Data Governance – Part 2

Welcome to Part 2 of the blog series “EDMCS and Data Governance!”

Part 1 provides an introduction and primer for data governance workflows in Oracle Enterprise Data Management Cloud Service (EDMCS) which was introduced in the 19.02 release. This exciting feature addresses a major gap in EDMCS as the product continues to rapidly evolve and mature.

In Part 2, we dive into the details of how to configure workflows. This process revolves around the concept of an “approval policy.” Interestingly, approval policies can be configured at different points of the EDMCS data chain and cascade or inherit to affect downstream points of the data chain.

Workflow Stages

Before we dive into approval policies, let’s discuss EDMCS workflow stages a bit more. They are similar in concept to Data Relationship Governance (DRG) workflow stages. See Figure 1 for an overview:

Figure 1 – EDMCS Workflow Stages

EDMCS and Data Governance – Part 2 - Image 1
  1. Submit (or Assign) Request – A request is initially created as you do today. But wait…there’s more! You can Submit the request to immediately move the request into the Approve stage OR you can Assign the request to colleagues to collaborate on the request together. When the request is ready, it is submitted to move to the Approve stage.
  2. Approve Request – The approver(s) have 3 choices:
    • Approve – the request is approved and moves forward (thanks Captain Obvious!).
    • Push Back – like DRG, the request is pushed back to the submitter for clarification or changes, who then updates and resubmits the request.
    • Reject – like DRG, the request is denied and closed. Think of “reject” as the RAID of the data governance world – it kills requests dead.
  3. Commit Request – once fully approved, the request is auto-committed and closed. EDMCS has now been updated.

Approval Policies

Now for approval policies. Approval policies can be configured at 4 levels:

  1. Application
  2. Dimension
  3. Node Type
  4. Hierarchy Set

It is important to note that each data chain object can contain one, and only one, approval policy. However, approval policies have a cascading impact so that multiple approval policies can work in concert to govern and control exactly what you want. Yes, you heard that right:  Approval Policy Inheritance – it’s not just for properties anymore!

The types of actions governed by an approval policy depend on the data chain object it is configured with – see figure 2 below:

Figure 2 – Approval Policies and Data Chain

EDMCS and Data Governance – Part 2 - Image 2

As you can see, policies defined at the Application or Dimension level govern all actions (add, delete, insert, remove, move, etc.) while policies defined at the Node Type or Hierarchy Set level govern a subset of actions. Why is this important? Because it means you need to carefully design what types of actions you want to govern and who will perform them. If I define an approval policy at the Hierarchy Set level and then submit a request that Adds 3 accounts, how many approvers are required for the request? A big ZERO! Since I requested “add” actions and only have an approval policy at the Hierarchy Set level, no applicable approval policy exists to govern the request.

Putting It All Together

Let’s walk through an example.

  1. Define Approval Policy

First, I will define an approval policy for the Account dimension. To do this, Inspect either the application or default viewpoint and access the Account dimension from the Definition tab. From there, click the Policies tab.

Here you will see the Approval policy for the Account dimension. Click on the Approval link to inspect the approval policy.

EDMCS and Data Governance – Part 2 - Image 3

The General tab will display basic information about the approval policy. You can edit the approval policy name and description if necessary.

EDMCS and Data Governance – Part 2 - Image 4

The Definition tab is where the magic happens. Select Edit to update the following parameters:

  • Enabled – click this check box to enable the approval policy.
  • Approval Method – select Serial or Parallel.
  • One Approval Per Group – if using Serial approvals, this will automatically be set to “True.” If using Parallel approvals, you can select one approval per group or define a Total Required # of approvers.
  • Include Submitter – enable this to allow the submitter to also be an approver (the submitter’s approval will be automatically granted). If “separation of duties” is required for your company, do not enable this.
  • Reminder Notification – the # of days that will elapse before reminder emails are sent.
  • Approval Escalation – the # of times a reminder occurs before an escalation email will be sent.
  • Approval Groups – select user(s) and/or group(s) to be included in the approval process. When using Parallel approvals, the order of approval groups does not matter. When using Serial approvals, the order of approval groups does matter – you need to list the approval groups in the order that approvals should be executed.

With my example approval policy, I am using serial approvals, 2 approval groups (a Planning group and GL group), a reminder interval of 5 days, and an escalation interval of 2 reminders.

EDMCS and Data Governance – Part 2 - Image 5

  2. Submit Request

Now we’re cooking with gas. It’s time to submit a request. I will submit a request to my default Account viewpoint that includes 1 add, 1 property update, and 1 move. Here is the request in Draft status:

EDMCS and Data Governance – Part 2 - Image 6

Did you notice something new? Look at the Actions button next to Submit. This is where you can assign the request to another user and collaborate with them to finish up the request.

EDMCS and Data Governance – Part 2 - Image 7

EDMCS and Data Governance – Part 2 - Image 8

  3. Approve the Request

After the request is submitted, it is considered “in flight” because it has been submitted, but not yet approved/committed. And look! EDMCS now offers a nice Activity page on the home screen displaying the status of various workflow requests:

EDMCS and Data Governance – Part 2 - Image 9

First, the users in the Planning Approvers group will receive an email notifying them that they have been “invited to approve a request” (it’s very polite):

EDMCS and Data Governance – Part 2 - Image 10

As mentioned earlier, an approver has 3 choices: Approve, Reject, or Push Back. Reject and Push Back are available under the Actions dropdown. Here are the dialog windows that will be displayed for those actions (note the comment field is required):

EDMCS and Data Governance – Part 2 - Image 11

Otherwise, the approver will click the Approve button and see this:

EDMCS and Data Governance – Part 2 - Image 12

And then the same process will continue with the GL Approvers group since I am using Serial approvals. Once again, an approver can reject, push back, or approve. Once approved, the request is committed and closed.

Congratulations! You have now completed your very first data governance workflow request in EDMCS!

Conclusion

This blog post should be useful in providing more details and clarity on workflows, workflow stages, and approval policies. In the third and final post for this series, I’ll offer a recap and some closing thoughts. Talk to you then.

Read the next post in this EDMCS blog series:  EDMCS and Data Governance – Part 3

And don’t forget to follow me on Twitter (@kblackEPM) and check out these links for more information:

OPA! The Future of Cloud Integration – Important Updates Are Coming

Much to the chagrin of Product Management, I often abbreviate Cloud Data Management to CDM.  Why do they not like that I do this?  Well there is a master data management tool for Customer data that you can guess also uses the same acronym.  While I understand the potential confusion, since I’m telling you up front, there should be no confusion when I use CDM throughout this post.

I recently had the opportunity to meet with Oracle Product Management and Development for FDMEE/CDM to get a preview of what’s coming to the product and offer feedback for additional functionality that would benefit the user community.  We generally get together about once a year; however, it’s been a bit longer than that since our last meeting, so I was excited to hear what interesting things Oracle’s been working on and what we may see in the product in the future.

Now any good Oracle roadmap update would not be complete without a safe harbor reminder.  What you read here is based on functionality that does not yet exist.  The planned features described may or may not ever be available in the application – at the sole discretion of Oracle. No buying decisions should be made based on the information contained in this post.

Ok, now that we have that out of the way, let’s get into the fun stuff.  There are a number of enhancements coming and planned, but today I am going to focus on two significant ones:  performance and ground to cloud integration.

Performance Enhancements

We’re all friends here, so we can be honest with each other.  CDM (and FDMEE) isn’t an ETL tool in the truest sense of the word. It is not designed to handle the massive data volumes that more traditional ETL can and does.  You might think to yourself thanks for the info there Tony, but we all know that, and you wouldn’t be wrong, but I like to set the stage a bit.

If you know the history of FDMEE, you know that it was originally designed to integrate with Hyperion Enterprise and then HFM.  Essbase and Planning became targets later.  Integrating G/L data is far different from integrating the more operational data that is often needed by targets like EPBCS and PCMCS.  While CDM (and FDMEE) can technically handle the volume of this more granular data, the performance of those integrations is sometimes less than optimal.  This dynamic has plagued users of CDM for years, and it is only exacerbated when integrations are built without a deep understanding of how to tune CDM (and FDMEE) processes to achieve the highest level of performance within the constructs of the application. As CDM has grown in popularity (owing to the growth of Oracle EPM Cloud), the performance problem has become more visible.

To address performance concerns, Oracle is planning to support 3 workflow methods:

  • Full – No change from legacy process
  • Full, No Archive – Same workflow as today but data is deleted from the data table (tDataSeg) after a successful export.  This means the data table will contain fewer rows and should allow new rows to be added faster (inserts during the workflow process).  The downside of this method is that drill through is not available.
  • Simple – Same workflow as today but data is never moved from the staging/processing table (tDataSeg_T) to the data storage table (tDataSeg).  This is the most expensive (in terms of time) action in the workflow process so eliminating it will certainly improve performance. The downside is that data can never be viewed in the Workbench and Drill Through is not available.

Oracle has begun testing and has seen performance improvements in the range of 50% on data sets as large as 2 million rows.  Achieving that metric required the full complement of the new Data Integrations features (i.e., Expressions) to be utilized. That said, this opens up a world of possibility for how CDM can potentially be used.

If you have integrations that are currently less than optimal in terms of performance, continue monitoring for this enhancement.  If you need assistance, feel free to reach out to us to connect with our team of data integration experts.

On-Premise Agent

Ground-to-cloud integration is one of the most important capabilities to consider when implementing Oracle EPM Cloud.  As Oracle EPM Cloud has evolved, so too has the complexity of the solutions deployed within it, which has steadily increased the complexity of the integrations needed to support those solutions.  While integration with on-premises systems has always been supported through EPM Automate, this requires a flat file to be generated by the system from which data will be sourced; the file is then loaded to the cloud and processed by CDM.  This is very much a push approach to data integration.

The ability of the cloud to pull data from on-premises systems simply did not exist. For integrations with this requirement, FDMEE (or some other application) was needed. Well as the old saying goes, the only thing constant is change.

Opa! – a common Greek emotional expression. It is frequently used during celebrations.  Well it’s time to celebrate because Oracle will soon (CY19) be introducing an on-premises agent (OPA) for CDM!

This agent will allow a workflow to be initiated from CDM, communicate back to the on-premises systems, initialize and then upload an extract to the cloud. The extract will be natively imported by CDM.  This approach is similar to how the FDMEE SAP adaptor currently works.  From an end user perspective, they click Import on the Data Load Workbench and after some time, data appears in the application. What’s happening in the background is that the adaptor is initializing an extract from SAP and writing the results to a flat file which is then imported by the application. OPA will function in an almost identical way.

OPA is a lightweight Java utility, installed on local systems, that requires no additional software other than Java. It will support both Microsoft Windows and Linux operating systems. Like other Oracle on-premises utilities (e.g., EPM Automate), password encryption will be supported. The only ports required to be open are 80 (HTTP) or 443 (HTTPS).  A customer can use an externally facing web server to redirect to an internal port on which the agent receives the request; this is needed only if the customer wants to run the agent on a port other than 80 or 443 and does not want to open that port on the enterprise firewall.  If the customer wants to run the agent on port 80 or 443 and either of those ports is already open, then no firewall action is required.

The on-premises agent will have native support for Oracle EBS and PeopleSoft GL – meaning the queries are prebuilt by Oracle.  Additionally, OPA will support connecting to on-premises relational data sources.  Currently Oracle, SQL Server and MySQL drivers are bundled natively but additional drivers can be deployed as needed meaning systems such as Teradata will be able to be leveraged as data sources.

OPA will also provide an ability to execute scripts (currently planned for JAVA but discussions for Groovy and Jython are in flight) before and after the on-premises extract process.  This is similar to how the BefImport and AftImport event scripts are currently used in FDMEE.  This will allow the agent to perform pre and post processing such as running a stored procedure to populate a data view from which CDM will source data.

The pre and post events of OPA really open up a world of opportunity and lay the foundation for CDM to support scripting.  How you might ask?  In v1.0, OPA is intended to provide a mechanism to load on-premises data to the cloud.  But in theory, CDM could make a call to OPA at the normal workflow events (of FDMEE) and instead of waiting for a data file, simply wait for an on-premises script to return an execution code.  This construct would eliminate the security concerns that prevented scripting from being deployed in CDM as the scripts would execute locally instead of on the cloud.

The OPA framework is really a game changer and will greatly enhance the capability of CDM to provide Oracle EPM Cloud customers a true “all cloud” deployment.  I am thrilled and can’t wait to get my hands on OPA for beta testing.  I’ll share my updates once I get through testing over the next couple of months.  I’ll also be updating the white paper I authored back in December of 2017 once OPA is released to the general public.  Stay tuned folks and feel free to let out a little exclamation about these exciting coming enhancements…OPA!

EDMCS and Data Governance – Part 1

Ahh… February. An interesting month with a variety of happenings. From the significant – Black History Month and President’s Day, to the exciting – the Super Bowl…well sometimes. From the romantic -Valentine’s Day, to the silly – that tenacious ground hog trying to find his shadow…AGAIN. Not to mention that Spring is just around the corner and brings us the glorious event known as “March Madness!”

Why am I babbling about February? <segue> Because it is also the month that introduced Data Governance and Collaborative Workflows with the release of Enterprise Data Management Cloud Service (EDMCS) v19.02. <segue>

As we continue this journey to Enterprise Performance Management (EPM) Cloud, the addition of Data Governance to EDMCS is a major step forward, especially for those of us who have worked with the classic on-premise solutions (Data Relationship Management (DRM) and Data Relationship Governance (DRG)) and who have been awaiting a similar offering in EDMCS to support our Cloud clients. From what I’ve seen so far, a major gap between DRM/DRG and EDMCS has been addressed with this release.

In this blog series, I’d like to further explore Data Governance in EDMCS. At a high level, this is how I see this series unfolding:

  • Part 1 will provide the foundation, background, and basic concepts for EDMCS and Data Governance
  • Part 2 will get more into the “techy” stuff and dive deeper into Approval Policies and Security
  • Part 3 will provide a recap and closing thoughts/lessons learned

So, with that said, onto Part 1…

Prerequisites

Before diving head first into configuring Data Governance and collaborative workflows in EDMCS, there are a few things to consider.

  • Don’t forget people and process. I’m a big believer that people and process are just as important as the tool – and usually much more so. Please refer to this blog post for a quick read on this: The Data Governance Triple Crown.

I believe the same tenets apply to EDMCS and that it’s important to start thinking about a formal data governance program that includes a charter, executive sponsorship, roles & responsibilities, metrics, and much more. Data Governance can be a challenging cultural shift for many organizations which requires strong change management to handle the inevitable resistance. This is where a formal data governance framework can help.

  • Establish the foundation. As with building a house, it’s important to lay a solid foundation before you install the wiring and plumbing. Build your EDMCS application(s) and dimensions, and populate your primary and alternate hierarchies first. Get the client comfortable with the tool and the content. Then you can start to layer in the workflows.
  • Start to identify the “who” (e.g., the people involved and the roles they will play): Who will be submitting requests? Who will be approving? Who will do both?
  • Start to think about the “what.” What applications/dimensions/hierarchies will be governed? What are the use cases and typical scenarios that require data governance? Start to collaboratively mock up and storyboard some typical workflows with the client to visualize how the workflows will function. And don’t try to build a workflow for every possible scenario. Start with the big hitters and low hanging fruit first. You can always add more workflows later.

What’s Included in EDMCS Workflows?

Are you wondering what EDMCS includes as far as data governance functionality? In summary, EDMCS supports:

  • Two types of roles – submitters and approvers
  • Separation of duties – workflows can be configured to prevent submitters from approving their own requests
  • The “four eyes” principle: EDMCS data governance adheres to the principle that requests must be approved by at least two people
  • Default application views and maintenance views: workflows can work with both types of views
  • Subscriptions: workflows can be triggered by Subscription requests
  • Email-based notifications
  • Serial and Parallel approvals:
    • Serial approval means a sequential order of approvals is required. For example, Approver #2 can’t approve until Approver #1 approves, Approver #3 can’t approve until Approver #2 approves, and so on.
    • Parallel approval means the approvals can occur in any order and at the same time.
    • With either method, all approvals must occur before the request is committed.
  • Configuration of Reminder and Escalation intervals
  • Multiple Workflow Stages:
    • Submit – initiate the request and add/edit/delete line items in the request. Note that with the 19.02 release, you can also attach documents and insert comments at the line item level. These enhancements are helpful to attach policies, supporting details, and other documentation related to the workflow request.
    • Approve – similar to DRG, an approver can approve, push back, or reject a request. Pushing back will send the request back to the submitter for additional changes. Rejecting will close the request and end the workflow.
    • Commit (implied) – once the request is fully approved, it is committed, hierarchies are updated, and the request history can be viewed like any other request.
  • Approval Policies – this is really the brains of how workflows are configured in EDMCS, and the next blog post covers this in greater detail. But here is a screenshot of the Approval Policy screen showing the available options:

Kevin Black - EDMCS and Data Governance - Part 1 - 3-8-19 Image 1

Conclusion

I hope you found this blog post helpful as an introduction to EDMCS and data governance, and that you will keep reading as the rest of the series is posted. Please contact me with any questions and comments!

And don’t forget to follow me on Twitter (@kblackEPM) and check out/subscribe to my blog (along with the blogs authored by my very talented colleagues at Alithya).

Read the next post in this EDMCS blog series:  EDMCS and Data Governance – Part 2

https://ranzal.blog/author/kblackranzal/

https://ranzal.blog/

Interested in better understanding EDMCS, the RESTful API, and Cloud Data Management? Be sure to check these excellent blog posts by Tony Scalese, aka FDM Guru: https://ranzal.blog/author/ascalese/

Looking for an outstanding resource for all things master data-related and more? Look no further!  https://datarestless.com/

Oracle’s ARCS Patch 1812 and Patch 1811 Review: Gazing into the Crystal Ball

Peruse the Account Reconciliation Cloud Service (ARCS) forums on Oracle’s Cloud Customer Connect and you’ll notice a theme: Transaction Matching. Questions, comments, and critiques have been flooding in from across companies and industries, clients and consultants alike. Combine this with Oracle’s game-changing announcement of the EPM Cloud price simplification plan teased for 2019 – that is, the strategic move to strictly sell bundled EPM Cloud products in the near future (more on this another time – it’s a doozy) – and the changes released for ARCS in Patches 1811 and 1812 could not have come at a more opportune time. Furthermore, these changes provide a sneak peek into Oracle’s crystal ball of what’s to come.

The WHAT has come: Changes for ARCS at the end of 2018

The most important change to ARCS from the 2018 season finale, Patch 1812, is the shift from having separate reconciliations between Transaction Matching and Reconciliation Compliance to one standardized use of Profiles. This is configured through the new reconciliation methods provided in Formats (Balance Comparison with Transaction Matching, Account Analysis with Transaction Matching, and Transaction Matching only).

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 1

The implication is that Transaction Matching reconciliations receive all the benefits that previously only Reconciliation Compliance enjoyed, including but not limited to: bulk uploads/updates to Profiles and reconciliations, access to new Workflow options such as Reviewers and Teams, and detailed filtering options including the more hidden statistical metrics (such as attributes related to count, etc.). It is important to note, though, that these new features will almost exclusively relate to new reconciliations using one of the two ‘*with Transaction Matching’ format options, as seen below. Still, the opportunity for clever design is there.

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 2

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 3

To support this change, Period will now be shared between the two feature sets. Additionally, reconciliations performed in Transaction Matching will now utilize their period-end Balances loaded to Reconciliation Compliance. While historically there have been business processes put in place to ensure that the balance loaded to Transaction Matching equaled the balance loaded for the month-end reconciliation in Reconciliation Compliance, Patch 1812 ensures that a system process governs the data’s integrity – certainly a more reassuring thought.

Two additional under-the-radar features introduced in Patch 1812 are (1) the ability to have Workflow that includes multiple members while not requiring an order precedence to the work and (2) the option to now have end-users approve their own re-assignments, reducing the administrative bottleneck. These changes provide value-add functionality that demonstrates Oracle’s willingness to listen to customer feedback even during these more “stuffed” patches.

The last item to mention was actually included in Patch 1811. In Transaction Matching, a text file can now be generated with the transactions or adjustments from the tool which can then be uploaded to the ERP source systems as a journal adjustment. This has been an ongoing request, and I am happy to see it finally actualized.

The WHAT does it mean: Implications and Expectations for ARCS in 2019

Transaction Matching’s relative strength against its competitors is becoming increasingly apparent, as Oracle continues to shore up areas in need of support while also providing updates that show a sensitivity to market demand. The move to unify Transaction Matching and Reconciliation Compliance is not a new idea, as Patch 1805 made apparent with the uniting of the two UIs (and much more – see Oracle Product Management’s webinar update here), but it is nonetheless a bold one that I anticipate will pay dividends. The automatic conversion of Transaction Matching reconciliations to Profiles is a nice touch too, making the transition an easier pill to swallow for skeptical clients who, I am sure, were not eager to pay expensive consulting fees for this. Even smaller changes, such as providing a space for strictly manual matching (i.e. without Auto Match rules; a Patch 1811 change), demonstrate ARCS’ commitment to being an approachable and modular product that grows with your company – a benefit I have consistently touted in the past and expect to continue touting in the future.  More details about the benefits of ARCS are shared in the posts A Safe Step into the Cloud: The Argument for Account Reconciliation Cloud Service (ARCS) and Modularity in Account Reconciliation Cloud Service (ARCS): No Mistakes from “Day 1” to “Day 100”.

Changes continue to come to ARCS that trickle down to Account Reconciliation Manager (ARM) only slowly, if at all. This was true for the Variance Analysis reconciliation method, which arrived in May 2017 for ARCS but not until December 2017 for ARM, and it is a fair guess that the same will hold for the aforementioned “All Preparers” and “All Reviewers” workflow options and the end-user re-assignment configuration setting. Combine this with more and more dollars being invested in Transaction Matching compared to Reconciliation Compliance (from where I’m looking, anyway), and the message is clear on who the favorite is in the Oracle product family. While ARM contains strong functionality as an on-premise option, expect the functionality gap to widen compared to its Cloud counterpart.

Lastly, the inclusion of a journal adjustment export out of Transaction Matching is a combo solution: a “we can do that too” response to product competitor Blackline’s existing functionality as well as a demonstration of Oracle’s willingness to think outside of the product. This highlights ARCS’ flexibility as a tool capable of being used within other processes. In fact, the Oracle EPM Cloud ecosystem is one of ARCS’ biggest strengths over its competitors.  I would love to see this journaling ability out of Reconciliation Compliance as well, which would extend the functionality to most ARCS clients. Regardless, this is a step in the right direction.

This post has been cross-posted on the #DataRestless blog site – read it here and other Oracle-related posts as well.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing

Oracle’s Profitability and Cost Management Cloud Service (PCMCS) provides a powerful service for allocating General Ledger profits and costs.  Recently, we worked with a banking industry client to provide a model that calculates profitability at a Product/Channel level while maintaining Account level detail.  We accomplished this through a framework we refer to as Micro-Costing, in which detailed profits and costs are calculated in a database using rates developed at the summary level in PCMCS.  Alithya began development of this framework in 2016 to fill a functional gap in PCMCS and provide a common framework that can be used either on-premise or in the Cloud.

To highlight the capabilities of Micro-Costing, I will use the solution deployed at our banking client as a specific example.  The following table describes the two layers where profits and costs are provided:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 1

 Definitions:

  • Product – a loan or deposit offering. Examples of a loan are an auto loan or credit card; examples of a deposit are a savings account or a checking account.
  • Origination Channel – where the account was originated.
  • Service Channel – the channel where a financial or transactional cost or profit occurs or to which it is assigned.
  • Customer – a legal entity responsible for accounts; for example, a person with both a home loan and a savings account.
  • Customer Account – a product that is assigned to a customer.
  • Financial Costs and Profits – the cost or profit of servicing a loan or deposit for a customer; for example, interest paid on a savings account.
  • Transactional Costs and Profits – the cost or profit of interacting with a customer; for example, the cost of an ATM transaction.

A simple way of thinking about the client’s business model:

  • Origination channels offer Products
  • Products are assigned to Customers as Customer Accounts
  • Customer Accounts are used by Customers through Service Channels

The generation of an Account level profit or cost is a C = A*B calculation where

  • A is the driver
  • B is the rate of a driven value
  • C is the driven value (profit or cost)

An example is:

ATM Expense = ATM Transaction Count * ATM Expense Rate
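As an illustration only, here is a minimal Python sketch of that C = A*B pattern applied to a single Customer Account; the driver quantities, rate names, and values below are invented for the example.

  # Minimal sketch of C = A * B: driven value = driver * rate (all values are illustrative).
  drivers = {                      # A: driver quantities for one Customer Account
      "ATM Transaction Count": 12,
      "Teller Transaction Count": 3,
  }
  rates = {                        # B: rates developed at the summary level in PCMCS
      "ATM Expense Rate": 1.75,
      "Teller Expense Rate": 4.25,
  }
  driven_value_config = {          # driven value -> (its driver, its rate); configuration, not code
      "ATM Expense": ("ATM Transaction Count", "ATM Expense Rate"),
      "Teller Expense": ("Teller Transaction Count", "Teller Expense Rate"),
  }

  # C: driven values (costs) calculated for this Customer Account
  costs = {
      driven: drivers[driver] * rates[rate]
      for driven, (driver, rate) in driven_value_config.items()
  }
  print(costs)  # {'ATM Expense': 21.0, 'Teller Expense': 12.75}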

Micro-Costing Diagrams

Data Model

This summarizes the data model deployed.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 2

STAGING – Contains transient data.

OPERATIONAL DATA STORE (ODS) – Persists the operational data with minimal transformation.  Dimensional integrity is not enforced, but validation jobs are available to check stored data against business rules and dimensional integrity.

WAREHOUSE-STAR – Persists the drivers, the rates, and the calculated profits and costs at the Customer Account level.  The Driver Lookup and Driven Value Lookup functions are used to define the drivers and driven values so that the addition of a driver or driven value is a configuration activity for an administrator rather than a coding activity.
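The actual table layouts are client-specific, but the idea can be sketched roughly in Python: adding a driver or driven value is a row of configuration rather than new code. All names below are hypothetical.

  # Hypothetical lookup rows (illustrative only; real table and column names are client-specific).
  driver_lookup = [
      {"driver_code": "ATM_TXN_CNT", "source_activity": "ATM Withdrawal",
       "description": "ATM transaction count per Customer Account"},
  ]
  driven_value_lookup = [
      {"driven_code": "ATM_EXP", "driver_code": "ATM_TXN_CNT", "rate_member": "ATM Expense Rate",
       "description": "ATM expense allocated to the Customer Account"},
  ]
  # Because each driven value row references its driver, the A*B calculation and the supporting
  # tables and views can be generated from these rows by a stored procedure instead of being
  # hand-coded every time a driver or driven value is added.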

Data Integration

A high-level summary of the data flows as deployed:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 3

The source data is broken down into 3 types:

  1. General Ledger
  2. Operational Data
  3. Metadata

Data Integration uses interim flat files to maintain flexibility regarding the source data: the flat files act as an API, so no knowledge of the source systems is required downstream. This also allows the introduction of source data from 3rd parties that is not available for automated extraction from a source system.
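For illustration, a hypothetical interim flat-file layout and a small Python reader might look like the sketch below; the column names and pipe delimiter are assumptions, not the client’s actual specification.

  # Hypothetical pipe-delimited interim file for transactional activity (layout is an assumption):
  # customer_account|product|service_channel|activity_code|activity_count|period
  import csv

  def read_activity_file(path: str):
      """Yield one dict per staged activity record from an interim flat file."""
      fields = ["customer_account", "product", "service_channel",
                "activity_code", "activity_count", "period"]
      with open(path, newline="") as f:
          for row in csv.DictReader(f, fieldnames=fields, delimiter="|"):
              row["activity_count"] = int(row["activity_count"])
              yield row

  # Any source (or 3rd party) that can produce a file in this layout can feed the process,
  # with no knowledge of the source system required downstream.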

The operational data includes both Customer Account financial information and transactional activities or fees.  Product and Channel references are provided along with this information:

  • 1 million+ Customer Accounts
  • Approximately 6 million transactions per month

Some transactional drivers represent an activity that cannot be associated with a specific Customer Account; for example, a new loan application.  Proxy Customer Accounts for each product are generated to provide a place for these activities.

Additionally, although not graphically displayed in the above diagram, Branch level drivers are directly fed into the PCM Model, examples of which are Branch square footage and number of branch employees.  These drivers were used for non-Customer Account PCM costs and profits.

All batch processing is built using SQL Server Integration Services. This choice is based on an agreement with the client regarding the preferred tool set, with SQL Server as the selected database. The framework is transferable to other integration tools and databases, including the Hadoop framework, and Alithya has performed in-house solutioning in preparation for using Micro-Costing with larger clients.

The data integration is as follows:

  1. Set POV
  2. Update metadata and stage
  3. Stage financial and transactional information
  4. Validate staged data and reprocess as necessary
  5. Load staged data to ODS and then to Star
  6. Upload GL and driver data to PCMCS
  7. Process allocations in PCMCS
  8. Download rates
  9. Run A*B calculations for each Customer Account and populate profit and cost table
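Steps 8 and 9 amount to joining the rates downloaded from PCMCS back to the Customer Account level drivers. A simplified pandas sketch, with invented column names and values, looks like this:

  import pandas as pd

  # Drivers at the Customer Account level, from the warehouse star (illustrative values).
  drivers = pd.DataFrame({
      "customer_account": ["A1001", "A1002"],
      "driver_code": ["ATM_TXN_CNT", "ATM_TXN_CNT"],
      "driver_value": [12, 5],
  })

  # Rates downloaded from PCMCS at the summary level (illustrative values).
  rates = pd.DataFrame({
      "driver_code": ["ATM_TXN_CNT"],
      "driven_code": ["ATM_EXP"],
      "rate": [1.75],
  })

  # Step 9: C = A * B for each Customer Account, ready to load into the profit and cost fact table.
  facts = drivers.merge(rates, on="driver_code")
  facts["driven_value"] = facts["driver_value"] * facts["rate"]
  print(facts[["customer_account", "driven_code", "driven_value"]])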

Key Design Principles

The following design principles guided development of the Micro-Costing framework.  These principles facilitate an easy-to-use and easy-to-maintain solution as deployed for our client.

  • Dimensional synchronization between the Micro-Costing warehouse and PCM
  • Validation checks as close to the original data as possible
  • Configurable drivers and driven values

Dimensional Synchronization

All dimensional mapping must occur prior to the warehouse star schema; otherwise, it is not possible to perform the Micro-Costing A*B calculations to derive profit and cost detail.  This has an impact on any deployment that uses FDMEE or Cloud Data Manager, as they cannot perform additional mappings during upload to the cube.

Dimensional Synchronization includes a Point of View (Year, Period, Scenario, and Version) to allow multiple sets of drivers to be loaded during a month and, if desired, ‘what-if’ rates to be transferred back to the Customer Account level.

Validation Checks

Validation kick-outs and checks occur as early in the data integration process as possible, with a “simple” validation during staging and a “complex” validation during generation of the fact information in the warehouse.  This allows the administrator to catch quality issues while spending as little of the overall process duration as possible.  The data integration process is broken into a series of steps that allows for validation review and then re-running a step before moving on to the next one.  This principle held up in deployment, ensuring that time wasn’t wasted running later processes with invalid data; the result was an improved overall process and a significant reduction in the number of days required to produce profit and cost analysis for a given month.  A lesson learned during the initial roll-out was that our client had not previously required rigorous validation of the drivers at the Customer Account level and had to develop new techniques for validating the source information to ensure accuracy.
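As a simplified example of the “simple” staging validation described above (the rules and field names are hypothetical), kick-outs can be flagged and reviewed before any later step runs:

  # Hypothetical "simple" staging validation: flag records that would fail later steps.
  def validate_staged_record(record, known_driver_codes, open_period):
      """Return a list of kick-out reasons for one staged record (an empty list means valid)."""
      errors = []
      if record["activity_code"] not in known_driver_codes:
          errors.append(f"Unmapped activity code: {record['activity_code']}")
      if record["activity_count"] < 0:
          errors.append("Negative activity count")
      if record["period"] != open_period:
          errors.append(f"Record period {record['period']} is not the open period")
      return errors

  # Records with kick-outs are corrected and the staging step is re-run
  # before data is loaded to the ODS and the warehouse star.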

Configurable Driver and Driven Values

A key feature of Oracle’s PCM applications is configurability, and the Micro-Costing framework follows suit: it provides an easy-to-maintain solution that allows rapid addition of drivers and driven values without the administrator having to manually update the tables and views required to manage the transformation and persistence of data.  This is accomplished by defining the drivers and driven values in tables and providing stored procedures for maintaining the tables and views.

The process for adding a new driver and driven value is very straightforward:

  1. Backup the database and the PCM cube.
  2. Update the source feeds to include the new activity or fee.
  3. Update the Driver Lookup and Driven Value Lookup tables with the new activity and values.  *Note: The driven value record references the driver for the A*B calculation.
  4. Execute the “Update Costing Tables and Views” stored procedure. *Note: removing a driver or driven value does not modify the tables.
  5. Update HPCM Account dimension for the new driver and driven value.
  6. Update HPCM rules to use the new driver and allocate expenses to the new driven value, and calculate the rate for the new driven value.
  7. Run the entire data integration process for the POV, and review results.
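As a hedged sketch of steps 3 and 4 only, assuming a SQL Server connection through pyodbc and hypothetical table and procedure names (the real names belong to the client’s deployment):

  import pyodbc

  # Hypothetical connection string, table names, and procedure name; adjust to the actual deployment.
  conn = pyodbc.connect("DSN=MicroCosting;Trusted_Connection=yes")
  cur = conn.cursor()

  # Step 3: register the new driver and the driven value that references it for the A*B calculation.
  cur.execute("INSERT INTO DriverLookup (DriverCode, SourceActivity) VALUES (?, ?)",
              ("NEW_FEE_CNT", "New Fee Activity"))
  cur.execute("INSERT INTO DrivenValueLookup (DrivenCode, DriverCode) VALUES (?, ?)",
              ("NEW_FEE_EXP", "NEW_FEE_CNT"))

  # Step 4: regenerate the costing tables and views from the updated lookup tables.
  cur.execute("{CALL UpdateCostingTablesAndViews}")
  conn.commit()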

Key Benefits

The successful deployment of the solution provides the following key business benefits:

  • An improved ability to provide Product/Channel level costs and profits.
  • Reduced monthly cycle time and effort. The prior data integration process was disjointed and required a large amount of effort to produce results.
  • Drill-through capability to Customer Account level drivers, profits, and costs allows for root cause analysis of Channel and Product Costs.
  • Aggregation along other dimensional paths. Starting at the Customer Account level allows for aggregation along Customer attributes such as zip-code or credit score, providing new insights and enhanced executive decision making.  A follow-on project to use the Customer Account level data in OAC is currently being assessed.

Additionally, the following benefits to the administrative team are realized:

  • Model flexibility. The configuration of an additional driver and driven value in Micro-Costing takes fewer than 15 minutes.
  • Operational Data Store (ODS) and Warehouse. This allows future projects to use a common, curated source of information.  It was also a pot sweetener for our client, who was dissatisfied with the prior warehouse but needed a business reason to refresh it.  The prior warehouse lacked the following items, which were addressed in the new ODS and warehouse:
    • Explicit mappings such as Activity Code to Driver Code that are controlled by the business
    • 3rd party data from partners and industry sources
    • Consolidation of financial and transactional information into Customer Account level facts
    • Hashing of Personally Identifiable Information (PII) for account security
  • Easy troubleshooting, validation, and auditing capabilities with PCM. Errors or mismatches in profit or cost at the Product/Channel level can be traced to either rule definition mistakes or driver data entry mistakes. Finding the issue and correcting it with a few clicks has a positive impact on the overall analysis and maintenance effort.

Final Thoughts

Alithya has developed a Micro-Costing framework that allows an integrated view of profits and costs at both a summary and detailed level.  This framework is successfully deployed at a banking industry client to provide a superior solution.

The framework is deployable either on-premise or in the Cloud and is applicable to other industries such as:

  • Patient encounters in Healthcare
  • Claims in Insurance
  • SKUs in Retail
  • Subcomponents in Manufacturing

…or anywhere the allocations occur at a summary level with drivers aggregated from a detail level.

 

A Cloud vs. On-Premise Comparison for Profitability: All You Need to Know

In a previous blog post, the history of Hyperion Profitability and Cost Management (HPCM) was discussed along with which modules made it to the Cloud. If you are after a more clear-cut comparison between Cloud and on-premise, the below table should fit the bill. Tables generally cannot provide all the needed context, yet they are, at times, the best starting point to understand the benefits and capabilities of one solution compared to another.  The below PCMCS vs. HPCM table is not exhaustive, and if you have questions on any of the items covered, email us and we will provide further details.

PCMCS 12-11 Image 1

Choosing between on-premise and Cloud depends on which factors are most significant, setting aside the overall licensing cost.

In the on-premise version of the software, allocations and data assignments cannot have “If” statements attached to them – a capability fundamental to supporting Tax transfer pricing.

Cross-dimension mapping is another functionality not available in HPCM. Using the “Same as Source but Different Dimension” option within PCMCS, this mapping ensures the assignment of data sets to the same ID/name across multiple dimensions, supporting intercompany activities. This feature alone, or the lack of it, may significantly impact the design of an application and the overall complexity of allocation flows.

Features available in the Cloud but not yet released in on-premise solutions could tip the scale in favor of the Cloud option when all other aspects surrounding a Cloud implementation no longer appear to be as pressing. Out-of-the-box content such as overnight backups and full application and data restores at the business users’ fingertips – not to mention the reporting and dashboarding included in the Cloud version – are all differentiators of a product that enables business users to control their allocation process and methodology from its inception.

While there may be exceptions to the trend where on-premise solutions can have advantages (modules not available in the Cloud, for example) and, therefore, represent the best option at a given moment in time, the reality is that the future is being developed in the Cloud and for the Cloud, and at some point the shift will most likely no longer be an option, but a necessity.

If you need help making a decision with an existing implementation or you would like more details about HPCM vs. PCMCS to make a better informed decision, email us at infosolutions@alithya.com. Our PCM Center of Excellence team is ready to share leading practices and industry-specific solutions that accelerate the ROI and expand the capabilities of your chosen software.