Alithya’s PowerShell Accelerator for Ground-to-Cloud EPM

Oracle provides a powerful toolset for interaction with the EPM Suite through a set of REST APIs and a downloadable product called EPM Automate that provides for command line access to a significant portion of the REST APIs.  At many of our customers, we implement scripts to orchestrate and schedule processes.  These processes execute EPM jobs, transfer metadata and data to the EPM Suite, and download data from the EPM Suite.

We believe in standards to facilitate common implementations across our large customer base and to manage the evolving nature of Oracle’s EPM Suite.  Standardization improves our ability to support our customers, and we have taken steps to consolidate our scripting into a single preferred service accelerator that provides high-quality, implementation-proven script utilities in a packaged delivery.

When we started this effort, we established a set of criteria for what we wanted to accomplish:

  1. Provide scripts that work in either Windows or Linux environments.
  2. Apply scripting best practices in a packaged delivery.
  3. Improve quality, delivery performance, and supportability by having a set of scripted functions that are unit tested prior to first use at a customer.
  4. Provide a standard approach to setup of jobs including signing into EPM Automate.
  5. Provide a standard logging framework.
  6. Provide exception handling including emailing.
  7. Provide a standard approach to archival of transferred files.
  8. Provide a standard approach to post procedure clean-up of temporary files.
  9. Provide ability to run scripts individually and together.
  10. Allow the calling scripts to be easily readable.

Why PowerShell?

Establishing the programming language was foundational and involved conversations about batch, Bash and PowerShell.  Although each language has advantages and weaknesses, the Product team selected PowerShell for the following reasons:

  • Robust interpreted programming language that includes standard capabilities such as variables, functions, loops, exception handling, etc.
  • Native on Windows environments with a nice development environment, PowerShell ISE.
  • Can invoke commands and batch scripts easily.
  • Intended future direction for Microsoft with strong online support.
  • Open-sourced and available on Linux; testing showed that little or no modification to scripts is required for use in Linux environments.

What are we Providing?  A Working Example

We provide a packaged set of utilities called EPMAutomatePowerShellUtilities as an accelerator to development of ground-to-cloud scripts.  EPM Automate is in the name because we primarily use EPM Automate to accomplish an action, but also use the REST APIs when EPM Automate does not provide the required action.

To highlight the accelerator, let’s walk through a working example with a customer implementing a Profitability and Cost Management Cloud Service (PCMCS) solution.

The customer provides dimensional data and content data files and needs the following actions:

  • Upload Dimensional Data and integrate into PCMCS
  • Upload Content Data and Run Allocations
  • Download Post Allocated Results
  • Run all the above as a Single Script

First, the Boilerplate

All customer scripts include the following boilerplate to provide common behavior:

try
{
    . $PSScriptRoot/config/properties.ps1
    . $epmautomatepowershellutilities/Utilities.ps1

    Pre-Job-Run $Profile

    #Place your actions here!
}
catch
{
    Email-Exception
}
finally
{
    Post-Job-Run
}

What is going on?

  • try … catch … finally – allows for exception handling and script resolution in a common pattern. Standardized exception handling improves the quality of the ETL process by ensuring that support personnel are notified via email of any process execution stoppage.
  • . $PSScriptRoot/config/properties.ps1 – loads the variables required to run the scripts. For example, we load $ApplicationName, which is the PCMCS application we are working with. The properties.ps1 file is a text file that requires very little maintenance after initial setup (a minimal sketch follows this list).
  • . $epmautomatepowershellutilities/Utilities.ps1 – loads all the custom functions we provide.
  • Pre-Job-Run $Profile – sets up the job and makes it ready to run, including signing into EPM Automate.
  • #Place your actions here! – this is where the custom actions are placed. See Scripts 1, 2, 3, and 4 below for examples of custom actions.
  • Email-Exception – when an exception occurs, emails an error message including a zip of the temporary folder that contains the process log and any other files created by the custom actions.
  • Post-Job-Run – cleans up after the custom actions are complete by signing out of EPM Automate and optionally removing the temporary folder (configurable).
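For orientation, a minimal properties.ps1 might look like the sketch below. Only $ApplicationName and $inboxFolder appear in the scripts shown in this post; the remaining variable names and all of the values are illustrative assumptions, not the accelerator’s actual contents.

# properties.ps1 - illustrative sketch only
$ApplicationName = "PCMCS_APP"                              # PCMCS application to work with (hypothetical name)
$inboxFolder     = "C:\EPM\inbox"                           # local folder holding files to upload (assumed)
$archiveFolder   = "C:\EPM\archive"                         # where timestamped copies are kept (assumed)
$epmAutomateUrl  = "https://example-pcmcs.oraclecloud.com"  # service URL (placeholder)
$epmUser         = "svc_epmautomate"                        # service account used to sign in (assumed)
$smtpServer      = "smtp.example.com"                       # used by Email-Exception (assumed)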

Script 1: UploadDimensionData.ps1 – Upload Dimensional Data and Integrate into PCMCS

We won’t repeat the boilerplate and will focus instead on the custom actions:

Upload-DimData-And-Load $ApplicationName "$inboxFolder\Dimensional Data"

Enable-App $ApplicationName

Deploy-Cube $ApplicationName -KeepData -RunNow

Readability is a huge factor here.  We really don’t need to explain what these custom actions are doing, but let’s highlight a couple of things.  First, the called function often looks a lot like a corresponding EPM Automate command; for example, “Enable-App” corresponds to the EPM Automate command “enableApp.”  Second, we provide more complex calling functions such as “Upload-DimData-And-Load” to perform a set of common actions that run multiple commands – in this case the upload of multiple files – and then run the loadDimData command for all the uploaded files.  Behind the scenes, an archive copy with a timestamp is placed in an archive folder for each of the uploaded files.
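To give a feel for what a wrapper such as Upload-DimData-And-Load does, here is a simplified sketch. It is not the accelerator’s implementation: it assumes an EPM Automate session already exists, that epmautomate is on the PATH, that $archiveFolder comes from properties.ps1, and for simplicity it loads each file as it is uploaded. The exact uploadfile/loaddimdata argument syntax should be confirmed against the EPM Automate documentation for your service version.

function Upload-DimData-And-Load ($AppName, $Path)
{
    foreach ($file in Get-ChildItem -Path $Path -File)
    {
        # Upload each dimension file to the PCMCS profitinbox
        epmautomate uploadfile $file.FullName profitinbox
        if ($LASTEXITCODE -ne 0) { throw "Upload failed for $($file.Name)" }

        # Keep a timestamped archive copy of what was sent
        $stamp = Get-Date -Format "yyyyMMdd_HHmmss"
        Copy-Item $file.FullName (Join-Path $archiveFolder "$($stamp)_$($file.Name)")

        # Load the uploaded file into the application
        epmautomate loaddimdata $AppName dataFileName="$($file.Name)"
        if ($LASTEXITCODE -ne 0) { throw "loaddimdata failed for $($file.Name)" }
    }
}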

Script 2: UploadData.ps1 – Upload Content Data and Run Allocations

Again, without boiler plate:

Clear-POV $ApplicationName "VR_Working;SC_Forecast" -InputData -AllocatedValues -POVDelimiter ";"

Copy-POV $ApplicationName "NoVersion,SC_Forecast" "VR_Working,SC_Forecast" -isManageRule

Upload-Data-And-Load -ApplicationName $ApplicationName -Path "$inboxFolder\data" -DataLoadValue "OVERWRITE_EXISTING_VALUES"

Run-Calc -ApplicationName $ApplicationName -ModelPOV "VR_Working;SC_Forecast" -ExeType "ALL_RULES" -ClearCalculated -ExecuteCalculations -RunNow -isOptimizeReporting -POVDelimiter ";"

You’ll see a mix of EPM Automate analogs and a complex function that uploads all the content data and loads them into PCMCS.  Again, the archival of uploaded files occurs during the Upload-Data-And-Load function.

Script 3 – DownloadResults.ps1 – Download Post Allocated Results

The custom actions are:

Export-Query-Results $ApplicationName "PCMCSDataExport.txt" "Post_allocated"

Download-File "profitoutbox\PCMCSDataExport.txt"

Script 4 – JustDoIt.ps1 – Perform all Three Steps

The boiler plate is built so that the Pre-Job-Run, Email-Exception, and Post-Job-Run understand when they are inside a calling script.  This allows you to create a parent script to run multiple other scripts without modification of the called scripts.

Focusing on the custom actions:

. $PSScriptRoot\UploadDimensionData.ps1

. $PSScriptRoot\UploadData.ps1

. $PSScriptRoot\DownloadResults.ps1

In this parent script, EPM Automate is logged into a single time.  Any exception results in full stop and a single email sent with an integrated process log.
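The post does not show how the boilerplate knows it is running inside a parent script. A common PowerShell pattern for this is a global depth counter set by the first Pre-Job-Run call; the sketch below illustrates that pattern only and is not necessarily how the accelerator implements it ($epmUser, $epmPasswordFile, and $epmAutomateUrl are assumed to come from properties.ps1).

# Illustrative nesting guard - not the accelerator's actual code
function Pre-Job-Run ($Profile)
{
    if ($Global:EpmJobDepth -ge 1)
    {
        # Called from within a parent job: reuse the existing session and logging
        $Global:EpmJobDepth++
        return
    }
    # Outermost caller: initialize once and sign in to EPM Automate a single time
    $Global:EpmJobDepth = 1
    epmautomate login $epmUser $epmPasswordFile $epmAutomateUrl
}

function Post-Job-Run
{
    $Global:EpmJobDepth--
    if ($Global:EpmJobDepth -le 0)
    {
        # Outermost caller: sign out and optionally remove the temporary folder
        epmautomate logout
    }
}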

Final thoughts

The accelerator provides a high-quality framework that allows Alithya to focus on customer requirements and the actions needed to integrate with Oracle’s Cloud EPM suite.  The low-level expected behaviors, such as exception reporting, logging, emailing, and file archival, are available on day one of the engagement.  With a focus on readability, these scripts are easily transferred to the support organization for long-term sustainability.

For long-term support, the customer can update the REST API version via the properties.ps1 file, and Oracle is providing EPM Automate updates that do not break prior scripts.  If EPM Automate has a breaking change, the customer can update the utilities themselves or request an updated set from Alithya.  Feedback from our customer base is positive with specific comments about the readability of the utilities and quality of initial deployment.

Overall, the accelerator is reducing the effort and time to deploy ground-to-cloud processes while improving the quality of deployment by reducing the time spent creating and debugging scripts.

Additionally, long-term support costs are lower through standardization of implementation patterns that allow support personnel to focus on what the script is accomplishing rather than how it is accomplishing it.


Profitability and Cost Management v19.08: Updates, Insight, & Impact

Oracle Cloud Service subscribers are used to monthly updates being applied to their Test and Production instances on the first and third Friday of each month, respectively.

However, not all monthly updates are created equal. 

If you own a subscription to Profitability and Cost Management Cloud, you may have noticed a brand new menu with the latest update that occurred this past Friday, August 2nd.  More importantly, the updates in the latest version of PCM are not only on the surface.

Here is a list of all the announced updates as well as some other insightful findings and their potential impact: 

  1. Custom Calculations Bug fixed
  2. New Model Menu
  3. Designer menu – 2 in 1!
  4. Increased transparency during rule build
  5. Integrated POV Manager
  6. Increased flexibility with Model and Data POV
  7. Launch multiple POV allocations with one click
  8. Embedded search capability in the Execution Control menu
  9. Easy access to Job Library
  10. Recreate instance with all file clear

1.  Custom Calculations Bug Fixed

If you were holding off on applying PCM patches because of Custom Calculation issues in prior updates, then this is the update you have been waiting for!  Early testing of PCM v19.08 indicates that bugs found in past versions in relation to complex custom scripting have been resolved.

*Caution:  Each Custom Calculation is unique, and thorough testing is crucial before the scheduled v19.08 update is pushed to the Production instance. 

All software updates should be tested using a data set that can be easily compared with results from a prior version of the software.

Due to several changes to the Essbase database, users may notice differences in the reported values in Execution Statistics reports, but the resulting data values should be the same as they were before the latest update was applied. When Test instance calculation results indicate that the updated cells differ from Production results for the same rule, users should take it one step further and validate the data values.

2.  New Model Menu

Rule build and maintenance tasks can now be performed in the new “Models” menu.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 1

The new Models menu is aimed at simplifying the way we manage Profitability and Cost Management application Model data – Rulesets and Rules. All administrative tasks are grouped for a more streamlined interaction with PCM – from building rules to executing them and, finally, monitoring jobs – a simplified menu that reflects a real-life workflow. This GUI update will enhance the user experience, as there is no longer a need to jump between different sections of the menu in order to perform end-to-end activities.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 2

3.  Designer Menu:  2 in 1!

Within the Designer tile there are two tabs covering functionality that was previously accessed via two separate menus:

  • the pre-19.08 Rules menu – called Waterfall Setup in the Designer menu
  • the pre-19.08 Calculation Express Editor menu – called Mass Edit in the Designer menu

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 3

Combining the previous two menus is a beneficial move as it groups logical actions within one location.

Users can set up new rules in the Waterfall Setup tab and can perform mass changes such as replace or add selections in multiple rules in the Mass Edit tab.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 4

4.1  Increased Transparency During Rule Build

Prior to the 19.08 update, to check who authored a rule, when it was created, and when it was last updated, users had to exit the Rules menu and use the information in the Calculation Rules menu instead. That is no longer the case in the new 19.08 Designer menu. Every rule now displays all of this information as soon as it is selected.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 5

There are also updates to the GUI for the two types of rules in PCM.

4.2.  Standard Allocation Rules

The pre-19.08 Management Ledger Rule editor menu for standard allocations is straightforward and easy to use.

There is a tab for each section, indicating from left to right the proper steps for setting up your model allocations.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 6

Who would have thought it could get better?

Well, in v19.08 it did.

The new display combines the Source and Destination tabs into one. This adds to the ease of use as well as the transparency of rule setup.  At a glance, users can now check the core setup of each rule.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 7

The […] button on the right side of each row/dimension selection has 3 menu options:

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 8Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 9

  • Users can type in multiple source member selections. Member name validation is not dynamic; it is performed only when the contents are saved. The menu does not lend itself to bulk copy and paste from a text editor – each record has to be copied onto its own row.
  • Calculation segmentation – this feature was present in prior versions of PCM. The advice here is to use it only when dealing with large applications and when requested to do so by Oracle Support. As per the Oracle Admin Guide for PCM, “it activates a way of calculating specific dimensions and levels to enhance scalability with very large models”.
  • Clear [Dimension] Selections – the previous menu’s “X” button (see below) would enable users to remove member selections, one at a time.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 10

The clear selections menu option with the latest PCM release enables users to remove all selections with one click.

4.3.  Custom Calculations

Custom calculations have a similar enhancement as the one in standard allocation rules. The Rule menu, available with pre 19.08 versions of PCM, had two separate tabs, one for the Target set of members and a second one for the custom formula:

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 11

In the 19.08 version, the Target and Formula are now collapsed into one menu.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 12

The new displays for both standard allocation rules as well as custom calculations add clarity and ease of use at the same time, potentially decreasing development and troubleshooting time.

4.4.  Wishlist for Future Designer Menu

If I could choose one feature from the pre 19.08 Rules menu that we could layer in on top of the 19.08 Designer menu, it would be the Text Editor.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 13

A simple feature that is used heavily not just during development, but also during troubleshooting and maintenance.  This feature is, unfortunately, not available in the On-Premises version and it is not part of the Designer menu screen either.

The benefit of this feature is that users can copy information from a text editor or .xls and apply mass updates to a rule for all the dimensions in the Source/Destination/Target screens in one action.

The format of the Text entry is restricted, as each dimension member selection must be typed on a separate line and must include a reference of the dimension that it belongs to.

"Dimension Name","Member Name"

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 14
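For illustration, a bulk Text entry following the format above might look like the lines below (the dimension and member names are made up):

"Account","ATM Fee Expense"
"Entity","Branch West 001"
"Product","Consumer Credit Card"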

The format restrictions have not diminished how often users prefer Text entry over any other type of rule editing menu, especially when they are familiar with the model and the metadata naming conventions.

One other feature that I found useful, especially during development or during initiatives involving structure reorgs, is the selection panel that displays the entire hierarchy for each dimension.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 15

This panel does not appear to exist in the new Designer menu, which means that the user must be familiar with member names or must know all the layers of a dimension. That type of familiarity with metadata comes with time and, as things often change, the user must constantly stay up to date with every modification that affects the allocation model.

The member selection search box that appears in 19.08 is an improvement, as it is fast and dynamic. However, when multiple selections are required and there is not much familiarity with the hierarchy naming convention, having the entire picture available in one panel is beneficial.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 16

5.  Integrated POV Manager

The POV Manager has been collapsed under the Execution Control menu, which makes for more streamlined management and maintenance.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 17

Users can now create a new POV via the plus sign button.

All other options remain as they were prior to the 19.08 update with one exception.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 18

Users were able to manage and update a global context in relation to a POV in the pre-19.08 Rules menu, but this information was displayed alongside the Rulesets and Rules for that POV.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 19

Changing the Global Context dimensions may impact the entire set of Rules for a POV. Therefore, configuring the Global Context alongside the POV Manager is a logical menu association introduced in the 19.08 update.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 20

6.  Increased Flexibility with Model and Data POV

Profitability and Cost Management applications have always had features that enable fast spin-off of new “What-If” analyses as well as testing different allocation logic rules on the same POV. Oracle has now taken this existing capability to the next level by enabling users to leverage any Model POV against any Data POV.

Model POV vs Data POVs:

  • The Model POV represents the reference POV that contains the allocation rulesets and rules.
  • The Data POV represents the POV that contains the data values which must be allocated.

In prior versions of PCM, users had to keep Model and Data POVs aligned. If there was a need to test a new set of rules on a data slice, users had to copy the desired rules in the Data POV intersection to be able to launch allocations.

With the 19.08 update that is no longer the case.

The control over which POV is the Model POV is available in the Run Express Calculation menu.

Users can now point to any Reference POV model and run the respective rules onto any other Data POV.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 21

This “run-time association” does not force a rule copy, as it would have in versions prior to 19.08.

Because of this new capability, we could potentially have a single set of rules that is referenced for all our POVs, without having to copy it across each Data POV every time we want the allocations to run.

There is one optional parameter that can be called via EPM Automate in order to leverage this Model POV/Data POV reference capability in automated jobs:

epmautomate runcalc APPLICATION_NAME POV_NAME [DATA_POV_NAME] PARAMETER=VALUE [comment="comment"] stringDelimiter="DELIMITER"

The [DATA_POV_NAME] parameter, when specified, enables this pivoting capability between Model and Data POV. If it is not specified, the automation assumes that the Data POV is the same as the Model POV.
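As a purely hypothetical example (the application name, POV member names, and delimiter below are made up, and optional calculation parameters are omitted), a call that runs one POV’s rules against a different Data POV could look like:

epmautomate runcalc PCM_APP "2019_Aug_Actual_Working" "2019_Aug_Actual_Final" comment="Working model rules run against the Final data slice" stringDelimiter="_"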

7.  Launch Multiple POV Allocations with One Click

The new 19.08 menu enables users to launch multiple allocations for different POVs at the same time, while also leveraging the functionality of a reference POV described in the previous section of this blog. Through simple check boxes users can select one or many POVs to run allocations.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 22

The 19.08 Execution Control panel indicates whether a POV already contains its own rulesets and rules (Model data), so users can decide whether to leverage the existing Model associated with the Data POV or a distinct Model altogether.

If a reference Model POV is used instead of the Data POV’s own rules, the Execution Statistics report will record that point-in-time reference.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 23

If users select multiple Data POVs to run at the same time, it is important to mention that if a Model POV is selected as the reference, it will be applied to all of the selected Data POVs, whether those Data POVs have Model information (Rulesets and Rules) of their own or not.

8.  Embedded Search Capability in the Execution Control Menu

When launching a single allocation rule, either during development or troubleshooting, the Calculation menu in versions prior to 19.08 constrained users to scroll through the mass of rules until they found the rule they needed to launch. In larger models, this quickly became frustrating.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 24

With the new 19.08 update, users can perform a fast search by simply typing a portion of the rule name. A dynamic filter is applied, and only the rules containing that specific string become available in the drop-down selection.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 25

9.  Easy Access to Job Library

The Job Library is the location of most PCM-related logs (except for the Cloud Data Management and Migration logs). With the 19.08 Models menu, it is easier to access because it shares the same area as the Run Express Calculation menu. The job library details include the reference POV that was used when executing allocations.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 26

10.  Recreate Instance with All File Clear

The Recreate command within EPM Automate enables users to start from a fresh environment by deleting the existing application and performing a reset on the instance. This month there is an additional parameter that can be leveraged to wipe out any existing artifacts, such as backups and other files, that may accumulate and take up significant space over time.
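For reference, the call is along the lines shown below; the removeAll parameter name and the -f force flag are quoted from memory, so confirm them against the current EPM Automate documentation before use, and note that the URL and credentials are placeholders.

epmautomate login svc_admin C:\Secure\password.epw https://example-pcmcs.oraclecloud.com
epmautomate recreate -f removeAll=true

Here -f suppresses the confirmation prompt and removeAll=true requests deletion of stored artifacts along with the application, so run it only when you truly intend to wipe the instance.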

The data storage allowance for a Profitability and Cost Management Cloud subscription was communicated back in 2017 to be 150 GB per instance. This is not a hard limit; going over it won’t grind a service subscription to a halt, but over time Oracle may ask clients to update their subscription to reflect an increased storage requirement.

*Caution: if you cannot launch some of the EPM Automate commands in your version of this software, you may be running on a prior month’s release. Upgrade your EPM Automate utility through the epmautomate upgrade command to align your version of the automation software with the latest PCM version and access all the latest features and parameters added to the library.

Release Calendar for PCM Updates

On the 1st Friday of the month, all Test instances are updated to the current month’s release level, and on the 3rd Friday of the month the same update activity occurs in the Production instance of either PCM or Enterprise Cloud subscriptions.

If for any reason the testing of the 19.08 patch indicates there are issues, there is a window in which administrators may request to postpone the rollout of the patch to their Production instance until the issues uncovered are fixed.

The deadline for such a request is the Wednesday of the week when the upgrade is scheduled to be applied to the Production environment. In the case of the August Production instance update, that deadline would be Wednesday, August 14th. Such a Service Request must contain the details of the POD as well as the business reason why the patch update is requested to be delayed.

Conclusion

There are many exciting new updates in 19.08 – from new interfaces aimed at user experience enhancement to backend optimization and new functionality. The Profitability and Cost Management Cloud is being constantly refined based on client and partner feedback, which is why it is so important to become involved in the Oracle community via the Customer Cloud Connect website.

Alex Mlynarzek - PCM v19.08 - 8-5-19 - Image 27

Customer Cloud Connect has become the new space for engaging with Oracle Product Management as well as other members of the community, whether partners or clients.

Create an account today and rate existing Enhancement Ideas if you believe they are beneficial to your user base. The more positive votes, the faster that feature will make its way to your Cloud subscription, based on Oracle’s prioritization list criteria.


We’ve Got You Covered: Producing Flat-File Extracts out of Cloud Data Management

As EPM Administrators or Implementation Specialists, we have all had that moment when someone comes to us and asks for the dreaded extract out of an Enterprise Performance Management (EPM) application.  Depending on the system combination (Hyperion Financial Management (HFM), Planning, etc.) and the file layout specifications, this can be tricky.  Layer in the concept of a Cloud application, and things have now gotten real!

In an on-premise installation of Financial Data Quality Management Enterprise Edition (FDMEE), we could use scripting within a “custom application” to build an end-to-end approach for delivering a flat-file extract for third party consumption.  With the release of version 19.06, Oracle has further enhanced this concept and brought it to Cloud Data Management (CDM). The Cloud application now provides the ability to design and produce a text file for downstream consumption in Oracle Cloud products (PCMCS, EPBCS, FCCS).  WHOA!

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 1

The Setup

I recently busted out the functionality and this is what I have discovered:  It’s crazy simple!

  1. Create a text file with your defined headers

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 2

  2. Create a target application and set your settings

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 3

  3. Create an import format

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 4

  4. Create a Location & DLR

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 5

  5. Create the desired Maps
  6. Run the Data Load Rule

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 6

It is that simple!  CDM produces a file that looks similar to this:

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 7

It can PIVOT!?

As crazy as it sounds, it can even pivot the data!  I find this extremely helpful as it is a common request to have twelve months of data in column format.  CDM leverages the PIVOT command of the database for this process and creates the pivot file with ease and efficiency.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 8

What does it do behind the scenes?

Behind the scenes, CDM appears to run a standard import and validation of the data, but it leverages a different set of workflow instructions.  The process does not consider unmapped items, which are left as blank fields in the output.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 9

It also does not permanently store any data in the CDM repository unless you want it to.  The documentation can be easily misinterpreted because you will see “fish,” but no data is stored (more on this later).  A quick review of the process details log shows that all the work is done in the “tDataSeg_T” table.  This is the “temporary working” table of Data Management, and it is cleared after/before each new run for optimal performance.  Since the data is never moved out of this table, it is never retained.  Even the export process that produces the output file pulls from the temporary table.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 10

A review of the Documentation shows that there are 3 main supported types for processing:

  • Simple (the option selected here) – Does all the work in tDataSeg_T and does not retain any data or archive maps. Be warned, though: it does retain the process details and “fish” status, which can look a bit strange.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 11

  • Full No Archive – Data is retained in tDataSeg only after the import step. Data is deleted after the export.
  • Full – All data is retained. Full process flows are supported (check rules, drill down, etc.).

That’s great, but my file is stuck in the Cloud!

Not really…let’s think this through in a workflow process.  When using Cloud applications, we might have an automation wrapper or a larger workflow process.  If not, we are using the general user interface (GUI), and we can access the file in two ways:

  1. Data File Explorer
  2. Process Details -> Download File option

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 12

If we are using a more automated approach, we just need to include additional steps (sketched after the list) to:

  1. Monitor the data load rule for completion
  2. Verify the status of that completion (do not proceed forward if it failed; do something different)
  3. Confirm that the file was created
  4. Download the file that was created
  5. Continue the automated routine
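A hedged PowerShell/EPM Automate sketch of those steps is shown below; the rule name, periods, load modes, and output file path are placeholders, and it assumes an EPM Automate session has already been established.

# Steps 1-2: run the data load rule and stop if it fails
epmautomate rundatarule "EXTRACT_TO_FILE" "Jul-19" "Jul-19" REPLACE STORE_DATA
if ($LASTEXITCODE -ne 0) { throw "Data load rule failed - do not continue the routine" }

# Steps 3-4: confirm the extract exists in the outbox, then download it
epmautomate listfiles
epmautomate downloadfile "outbox/EXTRACT_TO_FILE.dat"
if ($LASTEXITCODE -ne 0) { throw "Extract file was not found or could not be downloaded" }

# Step 5: hand the downloaded file to the next step in the routine (FTP, ETL, archiving, etc.)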

In Summary…

It is simple to produce a file using Data Management in the EPM Cloud products.  This is a welcomed change that further enhances the product lines by delivering on client needs.  This allows us to build a simplified Cloud solution that was previously only on-premise.


EDMCS and Data Governance – Part 1

Ahh… February. An interesting month with a variety of happenings. From the significant – Black History Month and Presidents’ Day – to the exciting – the Super Bowl…well, sometimes. From the romantic – Valentine’s Day – to the silly – that tenacious groundhog trying to find his shadow…AGAIN. Not to mention that Spring is just around the corner and brings us the glorious event known as “March Madness!”

Why am I babbling about February? <segue> Because it is also the month that introduced Data Governance and Collaborative Workflows with the release of Enterprise Data Management Cloud Service (EDMCS) v19.02. <segue>

As we continue this journey to Enterprise Performance Management (EPM) Cloud, the addition of Data Governance to EDMCS is a major step forward, especially for those of us who have worked with the classic on-premise solutions (Data Relationship Management (DRM) and Data Relationship Governance (DRG)) and who have been awaiting a similar offering in EDMCS to support our Cloud clients. From what I’ve seen so far, a major gap between DRM/DRG and EDMCS has been addressed with this release.

In this blog series, I’d like to further explore Data Governance in EDMCS. At a high level, this is how I see this series unfolding:

  • Part 1 will provide the foundation, background, and basic concepts for EDMCS and Data Governance
  • Part 2 will get more into the “techy” stuff and dive deeper into Approval Policies and Security
  • Part 3 will provide a recap and closing thoughts/lessons learned

So, with that said, onto Part 1…

Prerequisites

Before diving head first into configuring Data Governance and collaborative workflows in EDMCS, there are a few things to consider.

  • Don’t forget people and process. I’m a big believer that people and process are just as (and usually much more) important as the tool. Please refer to this blog post for a quick read on this: The Data Governance Triple Crown.

I believe the same tenets apply to EDMCS and that it’s important to start thinking about a formal data governance program that includes a charter, executive sponsorship, roles & responsibilities, metrics, and much more. Data Governance can be a challenging cultural shift for many organizations which requires strong change management to handle the inevitable resistance. This is where a formal data governance framework can help.

  • Establish the foundation. As with building a house, it’s important to lay a solid foundation before you install the wiring and plumbing. Build your EDMCS application(s) and dimensions, and populate your primary and alternate hierarchies first. Get the client comfortable with the tool and the content. Then you can start to layer in the workflows.
  • Start to identify the “who” (e.g. the people involved and the roles they will play): who will be submitting requests? Who will be approving? Who will do both?
  • Start to think about the “what.” What applications/dimensions/hierarchies will be governed? What are the use cases and typical scenarios that require data governance? Start to collaboratively mock up and storyboard some typical workflows with the client to visualize how the workflows will function. And don’t try to build a workflow for every possible scenario. Start with the big hitters and low hanging fruit first. You can always add more workflows later.

What’s Included in EDMCS Workflows?

Are you wondering what EDMCS includes as far as data governance functionality? In summary, EDMCS supports:

  • Two types of roles – submitters and approvers
  • Separation of duties – workflows can be configured to prevent submitters from approving their own requests
  • The “four eyes” principle: EDMCS data governance adheres to the principle that requests must be approved by at least two people
  • Default application views and maintenance views: workflows can work with both types of views
  • Subscriptions: workflows can be triggered by Subscription requests
  • Email-based notifications
  • Serial and Parallel approvals:
    • Serial approval means a sequential order of approvals is required. For example, Approver #2 can’t approve until Approver #1 approves, Approver #3 can’t approve until Approver #2 approves, and so on.
    • Parallel approval means the approvals can occur in any order and at the same time.
    • With either method, all approvals must occur before the request is committed.
  • Configuration of Reminder and Escalation intervals
  • Multiple Workflow Stages:
    • Submit – initiate the request and add/edit/delete line items in the request. Note that with the 19.02 release, you can also attach documents and insert comments at the line item level. These enhancements are helpful to attach policies, supporting details, and other documentation related to the workflow request.
    • Approve – similar to DRG, an approver can approve, push back, or reject a request. Pushing back will send the request back to the submitter for additional changes. Rejecting will close the request and end the workflow.
    • Commit (implied) – once the request is fully approved, it is committed, hierarchies are updated, and the request history can be viewed like any other request.
  • Approval Policies – this is really the brains of how workflows are configured in EDMCS, and the next blog post covers this in greater detail. But here is a screenshot of the Approval Policy screen showing the available options:

Kevin Black - EDMCS and Data Governance - Part 1 - 3-8-19 Image 1

Conclusion

I hope you found this blog post helpful as an introduction to EDMCS and data governance, and that you will keep reading as the rest of the series is posted. Please contact me with any questions and comments!

And don’t forget to follow me on Twitter (@kblackEPM) and check out/subscribe to my blog (along with the blogs authored by my very talented colleagues at Alithya).

Read the next post in this EDMCS blog series:  EDMCS and Data Governance – Part 2


Interested in better understanding EDMCS, the RESTful API, and Cloud Data Management? Be sure to check these excellent blog posts by Tony Scalese, aka FDM Guru: https://ranzal.blog/author/ascalese/

Looking for an outstanding resource for all things master data-related and more? Look no further!  https://datarestless.com/

PCM Micro-Costing, a Framework for Detailed Profitability and Costing

Oracle’s Profitability and Cost Management Cloud Service (PCMCS) provides a powerful service for allocating General Ledger profits and costs.  Recently, we worked with a banking industry client to provide a model that calculates profitability at a Product/Channel level while maintaining Account level detail.  We accomplished this through a framework we refer to as Micro-Costing where detailed profits and costs are calculated in a database using rates developed at the summary level in PCMCS.  Alithya began development of this framework in 2016 to meet a functional gap in PCMCS and provide a common framework that can be used either on-premise or in the Cloud.

To highlight the capabilities of Micro-Costing, I will use the solution deployed at our banking client as a specific example.  The following table describes the two layers where profits and costs are provided:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 1

 Definitions:

  • Product – a loan or deposit offering. Examples of a loan are an auto loan or credit card; examples of a deposit are a savings account or a checking account.
  • Origination Channel – where the account was originated.
  • Service Channel – where the financial or transactional cost or profit is occurring or assigned to.
  • Customer – a legal entity responsible for accounts; for example, a person with both a home loan and a savings account.
  • Customer Account – a product that is assigned to a customer.
  • Financial Costs and Profits – the cost or profit of servicing a loan or deposit for a customer; for example, interest paid on a savings account.
  • Transactional Costs and Profits – the cost or profit of interacting with a customer; for example, the cost of an ATM transaction.

A simple way of thinking about the client’s business model:

  • Origination channels offer Products
  • Products are assigned to Customers as Customer Accounts
  • Customer Accounts are used by Customers through Service Channels

The generation of an Account level profit or cost is a C = A*B calculation where

  • A is the driver
  • B is the rate of a driven value
  • C is the driven value (profit or cost)

An example is:

ATM Expense = ATM Transaction Count * ATM Expense Rate
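For instance, with purely illustrative figures: if a customer account logged 12 ATM transactions in the month (the driver) and the rate developed in PCMCS is $1.25 per transaction, the account-level driven value is 12 * $1.25 = $15.00 of ATM Expense.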

Micro-Costing Diagrams

Data Model

This summarizes the data model deployed.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 2

STAGING – Contains transient data.

OPERATIONAL DATA STORE (ODS) – Persists the operational data with minimal transformation.  Dimensional integrity is not enforced, but validation jobs are available for validating stored data regarding rules and dimensional integrity.

WAREHOUSE-STAR – Persists the drivers, the rates, and the calculated profits and costs at the Customer Account level.  The Driver Lookup and Driven Value Lookup functions are used to define the drivers and driven values so that the addition of a driver or driven value is a configuration activity for an administrator rather than a coding activity.

Data Integration

A high-level summary of the data flows as deployed:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 3

The source data is broken down into 3 types:

  1. General Ledger
  2. Operational Data
  3. Metadata

Data Integration uses interim flat files to maintain flexibility regarding the source data by establishing an API via the flat files without requiring knowledge of the source systems.  This allows for the introduction of source data that comes from 3rd parties not available for automated extraction from the source.

The operational data includes both Customer Account financial information and transactional activities or fees.  Product and Channel references are provided along with this information:

  • 1 million+ Customer Accounts
  • Approximately 6 million transactions per month

Some transactional drivers represent an activity that cannot be associated with a specific Customer Account; for example, a new loan application.  Proxy Customer Accounts for each product are generated to provide a place for these activities.

Additionally, although not graphically displayed in the above diagram, Branch level drivers are directly fed into the PCM Model, examples of which are Branch square footage and number of branch employees.  These drivers were used for non-Customer Account PCM costs and profits.

All batch processing is built using SQL Server Integration Services.  This is based upon an agreement with the client regarding the preferred tool sets, with SQL Server selected as the database.  The framework is transferable to other integration tools and databases, including the Hadoop framework, and in-house solutioning by Alithya has been performed in preparation for use of the Micro-Costing framework with larger clients.

The data integration is as follows:

  1. Set POV
  2. Update metadata and stage
  3. Stage financial and transactional information
  4. Validate staged data and reprocess as necessary
  5. Load staged data to ODS and then to Star
  6. Upload PCMCS with GL and drivers
  7. Process allocations in PCMCS
  8. Download rates
  9. Run A*B calculations for each Customer Account and populate profit and cost table

Key Design Principles

The following design principles were focused on during development of the Micro-Costing framework.  These principles facilitate an easy-to-use and easy-to-maintain solution as deployed for our client.

  • Dimensional synchronization between the Micro-Costing warehouse and PCM
  • Validation checks as close to the original data as possible
  • Configurable drivers and driven values

Dimensional Synchronization

All dimensional mapping must occur prior to the warehouse star schema.  It is not possible to perform the Micro-Costing A*B calculations to derive profits and costs detail otherwise.  This has an impact on any deployment that uses FDMEE or Cloud Data Manager as they cannot perform additional mappings during upload to the cube.

Dimensional Synchronization includes a Point of View: Year, Period, Scenario, and Version to allow for loading multiple sets of drivers during a month, and for transfer of ‘what-if’ rates back to the Customer Account level, if desired.

Validation Checks

Validation kick-outs and checks occur as early in the data integration process as possible, with a “simple” validation during staging and a “complex” validation during generation of the fact information in the warehouse.  This allows the administrator to catch quality issues before much of the overall process has run.  The data integration process is broken into a series of steps that allows for validation review and then re-running a step prior to moving on to the next step.  This principle held up in deployment, ensuring that time wasn’t wasted running later processes with invalid data; the result was an improved overall process and a significant reduction in the number of days required to produce profit and cost analysis for a given month.   A lesson learned during the initial roll-out was that our client had not previously required a rigorous validation of the drivers at the Customer Account level and had to develop new techniques for validating the source information to ensure accuracy.

Configurable Driver and Driven Values

A key feature of Oracle’s PCM applications is configurability, and the Micro-Costing framework is built to provide an easy-to-maintain solution that allows for rapid addition of drivers and driven values without the administrator having to manually update the tables and views required to manage the transformation and persistence of data.  This was accomplished by defining the drivers and driven values in tables and providing stored procedures for maintaining the tables and views.

The process for adding a new driver and driven value is very straightforward:

  1. Backup the database and the PCM cube.
  2. Update the source feeds to include the new activity or fee.
  3. Update the activity to Driver Lookup and Driven Value Lookup tables with the new values.  *Note: The driven value record references the driver for the A*B calculation.
  4. Execute the “Update Costing Tables and Views” stored procedure. *Note: removing a driver or driven value does not modify the tables.
  5. Update HPCM Account dimension for the new driver and driven value.
  6. Update HPCM rules to use the new driver and allocate expenses to the new driven value, and calculate the rate for the new driven value.
  7. Run the entire data integration process for the POV, and review results.

Key Benefits

The successful deployment of the solution provides the following key business benefits:

  • An improved ability to provide Product/Channel level costs and profits.
  • Reduced monthly cycle time and effort. The prior data integration process was disjointed and required a large amount of effort to produce results.
  • Drill-through capability to Customer Account level drivers, profits, and costs allows for root cause analysis of Channel and Product Costs.
  • Aggregation along other dimensional paths. Starting at the Customer Account level allows for aggregation along Customer attributes such as zip-code or credit score, providing new insights and enhanced executive decision making.  A follow-on project to use the Customer Account level data in OAC is currently being assessed.

Additionally, the following benefits to the administrative team are realized:

  • Model flexibility. The configuration of an additional driver and driven value in Micro-Costing takes fewer than 15 minutes.
  • Operational Data Store (ODS) and Warehouse. This allows for future projects to use a common curated source of information.  This was a pot sweetener for our client who was dissatisfied with its prior warehouse, but needed a business reason to refresh.  The prior warehouse lacked the following items that were addressed in the new ODS and warehouse:
    • Explicit mappings such as Activity Code to Driver Code that are controlled by the business
    • 3rd party data from partners and industry sources
    • Consolidation of financial and transactional information into Customer Account level facts
    • Hashing of Personally Identifiable Information (PII) for account security
  • Easy troubleshooting, validation, and auditing capabilities with PCM. Errors or mismatches in profit or cost at the Product/Channel level can be reduced to either rule definition mistakes or driver data entry mistakes. Finding out where the issue is and correcting it with a few clicks has a positive impact on the overall analysis and maintenance effort.

Final Thoughts

Alithya has developed a Micro-Costing framework that allows an integrated view of profits and costs at both a summary and detailed level.  This framework is successfully deployed at a banking industry client to provide a superior solution.

The framework is deployable either on-premise or in the Cloud and is available for other industries such as:

  • Patient encounters in Healthcare
  • Claims in Insurance
  • SKUs in Retail
  • Subcomponents in Manufacturing

…or anywhere the allocations occur at a summary level with drivers aggregated from a detail level.

 

Retro Reboot #1: Set It & Forget It – Scheduling FDMEE Tasks

As with most nostalgic items, reboots are the next best thing. From video game consoles to television shows, they are all getting a modern facelift and a new prime-time seat on television.  I have jumped on that bandwagon to revitalize a previous post authored by Tony Scalese: Set it & Forget It – Scheduling FDM Tasks.

As with most reboots, there must be flair and alluring content to capture old and new audiences. Since Oracle Financial Data Quality Management Enterprise Edition (FDMEE) has been in the Enterprise Performance Management (EPM) space for a while and has moved into the Cloud, this is a great time for its reboot!

Oh Great…A Reboot. Now What?

Scheduling tasks in FDMEE has never been easier. Oracle provides several ways to do this for a variety of out-of-the-box activities.  Is there a report that you want to run and email every hour?  Or how about a script that needs to run hourly?  Or maybe batch-automation every 15 minutes?  No worries!  FDMEE can handle all of that with out-of-the-box functionality.

Let us pause for a moment and determine what is needed to make this happen:

  1. Is there a business case and justification for what is about to be scheduled?
  2. Who benefits and how will they be notified of the results?
  3. Is there a defined frequency for which the activity must take place?

Getting Started

First, understand that scheduling in FDMEE is built directly into the Graphical User Interface (GUI) anywhere you see the “SCHEDULE” button. Unlike the previous FDM counterpart, which required an independent utility to be installed and configured, having it available via the Web removes some complexity.

A word of caution:  while this screen allows items to be scheduled, there isn’t a screen that shows “what has been” scheduled.  To do that, access to the Oracle Data Integrator (ODI) is needed, but more on this later.

Initially, the screen shows the types of schedules that can be created and their relevant inputs.

Retro Reboot Screen Shot 1

Below is a reference guide to outline FDMEE’s scheduling capabilities.

  • Simple – Inputs: TimeZone, Date, HH:MM:SS, AM/PM. Single run based on the specified inputs. Example: Run 08/02/2018 @ 11 AM.
  • Hourly – Inputs: TimeZone, MM:SS. Repeatable run at the specified MM:SS time. Example: Run every hour at the 22-minute mark.
  • Daily – Inputs: TimeZone, HH:MM:SS, AM/PM. Every day at the specified time. Example: Run every day at 11 AM.
  • Weekly – Inputs: TimeZone, Day of the Week, HH:MM:SS, AM/PM. Every specified day at the specified time. Example: Run every Monday through Friday at 11 AM.
  • Monthly (day of month) – Inputs: TimeZone, Date, HH:MM:SS, AM/PM. Specified day at the specified time. Example: Run on the 2nd day of every month at 11 AM.
  • Monthly (week day) – Inputs: TimeZone, Iteration, Weekday, HH:MM:SS, AM/PM. Specified interval and week day at the specified time. Example: Run every third Tuesday at 11 AM.

Why Does the Job Run Under My UserID?

That is because the system assigns the credentials of the user who created the schedule. What can go wrong with that, right?!  Well, if a user no longer exists or a password is changed, the existing jobs will no longer run.

The following considerations should be observed:

  1. Dedicate a service account for server/automation actions that is not used by an employee.
  2. This account can be a “native” user; since the account is only used internally for EPM products, a domain account is not needed.
  3. Non-expiring passwords are best.

 It is Scheduled…Now What?

After the item is scheduled, what really happens? The action executes at the scheduled time!  Actions can easily be monitored via the FDMEE Process Details screen.  Now all the possibilities of scheduling the following can be explored:

  1. Data Load Rules
  2. Script Executions
  3. Batch Executions
  4. Report Executions

Also, as mentioned earlier, there is no way to see the schedules inside of FDMEE. For that, the information can be retrieved in a few ways.  The easiest way to see what is scheduled is to use the ODI Studio.

The ODI Studio provides details as seen in the screen shot below:

Retro Reboot Screen Shot 2

Any scheduled tasks will be listed under “All Schedules.” Simply double click them to obtain details related to that task.

Retro Reboot Screen Shot 3

Another effective option is to write a custom report that displays the information. My previous blog post, Easy Value with FDMEE Reports, provides further details of FDMEE report options and their value.  This would allow a report to be executed to provide a user-friendly report.

Seriously … What Now?

By now, you may have noticed from the previous blog post Scheduling FDM Tasks – A Second Option by Tony Scalese that the upsShell process is quite handy.  It allows other tools to control the FDM jobs…maybe through a corporate scheduler.  Now that most organizations have a corporate scheduler, the new FDMEE options below must be learned:

  • Executescript.bat / .sh – Executes an FDMEE custom script
  • Importmapping.bat / .sh – Imports maps from a text file
  • Loaddata.bat / .sh – Executes a Data Load Rule
  • Loadhrdata.bat / .sh – Executes an HR Data Load Rule
  • Loadmetadata.bat / .sh – Executes a Metadata Load Rule
  • Runbatch.bat / .sh – Executes a defined batch
  • Runreport.bat / .sh – Executes a defined report

*All files are stored in the EPM_ORACLE_HOME\products\FinancialDataQuality\bin\

In the example below, the command, when launched, executes a Data Load Rule for Jan-2012 thru Mar-2012:

Retro Reboot Screen Shot 4

There still must be a better solution…right? Things to overcome:

  1. What happens if the scheduler is Windows-based and the server is Linux?
  2. How does a separate scheduling server communicate with EPM? Does it have to be installed on each EPM Server?
  3. How can we monitor and get details of a job once it is kicked off?

What Happens if You Don’t Want to Run the .BAT/.SH Files?

You’re in luck! With the introduction of new functionality to FDMEE, RESTful APIs are now also available.  With the RESTful APIs, not only can you execute a job, but you can also loop and monitor for the results.  This enhances the previous .BAT/.SH file routines and provides a cleaner, more elegant solution.

  • Running Data Rules – Execute a Data Load Rule
  • Running Batch Rules – Execute a Batch Definition
  • Import Data Mapping – Import Maps
  • Export Data Mapping – Export Maps
  • Execute Reports – Execute a Report

*URL construct: https://<SERVICE_NAME>/aif/rest/V1

The below example is just querying for a process:

Retro Reboot Screen Shot 5
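For anyone scripting this directly, a minimal PowerShell sketch of running a data rule through the REST API and then polling the returned job is shown below. The /jobs resource and payload fields follow the documented Data Management REST API, but the service URL, credentials, rule name, and periods are placeholders, and the status codes should be confirmed against the documentation for your release.

$base    = "https://<SERVICE_NAME>/aif/rest/V1"
$auth    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("domain.user@example.com:MyPassword"))
$headers = @{ Authorization = "Basic $auth" }

# Kick off the data load rule
$payload = @{
    jobType     = "DATARULE"
    jobName     = "DLR_GL_ACTUALS"     # hypothetical data load rule name
    startPeriod = "Jan-19"
    endPeriod   = "Jan-19"
    importMode  = "REPLACE"
    exportMode  = "STORE_DATA"
} | ConvertTo-Json

$job = Invoke-RestMethod -Uri "$base/jobs" -Method Post -Headers $headers -ContentType "application/json" -Body $payload

# Poll the job until it is no longer running (-1 is the in-progress code in the documented API)
do {
    Start-Sleep -Seconds 30
    $job = Invoke-RestMethod -Uri "$base/jobs/$($job.jobId)" -Method Get -Headers $headers
} while ($job.status -eq -1)

Write-Output "Job $($job.jobId) finished with status $($job.status)"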

The Future…

As Oracle moves forward to enhance the RESTful APIs, many doors continue to open for FDMEE and tool scheduling. At Edgewater Ranzal, we fully embrace the RESTful concept and evolve our solutions to utilize this functionality.  The result is improved support and flexibility of FDMEE and the future of Oracle Cloud products.

Contact us at info@ranzal.com with questions about this product or its capabilities.

Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest

In the previous post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, I showed you how to rebuild ARCS down to the Profile Segments to speed things up. This time we’re slowing everything down…

So grab a glass of wine and throw on your Marvin Gaye vinyl because we’re getting it on with automation. Oh yeahhh…

The sexiest topic of account reconciliations (didn’t think you’d ever see that sentence, did ya?) consistently revolves around automation. Yes, ARCS provides a central repository. Yes, ARCS is auditable. YES, ARCS shows a traceable workflow throughout the reconciliation cycle. All of these features are highly useful and absolutely a prerequisite to an enterprise worthy solution, but if you want to really grab people’s attention in a design session, start talking about the things they won’t have to do. ARCS provides both out-of-the-box functionality as well as customizable tools that help preparers  focus on high-importance reconciliations rather than spending time on low value-add or monotonous items.

Automation occurs in two areas: outside of ARCS (e.g. data feeds) and within ARCS (e.g. auto reconciliations and rules). Setting up the former enhances the latter. Either Cloud Data Management (CDM) or Financial Data Quality Management Enterprise Edition (FDMEE) can be used to load data to ARCS, albeit in different manners, but how this is accomplished is beyond the scope of this post. This data can be sourced from a variety of general ledgers and sub ledgers/subsystems including Financials Cloud, E-Business Suite (EBS), PeopleSoft, JD Edwards, and even *gasp* Excel (…if we have to…). By automating these data feeds directly from the source, management can be confident in the validity of the data (e.g. accuracy, no manual intervention or “massaging,” timeliness, etc.) and, with scheduling, administrators have one less task to worry about. The latest application data is up-to-date by the time the office doors open. Additionally, data refreshes can occur multiple times throughout the reconciliation cycle without concern for loss of work. ARCS will only update reconciliations with differences from the last data load and will change the workflow status if data has been modified and needs to be looked at again.

Within ARCS, the “bread and butter” for gaining efficiencies in the reconciliation cycle is utilizing the out-of-the-box auto reconciliation method property on the Profiles. This sets the conditions under which the reconciliation will automatically change the workflow status to “closed,” allowing preparers to focus on the remaining “open” reconciliations that require attention. Which conditions are available for selection depends on the Format type. Furthermore, this field can be easily updated after the fact: using the Actions pane, the property can be applied to a mass of Profiles based on custom filtering.

Automation in ARCS 1

[Screenshot 10a: The “Set Attribute…” functionality from the Actions pane is a powerful tool that can be used to make mass updates from the user interface.]

 

Automation in ARCS 2

[Screenshot 10b: In this example, the “Set Attribute…” functionality can be used to make updates to the Auto Reconciliation Method property for all Profiles, selected Profiles, or Profiles that fit customized criteria.]

The “Set Attribute” functionality is a powerful tool for making changes across multiple Profiles within the ARCS user interface. In many instances, this is a preferable alternative to extracting the Profiles to a text file to modify offline. Screenshots 10a – 10b show how it can be used to update the Auto Reconciliation Method attribute specifically, but plenty of other attributes can be updated in this manner.

The last piece of the automation trinity is customized rules. Similar to custom attributes, rules can be added in a variety of places within your reconciliations to further enhance and streamline the process for both end users and application administrators. Attributes, formats, profiles, and even specific transaction types (ex. on Subsystem Adjustments, but not on Source System Adjustments) can contain separate sets of rules.

Automation in ARCS 3

[Screenshot 11a: Rules can be added at a Format level.]

 

Automation in ARCS 4

[Screenshot 11b: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a Format level.]

 

Automation in ARCS 5

[Screenshot 11c: Rules can be added at a specific transaction type. In this screenshot, any rules created here would only affect Subsystem Adjustments and would not affect System Adjustments.]

 

Automation in ARCS 6

[Screenshot 11d: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a specific transaction type level.]

Thus, rules can be used for anything from sweeping, application-wide changes down to adjustments on a transaction-by-transaction basis, as seen in Screenshots 11a – 11d. If ARCS is a suit, then rules are custom tailoring; they are made to fit your company’s specific needs.

The most common rule I see relates to Auto-Submission (as opposed to Auto Reconciliation). The out-of-the-box auto reconciliation methods previously discussed are set on Profiles and can be used to “close” a reconciliation for the period if the criteria are met. However, sometimes a reconciliation still needs reviewing, such as when it is considered higher risk or only during certain periods in the fiscal year. Customized rules can dynamically determine which reconciliations can skip the preparer and be assigned directly to the reviewer, and which are clear to be automatically “closed” for the month (e.g. without approval by a preparer or reviewer). Tailoring rules in this manner still helps the preparers reduce their workload while giving management the confidence that the higher priority reconciliations are being reviewed – the best of both worlds!

No Mistakes with Modularity from “Day 1” to “Day 100”

So, there you have it: the four main manifestations of ARCS’ modularity. While nothing will replace proper planning, ARCS does not permanently punish any application decisions you (or your partner) have made in the past. The tool is able to grow with your company and accommodate your needs as they arise. There’s no reason to pick “today” or “tomorrow” – have them both.

Am I right? Am I off my rocker? You tell me! Answer in the comments below if ARCS (or ARM! We haven’t forgotten you…) has been able to accommodate the changes that come with your company’s growth.

If you like what you’ve read, please consider sharing this article through social media. And let me know in the comments what topic(s) you would like to see covered in future posts.

*Screenshots taken from the patch 1806 release.

Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up

We talked about adding new scope in New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons and modifying your application inside (i.e. changing reconciliation methods) and outside of ARCS (i.e. new data feeds) in Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning.

Today, we’re going to tear it down and rebuild from the ground up.

Let me start with this: redesign IS possible. ARCS does not permanently punish any design decisions made on “Day 1,” but not all changes are equal in complexity, nor can all changes be made without consequence. A successful implementation ensures that the application design is sound for today and that a well-laid roadmap is in place for tomorrow. Many “one-off” changes can be made directly to a deployed reconciliation (i.e. only within a single period) or permanently going forward (i.e. to the profile). The “catch” lies in the key property set on a profile or reconciliation – the Account ID. The Account ID represents the granularity at which the reconciliation is being performed, such as [Business Unit]-[Account] or [Entity]-[Natural Account]-[Subaccount].

ARCS From the Ground Up 1

[Screenshot 6: The Account ID is a unique identifier for the reconciliation.]

The Account ID is fundamental to the reconciliation, as indicated by the asterisks (i.e. “*”) in Screenshot 6. Changing it in any way will break the Prior Reconciliation “link” with previously completed instances of the reconciliation.

But let’s push that idea one step further – what if I want to change the key properties themselves – that is to say – change the actual Profile Segments? The Profile Segments determine the name (ex. from “Company” to “Business Unit”), number (ex. from 2 to 3 segments), and even type of values (ex. setting up the Business Unit segment to always be an integer) that are viable for use when setting up an Account ID. Therefore, if this was set up incorrectly or if the granularity at which reconciliations are performed has changed since the initial implementation, then redesigning the Profile Segments may become a requirement.

ARCS even makes this type of redesign possible, but at a cost. An administrator needs to first delete all Profiles; only then will the application allow a modification to the Profile Segments in the Configuration card.

ARCS From the Ground Up 2

[Screenshot 7a: Unable to modify the Name of Profile Segment 1 which is currently named “Company.” The field appears grayed out. This is because Profiles are currently using these Profile Segments.]

ARCS From the Ground Up 3

[Screenshot 7b: After removing the Profiles, Profile Segment 1 is now able to be modified. In the example, Profile Segment 1 is renamed to “Business Unit.”]

While Screenshots 7a & 7b show that this is possible, there are repercussions. Similar to changing the Account IDs, this change will break any links to previously completed reconciliations. Additionally, any existing mappings in outside integration solutions such as Cloud Data Management or FDMEE, as well as any references to Profile Segments in customized attributes or rules, may be affected. This type of redesign should only be done after carefully considering all options.

Other common questions relate to redesigning an attribute, typically a system attribute such as Process or Account Type. This is a straightforward change as it relates to updating the property on the Profiles; however, it is important to note that any reference to an existing artifact (an artifact can be a format, a custom attribute, an attribute member, etc.) within ARCS will prevent the deletion of that artifact. As an example, if the Account Type structure requires redesigning, but there is a reference to any of its members (such as in a historical period), then those members cannot be deleted without first removing the references. This can be tedious when there are multiple years of reconciliations to consider.

ARCS From the Ground Up 4

[Screenshot 8: When trying to remove the Custom Attribute named “PLACE CUSTOM ATTRIBUTE HERE,” ARCS prevents this deletion and cites which artifact is using the Custom Attribute. In this example, the Bank Reconciliation format is using this Custom Attribute – thus, it cannot be deleted.]

Unlike many system messages, ARCS actually provides useful troubleshooting information, as seen in Screenshot 8. However, it still may not be worth it to retroactively make this change. A recommendation is to “archive” artifacts that will not be used going forward by renaming them with “Old” or “Hist,” then create a separate artifact to use going forward.

ARCS From the Ground Up 5

[Screenshot 9: A work-around to deleting previously used artifacts is to rename them and then use a new artifact going forward. In this example, the suffix “- Old” is added to this Custom Attribute to indicate that it is no longer in use.]

Previous uses of the artifact, such as in completed reconciliations, will update to reflect the name change. In the example provided in Screenshot 9, this custom attribute in historical periods will be updated with the “– Old” suffix to indicate to ARCS administrators that it was used historically but is no longer in use.

ARCS is a flexible application solution that allows for nearly any change to be made, though the effort and complexity will vary. While sound design can prevent many issues, it should be a comfort to know that there is “wiggle room” if the requirements change in the future.

Join me in the last post of the ARCS modularity series – a real crowd pleaser: Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest

*Screenshots taken from the patch 1806 release.

Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning

In the last post, New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons, we discussed how ARCS sets you up to easily add on additional scope to your existing application and scale your solution. However, not all changes are brand new. Clients are often concerned with being pigeonholed based on their “Day 1” decisions. A common question I am asked during a design session is “Can I manually enter this reconciliation today, but create new feeds to automatically load the data tomorrow?” The answer is a resounding YES, and it provides clear added value to the next phase of any ARCS (or ARM) project. It can be a viable project strategy to set up reconciliations using an Account Analysis format on “Day 1” and change to a Balance Comparison format when automated data loads are built on “Day 100.”

Modifications in ARCS 1

[Screenshot 5a: Reconciliation 100-1000 is set up with a Balance Comparison format in Sep 2017.*]

Modifications in ARCS 2

[Screenshot 5b: The previous period’s reconciliation can be viewed in the Prior Reconciliations tab.*]

Modifications in ARCS 3

[Screenshot 5c: Reconciliation 100-1000 was previously set up with an Account Analysis format in Aug 2017. The format of a profile can be changed while maintaining the Prior Reconciliations link.*]

Depending on how this change is made, it is even possible to keep the modified reconciliation “linked” to the previously completed reconciliations even though the Format has changed, such as in Screenshots 5a – 5c. The ease with which ARCS allows you to change Reconciliation Methods (via Formats) gives you the flexibility to not bite off more than you can chew in the beginning of a project.

Changing Reconciliation Methods is often related to new integrations. Moving from the manual “fat fingering” of data to directly loading general ledger and sub ledger balances through Financial Data Quality Management Enterprise Edition (FDMEE) or Data Management, combined with the built-in auto-reconciliation tools, can bring a “quality of life” change for end users as well as added confidence in the data’s integrity. It is always a best practice to pull data from the source. Creating the integration from the general ledger is typically part of the initial scope. The usual candidates for building additional feeds after the first project phase are the sub ledgers related to fixed assets, accounts receivable, and accounts payable. However, which integrations give the most “bang for your buck” depends on your line of business and specific company requirements.*

*Note that adding multiple general ledger feeds introduces additional complexities beyond the scope of this article. Please consult with your Oracle partner before adding to your application.

In some cases, the greatest efficiencies in your existing reconciliation process are gained by utilizing the power of ARCS Transaction Matching. This module is better suited to handle massive data volumes at a transactional level. As an example, instead of performing just a reconciliation of the balance sheet’s intercompany balances in ARCS Reconciliation Compliance at the end of the month, an enhancement could be to perform a daily matching process in ARCS Transaction Matching to clear up issues in real time as they arise. This simplifies the month-end reconciliation. ARCS Transaction Matching is a powerful supplement to an existing reconciliation system and continues to receive special attention from Oracle, as seen with the major release of new functionality in Patch 1805.

Just as there are many ways your company can change, ARCS can be modified to match your needs even in a live application. However, sometimes changes are more fundamental than a bit of tweaking, such as an acquisition or the introduction of a new, company-wide general ledger. Or, perhaps, you are just not satisfied with the solution design. Join me in the next post as we discuss the dangerous topic of redesign in ARCS – what is possible…and what it costs.

In the next post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, learn how redesign IS possible in ARCS.

*Screenshots taken from the patch 1806 release.

Don’t Let Incremental Overtime Plague Your Healthcare Organization!

Get to the Root Cause: Increase Productivity and Patient Care While Reducing Labor Costs

The Causes and Consequences of Incremental Overtime

Incremental overtime may be costing your healthcare organization thousands of dollars unnecessarily and resulting in decreased employee morale and poor productivity, so it’s important to understand its root causes by gaining the ability to track overtime. A Labor Productivity/Labor Management solution that delivers key analytics provides specific answers to the root causes of incremental overtime. Common causes include:

  • Early clock-in/late clock-out
  • Inability to complete required tasks by end of shift
  • Shift transition conflicts (i.e. last minute attending to patient needs or handoff not yet completed)

The Solution and its Benefits

A Labor Productivity solution provides labor-hour data so that ratios can be derived based on each organization’s definition of incremental overtime. This leads to a clear understanding of the root causes of incremental overtime so that corrective action can be taken, including:

  • Ensure management visibility at shift changes
  • Provide employee coaching and staff meetings to aid time management skills
  • Provide daily reports/analysis to managers to establish a protocol for handling incremental overtime risks
  • Designate a synchronized clock for employees to rely on (e.g. a department wall clock)
  • Educate employees on OT authorizations and cite repeated behavior in performance evaluations

Incremental Overtime 1

By addressing the causes of incremental overtime using data provided by a Labor Productivity solution, providers can deliver great patient care while decreasing labor costs by thousands of dollars and increasing productivity.

Incremental Overtime 2