Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Queries

Welcome back to the Profitability and Cost Management out-of-the-box features series!

Here, you’ll gain the insight needed to fully leverage the features bundled with an Enterprise Cloud Subscription that includes Profitability and Cost Management. The focus of this post is PCM queries – artifacts that represent or extract data in an easily consumable format.

By the end of this blog post, the following topics should be familiar to the reader:

  1. Define PCM Queries
  2. Queries Use Cases
  3. How to Launch Queries in PCM
  4. Query Options
  5. Data Extract Format
  6. Common Errors and Warnings
  7. Alternate Uses of PCM Queries

*The contents of this blog post are based on the standard Bikes (BkML30) application. Deploying the PCM Demo Bikes application can be achieved via the PCM landing page — “Creating a Sample application button” (from version 19.06 onwards).

1. What is a PCM Query? 

PCM Queries are predefined statements with execution mechanics like Smart View retrievals.

Queries can be launched in one of three ways:

  1. Via the PCM Graphical User Interface (GUI)
  2. Automatically through EPM Automate/REST API commands
  3. Within Dashboards and Intelligence analysis reports (covered in greater detail in this previous post).

2. Queries Use Cases 

Queries are versatile artifacts that have a list of use cases limited only by the user’s imagination. The most common use cases are:

  1. Data Validation – leveraged both for input data as well as for post-allocated results. Queries can be created and stored in a PCMCS instance. Their definition is similar to a Smart View query, with Columns, Rows, and Point of View (POV) selections. More details are found in the Query Options section. PCM Queries have drill-through capability – applicable only to base level queries, leveraging the Cloud Data Management functionality.
  2. Driver/Adjustment Data Entry Template – while queries do not rise to the capabilities of a PBCS/EPBCS web data entry form, they manage to solve the issue of “directional intersections” in an elegant manner. By defining the base level intersections where driver data should reside and storing that query definition, users avoid the need for offline sheets for data entry.
  3. Refined Data Clear Selection – queries can be leveraged to trigger narrow or specific data clears aimed at replacing partial data sets. During a clear POV action, users can select a predefined query to restrict the clear scope. This feature optimizes data loads enabling users to restrict the replacement of input data to only those intersections that are required to be replaced. Think of it as a predefined FIX statement or a predefined tuple.
  4. Simplified Task Lists – an example of this capability is explained in detail within the “Alternate Uses of PCM Queries” section of this blog.
  5. Journal Entry or Data Warehouse export – formatted data exports that can be leveraged as Journals within a GL submission process. These are .csv extracts without any custom formatting functionality such as headers, footers, record counts, or date/time stamps.

3. How to Launch PCM Queries 

There are several Graphical User Interfaces (GUI) as well as automation options to launch queries.

1.  Intelligence Menu Section – clicking the query name opens a Smart View connection within an Excel session, prompting users to enter their Cloud credentials. If the Excel session is not terminated, credentials will persist for all subsequent query launches.
Blog Post.Alec Intelligence Menu Section Picture

From the Actions button, users can also launch a direct .csv export of each query. The exported file will be placed within the File Explorer section and is available for download. Users can define the number of decimals they choose to extract – up to a maximum of 7 – and whether to perform a base-level export or an aggregated data export.

2.  Manage Queries Section – this menu includes all the capabilities found within the Intelligence menu section along with the ability to edit, delete, or create new queries.

3.  EPM Automate Command – if the desire is to launch a query and generate a .csv file in an automated manner, the requirement can be achieved by executing the following command:

epmautomate exportqueryresults APPLICATION_NAME fileName=FILE_NAME [queryName=QUERY_NAME] [exportOnlyLevel0Flg=true]

The .csv file generated is very similar to a data warehouse extract file, with all dimensions displayed in columns and separated by a space delimiter.

By omitting the queryName parameter, the automation will execute a full base-level data extract of the PCM application in the native ASO format (non-columnar, optimized for native ASO data loads).
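For illustration only, the two forms of the command might look like the lines below when run against the Bikes demo application; the application name, query name, and file names are placeholders based on examples used elsewhere in this post, not values from the product documentation:

epmautomate exportqueryresults BksML30 fileName="Profitability_MultiDimension.csv" queryName="Profitability - MultiDimension" exportOnlyLevel0Flg=true

epmautomate exportqueryresults BksML30 fileName="Full_Level0_Extract.txt"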

Alternatively, if the query must be used in a targeted data clear, the request can be launched via automation:

epmautomate clearpov APPLICATION_NAME POV_NAME [QUERY_NAME] PARAMETER=VALUE [stringDelimiter="DELIMITER"]

Example:

epmautomate clearpov BksML 2019_Jan_Actual queryName=BksML_2019_Jan_clear_query isManageRule=false isInputData=false isAllocatedValues=false isAdjustmentValues=false stringDelimiter="_"

When using targeted data clears, no other parameters can be enabled, such as isManageRule, isInputData, isAllocatedValues, or isAdjustmentValues.

4.  REST API Command – just like EPM Automate, REST API is used for automation (lights-out processing). EPM Automate leverages REST API in the background. A full comparison of REST API and EPM Automate is beyond the scope of this post; however, one of the main differences between the two is the enhanced logging level available with REST API, which is why implementation partners may favor REST over EPM Automate.

https://<SERVICE_NAME>-<TENANT_NAME>.<SERVICE_TYPE>.<dcX>.oraclecloud.com/epm/rest/v1/applications/Ex3F3/jobs/exportQueryResultsJob

{"queryName": "Profitability – Product","fileName": "ProfitabilityProduct2019.txt","exportOnlyLevel0Flg":"true"}

The syntax for a targeted data clear is the following:

https://<SERVICE_NAME>-<TENANT_NAME>.<SERVICE_TYPE>.<dcX>.oraclecloud.com/epm/rest/{api_version}/applications/{application}/povs/{povGroupMember}/jobs/clearPOVJob

{"isInputData":"true","queryName":"myQueryName","stringDelimiter":"_"}
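These job submissions can also be scripted. The PowerShell snippet below is a minimal sketch of posting the exportQueryResultsJob payload shown earlier using basic authentication against the v1 endpoint; the service URL, application name, user, and password are placeholders rather than documented values:

# Placeholder connection details -- replace with your own instance values
$baseUrl  = "https://SERVICE-TENANT.SERVICETYPE.dcX.oraclecloud.com"
$user     = "serviceadmin@example.com"   # assumed user name (may require the identity domain prefix)
$password = "MyPassword"                 # assumed password; prefer a secure credential store in practice

# Build the basic authentication header
$pair    = "{0}:{1}" -f $user, $password
$token   = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$headers = @{ Authorization = "Basic $token" }

# Payload mirrors the exportQueryResultsJob example above
$body = @{
    queryName           = "Profitability - Product"
    fileName            = "ProfitabilityProduct2019.txt"
    exportOnlyLevel0Flg = "true"
} | ConvertTo-Json

# Submit the job and inspect the returned job details
$uri = "$baseUrl/epm/rest/v1/applications/BksML30/jobs/exportQueryResultsJob"
$response = Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body $body -ContentType "application/json"
$response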

4. Query Options 

PCM has a few display and data extract options that can be stored with the query, and more that can be selected at run time via the GUI or through automation scripts. The settings can be separated into two categories:

1. Optional Query-Stored Settings

Option 1: Use Aliases – if not selected, the member name will be used instead of the alias.

Option 2: Suppress Missing Data during execution – if not selected, rows that contain no data will be included in the results.

Option 3: Include Attribute Dimensions

Option 4: Order of columns (ignored when a granularity override is selected at run time)

2. Mandatory Query-Stored Settings

Option 5: Column/Row Selection

Each dimension reference must indicate whether it is to be used in the Row, Column, or Point of View (POV). It is possible to save queries with no POV reference, the only mandatory selections being those of Columns and Rows.

Any dimension member selection marked as POV will be displayed either in the POV menu/floating box within Smart View or as the header record content if the POV box is disabled in Smart View.

Blog Post.Alec Mandatory Query-Stored Settings

This is a screenshot of the alternative with the POV box disabled. Users will still see a representation of all dimensions that were part of the POV toggle box when the POV was enabled.

Blog Post.Alec Mandatory Query-Stored Settings2

The selection of Row, Column, and POV will be bypassed during data extracts, whether launched through the menu or via the EPM Automate or REST API commands. All data extracts will list out the members referenced in the selection for each dimension followed by a single data column.

3. Optional GUI Run-Time Settings:

Option 6: Export only level-0 data. This will force the query to produce base-level data intersections for all members where a base level has not already been selected in the query. Depending on the size and granularity of data, the query can take anywhere from 30 seconds up to several hours. Create multiple queries to support the larger data extracts, or define the right level of granularity required for the target system, to avoid slow extracts or even failures when exceeding the 5-million-record limit.
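One way to stay under that limit, as suggested above, is to split the extract across several predefined queries and launch them one at a time. Below is a minimal PowerShell sketch of that pattern; it assumes an EPM Automate session is already signed in and on the PATH, and the application and query names are placeholders:

# Placeholder application and query names -- adjust to your own model
$app     = "BksML30"
$queries = @("Profitability - Product", "Profitability - Customer", "Profitability - Channel")

foreach ($q in $queries) {
    # One .csv per query keeps each individual extract smaller
    $file = ($q -replace "[^A-Za-z0-9]", "") + ".csv"
    epmautomate exportqueryresults $app "fileName=$file" "queryName=$q" exportOnlyLevel0Flg=true
    if ($LASTEXITCODE -ne 0) { throw "Export failed for query '$q'" }
}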

Option 7:  Rounding precision – extends to a maximum of 7 decimals.

Blog Post.Alec Optional GUI Run-Time Settings.png

Generating data with an increased number of decimals should be aligned with the application’s decimal precision setting, as there is no point in generating a data extract with 5 decimals when the application is configured to support only up to 2. This configuration option is available in the Application menu and can be revisited and updated at any point in time.

If neither of these two optional GUI run-time settings is selected, the report will pull the level of granularity established within the query, whether set up at base-level or aggregated-level intersections.

4. Optional Automation Settings: 

Option 8: Changing the precision of data extracts when launching queries via EPM Automate or REST API can be achieved via the parameter roundingPrecision with values ranging from (-6) to 7. By default, the EPM Automate exportqueryresults will extract data values with 2 decimal characters. Consider whether or not extracting data with multiple decimals is required, especially if the application Allocation Precision parameter has not been set to higher than the standard value of 2 decimals.

Blog Post.Alec Option8
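As an illustration, a run-time precision override might look like the line below; the application, query, and file names are placeholders:

epmautomate exportqueryresults BksML30 fileName="ProfitabilityProduct_7dec.csv" queryName="Profitability - Product" roundingPrecision=7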

5. Data Extract Format 

The Smart View query extract format will stay constant regardless of the menu from which it is launched. The .csv file format, however, has a few variances depending on the options selected either during build or during run time.

The .csv format file generated will lose references to POV/Column/Row. As mentioned previously, the resulting file will look like a data warehouse extract – very similar to what can be achieved via an export script within a Planning Cloud Business Rule or Essbase export calc script.

If end users choose to perform a base-level extract override during run time or through automation commands, the .csv extract will lose the predefined order of the dimensions set up in the query definition.

Regardless of the level of the data extracted (upper or base level), all members within a .csv file extract will be enclosed in double quotes. The delimiter will be “tab” and cannot be overridden or replaced from within the PCM GUI.
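Purely as an illustration of that layout (the member names below are invented and the tab delimiter is shown here as whitespace), a base-level extract row would resemble:

"FY19"	"January"	"Actual"	"Net Income"	"Road Bikes"	1234.5678900
"FY19"	"January"	"Actual"	"Net Income"	"Accessories"	87.6543210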

This is an example of the query “Profitability – MultiDimension” that is available with the Demo Bikes model. The query extract was launched via the GUI with 7 decimals and with no granularity override (no base level extract option selected at run-time):

Blog Post.Alec Data Extract Format

This is an example of the same query “Profitability – MultiDimension”– launched with 7 decimals and base-level members selection/override at run-time:

Blog Post.Alec Data Extract Format2

When comparing the above screenshots, the order of the columns was clearly altered. This is due to the base-level override selected during run time, and it is an important detail in case the .csv file must be used as a data feed to an external system.

6. Common Warnings and Errors 

Although this section does not represent an exhaustive list of errors, it covers the most common query-related issues a user may encounter along with the corresponding solution.

Warning message: “Query has invalid members. Save the Query to permanently remove the invalid references. To validate the entire Point of View, go to Model Validation without saving.”

Blog Post.Alec Common Warning and Errors

This is a generic message that can indicate either a warning or a true error.

A potential cause for this warning message: the row selection references a top-of-the-house (so-called Generation 0) member, in other words, the dimension name itself. While such queries may still run and produce results, the warning will pop up every time the query is launched via the GUI.

In order to fix this warning, the row selection must reference a member or subset of members other than the Generation 0 / top-of-the-house member.

A second cause for the same warning message is a true error resulting from a member referenced in the query that has been renamed or removed from the application. In this case, the query ends up pointing to a Generation 0 / top-of-the-house member, but that selection was not intentional. In most cases when this warning occurs, the obsolete member name reference was automatically removed and either replaced with the top level of the corresponding dimension or simply left blank:

Blog Post.Alec Second Cause for the same warning message

If the user wants to validate which reference was removed due to a metadata update, there is an option to run Model Validation* reports on queries to find out more details:

Blog Post Alec Model Validation 1

*More details on the Model Validation tabs and options will be covered in a future blog post.

In this example, the member STAT1201 has been renamed to STAT120. Because the query references STAT1201, the user is prompted to update the Account reference selection within the query.

CAUTION: if the user receives this warning and saves the query before launching the Model Validation report, the previous member selection reference (which is now obsolete) is removed and replaced with the Dimension top member. This means that short of restoring the query from a prior snapshot, the user will no longer have a prior reference of the member that has been removed.

Error message: “The number of query result cells exceeds the limit set by the QUERYRESULTLIMIT.”

Blog Post.Alec Errors

Reference the advice given in Option 6 (Optional GUI Run-Time Settings section) to reduce the size of your query. This query limit cannot be manually updated by end users or administrators of the Profitability application.

7. Alternate Uses of PCM Queries 

One of the long-awaited features in PCM is the ability to create forms, menus, and task lists like those within Planning and Budgeting Cloud applications. In the absence of such features (which should be coming in future updates), queries can become an easy-to-use alternative. In the prior sections, we explored how queries can be leveraged as data entry guidance mechanisms similar to forms, as a data extract tool, and as a narrow-scope data removal tool. The one feature not discussed yet is the Task List alternative.

By creating predefined queries and listing them in a specific sequence, administrators can dictate the order of operations for either the setup/data load/pre-allocated values or the validation of a PCM model after the allocation process is completed. The listing below is very similar to the concept of Task Lists in PBCS/EPBCS, and while PCM administrators cannot customize this list by user ID, it still offers the step-by-step guidance that an end user may find useful.

Blog Post.Alec Alternate Usues of PCM Queries

A few tips to keep in mind when leveraging queries in a Task list format:

  1. There is a limit on the number of characters for each query name. The limit is clearly enforced when editing the name of an existing query. Users can exceed the character limit when building a new query from scratch, but this results in an ADF interface error message; it is most likely a bug that will be addressed in future releases.
  2. The order of the queries is based on the name – descending or ascending. There is no option to customize query order (similar to up/down arrows that allow us to move tasks in Planning or PBCS). This restriction forces the naming convention to be similar to the above example.
  3. There is no option to restrict access at the query level. There are security restrictions/data grants that can be set up for each user to restrict access to the data within the PCM app, but there is no restriction to disable or not display a query or list of queries.

8. Conclusion on PCMCS Queries 

PCM Queries offer a wide array of functionalities that enable users to interact with PCM data throughout the entire processing cycle.

Queries can be used as data entry forms to define data entry templates for drivers or adjustments when launched via Smart View. By storing the intersections where data is expected to be entered in a query, end users don’t have to worry that they are sending values to the incorrect intersection. Once the query is saved, it can be leveraged multiple times. The references via formulas or hierarchy relationships (Parent, Descendant, Level 0, etc.) will dynamically build the latest metadata selection with each query launch, eliminating the risk of not submitting a driver value because an intersection was not displayed in a Smart View selection.

Queries can also be leveraged as a data export mechanism for target systems. The .csv format extract, at base level or upper level, in column format, can be consumed by most, if not all, ETL tools. There are certain restrictions with queries for data export, such as the number of records that can be extracted at one time, as well as considerations when using dynamic member references. In such cases, the ASO Essbase reporting leading practices must be dusted off and put to good use. What is common sense in the ASO Essbase world should be a good benchmark in the PCM world as well.

Predefined queries are also a good use of resources when troubleshooting allocation results or analyzing data. While PCM queries do not have prebuilt intelligence capability to call out differences month-on-month – a feature that is present in Account Reconciliation Cloud Solution – they can support validation and troubleshooting efforts after the PCMCS allocation process is completed.

The “Out-of-the-Box” series is slowly closing the list of items available in the PCM Intelligence screens, but we are not done yet with all that PCM has to offer!

Keep a close eye on this space for future posts on Cloud Data Management, Model Validation, and Backup and Restore features within PCMCS.

For comments, questions or suggestions for future topics, please reach out to us at infosolutions@alithya.com. Subscribe to receive notifications about new posts about Cloud updates and other Oracle Cloud Services such as Planning and Budgeting, Financial Consolidation, Account Reconciliation, and Enterprise Data Management.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  YouTube

The Seven Challenges of Capital Programming and Overcoming Them

At Alithya, our Higher Education, Healthcare, and Transportation clients often ask for help streamlining their capital request, prioritization, and approval systems.  As capital projects become more expensive with investments in modern facilities and equipment, many teams are looking to optimize their capital investment processes and to make sure their high-profile, high-spend, high-impact investments are well managed with good stewardship.

The numbers are LARGE!  Below are some 2017 capital spending statistics according to the US Census:

  • Higher Ed – $36.7B a year
  • Healthcare – $104.9B a year
  • Transportation – $110.1B a year

With capital spend growing, organizations ask for help to revamp capital programming when they see their annual capital portfolio reach 100+ capital requests per year, culminating in $100M+ of capital spend per year.  When capital programs get to this scale, organizations can no longer effectively manage portfolios merely utilizing Excel and Email.  If your capital program is locked in Excel, it’s too easy for the organization to make the wrong investment or overspend on in-flight projects.

When capital spend hits this level, we see the following recurring themes:

  • An opaque capital investment pipeline
  • Cross-functional teams are not involved early enough in the capital request process
  • Capital requisition process is annual (not real-time)
  • There is little linkage between capital requests and the organization’s capital constraint
  • Capital spending not linked to the organization’s strategic plan
  • Incomplete capital requests and surprise project overruns
  • The process is too manual – utilizing Excel, Word, and Email

Challenge #1:  An Opaque Capital Investment Pipeline

Treasury and Finance have limited visibility into the longer-term needs of departments and business units.  Finance only gets an annual glimpse during the capital request cycle when hundreds of requests surface and need to be quickly prioritized.  It’s hard to really understand, prioritize, and guide in a time-compressed period.

Ideal State: Departments continuously enter their capital needs into a sandbox, so they can begin thinking strategically and build their own “mini investment roadmaps.”  During the capital cycle, a department can cherry-pick select requests and submit them for approval.  If departments have been working on their capital needs throughout the year, the capital prioritization process will be much smoother.  At the same time, Treasury and Finance can aggregate sandbox information to look for themes, trends, and emerging needs.

Challenge #2:  Cross-Functional Teams Find Out About Proposed Investments Too Late

Without a system helping to drive the capital program process, departments don’t always know who to involve in various capital requests and when to involve them.  Often, support functions like IT, Facilities, or Purchasing get notified to ‘start work’ on a project into which they had no input or visibility.

Alithya has recognized a few real-world examples:

  • A department receives approval to purchase a piece of equipment, but no one realized the Facilities Department needs to build out a room.
  • Two departments are soliciting approval to purchase the same item, but the Purchasing Department finds out too late – missing an opportunity to take advantage of better purchasing terms.

Ideal State:  As capital requests are promoted out of a department’s sandbox, cross-functional teams get visibility to new requests via dashboards that highlight new, relevant requests.  If there is a question about something new on the list, they can raise concerns, or acknowledge that they have seen the request.

Challenge #3:  Capital Request Process is Not Real-Time

Too often, the capital request process runs annually, a capital program with a list of projects gets approved, and the fiscal year starts a few months later.   Often, business units, functions, and VPs are left to self-administer within their capital allotment contingencies.  However, much like the previous two challenges, the rest of the organization is not getting a real-time view of the current capital need.

Ideal State:  An annual request process establishes each function’s routine capital contingency.    A system should administer a standard request workflow for ‘routine requests.’   As the fiscal year progresses, routine investments should be requested and adjudicated, and then communicated to cross-functional teams so they are afforded the opportunity to review items that may impact them.

Challenge #4:  Little Linkage Between Capital Pipeline and Organization’s Funding Capacity

As organizations think about capital improvements several years out, most do not have a formal methodology to look at the various capital structure and funding scenarios that drive their capital capacity.

Ideal State:  Implement a capital funding capacity model to evaluate various scenarios in order to set the capital program budget. The typical scenario modeling includes analyzing options such as adjusting fees, fund raising, issuing debt, or drawing from endowments.   This scenario model will drive the capital budget limits within the annual request process and allow the treasury function to drive its capital raising plan.

Challenge #5:  Capital Spending Not Linked to the Organization’s Strategic Plan

Organizations need to make important decisions around large capital projects that will impact their finances for many years into the future.  However, many organizations do not integrate their capital plan into their strategic plan. This prevents them from forming a holistic understanding of the crucial impact these decisions have on their future financial statements and key metrics.

Ideal State:  It is important to provide a linkage into the strategic plan so that senior management can gain insight into how capital allocation decisions impact future cash flows and requirements for additional debt or bond issuances.  Incorporating debt covenant calculations and credit rating metrics into the strategic plan can provide further understanding into the organization’s future debt and capital capacity.  Integrating scenario analysis into the strategic plan around the inclusion/exclusion of unapproved capital projects and the timing of the cash outlays associated with these projects can help organizations optimize their mix of capital projects given their financial projections and associated constraints.

Challenge #6:  Incomplete Capital Requests and Surprise Budget Overruns

Often, capital spend is not prioritized and managed consistently, leading to the wrong projects being approved, projects going over budget, and not all capital costs for a project being estimated.

Ideal State:   Define a consistent capital request, prioritization, and approval process that can be universally applied to the organization.   Establish rules around how ‘routine capital’ and ‘strategic investments’ should be facilitated by a formal request process, an opportunity to prioritize, and an approval workstream within the organization. Determine when in the prioritization process, cross-functional teams such as IT, Facilities, and Purchasing should receive visibility into a potential investment.

Challenge #7:  The Process is too Manual – All in Excel, Word, and Email

Even the largest organizations are managing the capital request process via Excel, Word, and Email, with Finance building up spreadsheets of requests.  Once the capital program gets prioritized, approved, and project budgets issued, ongoing projects are often managed in Excel with finance managing budget transfers between projects, actuals vs. budget reporting, and placing an asset in service.

Ideal State:   A capital program system should download cash commitments and cash disbursements from the ERP nightly, and at a level comparable to approved project budgets.   If a project is going overbudget, a department should submit a new incremental funding request so the organization can formally decide to continue to fund or terminate the project.  Similarly, as projects near completion, a system should automatically flag projects to close.

Alithya’s Solution

We are focused on helping clients with this key business process and have established Alithya’s Capital Portfolio Planning, a repeatable implementation that shortens the time it takes to address the challenges outlined in this post. By using the flexibility of Oracle’s EPM Planning tool, we can quickly implement a robust capital programming process.

David Pabst - The Seven Challenges of Capital Programming and Overcoming Them - 8-27-19 - Image 1

To learn more about streamlining your capital programming process, please read our solution brief or contact Alithya to request our capital programming assessment.

Contributing to this post are David Pabst, Practice Manager, Capital Planning Product Manager, and Andy Starks, Practice Director, Strategic Modeling Architect

For comments, questions or suggestions for future topics, please reach out to us at infosolutions@alithya.com. Subscribe to receive notifications about new posts about Cloud updates and other Oracle Cloud Services such as Planning and Budgeting, Financial Consolidation, Account Reconciliation, and Enterprise Data Management.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  YouTube

Alithya’s PowerShell Accelerator for Ground-to-Cloud EPM

Oracle provides a powerful toolset for interaction with the EPM Suite through a set of REST APIs and a downloadable product called EPM Automate that provides for command line access to a significant portion of the REST APIs.  At many of our customers, we implement scripts to orchestrate and schedule processes.  These processes execute EPM jobs, transfer metadata and data to the EPM Suite, and download data from the EPM Suite.

We believe in standards to facilitate common implementations across our large customer base and to manage the evolving nature of Oracle’s EPM Suite.  Standardization improves our ability to support our customers, and we have taken steps to consolidate our scripting into a single preferred service accelerator that provides high-quality, implementation-proven script utilities in a packaged delivery.

When we started this effort, we established a set of criteria for what we wanted to accomplish:

  1. Provide scripts that work in either Windows or Linux environments.
  2. Apply scripting best practices in a packaged delivery.
  3. Improve quality, delivery performance, and supportability by having a set of scripted functions that are unit tested prior to first use at a customer.
  4. Provide a standard approach to setup of jobs including signing into EPM Automate.
  5. Provide a standard logging framework.
  6. Provide exception handling including emailing.
  7. Provide a standard approach to archival of transferred files.
  8. Provide a standard approach to post procedure clean-up of temporary files.
  9. Provide ability to run scripts individually and together.
  10. Allow the calling scripts to be easily readable.

Why PowerShell?

Establishing the programming language was foundational and involved conversations about batch, Bash and PowerShell.  Although each language has advantages and weaknesses, the Product team selected PowerShell for the following reasons:

  • Robust interpreted programming language that includes standard capabilities such as variables, functions, loops, exception handling, etc.
  • Native on Windows environments with a nice development environment, PowerShell-ISE.
  • Can invoke commands and batch scripts easily.
  • Intended future direction for Microsoft with strong on-line support.
  • Open-sourced and available on Linux; testing showed that little or no modification to scripts is required for use in Linux environments.

What are we Providing?  A Working Example

We provide a packaged set of utilities called EPMAutomatePowerShellUtilities as an accelerator to development of ground-to-cloud scripts.  EPM Automate is in the name because we primarily use EPM Automate to accomplish an action, but also use the REST APIs when EPM Automate does not provide the required action.

To highlight the accelerator, let’s document a working example with a customer implementing a Profitability and Cost Management Cloud Service (PCMCS) solution.

The customer is providing dimensional data and content data files and needs the following actions:

  • Upload Dimensional Data and integrate into PCMCS
  • Upload Content Data and Run Allocations
  • Download Post Allocated Results
  • Run all the above as a Single Script

First, the Boiler Plate

All customer scripts have the following boiler plate to provide common behavior

try
{
    . $PSScriptRoot/config/properties.ps1
    . $epmautomatepowershellutilities/Utilities.ps1

    Pre-Job-Run $Profile

    #Place your actions here!
}
catch
{
    Email-Exception
}
finally
{
    Post-Job-Run
}

What is going on?

  • try … catch … finally – allows for exception handling and script resolution in a common pattern. Standardized exception handling improves the quality of the ETL process by ensuring that support personnel are notified via email for any process execution stoppage.
  • . $PSScriptRoot/config/properties.ps1 – loads the variables required to run the scripts. For example, we load $ApplicationName, which is the PCMCS application with which we are working.  The properties.ps1 is a text file that requires very little maintenance after initial setup (a minimal sketch is shown after this list).
  • . $epmautomatepowershellutilities/Utilities.ps1 – loads all the custom functions we provide.
  • Pre-Job-Run $profile – sets up job and makes it ready to run including signing into EPM Automate.
  • #Place your actions here! – this is where the custom actions are placed. See Scripts 1, 2, 3, and 4 below for examples of custom actions.
  • Email-Exception – when an exception occurs, then email an error message including a zip of the temporary folder that contains process log and any other files that were created by custom actions.
  • Post-Job-Run – clean up after custom actions are complete by signing out of EPM Automate and optionally removing temporary folder (configurable).
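As referenced above, a minimal properties.ps1 might look like the following sketch; $ApplicationName and $inboxFolder appear in the scripts later in this post, while the remaining variable names and values are assumptions that will vary by engagement:

# properties.ps1 -- environment-specific settings loaded by every job script
$ApplicationName = "BksML30"                  # PCMCS application we are working with
$inboxFolder     = "C:\EPMJobs\Inbox"         # local folder containing files to upload
$archiveFolder   = "C:\EPMJobs\Archive"       # assumed: destination for timestamped copies of transferred files
$epmUrl          = "https://SERVICE-TENANT.SERVICETYPE.dcX.oraclecloud.com"   # assumed: EPM Cloud URL used at sign-in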

Script 1: UploadDimensionData.ps1 – Upload Dimensional Data and Integrate into PCMCS

We won’t repeat the boiler plate; instead, let’s focus on the custom actions:

Upload-DimData-And-Load $ApplicationName "$inboxFolder\Dimensional Data"

Enable-App $ApplicationName

Deploy-Cube $ApplicationName -KeepData -RunNow

Readability is a huge factor here.  We really don’t need to explain what these custom actions are doing, but let’s highlight a couple of things.  First, the called function often looks a lot like a corresponding EPM Automate command; for example, “Enable-App” corresponds to the EPM Automate command “enableApp.”  Second, we provide more complex calling functions such as “Upload-DimData-And-Load” to perform a set of common actions that run multiple commands – in this case the upload of multiple files – and then run the loadDimData command for all the uploaded files.  Behind the scenes, an archive copy with a timestamp is placed in an archive folder for each of the uploaded files.
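To illustrate the timestamped archiving behavior described above, here is a hedged sketch of what such a helper could look like; the function name and parameters are illustrative, not the actual utility code:

function Archive-File {
    param(
        [string]$Path,           # file that was just uploaded
        [string]$ArchiveFolder   # destination for timestamped copies
    )
    # Build a name such as Drivers_20190815_143000.csv and copy the file to the archive folder
    $stamp = Get-Date -Format "yyyyMMdd_HHmmss"
    $name  = [System.IO.Path]::GetFileNameWithoutExtension($Path)
    $ext   = [System.IO.Path]::GetExtension($Path)
    Copy-Item -Path $Path -Destination (Join-Path $ArchiveFolder "${name}_${stamp}${ext}")
}

# Example usage (paths are placeholders):
# Archive-File -Path "$inboxFolder\data\Drivers.csv" -ArchiveFolder $archiveFolder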

Script 2: UploadData.ps1 – Upload Content Data and Run Allocations

Again, without boiler plate:

Clear-POV $ApplicationName "VR_Working;SC_Forecast" -InputData -AllocatedValues -POVDelimiter ";"

Copy-POV $ApplicationName "NoVersion,SC_Forecast" "VR_Working,SC_Forecast" -isManageRule

Upload-Data-And-Load -ApplicationName $ApplicationName -Path "$inboxFolder\data" -DataLoadValue "OVERWRITE_EXISTING_VALUES"

Run-Calc -ApplicationName $ApplicationName -ModelPOV "VR_Working;SC_Forecast" -ExeType "ALL_RULES" -ClearCalculated -ExecuteCalculations -RunNow -isOptimizeReporting -POVDelimiter ";"

You’ll see a mix of EPM Automate analogs and a complex function that uploads all the content data and loads them into PCMCS.  Again, the archival of uploaded files occurs during the Upload-Data-And-Load function.

Script 3 – DownloadResults.ps1 – Download Post Allocated Results

The custom actions are:

Export-Query-Results $ApplicationName "PCMCSDataExport.txt" "Post_allocated"

Download-File "profitoutbox\PCMCSDataExport.txt"

Script 4 – JustDoIt.ps1 – Perform all Three Steps

The boiler plate is built so that the Pre-Job-Run, Email-Exception, and Post-Job-Run understand when they are inside a calling script.  This allows you to create a parent script to run multiple other scripts without modification of the called scripts.

Focusing on the custom actions:

. $PSScriptRoot\UploadDimensionData.ps1

. $PSScriptRoot\UploadData.ps1

. $PSScriptRoot\DownloadResults.ps1

In this parent script, EPM Automate is logged into only once.  Any exception results in a full stop and a single email sent with an integrated process log.

Final thoughts

The accelerator provides a high-quality framework allowing Alithya to focus on customer requirements and the actions needed to integrate with Oracle’s Cloud EPM suite.  The low level, expected behaviors, such as exception reporting, logging, emailing, and file archival are available on day 1 of the engagement.  With a focus on readability, these scripts are easily transferred to the support organization for long-term sustainability.

For long-term support, the customer can update the REST API version via the properties.ps1 file, and Oracle is providing EPM Automate updates that do not break prior scripts.  If EPM Automate has a breaking change, the customer can update the utilities themselves or request an updated set from Alithya.  Feedback from our customer base is positive with specific comments about the readability of the utilities and quality of initial deployment.

Overall, the accelerator is reducing the effort and time to deploy ground-to-cloud processes while improving the quality of deployment by reducing the time spent creating and debugging scripts.

Additionally, long-term support costs are lower through standardization of implementation patterns that allow support personnel to focus on what the script is accomplishing rather than how it is accomplishing it.

For comments, questions or suggestions for future topics, please reach out to us at infosolutions@alithya.com. Subscribe to receive notifications about new posts about Cloud updates and other Oracle Cloud Services such as Planning and Budgeting, Financial Consolidation, Account Reconciliation, and Enterprise Data Management.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  YouTube

We’ve Got You Covered: Producing Flat-File Extracts out of Cloud Data Management

As an EPM Administrator or Implementation Specialist, we have all had that moment when someone comes to us and asks for the dreaded extract out of an Enterprise Performance Management (EPM) application.  Depending on the system combination (Hyperion Financial Management (HFM), Planning, etc.) and the file layout specifications, this can be tricky.  Layer in the concept of a Cloud application, and things have now gotten real!

In an on-premise installation of Financial Data Quality Management Enterprise Edition (FDMEE), we could use scripting within a “custom application” to build an end-to-end approach for delivering a flat-file extract for third party consumption.  With the release of version 19.06, Oracle has further enhanced this concept and brought it to Cloud Data Management (CDM). The Cloud application now provides the ability to design and produce a text file for downstream consumption in Oracle Cloud products (PCMCS, EPBCS, FCCS).  WHOA!

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 1

The Setup

I recently busted out the functionality and this is what I have discovered:  It’s crazy simple!

  1. Create a text file with your defined headers

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 2

  2. Create a target application and set your settings

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 3

3.  Create an import format

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 4

  4. Create a Location & DLR

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 5

  5. Create the desired Maps
  6. Run the Data Load Rule

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 6

It is that simple!  CDM produces a file that looks similar to this:

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 7

It can PIVOT!?

As crazy as it sounds, it can even pivot the data!  I find this extremely helpful as it is a common request to have twelve months of data in column format.  CDM leverages the PIVOT command of the database for this process and creates the pivot file with ease and efficiency.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 8

What does it do behind the scenes?

Behind the scenes, CDM appears to run a standard import and validation of the data, but it leverages a different set of workflow instructions.  The process does not consider unmapped items, which are left as blank fields in the output.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 9

It also does not permanently store any data in the CDM repository unless you want it to.  The documentation can be easily misinterpreted because you will see “fish,” but no data is stored (more on this later).  A quick review of the process details log shows that all the work is done in the “tDataSeg_T” table.  This is the “temporary working” table of Data Management, and it is cleared after/before each new run for optimal performance.  Since the data is never moved out of this table, it is never retained.  Even the export process that produces the output file pulls from the temporary table.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 10

A review of the Documentation shows that there are 3 main supported types for processing:

  • Simple (the option selected here) – Does all the work in tDataSeg_T and does not retain any data or archive maps. Be warned, though: it does retain the process details and “fish” status, which can look a bit strange.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 11

  • Full No Archive – Data is retained in tDataSeg only after the import step. Data is deleted after the export.
  • Full – All data is retained. Full process flows are supported (check rules, drill down, etc.).

That’s great, but my file is stuck in the Cloud!

Not really…let’s think this through in a workflow process.  When using Cloud applications, we might have an automation wrapper or a larger workflow process.  If not, we are using the graphical user interface (GUI), and we can access the file in two ways:

  1. Data File Explorer
  2. Process Details -> Download File option

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 12

If we are using a more automated approach, we just need to include additional steps to (a minimal scripted sketch follows the list below):

  1. Monitor the data load rule for completion
  2. Verify the status of that completion (do not proceed forward if it failed; do something different)
  3. Confirm that the file was created
  4. Download the file that was created
  5. Continue the automated routine
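As a minimal sketch of those steps, the PowerShell/EPM Automate snippet below assumes a session is already signed in; the data load rule name, periods, load modes, and outbox file name are placeholders rather than values from this example:

# Placeholder data load rule, periods, and modes -- adjust to your own setup
$rule = "TextExtract_DLR"
epmautomate rundatarule $rule "Jan-19" "Jan-19" REPLACE STORE_DATA
if ($LASTEXITCODE -ne 0) { throw "Data load rule $rule failed; stopping the routine" }

# Download the generated extract (assumed outbox location), then continue the automated routine
epmautomate downloadfile "outbox/TextExtract.dat"
if ($LASTEXITCODE -ne 0) { throw "Extract file was not found or could not be downloaded" }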

In Summary…

It is simple to produce a file using Data Management in the EPM Cloud products.  This is a welcome change that further enhances the product lines by delivering on client needs.  This allows us to build a simplified Cloud solution for a requirement that was previously only achievable on-premise.

If you need more information or have questions about this topic, email us at infosolutions@alithya.com. Subscribe to receive notifications about new posts.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  Youtube

Enterprise Data Management: Version 19.07 – Top 3 Added Features

The latest 19.07 release of Enterprise Data Management (EDM) contains a boatload, a plethora – no, make that oodles of new features (time to put Merriam-Webster away). The full release notes are documented here: EDM July 2019 Update

In this blog, I’d like to dive into 3 of my favorite features (so far) from this release.

#1: Create Request Items from Compare Results

I’m already using this feature at one of my current clients. I’ve always touted the “Compare” feature of EDM. It’s been one of EDM’s best features from the beginning and provides the ability to compare Missing Nodes, Relationship Differences, and Property Differences. The biggest gap?  Previously, when compare results were returned, there was no way to download the results or use them in a request to resolve the differences. While you could always fix differences manually with drag-n-drop or direct property updates, when your compare results return dozens or hundreds of differences, that is not a reasonable option.

Well, fear no more!  You can now create a request load file directly from the compare results. And it works as easily as you might have hoped it would:

  1. Run a Compare between two viewpoints.
  2. When the differences are returned, click “New Request.”
  3. Notice that new icon? Click that bad boy and a request load file will be automatically generated.

Kevin Black - EDM V19.07 - 7-24-19 - Image 1

4.  Now you can see that the request file has been attached to the request and the request items are added to your “shopping cart.” Make any additional changes, submit your request, and your viewpoints are synchronized! Easy peasy.

Kevin Black - EDM V19.07 - 7-24-19 - Image 2

NOTE

What is interesting is to analyze the request file that is generated from the compare results. You will notice it contains UPDATE and PROP_UPDATE actions. Why those? Well, PROP_UPDATE is used for property differences returned by the compare, and UPDATE is used for missing node and relationship differences. The UPDATE command is quite sneaky and powerful. Not only will it update node properties, but, depending on whether the node exists and whether the hierarchy set allows Shared Nodes, it will also perform an ADD, INSERT, or MOVE. Pretty cool.

#2: Property Editing

This enhancement not only provides property editing capabilities not available previously, but these edits can also be performed directly on the property without going through the App Registration wizard.

From the Properties card, Inspect the property you wish to edit. You’ll notice the Edit button is now enabled. From here, you can modify property default values, make the property editable or read-only, and modify the “Allowed Values” list if applicable.

No more stepping through the App Registration wizard to apply a simple property update!

Kevin Black - EDM V19.07 - 7-24-19 - Image 3

#3: Import/Export of Allowed Values

I’m so happy this feature is now available. My fingers, and my keyboard, thank you, Oracle!

From the same property editing Inspector dialog mentioned above, you can now modify the “Allowed Values” in a pick-list property without stepping through the “App Registration” wizard. Click the “+” sign to add new values manually. Not shown, but also available in the Actions menu, is the ability to delete or reorder existing list values.

But notice there is now an Import/Export capability. This utilizes a basic Excel file for mass upload/download of list values.

This will be a lifesaver for me at another project where I have Smart List and Attribute properties to build in EDM that contain dozens of list values.

Kevin Black - EDM V19.07 - 7-24-19 - Image 4

Below is a screenshot of the Excel file used to populate this property:

Kevin Black - EDM V19.07 - 7-24-19 - Image 5

Before I close, I’m going to cheat a bit and throw a shout out to one more enhancement in version 19.07…

Honorable Mention: Inspector Dialog Sizing

While custom resizing of the inspector dialog isn’t possible yet, the larger Inspector dialog size certainly makes it easier to view your data chain objects and reduces the amount of scrolling required. This is useful, especially when you’re viewing or reordering a bunch of properties in a viewpoint or node type!

That’s it for now. Be sure to check out EDM version 19.07 if you haven’t already. It contains additional helpful enhancements beyond the few I’ve highlighted here. Stay tuned for more upcoming EDM blog posts. If you need more information or have questions about this topic, email us at infosolutions@alithya.com. Subscribe to receive notifications about new posts.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  Youtube

Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Traceability

Traceability is the buzzword in any regulated industry. Being able to prove the numbers is crucial to all businesses, but it can be very time consuming and complex for companies that operate across multiple and diverse lines of business with a large pool of Channels, Services, Customers, or Products. Shared Services implementations require a clear understanding of the flow of costs.

Where is this cost coming from?

Why have I been charged so much more this month for the same service compared to last month?

These questions should be easy to answer. Unfortunately, not all profitability analysis technologies are able to support a quick turnaround for providing the required level of detail.

PCMCS has more than one option to easily provide much-needed answers.

The Rule Balancing report is one of numerous out-of-the-box (OOTB) features included with an Oracle Cloud Service subscription able to support data traceability and transparency. For more details about the type of information the report provides and to learn the ease with which it can be set up for your application, review this comprehensive blog post.

Besides Rule Balancing reports, PCMCS OOTB features support transparency within allocations and/or profitability models with Traceability maps.

The focus of the current post is how to access, build, and use Traceability maps.

The order in which I am covering the PCMCS OOTB features is directly related to the Intelligence menu options available in PCMCS.  As a recap, the 6 menu options are listed below:

  1. Analysis Views (how to create them, customize them, and use them here)
  2. Scatter Analysis (setup and configuration covered here)
  3. Profit Curves (usage and features covered here)
  4. Traceability
  5. Queries
  6. Key Performance Indicators

The contents of this blog are based on the standard Bikes (BkML30) demo application, so you can follow the step-by-step details without having to go through an app setup from scratch. You can load and deploy this application directly from your PCMCS Instance through a couple of clicks via the Application menu using the + / Create button.

Traceability – Intro

The traceability maps, whether in PCMCS or in on-premise HPCM, allow users to graphically visualize the allocation flow. A chosen business segment can be traced through the allocation steps, either backwards or forwards, starting from a predefined point. Images make up the map of a data point either flowing into the selection of members chosen by an end user to troubleshoot, or flowing out of that selection into subsequent allocation steps.

Alex Mlynarzek - Traceability - 5-21-19 - Image 1

Alex Mlynarzek - Traceability - 5-21-19 - Image 2

Traceability is a great tool for troubleshooting specific intersections of detailed data such as base level accounts against a specific department. However, when there is a need to identify patterns or troubleshoot allocation results at a higher level, the Standard Profitability (the first on-premise version of the Profitability module) Traceability maps are not geared to handle such requests. In order to perform a high-level analysis in Standard Profitability models, users would have to revert to Smart View or Financial Reports.

Being able to trace data at a summarized level of detail is the key difference between traceability in Management Ledger applications and traceability in Standard Profitability. Management Ledger allows end users to select the level within the hierarchy where they desire to launch or generate traceability, whether base level or otherwise.

Traceability – Setup

The starting point of any traceability map in Management Ledger is Model Views.  If you are interested in learning how to build and use Model Views, spend a few minutes reviewing this prior post.

List of steps necessary to launch a traceability report in Management Ledger applications:

  1. Select a valid Point Of View (POV). The POV must contain data in order to display any traceability results.
  2. Choose a prebuilt Model View – example: IT Support Activities.
  3. Select a tracing dimension which will represent the detail that is the focus of your analysis (Accounts, Departments, Entities, Business Units, Segments, etc). The selected tracing dimension determines the focus or scope of your analysis and will be the one dimension that is displayed at base level detail or any other generation within the hierarchy.
  4. Trace Forward and Use Generation Selection boxes are selected by default.  Not selecting “Trace Forward” allows users to perform a “Trace Backward” action; in other words, figure out how the model arrived at a data value for a selected intersection, rather than how a data value was allocated out from that intersection to other recipients.

Alex Mlynarzek - Traceability - 5-21-19 - Image 3

A report with the “Use Generation Selection” filter disabled will display the data at the base level for the Trace Dimension (in this example, Entity).

Note: If a message is received indicating the Flash Player version is not up-to-date, check that pop-ups are enabled on the page to allow the download of the required update.

Alex Mlynarzek - Traceability - 5-21-19 - Image 4

Alex Mlynarzek - Traceability - 5-21-19 - Image 5

If the traceability report does not generate any results, check that the allocation rules were successfully completed for the referenced POV. Alternatively, if the POV calculated is successful, but data is not displaying on the Trace Screen, check that the application variables are correctly setup for Current Year, Period, and Scenario. Also ensure the Account dimension maps are specified in the Dimension Settings screen.

Traceability – Display Options and Filters

Traceability screens have 5 display options:

  1. Vertical (Top Down)
  2. Horizontal (Left to Right)
  3. Tree
  4. Radial
  5. Circle

Within the traceability analysis, users can focus on a single rule. The tracing dimension in the previous example is Entity. The tracing dimension is the focus of the traceability reports – following how data was allocated into or out of a base level Entity.

Alex Mlynarzek - Traceability - 5-21-19 - Image 6

To isolate a specific rule and separate it in a standalone diagram, press Shift+Enter or select the graphical option at the top of the Rule ID box.

Alex Mlynarzek - Traceability - 5-21-19 - Image 7

End users have the choice of displaying the aliases/descriptions of the Entities rather than the code member names. If aliases have not been uploaded in the metadata of the application, then the report will still reference the member name codes, regardless of this choice.

The following traceability report will display how operating expenses are reallocated/redistributed from each support entity (such as IT, Facilities, etc.) to the production entities using predefined driver configurations referenced in the Rule box.

Alex Mlynarzek - Traceability - 5-21-19 - Image 8

Select the “Trace Forward” filter and keep all other prior selections in the initial traceability screen constant to display the IT Support Activity charge-out.

Alex Mlynarzek - Traceability - 5-21-19 - Image 9

The “forward tracing” of IT allocations represents how data is allocated out to consuming departments such as Finance, Marketing, Outside Sales, Assembly, etc.  Remember the focus of the trace screen depends on the “Tracing dimension” selected. In this example, Entity was the tracing dimension.

The top box, R0009, shows us the Rule Name relevant for IT allocations, the ruleset reference, the Driver used to allocate data to Targets – in this case, Desktop Laptop Users, regardless of Activity performed (the NoActivity reference) – as well as the amount/dollar value of the allocation: Allocation Out 1.338.000.

Users have the flexibility to allocate data partially (to allocate only a % of the total value instead of 100%). That is what the Contribution % reference in the R0009 box represents. In this rule, the administrator/rule designer decided to fully allocate the IT cost to the consuming department instead of allocating it partially. Therefore, the 100% reference is displayed.

In the case of the Bikes ML (Management Ledger) application, the Entity dimension has 4 generations. When talking about generations, the larger number, in this case number 4, represents the lowest level of detail. Generation 0 represents the Dimension name; Generation 1 represents the first set of children; Generation 2 represents the Children of Children, etc.

Below is a radial display of the contribution charge out at base Entity level when no generation selection was made prior to launching the traceability report:

Alex Mlynarzek - Traceability - 5-21-19 - Image 10

We can see in this diagram how much each Target Department was charged for their IT bill.  The contribution from the IT department to each target is displayed as a %.

Change the generation reference from 4 to 3. The lower the generation number, the more summarized the detail. The change of generation reference will result in a summarization of the members of the Entity dimension to one level higher than seen previously.

Alex Mlynarzek - Traceability - 5-21-19 - Image 11

Notice that there is no longer an Entity breakdown at base level, as there was in the previous screen when Generation 4 was selected; the contribution percentages have been summarized to display the contribution % at a node level.

In situations where a dimension has many levels within the hierarchies or an increased volume of base level members, the generation selection proves useful as it allows users to group data sets and display them in the same diagram without compromising the level of detail.

Traceability – Customization

As mentioned at the beginning of this post, PCMCS comes with several features to support traceability and troubleshooting, one of these features being the Rule Balancing report. In situations where the traceability maps are insufficient to support a meaningful conversation regarding bill out values, and a deeper dive into an individual rule is necessary, the Rule Balancing report covers such a request.

While the traceability report has evolved in comparison to the Standard Profitability model, its usage is limited to situations where there is a need to troubleshoot specific data points while also having a visual representation as support.

The most common alternative to graphical traceability reports is an ad hoc report in Smart View, either built from scratch or launched via the Rule Balancing report (described in detail in a previous post).

Conclusion on OOTB features: Traceability

Business segment profitability analysis represents the analysis of operations and profitability of individual segments (e.g. Lines of Business, Products, Channels, Customers, Services) within a company. Business segment reporting requires all costs to be divided into one of two categories: direct/traceable costs or indirect/non-traceable costs.

In PCMCS, all costs are transparent and fully traceable. An indirect cost value can easily be traced throughout the flow of the allocation model all the way down to the business segment being analyzed. The indirect allocated volume can be explained through step-by-step analysis, high level traceability maps, and OOTB reports listing out the rules impacting the distribution of such cost.

Using a combination of Model Views, Rule Balancing reports, Traceability analysis, and Smart View ad hoc retrievals, there should be no doubt regarding the source of a data value within PCMCS. Metric data validation – situations where the intersections for each metric are customized to such an extent that building a Rule Balancing report or an individual Model View is neither efficient nor effective – is mostly performed via Smart View.

In a nutshell, traceability provides significant benefits:

  • Users can trace both revenue and cost based on predefined model views.
  • Traceability can flow forward or backward from a starting point.
  • Users can review the final contribution % (driver details are not displayed on this screen).
  • Users can toggle between different display options and isolate specific rules for focused analysis.

Subscribe to our mailing list to receive updates for new blog posts related to PCMCS Queries, KPIs, Model Validation, System Reports, Data Integration using Cloud Data Management, as well as the OOTB Application Backup and Restore functionality.

Is there a PCMCS-related topic that you would like to see covered in more depth?  Email us at infoSolutions@alithya.com.

Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Profit Curves

Welcome back to this series of blog posts covering the out-of-the-box (OOTB) features of Profitability and Cost Management Cloud Service (PCMCS). There is a need within the Oracle Cloud client community to discover what can be achieved with the tools provided when subscribing to one or more Oracle Cloud Services. A lack of awareness of the features included with your subscription is an unmeasured cost and a missed opportunity to gain much needed insight without further spend.

PCMCS applications – whether built for Fully Allocated P&L Solutions, Transfer Pricing, Shared Services Allocations or Customer/Product Profitability – have OOTB reporting capabilities available via the Intelligence menu that offer insight into allocation models with reduced effort. Here, we’ll explore how to set up, configure, and use such features and fully leverage the functionality that is included in the Oracle Cloud subscription cost.

The order in which I am covering the OOTB features is directly related to the Intelligence menu options available in PCMCS.  The 6 menu options are:

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 1  1.  Analysis Views (learn how to create, customize, and use them here)

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 2  2.  Scatter Analysis (discover how to set up and configure them here)

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 3  3.  Profit Curves (this blog post focuses on Profit Curves)

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 4  4.  Traceability

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 5  5.  Queries

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 6  6.  Key Performance Indicators

The content of this blog is based on the standard Bikes (BkML30) demo application, so you can follow the step-by-step information without having to go through an app setup from scratch. You can load and deploy this application directly from your PCMCS instance through a couple of clicks via the Application menu using the + / Create button.

 

Profit Curves – What Are They?

If you are looking for a graphical representation of the concentration of your profit by Customer, Product, Channel, or Fund, look no further than the Profit Curves section in PCMCS. Profit Curves, also referred to as Whale Curves, are used to identify which cluster of Customers, Channels, or Products generates the most profit. A Profit Curve displays a graphical representation of the relationship between economic profit and the quantity of output sold.
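To illustrate the mechanics behind the curve, the sketch below ranks a handful of made-up customer profit figures from most to least profitable and accumulates them – the same calculation PCMCS performs automatically from the underlying Analysis View (the names and values are invented purely for illustration):

# Made-up profit figures by customer, purely to illustrate the whale-curve shape.
profit_by_customer = {
    "Customer A": 500_000,
    "Customer B": 250_000,
    "Customer C": 75_000,
    "Customer D": -40_000,
    "Customer E": -120_000,
}

# Rank customers from most to least profitable, then accumulate the profit.
ranked = sorted(profit_by_customer.items(), key=lambda item: item[1], reverse=True)
cumulative = 0
for rank, (customer, profit) in enumerate(ranked, start=1):
    cumulative += profit
    print(f"{rank}. {customer}: profit {profit:>10,}  cumulative {cumulative:>10,}")

The cumulative line climbs while profitable customers are added and then tails off as the loss-making customers at the end of the ranking erode the total – the characteristic “whale” shape rendered on the Profit Curve screen.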

The details of the profit or net income split by unit/service/customer displayed in a Profit Curve identify issues with:

  • expansion of a production line
  • breadth of services that may have a negative impact on profit
  • onerous clients consuming numerous resources without justifying the cost for the profit gained from their engagement
  • potential costing issues of “over” or “under” costing products (for example, overburdening a product or product line inappropriately);  a cost study should be performed to determine the appropriate allocation
  • pricing

Information illustrated with a Profit Curve can be enlightening and help to put the focus on specific customers, products, or channels where the greatest profit attention is needed, indicating situations where a few products, services, or clients create enough profit to maintain the rest of the company’s offering. Profit Curves are key to strategic decision making, especially when dealing with competing projects and limited resources.

During one of my recent PCMCS implementations, a Profit Curve proved valuable when the client’s staple product, advocated as being its best and most profitable, was discovered to be the least profitable after the implementation of an accurate cost allocation methodology in PCM!

The easy-to-follow Profit Curve provides the foundational insight needed to rapidly shift gears across product lines, ensuring alignment of management decisions backed up by real information.

 

Building a Profit Curve

There are several Profit Curves available in the Demo application BksML30. In order to build a Profit Curve, there must be a corresponding Analysis View that can be leveraged as the basis for data selection. See a step-by-step guide on how to build an Analysis View here.

Analysis Views can contain multiple references to Measures and/or Accounts; however, a Profit Curve built on such a View analyzes and displays only one measure at a time.  Users can define names for the X and Y axes to add clarity for consumers of the Profit Curve information.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 1.png

Here is an example of a Profit Curve:

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 2

The curve displays a listing of Net Income generated by Customer.

From a Quarter-to-Date perspective (the Period selected at the top of the View), this Profit Curve indicates that all customers are profitable.  That may raise questions about whether the overhead is allocated appropriately or whether an even spread is used, thus skewing the results.

Note: Data in the BksML30 model at the time this Profit Curve was generated was calculated only for January, which explains the Profit Curve display, as the profit-by-customer distribution was evened out at the Quarter-to-Date level.

The details of each customer/product/channel/segment and how much net income each is generating can be reviewed in the Category Analysis section. From a cost management and process improvement point of view, the right side is the most important.  This side generally represents customers/products/channels with a negative profit or that cost the company money.  While these customers/products/channels can’t always be eliminated, they can be watched and reviewed for pricing changes.

Using a PCMCS Profit Curve

There are options to filter data by the POV dimension, Period, or by metrics tied to Customers. For example, we can exclude from the analysis any Customers with Operating Expenses that are considered marginal. After defining the required filters, we can refresh the Profit Curve and review the newly generated pie charts.  Filters can be added to all available metrics and can be stacked up to generate any custom report.

Below is an example of the same “All Customers” Profit Curve, limited to January and with a selection of all Customers who had a Net Income smaller than 1 positive unit (USD or the currency defined in the PCM model), thereby highlighting the Customers creating losses.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 3

In the Details section of the Profit Curve, there is a count of 886 customers with a Net Income smaller than 1.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 4

100% of the customers analyzed based on the specified criteria are unprofitable. The “Actual Profit” in this Details section can be translated into “Actual Loss” as the total accumulated value across the 886 customers is US$ -1,148,670.

If there are doubts regarding the data intersection for the remaining dimensions in the PCM model such as Product or Entity, we can analyze related information through the configuration icon located next to the “Add Filter” menu. These selections are predefined in the Analysis View that was used during the creation of the Profit Curve, and you will not be able to modify them unless you modify the underlying View.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 5

If questions are raised during the analysis on the Profit Curve screen and a list of details by Customer is requested, we have the option to launch a report from the “Analysis Links” menu under the Category section.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 6

A report in the following format will be generated to display the Customer detail records along with all the other settings defined in the Analysis View.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 7

This report can be exported in .xls format (“Export to Excel” option), and it represents a base level data dump report, in column format, containing multiple generations and references to attribute dimensions.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 8

Note: When launching this report, users must check that the parameters have transitioned correctly from the previous screen. The Period parameter, which is saved as Quarter-to-Date on the original Analysis View used in the Profit Curve diagram, will override any other selection made during run-time analysis. If there is a need to revert to a specific month before launching the Export to Excel, users will have to make this update in the Filter/POV area and perform a data Refresh.
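For readers who want to tie the on-screen figures back to the export, a quick offline check of the 886-customer count and the accumulated loss could look like the sketch below (the file name and column headers are assumptions – adjust them to whatever the Export to Excel actually produces):

import pandas as pd

# Hypothetical file name for the "Export to Excel" data dump described above.
df = pd.read_excel("AllCustomers_ProfitCurve_Export.xls")

# Mirror the Profit Curve filter: Net Income smaller than 1.
unprofitable = df[df["Net Income"] < 1]

print("Customer count:", unprofitable["Customer"].nunique())   # expect 886
print("Actual profit :", unprofitable["Net Income"].sum())     # expect roughly -1,148,670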

We can make changes to the Analysis View to add further details (for example, Cost of Goods).

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 9

For the 886 customers that are not profitable, we can dive deeper into their Cost of Goods data, Operating Expenses, or analyze whether or not the products sold are so heavily discounted that they no longer generate a margin.

 

Pie Charts Related to PCM Profit Curves

 

We can further analyze the resulting Profit Curve data by using the predefined categories tied to the Attribute dimensions available in the PCMCS application and referenced in the underlying Analysis View, displayed in the adjacent Pie Chart.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 10

The available categories to display the Pie Chart data for the Profit Curve chosen are the following:

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 11

When selecting the Region category/attribute, we learn that the Southeast area contains 26.07% of all the unprofitable customers.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 12

If we change the Focus of the Category to be on Top 10% most unprofitable customers by Amount vs. All Customers/Number of Customers, the following information is displayed:

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 13
Alex Mlynarzek - Profit Curves - 4-18-19 - Image 14

The Pie Chart reveals that the Southeast region has the highest number of unprofitable customers both by Number of Customers as well as by Total Amount/Loss.

When adding a filter based on Customer Generation 3, which distinguishes between Department Stores and Specialty Retailers, it appears that 87.64% of the Top 10% most unprofitable customers are Department Stores.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 15

A look at the 4th generation in the Customer dimension, where we can analyze the split of the losses at Customer level, indicates that one store is responsible for 65.17% of all losses within the top 10% most unprofitable Customers.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 16

The Pie Chart is the only artifact that is refreshed based on the selections of the Category Analysis menu while the Profit Curve remains constant based on the selections in the POV and filter criteria.

While all PCMCS users can generate/launch Profit Curve reports and export their associated Analysis Views, in order to create and set up a Profit Curve report, the PCMCS administrator must grant the requesting user the appropriate permissions. As with all Intelligence screens within PCMCS, the Viewer role allows the use of these artifacts, not their creation or setup.

Concluding Thoughts About OOTB Features: Profit Curves

If you have been following the posts in this blog series, you’ve become aware of the dashboarding opportunities at your disposal with a PCM subscription. The listing of PCMCS OOTB features is a good starting point for comparing any other profitability and cost management tools on the market, regardless of vendor and technology employed.

Creating insightful dashboards is now at end users’ fingertips, no longer involving complex requirements-gathering processes and iterations between different display options. PCMCS users have the ability to build and customize their own dashboards. As a result, IT staff is no longer burdened with reporting requests or artifact migration between environments.

Subscribe to our mailing list for updates on the next blog post covering Traceability, Queries, and KPIs. Don’t think the PCMCS OOTB features blog series will stop at the Intelligence menu options! There is more to come on Model Validation, System Reports used for maintenance and troubleshooting, Integration with Cloud Data Management, and the Application Backup and Restore functionality. All this and more will be covered in future blog posts, so watch this space for updates.  If there is a PCMCS-related topic that you would like to see covered in more depth, email us at infosolutions@alithya.com.

EDMCS and Data Governance – Part 3

Welcome to Part 3 – the finale – of the blog series “EDMCS and Data Governance!”

Part 1 provides an introduction and primer for data governance workflows in Enterprise Data Management Cloud Service (EDMCS), a capability introduced in the 19.02 release.

Part 2 discusses Workflow Stages in greater detail and dives into the brains of EDMCS workflows – the Approval Policy. Approval policies at different levels of the data chain are explained, and we conclude by building a sample workflow at the dimension level.

In Part 3, I’ll attempt to tie a bow around everything and offer some parting thoughts.

Recap

As I continue to explore and learn about collaborative workflows in EDMCS, these are the key points that come to mind:

  • Emphasize the Fundamentals – No matter what tool you are using, People and Process are extremely important in any data governance solution along with strong executive sponsorship and robust change management.
  • Build the Foundation – get the client comfortable with the tool and content before you introduce workflows. A strong foundation (your applications, dimensions, views, and viewpoints) is needed before you start the plumbing and wiring (workflows).
  • Brush up on Security – I haven’t discussed security extensively in this blog series, but the Oracle EDMCS User Guide does a nice job describing security requirements for assigning and approving workflow requests. Note that security enhancements have been introduced along with workflows. A new “Submitter” permission is now available to go along with Owner, Data Manager, and Browser. And permissions can be assigned at the Application, Dimension, Hierarchy Set, and Node Type levels.
  • Ponder the Approval Policy – this is the most interesting one to me. As we discussed in Part 2, approval policies can be defined at 4 points in the data chain (see Figure 1). With the inheritance and inter-dependencies of approval policies across the data chain along with the actions each policy can govern, it is critical to efficiently design your approval policies up front.

o   For example:

  • Suppose your client requires a final “audit” type of approval across the board for any type of request for any dimension. Or they always require an upfront “gatekeeper” type of approval to make sure the request is justified and complete before it continues down the approval chain. These would be good candidates for an approval policy at the Application level, and it would avoid having to define duplicative approval policies at lower levels in the data chain.
  • Will your application contain dimensions that do not need data governance workflows? Then Application level approval policies should be avoided.
  • Say you want to limit and govern the actions of a specific group so it can only work with existing nodes (insert, remove, update). An approval policy at the Hierarchy Set level is probably best.

o   Overall, I believe approval policies at the dimension level are a good place to start. Then as the workflows evolve and requirements become clearer, you can determine if there are common factors across all dimension approval policies that can be consolidated at a higher level (an Application level approval policy), or if there are specific subsets of actions that need to be broken out to a lower level (a Node Type or Hierarchy Set level approval policy).

o   All of which brings up another interesting point: effective approval policy design directly ties into effective viewpoint design. Think about it – you can define the set of Allowed Actions (Add, Insert, Move, etc.) at a Viewpoint level. Which means what? Special-purpose maintenance views are likely required to support certain approval policies, especially those at the Node Type or Hierarchy Set levels.

Figure 1 – Approval Policies and Data Chain

EDMCS and Data Governance – Part 3 - Image 1

How do EDMCS Workflows Compare with DRM/DRG?

I was reluctant to include this section at first because, in general, I don’t like comparing Data Relationship Management (DRM) and EDMCS. Yes, they are both master data management tools and yes, they do share some common concepts and terminology. But overall, the two products are so different in terms of philosophy, deployment design, and underlying architecture that I think comparing the products is often less than helpful.

However, with data governance and collaborative workflows, I feel there is enough commonality that it is worth highlighting a few items. So here goes:

Workflow Design
  • DRM/DRG: Based on workflow models and workflow tasks; tasks are linked to specific actions (Add Leaf, Add Limb, Insert, Move, etc.)
  • EDMCS: Based on approval policies; the approval policy level (Application, Dimension, Node Type, Hierarchy Set) determines the context and scope of actions governed

Workflow Stages
  • DRM/DRG: Uses a Submit stage, a Commit stage, and optionally, one or more Enrich and/or Approve stages
  • EDMCS: Uses a Submit stage and an (implied) Commit stage; approval policies determine the approval stages (sequential vs. parallel, # of approvers); requests can be re-assigned for collaboration prior to Submit

User Interface (UI)
  • DRM/DRG: Form-based design
  • EDMCS: No forms; requesters and approvers interact directly with the viewpoints

Approval Options
  • DRM/DRG: Supports Approve, Reject, and Push Back; supports comments, narrative, attachments
  • EDMCS: Supports Approve, Reject, and Push Back; supports comments, narrative, attachments

Escalations
  • DRM/DRG: Requests can be escalated based on defined intervals
  • EDMCS: Requests can be escalated based on defined intervals

Separation of Duties
  • DRM/DRG: Workflows can be configured to prevent a submitter from approving their own request
  • EDMCS: Workflows can be configured to prevent a submitter from approving their own request

Email Notifications
  • DRM/DRG: Generates email notifications
  • EDMCS: Generates email notifications

Other
  • DRM/DRG: Supports conditional workflows; supports splitting of requests based on pre-defined criteria
  • EDMCS: Not yet supported

I’m curious if Oracle will introduce a form-based UI for workflows. Part of me would very much like to see that so that you can present a clean user interface to the approvers, hide unnecessary details, and display special instructions and messages, but part of me does not. One of my favorite features of EDMCS is the visual highlighting of pending request changes and the “shopping cart” of request items that are displayed prior to submitting a request. I would hate to lose that by going with a forms-based workflow UI, but perhaps there is a solution that combines the best of both worlds. 

Conclusion

Well that’s it, an initial look at workflows and approval policies in EDMCS. I’m excited to see how this functionality evolves and expands over time. Talk to you next time!

And don’t forget to follow me on Twitter (@kblackEPM) and check out these links for more information:

EDMCS and Data Governance – Part 2

Welcome to Part 2 of the blog series “EDMCS and Data Governance!”

Part 1 provides an introduction and primer for data governance workflows in Oracle Enterprise Data Management Cloud Service (EDMCS) which was introduced in the 19.02 release. This exciting feature addresses a major gap in EDMCS as the product continues to rapidly evolve and mature.

In Part 2, we dive into the details of how to configure workflows. This process revolves around the concept of an “approval policy.” Interestingly, approval policies can be configured at different points of the EDMCS data chain and cascade or inherit to affect downstream points of the data chain.

Workflow Stages

Before we dive into approval policies, let’s discuss EDMCS workflow stages a bit more. They are similar in concept to Data Relationship Governance (DRG) workflow stages. See Figure 1 for an overview:

Figure 1 – EDMCS Workflow Stages

EDMCS and Data Governance – Part 2 - Image 1
  1. Submit (or Assign) Request – A request is initially created as you do today. But wait…there’s more! You can Submit the request to immediately move the request into the Approve stage OR you can Assign the request to colleagues to collaborate on the request together. When the request is ready, it is submitted to move to the Approve stage.
  2. Approve Request – The approver(s) have 3 choices:
    • Approve – the request is approved and moves forward (thanks Captain Obvious!).
    • Push Back – like DRG, the request is pushed back to the submitter for clarification or changes, who then updates and resubmits the request.
    • Reject – like DRG, the request is denied and closed. Think of “reject” as the RAID of the data governance world – it kills requests dead.
  3. Commit Request – once fully approved, the request is auto-committed and closed. EDMCS has now been updated.

Approval Policies

Now for approval policies. Approval policies can be configured at 4 levels:

  1. Application
  2. Dimension
  3. Node Type
  4. Hierarchy Set

It is important to note that each data chain object can contain one, and only one, approval policy. However, approval policies have a cascading impact so that multiple approval policies can work in concert to govern and control exactly what you want. Yes, you heard that right:  Approval Policy Inheritance – it’s not just for properties anymore!

The types of actions governed by an approval policy depend on the data chain object it is configured with – see figure 2 below:

Figure 2 – Approval Policies and Data Chain

EDMCS and Data Governance – Part 2 - Image 2

As you can see, policies defined at the Application or Dimension level govern all actions (add, delete, insert, remove, move, etc.) while policies defined at the Node Type or Hierarchy Set level govern a subset of actions. Why is this important? Because it means you need to carefully design what types of actions you want to govern and who will perform them. If I define an approval policy at the Hierarchy Set level and then submit a request that Adds 3 accounts, how many approvers are required for the request? A big ZERO! Since I requested “add” actions and only have an approval policy at the Hierarchy Set level, no applicable approval policy exists to govern the request.
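A toy model of that resolution logic helps make the point. This is purely illustrative and not how EDMCS is implemented internally; in particular, the exact subset of actions governed at the Node Type and Hierarchy Set levels below is an assumption, so treat Figure 2 and the EDMCS documentation as the source of truth:

# Which actions each policy level governs (Application/Dimension govern everything;
# the Node Type and Hierarchy Set subsets below are assumptions for illustration).
GOVERNED_ACTIONS = {
    "Application":   {"add", "delete", "insert", "remove", "move", "update"},
    "Dimension":     {"add", "delete", "insert", "remove", "move", "update"},
    "Node Type":     {"delete", "update"},
    "Hierarchy Set": {"insert", "remove", "move"},
}

def applicable_policy_levels(defined_policy_levels, requested_action):
    """Return the defined policy levels that would govern the requested action."""
    return [level for level in defined_policy_levels
            if requested_action in GOVERNED_ACTIONS[level]]

# Only a Hierarchy Set policy is defined, and the request adds accounts:
print(applicable_policy_levels(["Hierarchy Set"], "add"))               # [] -> zero approvers
print(applicable_policy_levels(["Dimension", "Hierarchy Set"], "add"))  # ['Dimension']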

Putting It All Together

Let’s walk through an example.

  1. Define Approval Policy

First, I will define an approval policy for the Account dimension. To do this, Inspect either the application or default viewpoint and access the Account dimension from the Definition tab. From there, click the Policies tab.

Here you will see the Approval policy for the Account dimension. Click on the Approval link to inspect the approval policy.

EDMCS and Data Governance – Part 2 - Image 3

The General tab will display basic information about the approval policy. You can edit the approval policy name and description if necessary.

EDMCS and Data Governance – Part 2 - Image 4

The Definition tab is where the magic happens. Select edit to update the following parameters:

  • Enabled – click this check box to enable the approval policy.
  • Approval Method – select Serial or Parallel.
  • One Approval Per Group – if using Serial approvals, this will automatically be set to “True.” If using Parallel approvals, you can select one approval per group or define a Total Required # of approvers.
  • Include Submitter – enable this to allow the submitter to also be an approver (the submitter’s approval will be automatically granted). If “separation of duties” is required for your company, do not enable this.
  • Reminder Notification – the # of days that will elapse before reminder emails are sent.
  • Approval Escalation – the # of times a reminder occurs before an escalation email will be sent.
  • Approval Groups – select user(s) and/or group(s) to be included in the approval process. When using Parallel approvals, the order of approval groups does not matter. When using Serial approvals, the order of approval groups does matter – you need to list the approval groups in the order that approvals should be executed.

With my example approval policy, I am using serial approvals, 2 approval groups (a Planning group and GL group), a reminder interval of 5 days, and an escalation interval of 2 reminders.

EDMCS and Data Governance – Part 2 - Image 5
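Summarized as a plain data structure (purely illustrative – this is not an EDMCS API payload or export format), the example policy looks like this:

# Illustrative only: a summary of the example Account approval policy above.
account_approval_policy = {
    "enabled": True,
    "approval_method": "Serial",                  # approval groups act one after another
    "one_approval_per_group": True,               # forced to True for Serial approvals
    "include_submitter": False,                   # preserves separation of duties
    "reminder_notification_days": 5,              # days before a reminder email is sent
    "approval_escalation_after_reminders": 2,     # reminders before an escalation email
    "approval_groups": ["Planning Approvers", "GL Approvers"],  # order matters for Serial
}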

  2. Submit Request

Now we’re cooking with gas. It’s time to submit a request. I will submit a request to my default Account viewpoint that includes 1 add, 1 property update, and 1 move. Here is the request in Draft status:

EDMCS and Data Governance – Part 2 - Image 6

Did you notice something new? Look at the Actions button next to Submit. This is where you can assign the request to another user and collaborate with them to finish up the request.

EDMCS and Data Governance – Part 2 - Image 7

EDMCS and Data Governance – Part 2 - Image 8

  3. Approve the Request

After the request is submitted, it is considered “in flight” because it has been submitted, but not yet approved/committed. And look! EDMCS now offers a nice Activity page on the home screen displaying the status of various workflow requests:

EDMCS and Data Governance – Part 2 - Image 9

First, the users in the Planning Approvers group will receive an email notifying them that they have been “invited to approve a request” (it’s very polite):

EDMCS and Data Governance – Part 2 - Image 10

As mentioned earlier, an approver has 3 choices: Approve, Reject, or Push Back. Reject and Push Back are available under the Actions dropdown. Here are the dialog windows that will be displayed for those actions (note the comment field is required):

EDMCS and Data Governance – Part 2 - Image 11

Otherwise, the approver will click the Approve button and see this:

EDMCS and Data Governance – Part 2 - Image 12

And then the same process will continue with the GL Approvers group since I am using Serial approvals. Once again, an approver can reject, push back, or approve. Once approved, the request is committed and closed.

Congratulations! You have now completed your very first data governance workflow request in EDMCS!

Conclusion

This blog post should be useful in providing more details and clarity on workflows, workflow stages, and approval policies. In the third and final post for this series, I’ll offer a recap and some closing thoughts. Talk to you then.

Read the next post in this EDMCS blog series:  EDMCS and Data Governance – Part 3

And don’t forget to follow me on Twitter (@kblackEPM) and check out these links for more information:

OPA! The Future of Cloud Integration – Important Updates Are Coming

Much to the chagrin of Product Management, I often abbreviate Cloud Data Management to CDM.  Why do they not like that I do this?  Well there is a master data management tool for Customer data that you can guess also uses the same acronym.  While I understand the potential confusion, since I’m telling you up front, there should be no confusion when I use CDM throughout this post.

I recently had the opportunity to meet with Oracle Product Management and Development for FDMEE/CDM to get a preview of what’s coming to the product and offer feedback for additional functionality that would benefit the user community.  We generally get together about once a year; however, it’s been a bit longer than that since our last meeting, so I was excited to hear what interesting things Oracle’s been working on and what we may see in the product in the future.

Now any good Oracle roadmap update would not be complete without a safe harbor reminder.  What you read here is based on functionality that does not yet exist.  The planned features described may or may not ever be available in the application – at the sole discretion of Oracle. No buying decisions should be made based on the information contained in this post.

Ok, now that we have that out of the way, let’s get into the fun stuff.  There are a number of enhancements coming and planned, but today I am going to focus on two significant ones:  performance and ground to cloud integration.

Performance Enhancements

We’re all friends here, so we can be honest with each other.  CDM (and FDMEE) isn’t an ETL tool in the truest sense of the word. It is not designed to handle the massive data volumes that more traditional ETL tools can and do handle.  You might think to yourself, “Thanks for the info there, Tony, but we all know that,” and you wouldn’t be wrong, but I like to set the stage a bit.

If you know the history of FDMEE, you know that it was originally designed to integrate with Hyperion Enterprise and then HFM.  Essbase and Planning became targets later.  Integrating G/L data is far different from integrating the more operational data that is often needed by targets like EPBCS and PCMCS.  While CDM (and FDMEE) can technically handle the volume of this more granular data, the performance of those integrations is sometimes less than optimal.  This dynamic has plagued users of CDM for years.  It has only been exacerbated when integrations are built without a deep understanding of how to tune CDM (and FDMEE) processes to achieve the highest level of performance within the constructs of the application. As CDM has grown in popularity (owing to the growth of Oracle EPM Cloud), the problem of performance has become more visible.

To address performance concerns, Oracle is planning to support 3 workflow methods:

  • Full – No change from legacy process
  • Full, No Archive – Same workflow as today, but data is deleted from the data table (tDataSeg) after a successful export.  This means the data table will contain fewer rows and should allow new rows to be added faster (inserts during the workflow process).  The downside of this method is that drill through is not available.
  • Simple – Same workflow as today but data is never moved from the staging/processing table (tDataSeg_T) to the data storage table (tDataSeg).  This is the most expensive (in terms of time) action in the workflow process so eliminating it will certainly improve performance. The downside is that data can never be viewed in the Workbench and Drill Through is not available.

Oracle has begun testing and has seen performance improvements in the range of 50% on data sets as large as 2 million rows.  Achieving that metric required the full complement of the new Data Integrations features (i.e., Expressions) to be utilized. That said, this opens up a world of possibility for how CDM can potentially be used.

If you have integrations that are currently less than optimal in terms of performance, continue monitoring for this enhancement.  If you need assistance, feel free to reach out to us to connect with our team of data integration experts.

On-Premise Agent

Ground to cloud integration is one of the most important capabilities to consider when implementing Oracle EPM Cloud.  As Oracle EPM Cloud has evolved, so too has the complexity of the solutions deployed within it, which has steadily increased the complexity of the integrations needed to support those solutions.  While integration with on-premises systems has always been supported through EPM Automate, this requires a flat file to be generated by the system from which data will be sourced. The file is then loaded to the cloud and processed by CDM.  This is very much a push approach to data integration, as sketched below.
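As a rough sketch of that push pattern (the connection details, rule name, periods, and load modes below are hypothetical – check the EPM Automate documentation for the exact parameters of your release), the sequence typically looks like this:

import subprocess

def epmautomate(*args):
    """Run an EPM Automate command and raise an error if it fails."""
    subprocess.run(["epmautomate", *args], check=True)

# The source system has already produced the flat file GL_Actuals_Jan.txt.
epmautomate("login", "svc_integration", "Password_File.txt",
            "https://pcmcs-test.example.oraclecloud.com")
epmautomate("uploadfile", "GL_Actuals_Jan.txt", "inbox")      # push the extract to the cloud
epmautomate("rundatarule", "GL_ACTUALS", "Jan-19", "Jan-19",  # run the CDM data load rule
            "REPLACE", "STORE_DATA", "inbox/GL_Actuals_Jan.txt")
epmautomate("logout")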

The ability of the cloud to pull data from on-premises systems simply did not exist. For integrations with this requirement, FDMEE (or some other application) was needed. Well as the old saying goes, the only thing constant is change.

Opa! – a common Greek emotional expression. It is frequently used during celebrations.  Well it’s time to celebrate because Oracle will soon (CY19) be introducing an on-premises agent (OPA) for CDM!

This agent will allow a workflow to be initiated from CDM, communicate back to the on-premises systems, initialize and then upload an extract to the cloud. The extract will be natively imported by CDM.  This approach is similar to how the FDMEE SAP adaptor currently works.  From an end user perspective, they click Import on the Data Load Workbench and after some time, data appears in the application. What’s happening in the background is that the adaptor is initializing an extract from SAP and writing the results to a flat file which is then imported by the application. OPA will function in an almost identical way.

OPA is a lightweight Java utility, installed on local systems, that requires no additional software (other than Java). It will support both Microsoft Windows and Linux operating systems. Like all Oracle on-premises utilities (e.g., EPM Automate), password encryption will be supported. The only ports required to be opened are 80 (HTTP) or 443 (HTTPS).  A customer can then use an externally facing web server to redirect to an internal port for the agent to receive the request.  This is true only if the customer wants to run the agent on a port other than 80 or 443 and does not want to open that port on their enterprise firewall.  If the customer wants to run the agent on port 80 or 443 and either of those ports is open, then no firewall action would be required.

The on-premises agent will have native support for Oracle EBS and PeopleSoft GL – meaning the queries are prebuilt by Oracle.  Additionally, OPA will support connecting to on-premises relational data sources.  Currently, Oracle, SQL Server, and MySQL drivers are bundled natively, but additional drivers can be deployed as needed, meaning systems such as Teradata will be able to be leveraged as data sources.

OPA will also provide the ability to execute scripts (currently planned for Java, though discussions about Groovy and Jython are in flight) before and after the on-premises extract process.  This is similar to how the BefImport and AftImport event scripts are currently used in FDMEE.  This will allow the agent to perform pre- and post-processing, such as running a stored procedure to populate a data view from which CDM will source data.
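Since the agent’s scripting language is still being finalized, the sketch below is written in Python purely to illustrate the kind of pre-extract work such an event could perform – refreshing a staging view before the extract runs. The DSN, credentials, and procedure name are all hypothetical:

import pyodbc

# Connect to the on-premises source database (connection details are hypothetical).
conn = pyodbc.connect("DSN=ERP_STAGING;UID=integration;PWD=secret")
cursor = conn.cursor()

# Equivalent in spirit to a BefImport event script in FDMEE: refresh the view
# that the agent's extract query will read from.
cursor.execute("{CALL refresh_gl_extract_view(?)}", "JAN-19")
conn.commit()
conn.close()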

The pre and post events of OPA really open up a world of opportunity and lay the foundation for CDM to support scripting.  How, you might ask?  In v1.0, OPA is intended to provide a mechanism to load on-premises data to the cloud.  But in theory, CDM could make a call to OPA at the normal workflow events (of FDMEE) and, instead of waiting for a data file, simply wait for an on-premises script to return an execution code.  This construct would eliminate the security concerns that prevented scripting from being deployed in CDM, as the scripts would execute locally instead of on the cloud.

The OPA framework is really a game changer and will greatly enhance the capability of CDM to provide Oracle EPM Cloud customers a true “all cloud” deployment.  I am thrilled and can’t wait to get my hands on OPA for beta testing.  I’ll share my updates once I get through testing over the next couple of months.  I’ll also be updating the white paper I authored back in December of 2017 once OPA is released to the general public.  Stay tuned folks and feel free to let out a little exclamation about these exciting coming enhancements…OPA!