Using Subscriptions with EDMCS

As an earlier blog mentioned, the 18.07 release of Enterprise Data Management Cloud Service (EDMCS) delivered one eagerly anticipated piece of functionality: Subscriptions! And do not fear – these subscriptions are useful and do not involve a 1-year subscription to the Fruit of the Month Club (not that there’s anything wrong with that).

This blog post dives deeper into this new functionality, describes how it works, and highlights some lessons learned from utilizing Subscriptions with a current project involving multiple EDMCS Custom applications supporting multiple Profitability and Cost Management Cloud Service (PCMCS) applications.

Why Are Subscriptions Important?

Subscriptions are a huge step towards true “mastering” of enterprise data assets within a single master data Cloud platform. With EDMCS, it is important to build deployment-specific applications configured to the dimensionality requirements of the target applications to most effectively use the packaged adapters, validations, and integration capabilities. But in many cases, you also need to share common hierarchies across applications and avoid duplicative (that’s my big word for today) maintenance. After all, why have a master data management tool if you still must perform maintenance in multiple places? That’s just silly.

The answer to this dilemma is Subscriptions. With Subscriptions in place, a request submitted to a primary viewpoint automatically generates parallel subscription requests against the subscribing viewpoints, synchronizing your hierarchy changes across EDMCS applications.

Note

That phrase – “automatically generate parallel subscription requests” – is important. EDMCS will not update a target, or subscribing, viewpoint behind the scenes with no visibility or audit trail of what has occurred. Instead, a parallel Subscription request is generated alongside the Interactive request and is visible in the Requests window, with the same full audit trail and details you find in an Interactive request. Even better, the Subscription request will generate an email and attach a Request File of the changes.

Nerdy Details

Views and Viewpoints

The first thing to really think about is the View and Viewpoint design of your EDMCS applications. Subscriptions are defined at the Viewpoint level, so you need to identify the source and target viewpoint for your business situation. With my current project, I have multiple EDMCS applications supporting multiple PCMCS applications. While the dimensionality is similar across the applications, the hierarchies vary, especially with the alternate hierarchies. So, it has been important to isolate the “common” or shared structures that should be synchronized across applications into their own viewpoint so that a subscription mechanism can be created.

Node Type Converters

You will likely need to create a node type converter. If the source and target viewpoints do not share a common node type, you must create a node type converter for subscriptions to work. In my situation, I had already created node type converters since I wanted to compare common structures across EDMCS applications, so the foundation was there to readily implement subscriptions.

Permissions

To create a Subscription, the creator must have (at a minimum) View Browser permission to the source view, View Owner permission to the target view, and Data Manager permission to the target application.

The Subscription assignee (this is the user who will “submit” the subscription request) must have (at a minimum) View Browser permission to the source view and Data Manager permission to the target application.

Creating a Subscription

Once the foundation is in place in terms of viewpoints, node type converters, and permissions, the actual creation of a subscription is easy.

Inspect the target viewpoint (the viewpoint that is to receive the changes from a source viewpoint via subscription), navigate to the Subscriptions tab, and click Edit. From there you can select the source viewpoint, the request assignee, and enable Auto Submit if needed. Save the subscription and you are all set.

  • Currently, there is no capability to edit an existing subscription. You must delete and add a new subscription to effect a change.
  • Any validation errors for your subscription will appear on this dialog as well. These are documented nicely in the Oracle EDMCS administration guide.

Using Subscriptions with EDMCS Image 1

Auto-Submit and Email Notifications

Emails will be generated and sent to the Request Assignee, whether Auto-Submit is enabled or not. The email will include details such as the original request #, the subscription request #, and how many request items were processed or skipped.

Using Subscriptions with EDMCS Image 2

Note

  • Remember, the subscription request will have a Request File attached to it. View the request file attachment to see details on why specific request items were skipped.
  • The request file is not attached to the email itself, only to the request in EDMCS.

Lessons Learned

As I mentioned earlier, the foundation is critical to making subscriptions work. And it all boils down to design and ensuring the building blocks of that foundation are in place:

Design, Design, Design!

  • The importance of dimension, view, and viewpoint design cannot be overstated. For each dimension, evaluate the primary and alternate hierarchy content and identify what will be shared across dimensions or applications and what will be unique to each dimension and application.
  • Based on that analysis, carefully design your viewpoints to enable subscriptions across EDMCS applications for hierarchies that truly need “mastering.”
  • As early as possible, identify the EDMCS user population along with permission levels for applications and views. This is important to identify the appropriate “Request Assignee” for your Subscriptions. I recommend creating a security matrix identifying each user and the permissions each will have.
  • Without a clear and well thought out design, you will find yourself constantly re-doing your views and viewpoints which, in turn, will cause constant rework of your subscriptions. The “measure twice, cut once” adage certainly applies here!
  • I am a big proponent of standard, consistent naming conventions to improve the usability and end user experience. The same holds true for Subscriptions. Consider using a standard naming convention for your viewpoints so it is clear which viewpoints have a subscription. It’s not obvious – unless you Inspect the viewpoint – that a subscription exists.
    • One approach I’ve been using is to name my source and target viewpoints identically with a special tag or symbol at the end of the target viewpoint name to indicate a subscription is present. I’m sure there are other and probably better ideas, but I find the visual cue to be helpful.
    • Perhaps in the future, Oracle will display subscription details when you hover over a viewpoint name (hint hint).

Node Type Converters

  • Ensure you have node type converters in place.
  • Make sure your node type converters are mapping all required properties.
    • I ran into an issue where updates to one property in my source viewpoint were not being applied to my target viewpoint via subscription requests, but all other property updates worked fine. The reason? I had recently modified my App Registration and added this property to a dimension’s node type. But my node type converter had already been created and wasn’t mapping or recognizing the new property. Once I updated my node type converter, the problem was solved.

Troubleshooting

  • The request files attached to subscription requests are a valuable troubleshooting tool. These Excel files include status codes and error messages that are extremely helpful in determining why your request was not auto-submitted.
  • Inspect the Subscriptions on your viewpoints. Any validation issues will be displayed and are easily addressed. Typical Subscription validation errors include:
    • The request assignee no longer has the correct permission levels
    • The viewpoint is no longer active
    • A node type converter is missing

Conclusion

I have been looking forward to the subscription functionality in EDMCS and am pleased with it so far. Subscriptions are easy to configure, can be configured to auto-submit if desired, and generate emails to remind the requester a request has occurred and to act if the request was not submitted or request items were skipped. EDMCS Subscriptions are a big step forward to enabling true mastering of your enterprise data management assets!

Labor Budget Increases, Staffing Shortages Loom Large for Healthcare Execs in 2019; Set Expectations Now and Uncover Your Capabilities for an Enterprise-Based Labor Productivity Solution!

Two topics resound on healthcare websites and in related blog posts: (1) increased labor costs and (2) burnout and shortages of clinical staff. The article published in “Healthcare Finance,” Labor Budget Increases, Staffing Shortages Loom Large for Healthcare Executives in 2019, highlights this exact topic.

This isn’t surprising considering that access to healthcare for all has increased; therefore, there are more patients to see, which in turn requires more staff, which results in increased labor costs… see where I’m going here? It’s easy to see how this can quickly become a major concern for providers trying to analyze and keep up with demand.

Working with numerous healthcare clients makes it evident that not all healthcare companies are created equal in their maturity when it comes to answering specific labor questions, providing and analyzing data, or supporting a labor productivity solution. Edgewater Ranzal’s complimentary Healthcare Labor Productivity Assessment Workshop not only helps reset clients’ expectations, but also uncovers clients’ enterprise-based labor productivity solution capabilities.

Our solution utilizes Oracle Cloud or on-premise technology to help clients see an immediate return on investment just by analyzing contract agency usage statistics, providing detailed overtime analysis, and offering the ability to compare productivity against national standards loaded into the system. Additionally, we help clients align their labor productivity solutions with their planning/budgeting processes to improve budget detail and accuracy. Comprehensive experience with data integration – often a challenging task for clients – allows us to work with staff to bring all the required data elements together to create a cohesive picture of labor productivity details.

Take a look at our webinar recording of The Key Ingredients to Understanding Labor & Productivity to learn more about our solution to uncover best practices in addressing labor productivity in your organization.  Then contact Edgewater Ranzal’s Healthcare experts to answer specific questions about implementing a solution to help cut labor costs and provide data-rich analytics to your organization.

PCMCS…Yeah, FDMEE Can Do That!

Oracle Profitability and Cost Management Cloud Service and Oracle Financial Data Quality Management Enterprise Edition Working Together Better

Over the last year, we have been fielding more questions about Oracle’s new Cloud products as we position and align our services with them. Some of the most common questions we are asked are:

  1. Has Edgewater Ranzal done that before?
  2. What “gotchas” have you encountered in your implementations and how have you addressed them?
  3. What unique offerings do you bring?

These are all smart questions to ask your implementation partner because the answers provide insight into their relevant experience.

Has Edgewater Ranzal done that before?

Edgewater Ranzal is an Oracle PCMCS thought leader and collaborates with Oracle as a Platinum partner to enhance PCMCS with continued development. To date, we’ve completed nearly 20 PCMCS (Cloud) implementations and almost 80 Oracle Hyperion Profitability and Cost Management (HPCM – on-premise) implementations spanning multiple continents, time zones, and industries. Our clients gladly provide references for us, which is a testament to our success and abilities. Additionally, we frequently have repeat clients and team up with numerous clients to present at various conferences to share their successes.

As a thought leader in the industry and for PCMCS, we sponsor multiple initiatives that deliver implementation accelerators, test the latest product enhancements prior to their release, and work in tandem with Oracle to enhance the capabilities of PCMCS.

Our Product Management team comprises several individuals. Specifically for PCMCS, Alecs Mlynarzek is the Product Manager and has published the following blog post: The Oracle Profitability and Cost Management Solution: An Introduction and Differentiators. I am the Product Manager for Data Integration and FDMEE and have several published blog posts related to FDMEE.

Now let’s explore some of the data integration challenges one might unexpectedly encounter and the intellectual property (IP) Ranzal offers to mitigate these and other data integration challenges that lurk.

What gotchas have you encountered in your implementations and how do you mitigate them?

We could go into great depth detailing the PROs of using FDMEE with PCMCS… but it is much more beneficial to share some of the less obvious discoveries we have made instead. Note that we work directly and continuously with Oracle to improve the product offering.

  • Extracting data via the FDMEE data-sync is challenging. PCMCS imposes threshold limits tied to data cube size and configuration settings – 5,000,000 records and a 1GB file size – both of which are quite often reached. As a result, we have developed a custom solution for the data-sync routine.
  • Loading large datasets directly into PCMCS via DM (Cloud-based Data Management) can exhibit performance problems due to the server resources available in the Cloud. Functionality in on-premise FDMEE (scripting, Group-By, etc.) reduces the number of records going into the Cloud and therefore provides a performance gain.
  • Patching to the latest FDMEE patch set is crucial. Cloud applications (PCMCS, FCCS, E/PBCS) update monthly. As a result, we need to consistently check/monitor for FDMEE patches. These patches help ensure that canned integrations from Oracle are top-notch.

FDMEE_PCMCS Image 1

  • Executing two or more jobs concurrently via EPMAutomate is quite troublesome due to the workflows needed and how EPMAutomate is designed. We discovered that the login/logout commands are tied to the machine, not the user process, so a logout from one executing run logs out all sessions. As a result, we have invested considerable time in cURL and RESTful routines (see the sketch after this list).

FDMEE_PCMCS Image 2

  • The use of EPMAutomate is sometimes difficult. It requires a toolset on a PC – a “JumpBox” – or on-premise EPM servers, as well as .BAT files or other scripted means. With FDMEE, the natural ease of the GUI improves the end-user experience.
  • Loading data in parallel via FDMEE or DM can cause Essbase load rule contention due to how the automatic Essbase load rules are generated by the system. Oracle has made every effort to resolve this before the next Cloud release. Stay tuned… this may be resolved in the next maintenance cycle of PCMCS (18.10) and in the subsequent on-premise patch-set update 230.
  • We all know that folks (mainly consultants) are always looking to work around issues they encounter and to come up with creative ways to build/deliver new software solutions. But the real question that needs to be asked is: should we? Since FDMEE already packages up most of these solutions, it is usually the best tool for the job, and the value FDMEE brings far exceeds that of any home-grown solution.
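To make the REST point concrete, here is a minimal sketch of the session-per-job pattern, assuming HTTP Basic authentication and a /jobs endpoint in the style of the Data Management REST API – verify both against the REST API guide for your service, and note the service URL is a placeholder:

```python
# Minimal sketch: each job gets its own authenticated REST session, so
# concurrent jobs cannot log each other out the way machine-tied
# EPMAutomate login/logout can. Requires the third-party requests library.
import requests

BASE_URL = "https://<SERVICE_NAME>/aif/rest/V1"  # placeholder service URL

def run_job(username, password, payload):
    with requests.Session() as session:
        session.auth = (username, password)           # per-session credentials
        response = session.post(BASE_URL + "/jobs", json=payload, timeout=60)
        response.raise_for_status()
        return response.json()                        # job id/status details
```

Because nothing is shared between sessions, a corporate scheduler can fire several of these calls in parallel without the logout collisions described above.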

What unique offerings do you bring?

At Edgewater Ranzal, we have started to take some of our on-premise framework and adapt it for PCMCS. Some of the key benefits and highlights we provide are:

  • To combat the complications of loading data via FDMEE – FDMEE cannot execute PCMCS clears out-of-the-box – we have added that functionality to the Ranzal IP catalog and can deploy it consistently for our clients. This is done via the RESTful functionality of PCMCS. Some of the items we have developed using REST are:
    • Import/export mappings
    • Execute data load rules or batch jobs from 3rd party schedulers
    • Refresh metadata in the Cloud
    • Augment EPMAutomate for enhanced flexibility
    • Execute business rules/clear POV commands as part of the FDMEE workflow
    • Execute stored procedures (PL/SQL) against DBaaS (see below)
    • Enhanced validation framework (see below)
  • We have redeveloped our Essbase Enhanced Validate to function with the PCMCS Cloud application. FDMEE on-premise can now validate all of the mapped data prior to loading – great for making sure data is accurate before it is loaded (see the sketch below).

FDMEE_PCMCS Image 3

  • The Edgewater Ranzal toolkit for FDMEE includes the ability to connect to other Cloud offerings for data movements, including DBaaS and OAC.

FDMEE_PCMCS Image 4
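To give a flavor of what an enhanced validation pass does conceptually, here is a stripped-down sketch – not the Ranzal tooling itself, and with invented file layouts – that checks every mapped target member against a member listing extracted from the target application before anything is loaded:

```python
# Conceptual pre-load validation: confirm every mapped target member exists
# in the target application's member list. File names and the
# "TargetAccount" column are invented for illustration.
import csv

with open("pcmcs_members.txt") as f:
    valid_members = {line.strip() for line in f if line.strip()}

bad_rows = []
with open("mapped_data.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f), start=2):  # row 1 is headers
        if row["TargetAccount"] not in valid_members:
            bad_rows.append((i, row["TargetAccount"]))

if bad_rows:
    for line_num, member in bad_rows:
        print("Line %d: unknown target member '%s'" % (line_num, member))
    raise SystemExit("Validation failed -- fix mappings before loading.")
print("All mapped members are valid; safe to load.")
```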

Can FDMEE do that…and should FDMEE do that?

Yes, you should use FDMEE to load to PCMCS, and it is out-of-the-box functionality! As you can see, there are a lot of added benefits compared to DM (a feature comparison of DM and FDMEE will be covered in a later blog post and white paper). The current release of FDMEE, v11.1.2.4.220, provides product functionality enhancements and greater stability for integrations with most Cloud products. Suffice it to say, having Python scripting and server-side processing for large files available will greatly enhance your performance experience.

FDMEE_PCMCS Image 5

Contact us at info@ranzal.com with questions about this product or its capabilities.

Retro Reboot #1: Set It & Forget It – Scheduling FDMEE Tasks

As with most nostalgic items, reboots are the next best thing. From video game consoles to television shows, they are all getting a modern facelift and a new prime-time seat on television. I have jumped on that bandwagon to revitalize a previous post authored by Tony Scalese: Set it & Forget It – Scheduling FDM Tasks.

As with most reboots, there must be flair and alluring content to capture old and new audiences. Since Oracle Financial Data Quality Management Enterprise Edition (FDMEE) has been in the Enterprise Performance Management (EPM) space for a while and has moved into the Cloud, this is a great time for its reboot!

Oh Great…A Reboot. Now What?

Scheduling tasks in FDMEE has never been easier. Oracle provides several ways to do this for a variety of out-of-the-box activities.  Is there a report that you want to run and email every hour?  Or how about a script that needs to run hourly?  Or maybe batch-automation every 15 minutes?  No worries!  FDMEE can handle all of that with out-of-the-box functionality.

Let us pause for a moment and determine what is needed to make this happen:

  1. Is there a business case and justification for what is about to be scheduled?
  2. Who benefits and how will they be notified of the results?
  3. Is there a defined frequency for which the activity must take place?

Getting Started

First, understand that scheduling in FDMEE is built directly into the Graphical User Interface (GUI) anywhere you see the “SCHEDULE” button. Unlike its FDM predecessor, which required an independent utility to be installed and configured, having scheduling available via the web removes some complexity.

A word of caution: while this screen allows items to be scheduled, there isn’t a screen that shows what has already been scheduled. For that, access to Oracle Data Integrator (ODI) is needed – but more on this later.

Initially, the screen shows the types of schedules that can be created and their relevant inputs.

Retro Reboot Screen Shot 1

Below is a reference guide to outline FDMEE’s scheduling capabilities.

  • Simple – Inputs: TimeZone, Date, HH:MM:SS, AM/PM. Runs once, based on the specified inputs. Example: Run 08/02/2018 @ 11AM.
  • Hourly – Inputs: TimeZone, MM:SS. Repeatable run at the specified MM:SS time. Example: Run every hour, at the 22-minute mark.
  • Daily – Inputs: TimeZone, HH:MM:SS, AM/PM. Runs every day at the specified time. Example: Run every day at 11AM.
  • Weekly – Inputs: TimeZone, Day of the Week, HH:MM:SS, AM/PM. Runs every specified day at the specified time. Example: Run every Monday thru Friday at 11AM.
  • Monthly (day of month) – Inputs: TimeZone, Date, HH:MM:SS, AM/PM. Runs on the specified day at the specified time. Example: Run on the 2nd day of every month at 11AM.
  • Monthly (week day) – Inputs: TimeZone, Iteration, Weekday, HH:MM:SS, AM/PM. Runs on the specified interval and weekday at the specified time. Example: Run every third Tuesday at 11AM.

Why Does the Job Run Under My UserID?

That is because the system runs the job under the credentials of the user who created the schedule. What can go wrong with that, right?! Well, if that user no longer exists or the password is changed, the existing jobs will no longer run.

The following considerations should be observed:

  1. Dedicate a service account that is not being used by an employee to be used for server/automation actions.
  2. This account can be a “native” user; since the account is only used internally for EPM products, having a domain account is not needed.
  3. Non-expiry passwords are best.

It is Scheduled…Now What?

After the item is scheduled, what really happens? The action executes at the scheduled time!  Actions can easily be monitored via the FDMEE Process Details screen.  Now all the possibilities of scheduling the following can be explored:

  1. Data Load Rules
  2. Script Executions
  3. Batch Executions
  4. Report Executions

Also, as mentioned earlier, there is no way to see scheduled jobs inside of FDMEE. That information can be retrieved in a few ways; the easiest way to see what is scheduled is to use ODI Studio.

The ODI Studio provides details as seen in the screen shot below:

Retro Reboot Screen Shot 2

Any scheduled tasks will be listed under “All Schedules.” Simply double click them to obtain details related to that task.

Retro Reboot Screen Shot 3

Another effective option is to write a custom report that displays the information. My previous blog post, Easy Value with FDMEE Reports, provides further details on FDMEE report options and their value. This approach allows the schedule information to be delivered as a user-friendly report.

Seriously … What Now?

By now, you may have noticed from the previous blog post Scheduling FDM Tasks – A Second Option by Tony Scalese that the upsShell process is quite handy. It allows other tools – a corporate scheduler, perhaps – to control FDM jobs. Now that most organizations have a corporate scheduler, the new FDMEE options below must be learned:

  • executescript.bat / .sh – Executes an FDMEE custom script
  • importmapping.bat / .sh – Executes an import of maps from a text file
  • loaddata.bat / .sh – Executes a Data Load Rule
  • loadhrdata.bat / .sh – Executes an HR Data Load Rule
  • loadmetadata.bat / .sh – Executes a Metadata Load Rule
  • runbatch.bat / .sh – Executes a defined Batch
  • runreport.bat / .sh – Executes a defined Report

*All files are stored in the EPM_ORACLE_HOME\products\FinancialDataQuality\bin\

In the example below, the command, when launched, executes a Data Load Rule for Jan-2012 thru Mar-2012:

Retro Reboot Screen Shot 4
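When wiring one of these scripts into a corporate scheduler, a thin wrapper that captures output and surfaces the exit code is worth the few extra lines. Here is a minimal sketch in Python; the loaddata argument values shown are placeholders – the exact parameter list (credentials, rule name, import/export modes, period range, sync mode) varies by version, so confirm the order in your FDMEE administrator guide:

```python
# Minimal scheduler wrapper around loaddata; all argument values below are
# placeholders -- verify the exact parameter order in your FDMEE admin guide.
import subprocess

cmd = [
    r"<EPM_ORACLE_HOME>\products\FinancialDataQuality\bin\loaddata.bat",
    "admin",                  # user
    "-f:password.txt",        # encrypted password file
    "GL_Actuals_Rule",        # data load rule name (illustrative)
    "Y", "Y",                 # import from source / export to target
    "REPLACE", "STORE_DATA",  # import / export modes (illustrative)
    "N",                      # load exchange rates
    "Jan-2012", "Mar-2012",   # period range, as in the example above
    "SYNC",                   # wait for completion so the exit code is useful
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    # A non-zero exit lets the corporate scheduler flag the task as failed.
    raise SystemExit("loaddata failed with exit code %d" % result.returncode)
```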

There still must be a better solution…right? Things to overcome:

  1. What happens if the scheduler is Windows-based and the server is Linux?
  2. How does a separate scheduling server communicate with EPM? Does it have to be installed on each EPM Server?
  3. How can we monitor and get details of a job once it is kicked off?

What Happens if You Don’t Want to Run the .BAT/.SH Files?

You’re in luck! With the introduction of new functionality to FDMEE, RESTful APIs are also now available.  With the RESTful APIs, not only can you execute a job, but you can also loop and monitor for the results.  This enhances the previous .BAT/.SH file routines and provides a cleaner and more elegant solution.

  • Running Data Rules – Execute a Data Load Rule
  • Running Batch Rules – Execute a Batch Definition
  • Import Data Mapping – Import Maps
  • Export Data Mapping – Export Maps
  • Execute Reports – Execute a Report

*URL construct: https://<SERVICE_NAME>/aif/rest/V1

The example below simply queries for a process:

Retro Reboot Screen Shot 5
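Here is a sketch of the execute-and-monitor pattern, using the URL construct noted above. The payload and response field names follow the documented Data Management REST pattern, but verify them against the REST API guide for your release:

```python
# Sketch: execute a Data Load Rule over REST, then loop and monitor.
# Field names follow the documented Data Management REST pattern; verify
# them against the REST API guide for your release.
import time
import requests

BASE_URL = "https://<SERVICE_NAME>/aif/rest/V1"  # construct from the note above
AUTH = ("service_account", "password")           # dedicated service account

def run_data_rule(rule_name, start_period, end_period):
    payload = {
        "jobType": "DATARULE",
        "jobName": rule_name,
        "startPeriod": start_period,
        "endPeriod": end_period,
        "importMode": "REPLACE",    # illustrative mode choices
        "exportMode": "STORE_DATA",
    }
    job = requests.post(BASE_URL + "/jobs", json=payload, auth=AUTH).json()

    # Poll the job until it leaves RUNNING status.
    while True:
        status = requests.get(BASE_URL + "/jobs/" + str(job["jobId"]),
                              auth=AUTH).json()
        if status.get("jobStatus") != "RUNNING":
            return status
        time.sleep(30)

result = run_data_rule("GL_Actuals_Rule", "Jan-2012", "Mar-2012")
print(result.get("jobStatus"))
```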

The Future…

As Oracle moves forward to enhance the RESTful APIs, many doors continue to open for FDMEE and tool scheduling. At Edgewater Ranzal, we fully embrace the RESTful concept and evolve our solutions to utilize this functionality.  The result is improved support and flexibility of FDMEE and the future of Oracle Cloud products.

Contact us at info@ranzal.com with questions about this product or its capabilities.

The Oracle Profitability and Cost Management Solution: An Introduction and Differentiators

What is Oracle Profitability and Cost Management?

Organizations with world-class finance operations can generally close in a minimal number of days (2-3 in an ideal organization) and have frequent, efficient budget and forecast cycles, while also running different “what if” scenario analyses along the way. These organizations often deliver in-depth profitability and cost management analysis reports at the fund, project, product, and/or customer level, completing the picture of an accurate close cycle.

Oracle offers packaged options in support of all these finance processes, but the focus of this post will be Profitability and Cost Management (PCM).

One of the most painful and time-consuming processes for any business entity is PCM analysis. The reasons why cost allocation processes are time consuming are too many to count – from model complexity to data granularity, driver metric availability, rigidity of allocation rules, delays in implementing allocation changes, and almost impossible-to-justify results. Instead of focusing on the negative aspects, let’s focus on what can be done to alleviate such pain and energize the cost accounting department by giving it access to meaningful and accurate data and empowering users through the flexibility to perform virtually unlimited “what if” analysis.
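At its core, every allocation rule performs the same arithmetic: spread a source cost across targets in proportion to a driver metric. Here is a toy illustration in plain Python – made-up numbers, not PCMCS syntax – of what a single allocation step computes:

```python
# Toy driver-based allocation: spread a source cost to target departments
# in proportion to a driver metric (headcount). Numbers are made up.
source_cost = 1_000_000.0  # e.g. total IT department cost to allocate

headcount = {"Sales": 120, "Operations": 300, "Finance": 80}  # driver data

total_driver = sum(headcount.values())
allocation = {
    dept: source_cost * count / total_driver
    for dept, count in headcount.items()
}
print(allocation)
# {'Sales': 240000.0, 'Operations': 600000.0, 'Finance': 160000.0}
```

A real model chains hundreds of these steps across many dimensions, with driver data changing by scenario and period – which is exactly where the maintenance pain comes from.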

The PCM Journey

The initial Profitability and Cost Management product, like almost all Oracle EPM offerings, was released on-premise in July 2008 and is known as Oracle Hyperion Profitability and Cost Management (HPCM). Ten years later, HPCM continues to deliver an easier way to design, maintain, and enhance allocation processes with little to no IT involvement, just as it has since it was initially launched, but now with a greater focus on flexibility and transparency. The intent for HPCM was to be a user-driven application where finance teams would be involved from the definition of the methodology all the way to the steps needed to execute day-to-day processing. Any cost or revenue allocation methodology is supported in HPCM, while graphical traceability and allocation balancing reports support any query from top-level analysis all the way down to the most granular detail available in the application.

There are 3 HPCM modules available on-premise today. Each was designed and developed for a different type of allocation methodology or complexity need:

  1. Simple allocations – Detailed Profitability (a.k.a. single-step allocations. Example: From Accounts and Departments, allocate data to same Accounts, new target Departments, and to granular Products/SKU based on driver metric data. This module allows for a very high degree of granularity with dimensions >100k members, but it does not cater to complex driver calculations or to allocations requiring more than 1 stage).
  2. Average to high complexity allocations – Standard Profitability (a.k.a. multi-step allocations of up to 9 iterations/stages, allowing for reciprocal allocations. Example: Allocations from accounts and departments to channels, funds, and other departments. Allocation of results from previous steps are redistributed onto Products, Customers etc. Driver metric complexity is achievable with this module; custom generated drivers are available as well, but there are limitations regarding driver data granularity, granularity of allocated data, and overall hierarchy sizing).
  3. High complexity allocations – Management Ledger (unlimited number of steps, high number of complex drivers, custom driver calculations, custom allocations, more granularity, and increased flexibility in terms of defining and expanding allocation methodology). This is the last module added to the HPCM family and the only one available as SaaS Cloud Offering.

The Cloud is Your Oyster

In 2016, Oracle introduced the Cloud version of HPCM: Profitability and Cost Management Cloud Service (PCMCS).  PCMCS is a Software as a Service (SaaS) offering, and as with many of Oracle’s Cloud offerings, PCMCS includes key improvements that are not available in the on-premise version, and enhancements are made at a much faster pace.

There is currently no indication that the two HPCM modules – Detailed and Standard Profitability – will make their way to the Cloud, since increased allocation complexity as well as increased hierarchy sizing supported by the Management Ledger module caters to most, if not all, potential requirements.

The Management Ledger module included with the PCMCS SaaS subscription has a core strength in its ease of use and flexibility to change, enabling finance users to define and update allocation rules and methodologies via a point-and-click interface. While it is advisable to perform the initial setup with support from an experienced service provider, the maintenance and expansion of PCMCS (Management Ledger) models can, in most cases, be achieved with solely functional resources. “What-if” scenario creation and analysis has never been easier. Not only can users copy data and allocation methodologies between scenarios, but they can also update the data sets and allocation steps independently from a standard scenario, generating as many simulation models as they need and gaining increased insight into decision making.

Standard Profitability models perform allocations in Block Storage Databases (BSO). While BSO applications are great for complex calculations and reciprocal allocation methodologies, they have the disadvantage of being limited in terms of structure or hierarchy sizing. This hierarchy restriction is not as pressing in Aggregate Storage Option (ASO) type applications, which is the technology used by Management Ledger. The design considerations for a Standard Profitability model are also significantly more rigid when compared with the Management Ledger module, which has no limitations regarding allocation stages, allocation sequencing, or a maximum number of dimensions per each allocation step.

Detailed Profitability models heavily leverage a database repository while any connected Essbase applications are used solely for reporting purposes. Initial setup and future changes, outside of the realm of simply adding new hierarchy members, will require specialized database management skills, and the usage of a single step allocation model is not as pervasive. Complex allocation methodologies may require the usage of Detailed Profitability models in conjunction with Management Ledger, but these situations represent the exception rather than the rule.

Why Should You Choose Oracle Profitability and Cost Management?

One of the key strengths for HPCM, available since it was released, and now included in PCMCS, is transparency – the ability to identify and explain any value resulting from the allocation process, with minimal effort. Each allocation rule or allocation step is uniquely identified, enabling users to easily navigate via the embedded/out-of-the-box balancing report to the desired member intersection, opened through a point-and-click action in Excel (using Smart View) for further analysis and investigation.

The out-of-the-box program documentation reports identify the setup of each rule and can be leveraged for quick search by account, department, segment code, or any other dimension available in the application. The execution statistics reports delivered as part of the PCMCS offering enable users to quickly understand which allocation process is taking longer than expected and identify opportunities for overall process improvement, or to simply monitor performance over time.

These two out-of-the-box reports – execution statistics and program documentation – are the most heavily used reports during application development, troubleshooting, and particularly when new methodologies are developed. Users can quickly search through these documents, leverage them to keep track of methodology changes, and use them as documentation for training new team members.

Performing mass updates to existing allocation rules has never been faster. PCMCS contains a menu that allows end users to find and replace specific member name references in their allocations for each individual data slice, allocation step, or an entire scenario. A quick turnaround of such maintenance tasks results in an increased number of iterations through different data sets, giving the cost accounting team more time to perform in-depth analysis rather than waiting for system updates.

PCMCS-embedded analytics and dashboarding functionality is also a significant differentiator, enabling end users to create and share dashboards with the rest of the application users through the common web interface and without the need for IT support. Reports created in PCMCS are available immediately and without time consuming initial setup or migrations between environments followed by further security setup tasks.

A comparison of On-Prem vs Cloud will be available in a future post, so please subscribe below to receive notifications for PCMCS-related blog updates.

Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest

In the previous post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, I showed you how to rebuild ARCS down to the Profile Segments to speed things up. This time we’re slowing everything down…

So grab a glass of wine and throw on your Marvin Gaye vinyl because we’re getting it on with automation. Oh yeahhh…

The sexiest topic of account reconciliations (didn’t think you’d ever see that sentence, did ya?) consistently revolves around automation. Yes, ARCS provides a central repository. Yes, ARCS is auditable. YES, ARCS shows a traceable workflow throughout the reconciliation cycle. All of these features are highly useful and absolutely a prerequisite to an enterprise worthy solution, but if you want to really grab people’s attention in a design session, start talking about the things they won’t have to do. ARCS provides both out-of-the-box functionality as well as customizable tools that help preparers  focus on high-importance reconciliations rather than spending time on low value-add or monotonous items.

Automation occurs in two areas: outside of ARCS (e.g. data feeds) and within ARCS (e.g. auto reconciliations and rules). Setting up the former enhances the latter. Either Cloud Data Management (CDM) or Financial Data Quality Management Enterprise Edition (FDMEE) can be used to load data to ARCS, albeit in different manners, but how this is accomplished is beyond the scope of this post. This data can be sourced from a variety of general ledgers and sub ledgers/subsystems including Financials Cloud, E-Business Suite (EBS), PeopleSoft, JD Edwards, and even *gasp* Excel (…if we have to…). By automating these data feeds directly from the source, management can be confident in the validity of the data (e.g. accuracy, no manual intervention or “massaging,” live, etc.) and, with scheduling, administrators have one or more fewer task(s) to worry about. The latest application data is up-to-date by the time the office doors open. Additionally, data refreshes can occur multiple times throughout the reconciliation cycle without concern for loss of work. ARCS will only update reconciliations with differences from the last data load and will change the workflow status if data has been modified and needs to be looked at again.

Within ARCS, the “bread and butter” for gaining efficiencies in the reconciliation cycle is through utilizing the out-of-the-box auto reconciliation method property on the Profiles. This will set the conditions under which the reconciliation will automatically change the workflow status to “closed,” allowing preparers to focus on the remaining “open” reconciliations that require attention. Which conditions are available for selection depends on the Format type. Furthermore, this field can be easily updated after-the-fact. Using the Actions pane, this property can be updated to a mass of Profiles based on custom filtering.

Automation in ARCS 1

[Screenshot 10a: The “Set Attribute…” functionality from the Actions pane is a powerful tool that can be used to make mass updates from the user interface.]

 

Automation in ARCS 2

[Screenshot 10b: In this example, the “Set Attribute…” functionality can be used to make updates to the Auto Reconciliation Method property for all Profiles, selected Profiles, or Profiles that fit customized criteria.]

The “Set Attribute” functionality is a powerful tool for making changes across multiple Profiles within the ARCS user interface. In many instances, this is a preferable alternative to extracting the Profiles to a text file to modify offline. Screenshots 10a – 10b show how it can be used to update the Auto Reconciliation Method attribute specifically, but there are a plethora of other attributes that can be updated in this manner.

The last puzzle piece to the trinity of automation is customized rules. Similar to custom attributes, rules can be added in a variety of places within your reconciliations to further enhance and streamline the process for both end-users and application administrators. Attributes, formats, profiles, and even specific transaction types (ex. on Subsystem Adjustments, but not on Source System Adjustments) can contain separate sets of rules.

Automation in ARCS 3

[Screenshot 11a: Rules can be added at a Format level.]

 

Automation in ARCS 4

[Screenshot 11b: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a Format level.]

 

Automation in ARCS 5

[Screenshot 11c: Rules can be added at a specific transaction type. In this screenshot, any rules created here would only affect Subsystem Adjustments and would not affect System Adjustments.]

 

Automation in ARCS 6

[Screenshot 11d: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a specific transaction type level.]

Thus, rules can be used for anything from sweeping, application-wide changes down to differences at a transaction-by-transaction basis, as seen in Screenshots 11a – 11d. If ARCS is a suit, then rules are custom tailoring; they are made to fit your company’s specific needs.

The most common rule I see relates to Auto-Submission (as opposed to Auto Reconciliation). The out-of-the-box auto reconciliation methods previously discussed are set on Profiles and can be used to “close” a reconciliation for the period if the criteria are met. However, sometimes a reconciliation still needs reviewing, such as when it is considered higher risk or only during certain periods in the fiscal year. Customized rules can dynamically determine which reconciliations can skip the preparer and be assigned directly to the reviewer, and which are clear to be automatically “closed” for the month (e.g. without approval by a preparer or reviewer). Tailoring rules in this manner still helps the preparers reduce their workload while giving management the confidence that the higher priority reconciliations are being reviewed – the best of both worlds!

No Mistakes with Modularity from “Day 1” to “Day 100”

So, there you have it: the four main manifestations of ARCS’ modularity. While nothing will replace proper planning, ARCS does not permanently punish any application decisions you (or your partner) have made in the past. The tool is able to grow with your company and accommodate your needs as they arise. There’s no reason to pick “today” or “tomorrow” – have them both.

Am I right? Am I off my rocker? You tell me! Let me know in the comments below whether ARCS (or ARM! We haven’t forgotten you…) has been able to accommodate the changes that come with your company’s growth.

If you like what you’ve read, please consider sharing this article through social media. And let me know in the comments what topic(s) you would like to see covered in future posts.

*Screenshots taken from the patch 1806 release.

Catching up with EDMCS

Last time, in the Wonderful World of Enterprise Data Management Cloud Service (EDMCS), we discussed initial impressions of this exciting new Oracle Cloud product and highlighted some early functionality enhancements.

But do you realize how much functionality has been added to EDMCS since its initial release in January 2018? The short list is impressive:

  • Enhanced node alignment/location in side-by-side viewpoint compares
  • Exposed REST API operations including dimension imports/exports and request creation/submission
  • Enhanced searching across members (name and descriptions) and data objects
  • Lifecycle management of data objects
  • Incremental imports
  • Viewpoint download from selected node

Furthermore, in the areas of REST API and metadata integrations, Tony Scalese, Vice President at Edgewater Ranzal and Oracle ACE, has written several blog posts from the perspective of hands-on, real-world experience working with one of the three customers accepted into the Oracle EDMCS Early Adopter program.

In this post, I’d like to highlight another feature that was recently added to EDMCS: enhanced request load files.

Enhanced Request Load Files

In its initial release, EDMCS provided a mechanism to perform bulk updates to EDMCS hierarchies – the Excel request load file. While the feature immediately had some advantages over its distant cousin in Data Relationship Management (DRM), action scripts, there were limitations. Primarily, EDMCS would only recognize the first tab or worksheet in an Excel file.

Well, that has been fixed! Request load files can now contain multiple worksheets, and EDMCS will recognize all of them (provided the worksheet names match your viewpoint names, of course). Additionally, EDMCS will automatically select all valid worksheets when loading a request file. This makes it very easy to download viewpoints to Excel and build a single request file containing updates for multiple viewpoints to bulk upload at one time.
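A multi-viewpoint request file can even be scripted. Here is a minimal sketch using openpyxl – worksheet names must match your viewpoint names, and the column headers below are purely illustrative, so mirror the layout of a viewpoint download from your own application:

```python
# Sketch: build a multi-worksheet request file. Worksheet names must match
# viewpoint names; the headers and rows below are illustrative only.
from openpyxl import Workbook

wb = Workbook()
wb.remove(wb.active)  # drop the default sheet

# One worksheet per viewpoint to update; names must match the viewpoints.
changes = {
    "Total Entity": [("Add", "E1234", "E1000")],      # (action, node, parent)
    "Alt Entity":   [("Insert", "E1234", "ALT_E100")],
}

for viewpoint, rows in changes.items():
    ws = wb.create_sheet(title=viewpoint)
    ws.append(["Action", "Node", "Parent"])  # mirror a viewpoint download
    for row in rows:
        ws.append(row)

wb.save("request_load.xlsx")
```

Upload the saved file in a single request, and EDMCS will pick up every matching worksheet.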

This also means you need to be careful! Since EDMCS auto-selects any matching worksheet name, if you are not paying attention, you could accidentally load outdated requests from a stale worksheet. But you can still delete any unwanted request items prior to submitting the request, if you catch them first.

Catching Up on EDMCS 1

While you could always load multiple request files in a single request since the initial release of EDMCS, this feature is a nice usability and productivity enhancement. It works great for situations such as adding a node to a primary hierarchy/viewpoint and inserting it into an alternate hierarchy/viewpoint, all from the same request.

Conclusion (and a teaser)

While EDMCS is the new kid on the block in the Oracle EPM cloud space, it’s exciting to see how it’s quickly closing the gap with new functionality being added regularly! REST API operations, enhanced request files, and the other enhancements mentioned above show how far EDMCS has come in just 6 months.

But wait, there’s more!

The 18.07 release of EDMCS looks to be a HUGE release chock full of new features, including one I am especially excited for: subscriptions!

Look for more blog posts coming soon to discuss the subscription functionality and utilizing EDMCS for a Profitability and Cost Management Cloud Service (PCMCS) implementation.

Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up

We talked about adding new scope in New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons and modifying your application inside (i.e. changing reconciliation methods) and outside of ARCS (i.e. new data feeds) in Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning.

Today, we’re going to tear it down and rebuild from the ground up.

Let me start with this: redesign IS possible. ARCS does not permanently punish any design decisions made on “Day 1” – but not all changes are equal in complexity, nor can all changes be made without consequence. A successful implementation ensures that the application design is sound for today and that a well-laid roadmap is in place for tomorrow. Many “one-off” changes can be made directly to a deployed reconciliation (i.e. only within a single period) or permanently going forward (i.e. to the profile). The “catch” is the key properties set on a profile or reconciliation – the Account ID. The Account ID represents the granularity at which the reconciliation is being performed, such as [Business Unit]-[Account] or [Entity]-[Natural Account]-[Subaccount].

ARCS From the Ground Up 1

[Screenshot 6: The Account ID is a unique identifier for the reconciliation.]

The Account ID is fundamental to the reconciliation, as indicated by the asterisks (i.e. “*”) in Screenshot 6. Changing it in any way will break the Prior Reconciliation “link” with previously completed instances of the reconciliation.

But let’s push that idea one step further – what if I want to change the key properties themselves – that is to say – change the actual Profile Segments? The Profile Segments determine the name (ex. from “Company” to “Business Unit”), number (ex. from 2 to 3 segments), and even type of values (ex. setting up the Business Unit segment to always be an integer) that are viable for use when setting up an Account ID. Therefore, if this was set up incorrectly or if the granularity at which reconciliations are performed has changed since the initial implementation, then redesigning the Profile Segments may become a requirement.

ARCS even makes this type of redesign possible, but at a cost. An administrator needs to first delete all Profiles; only then will the application allow a modification to the Profiles Segments in the Configuration card.

ARCS From the Ground Up 2

[Screenshot 7a: Unable to modify the Name of Profile Segment 1, which is currently named “Company.” The field appears grayed out because Profiles are currently using these Profile Segments.]

ARCS From the Ground Up 3

[Screenshot 7b: After removing the Profiles, Profile Segment 1 can be modified. In this example, Profile Segment 1 is renamed to “Business Unit.”]

While Screenshots 7a & 7b show that this is possible, there are repercussions. Similar to changing the Account IDs, this change will break any links to previously completed reconciliations. Additionally, any existing mappings in outside integration solutions such as Cloud Data Management or FDMEE, as well as any references to Profile Segments in customized attributes or rules, may be affected. This type of redesign should only be done after carefully considering all options.

Other common questions relate to redesigning an attribute, typically the system attributes such as Process or Account Type. This is a straightforward change as it relates to updating the property on the Profiles; however, it is important to note that any reference to any existing artifact (i.e. an artifact can be a format, a custom attribute, an attribute member, etc.) within ARCS will prevent the deletion of said artifact. As an example, if the Account Type structure requires redesigning, but there is a reference to any of the members (such as in a historical period), then these members cannot be deleted without first removing the references. This can be tedious when there are multiple years of reconciliations to consider.

ARCS From the Ground Up 4

[Screenshot 8: When trying to remove the Custom Attribute named “PLACE CUSTOM ATTRIBUTE HERE,” ARCS prevents this deletion and cites which artifact is using the Custom Attribute. In this example, the Bank Reconciliation format is using this Custom Attribute – thus, it cannot be deleted.]

Unlike many system messages, ARCS actually provides useful troubleshooting information as seen in Screenshot 8. However, it still may not be worth it to you to retroactively make this change. A recommendation is to “archive” artifacts that will not be used going forward by renaming them with “Old” or “Hist,” then create a separate artifact to use going forward.

ARCS From the Ground Up 5

[Screenshot 9: A work-around to deleting previously used artifacts is to rename them and then use a new artifact going forward. In this example, the suffix “- Old” is added to this Custom Attribute to indicate that it is no longer in use.]

Previous uses of the artifact such as in completed reconciliations will update to reflect the name change. In the example provided in Screenshot 9, this custom attribute for historical periods will be updated with the “– Old” suffix to indicate to ARCS administrators that it is no longer in use but was used historically.

ARCS is a flexible application solution that allows for nearly any change to be made, though the effort and complexity will vary. While sound design can prevent many issues, it should be a comfort to know that there is “wiggle room” if the requirements change in the future.

Join me in the last post of the ARCS modularity series – a real crowd pleaser: Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest.

*Screenshots taken from the patch 1806 release.

The Data Governance Triple Crown

A few weeks ago, those who follow horse racing witnessed a historic event. The racehorse Justify captured the Triple Crown by winning the Belmont Stakes following earlier victories in the Kentucky Derby and Preakness Stakes. Justify became only the 13th horse in history to capture the Triple Crown, and the second to do so in the last 4 years (American Pharoah captured the honor in 2015). Interesting side note: both Justify and American Pharoah were trained by Bob Baffert. Why does that matter? Because he’s a fellow Arizonan and University of Arizona alumnus, that’s why! Bear Down!

While it may be a stretch, the concept of a “triple crown” of sorts has been on my mind as it relates to the Oracle Enterprise Performance Management (EPM) projects I’ve been working on recently involving Oracle Data Relationship Management (DRM) and Data Relationship Governance (DRG). Many people are familiar with the DRG module of the DRM product, but when the tool is coupled with two other critical components, you are well on your way to capturing the Data Governance Triple Crown.

1.    Tool – Data Relationship Governance

As you may know, DRG is a module of the DRM product and provides a governance framework for maintaining your DRM master data. DRG includes functionality such as workflows, approvals, email notifications, and separation of duties (to prevent someone from approving their own request). Workflows are often structured around dimension maintenance and may include requests like “Add Account,” “Update Account,” or “Move Account.” The workflow then guides the requester to select tasks and complete fields on a data entry form. Once submitted, the request enters optional enrichment stages where additional detail and context are added before the request is finally committed, updating the relevant DRM structures.

Here are just a few of the key features in DRG:

  • Requests can be entered interactively or via bulk upload files
  • Documents (such as supporting request documentation, emails, or policies) can be attached to requests
  • Comments/supporting narrative can be included
  • Requests can be pushed back to a prior stage, approved, or rejected
  • Requests can generate email notifications to approvers and/or participants in a workflow request
  • Requests can include validations, calculated fields, and conditional criteria to enter or bypass specific stages in the workflow

While I could go on and on about DRG, I’ve noticed a DRG implementation is most effective when paired with two other components.

2.    Process – Data Governance Program

In my experience, DRG implementations are most successful when bundled into a broader data governance program. Data governance programs bring together the Tool (DRG), the People (data stewards, data specialists, data governance council), and the Process (process flows, metrics, and standards).

Key facets to an effective data governance program include:

  • Executive sponsorship
  • Data Governance Council
  • Clear Roles and Responsibilities
  • Standards (metrics, definitions, process flows)
  • Authority and Accountability

Data governance programs are not easy! The change management aspect to implementing effective data governance cannot be underestimated. There will be natural resistance, pushback, and challenges to any type of change, and data governance initiatives are no exception. Data governance implementations require patience and perseverance, and at times, even a bit of the “carrot and stick” approach. As a result, we have seen the following steps as crucial to getting your data governance program off the ground:

    1. Define Charter Team and Responsibilities
    2. Define the Mission Statement
    3. Define the High-Level Scope
    4. Define the Terminology and Standards
    5. Define the Current State Overview
    6. Define the Future State Vision
    7. Define the Draft Phased Approach
    8. Prepare the Project Charter
    9. Present the Project Charter for Executive Approval
    10. Ensure Executive Support

While there is much more content to dive into on a data governance program that is beyond the scope of this blog, I hope you appreciate the importance of People and Process in a data governance initiative and do not focus only on the Tool.

3.    Integration – DRM to External Systems

The third and final component to effective data governance, after the Tool and Process, is integration to external systems. This allows DRM to truly become the master data hub in your company’s eco-system and systematically push master data (which could include trees/hierarchies, base members, mappings, or all of the above) to both upstream and downstream systems.

By leveraging DRM’s robust integration capabilities and adding in some custom SQL or ETL integration as needed, DRM can produce master data in various forms (flat files, SQL tables, web services, external commits) for consumption by external applications. And these integrations can be run on-demand or scheduled.
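As a flavor of what such an integration can look like, here is a minimal sketch – not actual DRM tooling, and with illustrative table and column names – that reads a scheduled DRM hierarchy export from a staging table and publishes a flat file for a downstream system:

```python
# Minimal sketch of the "DRM as hub" pattern: read an exported hierarchy
# from a staging table and emit a flat file for a downstream application.
# Table/column names are illustrative; swap sqlite3 for your RDBMS driver
# (cx_Oracle, pyodbc, etc.).
import csv
import sqlite3

conn = sqlite3.connect("drm_staging.db")
rows = conn.execute(
    """
    SELECT parent_node, child_node, node_description
    FROM drm_account_export        -- populated by a scheduled DRM export
    ORDER BY parent_node, child_node
    """
)

with open("gl_accounts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Parent", "Child", "Description"])  # downstream layout
    writer.writerows(rows)

conn.close()
```

The same pattern extends naturally to SQL tables, web services, or whatever consumption format the subscribing system expects.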

Summary

So there you have it. Three critical components to effective data governance: a good tool (DRG), a robust process (data governance program), and automated integration (with DRM as the hub).

Are any of these components effective on their own? Certainly. Each adds value in its own right and can be implemented standalone. But when all three components are implemented in conjunction, the whole is definitely greater than the sum of the parts. Each component presents its own set of challenges and requires close collaboration with both technical and business personnel at a customer, and executive sponsorship and buy-in is absolutely vital to managing and overcoming the inevitable change management challenges. It ain’t easy, but like the saying goes, nothing worthwhile ever is, right?

I’d love to hear your thoughts on this topic along with any best practices, lessons learned, or battle scars earned along the way. Feel free to connect with me on LinkedIn or Twitter.

Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning

In the last post, New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons, we discussed how ARCS sets you up to easily add on additional scope to your existing application and scale your solution. However, not all changes are brand new. Clients are often concerned with being pigeonholed based on their “Day 1” decisions. A common question I am asked during a design session is “Can I manually enter this reconciliation today, but create new feeds to automatically load the data tomorrow?” The answer is a resounding YES, and it provides clear added value to the next phase of any ARCS (or ARM) project. It can be a viable project strategy to set up reconciliations using an Account Analysis format on “Day 1” and change to a Balance Comparison format when automated data loads are built on “Day 100.”

Modifications in ARCS 1

[Screenshot 5a: Reconciliation 100-1000 is setup with a Balance Comparison format in Sep 2017.*]

Modifications in ARCS 2

[Screenshot 5b: The previous period’s reconciliation can be viewed in the Prior Reconciliations tab.*]

Modifications in ARCS 3

[Screenshot 5c: Reconciliation 100-1000 was previously setup with an Account Analysis format in Aug 2017. The format of a profile can be changed while maintaining the Prior Reconciliations link.*]

Depending on how this change is made, it is even possible to keep the modified reconciliation “linked” to the previously completed reconciliations even though the Format has changed, such as in Screenshots 5a – 5c. The ease with which ARCS allows you to change Reconciliation Methods (via Formats) gives you the flexibility to not bite off more than you can chew in the beginning of a project.

Changing reconciliation methods is often related to new integrations. Moving from manually “fat fingering” data to directly loading general ledger and sub-ledger balances through Financial Data Quality Management Enterprise Edition (FDMEE) or Data Management, combined with the built-in auto-reconciliation tools, can bring a “quality of life” change for end users as well as added confidence in the data’s integrity. It is always a best practice to pull data from the source. Creating the integration from the general ledger is typically part of the initial scope. The usual candidates for building additional feeds after the first project phase are the sub-ledgers related to fixed assets, accounts receivable, and accounts payable. However, the most “bang for your buck” as it relates to which integrations to build depends on your line of business and specific company requirements.*

*Note that adding multiple general ledger feeds introduces additional complexities beyond the scope of this article. Please consult with your Oracle partner before adding to your application.

In some cases, the greatest efficiencies to your existing reconciliation process are gained in utilizing the power of ARCS Transaction Matching. This module is better suited to handle massive data volumes at a transactional level. As an example, instead of performing just a reconciliation of the balance sheet’s intercompany balances in ARCS Reconciliation Compliance at the end of the month, an enhancement to this process could be to perform the daily matching process in ARCS Transaction Matching to clear up issues in real time as they arise. This simplifies the month end’s reconciliation. ARCS Transaction Matching is a powerful supplement to an existing reconciliation system and continues to receive special attention from Oracle as seen with the major release of new functionality in Patch 1805.

Just as there are many ways your company can change, ARCS can be modified to match your needs even in a live application. However, sometimes changes are more fundamental than a bit of tweaking such as in an acquisition or the introduction of a new, company-wide general ledger. Or, perhaps, you are just not satisfied with the solution design. Join me in the next post as we discuss the dangerous topic of redesign in ARCS – what is possible…and what it costs.

In the next post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, learn how redesign IS possible in ARCS.

*Screenshots taken from the patch 1806 release.