Catching up with EDMCS

Last time, in the Wonderful World of Enterprise Data Management Cloud Service (EDMCS), we discussed initial impressions of this exciting new Oracle Cloud product and highlighted some early functionality enhancements.

But do you realize how much functionality has been added to EDMCS since its initial release in January 2018? The short list is impressive:

  • Enhanced node alignment/location in side-by-side viewpoint compares
  • Exposed REST API operations including dimension imports/exports and request creation/submission
  • Enhanced searching across members (name and descriptions) and data objects
  • Lifecycle management of data objects
  • Incremental imports
  • Viewpoint download from selected node

Furthermore, in the areas of REST API and metadata integrations, Tony Scalese, Vice President at Edgewater Ranzal and Oracle ACE, has written several blog posts from the perspective of hands-on, real-world experience gained working with one of three customers accepted into the Oracle EDMCS Early Adopter program.

In this post, I’d like to highlight another feature that was recently added to EDMCS: enhanced request load files.

Enhanced Request Load Files

In its initial release, EDMCS provided a mechanism to perform bulk updates to EDMCS hierarchies – the Excel request load file. While the feature immediately had some advantages over its distant cousin in Data Relationship Management (DRM), action scripts, there were limitations. Primarily, EDMCS would only recognize the first tab or worksheet in an Excel file.

Well, that has been fixed! Request load files can now contain multiple worksheets, and EDMCS will recognize all of them (provided the worksheet names match your viewpoint names, of course). Additionally, when loading a request file, EDMCS automatically selects all valid worksheets. This makes it very easy to download viewpoints to Excel and build a single request file containing updates for multiple viewpoints to upload in bulk.

This also means you need to be careful! Since EDMCS auto-selects any matching worksheet name, you could accidentally load outdated requests from a worksheet if you are not paying attention. But you can still delete any unwanted request items prior to submitting the request, if you catch them first.


While EDMCS has always allowed multiple request files to be loaded in a single request, this feature is a nice usability and productivity enhancement. It works great for situations such as adding a node to a primary hierarchy/viewpoint and inserting it into an alternate hierarchy/viewpoint, all from the same request.

Conclusion (and a teaser)

While EDMCS is the new kid on the block in the Oracle EPM cloud space, it’s exciting to see how quickly it is closing the gap with new functionality being added regularly! REST API operations, enhanced request files, and the other enhancements mentioned above show how far EDMCS has come in just six months.

But wait, there’s more!

The 18.07 release of EDMCS looks to be a HUGE release chock full of new features, including one I am especially excited for: subscriptions!

Look for more blog posts coming soon to discuss the subscription functionality and utilizing EDMCS for a Profitability and Cost Management Cloud Service (PCMCS) implementation.

Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up

We talked about adding new scope in New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons, and about modifying your application both inside ARCS (e.g. changing reconciliation methods) and outside of it (e.g. new data feeds) in Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning.

Today, we’re going to tear it down and rebuild from the ground up.

Let me start with this: redesign IS possible. ARCS does not permanently punish any design decisions made on “Day 1” – but not all changes are equal in complexity, nor can all changes be made without consequence. A successful implementation ensures that the application design is sound for today and that a well-laid roadmap is in place for tomorrow. Many “one-off” changes can be made directly to a deployed reconciliation (i.e. only within a single period) or permanently going forward (i.e. to the profile). The “catch” lies in the key property set on a profile or reconciliation – the Account ID. The Account ID represents the granularity at which the reconciliation is being performed, such as [Business Unit]-[Account] or [Entity]-[Natural Account]-[Subaccount].

[Screenshot 6: The Account ID is a unique identifier for the reconciliation.]

The Account ID is fundamental to the reconciliation, as indicated by the asterisks in Screenshot 6. Changing it in any way will break the Prior Reconciliations “link” with previously completed instances of the reconciliation.

But let’s push that idea one step further – what if I want to change the key properties themselves – that is to say – change the actual Profile Segments? The Profile Segments determine the name (ex. from “Company” to “Business Unit”), number (ex. from 2 to 3 segments), and even type of values (ex. setting up the Business Unit segment to always be an integer) that are viable for use when setting up an Account ID. Therefore, if this was set up incorrectly or if the granularity at which reconciliations are performed has changed since the initial implementation, then redesigning the Profile Segments may become a requirement.

ARCS even makes this type of redesign possible, but at a cost. An administrator needs to first delete all Profiles; only then will the application allow a modification to the Profile Segments in the Configuration card.

[Screenshot 7a: Unable to modify the Name of Profile Segment 1, which is currently named “Company” – the field appears grayed out. This is because Profiles are currently using these Profile Segments.]

[Screenshot 7b: After removing the Profiles, Profile Segment 1 can now be modified. In the example, Profile Segment 1 is renamed to “Business Unit.”]

While Screenshots 7a & 7b show that this is possible, there are repercussions. Similar to changing the Account IDs, this change will break any links to previously completed reconciliations. Additionally, any existing mappings within outside Integration solutions such as Cloud Data Manager or FDMEE, or references to Profile Segments in customized attributes or rules may be affected. This type of redesign should only be done after carefully considering all options.

Other common questions relate to redesigning an attribute, typically one of the system attributes such as Process or Account Type. This is a straightforward change as it relates to updating the property on the Profiles; however, it is important to note that any reference to an existing artifact (a format, a custom attribute, an attribute member, etc.) within ARCS will prevent the deletion of that artifact. As an example, if the Account Type structure requires redesigning but there is a reference to any of its members (such as in a historical period), then those members cannot be deleted without first removing the references. This can be tedious when there are multiple years of reconciliations to consider.

[Screenshot 8: When trying to remove the Custom Attribute named “PLACE CUSTOM ATTRIBUTE HERE,” ARCS prevents the deletion and cites which artifact is using the Custom Attribute. In this example, the Bank Reconciliation format is using this Custom Attribute – thus, it cannot be deleted.]

Unlike many system messages, ARCS actually provides useful troubleshooting information, as seen in Screenshot 8. However, it still may not be worth retroactively making this change. A recommendation is to “archive” artifacts that will not be used going forward by renaming them with “Old” or “Hist,” then creating a separate artifact to use going forward.

[Screenshot 9: A work-around to deleting previously used artifacts is to rename them and then use a new artifact going forward. In this example, the suffix “- Old” is added to this Custom Attribute to indicate that it is no longer in use.]

Previous uses of the artifact, such as in completed reconciliations, will update to reflect the name change. In the example provided in Screenshot 9, this custom attribute will be updated with the “- Old” suffix in historical periods to indicate to ARCS administrators that it is no longer in use but was used historically.

ARCS is a flexible application solution that allows for nearly any change to be made, though the effort and complexity will vary. While sound design can prevent many issues, it should be a comfort to know that there is “wiggle room” if the requirements change in the future.

Join me in the last post as we wrap up this ARCS modularity series with a crowd pleaser: on to automation!

*Screenshots taken from the patch 1806 release.

The Data Governance Triple Crown

A few weeks ago, those who follow horse racing witnessed a historic event. The racehorse Justify captured the Triple Crown by winning the Belmont Stakes following earlier victories in the Kentucky Derby and Preakness Stakes. Justify became only the 13th horse in history to capture the Triple Crown, and the second to do so in the last four years (American Pharoah captured the honor in 2015). Interesting side note: both Justify and American Pharoah were trained by Bob Baffert. Why does that matter? Because he’s a fellow Arizona native and University of Arizona alumnus, that’s why! Bear Down!

While it may be a stretch, the concept of a “triple crown” of sorts has been on my mind as it relates to recent Oracle Enterprise Performance Management (EPM) projects I’ve been working on involving Oracle Data Relationship Management (DRM) and Data Relationship Governance (DRG). Many people are familiar with the DRG module of the DRM product, but when the tool is coupled with two other critical components, you are well on your way to capturing the Data Governance Triple Crown.

1.    Tool – Data Relationship Governance

As you may know, DRG is a module of the DRM product and provides a governance framework for maintaining your DRM master data. DRG includes functionality such as workflows, approvals, email notifications, and separation of duties (to prevent someone from approving their own request). Workflows are often structured around dimension maintenance and may include requests like “Add Account,” “Update Account,” or “Move Account.” The workflow then guides the requester to select tasks and complete fields on a data entry form. Once submitted, the request enters optional enrichment stages where additional detail and context are added before the request is finally committed and the relevant DRM structures are updated.

Here are just a few of the key features in DRG:

  • Requests can be entered interactively or via bulk upload files
  • Documents (such as supporting request documentation, emails, or policies) can be attached to requests
  • Comments/supporting narrative can be included
  • Requests can be pushed back to a prior stage, approved, or rejected
  • Requests can generate email notifications to approvers and/or participants in a workflow
  • Requests can include validations, calculated fields, and conditional criteria to enter or bypass specific stages in the workflow

While I could go on and on about DRG, I’ve noticed a DRG implementation is most effective when paired with two other components.

2.    Process – Data Governance Program

In my experience, DRG implementations are most successful when bundled into a broader data governance program. Data governance programs bring together the Tool (DRG), the People (data stewards, data specialists, data governance council), and the Process (process flows, metrics, and standards).

Key facets to an effective data governance program include:

  • Executive sponsorship
  • Data Governance Council
  • Clear Roles and Responsibilities
  • Standards (metrics, definitions, process flows)
  • Authority and Accountability

Data governance programs are not easy! The change management aspect of implementing effective data governance should not be underestimated. There will be natural resistance, pushback, and challenges to any type of change, and data governance initiatives are no exception. Data governance implementations require patience and perseverance, and at times, even a bit of the “carrot and stick” approach. As a result, we have seen the following steps as crucial to getting your data governance program off the ground:

    1. Define Charter Team and Responsibilities
    2. Define the Mission Statement
    3. Define the High-Level Scope
    4. Define the Terminology and Standards
    5. Define the Current State Overview
    6. Define the Future State Vision
    7. Define the Draft Phased Approach
    8. Prepare the Project Charter
    9. Present the Project Charter for Executive Approval
    10. Ensure Executive Support

While a full treatment of data governance programs is beyond the scope of this blog, I hope you appreciate the importance of People and Process in a data governance initiative and do not focus only on the Tool.

3.    Integration – DRM to External Systems

The third and final component to effective data governance, after the Tool and Process, is integration to external systems. This allows DRM to truly become the master data hub in your company’s ecosystem and to systematically push master data (which could include trees/hierarchies, base members, mappings, or all of the above) to both upstream and downstream systems.

By leveraging DRM’s robust integration capabilities and adding in some custom SQL or ETL integration as needed, DRM can produce master data in various forms (flat files, SQL tables, web services, external commits) for consumption by external applications. And these integrations can be run on-demand or scheduled.

Summary

So there you have it. Three critical components to effective data governance: a good tool (DRG), a robust process (data governance program), and automated integration (with DRM as the hub).

Are any of these components effective in their own right? Certainly. Each adds value and can be implemented standalone. But when all three components are implemented in conjunction, the whole is definitely greater than the sum of the parts. Each component presents its own set of challenges and requires close collaboration with both technical and business personnel at a customer. And executive sponsorship and buy-in are absolutely vital to managing and overcoming the inevitable change management challenges. It ain’t easy, but like the saying goes, nothing worthwhile ever is, right?

I’d love to hear your thoughts on this topic along with any best practices, lessons learned, or battle scars earned along the way. Feel free to connect with me on LinkedIn or Twitter.

Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning

In the last post, New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons, we discussed how ARCS sets you up to easily add on additional scope to your existing application and scale your solution. However, not all changes are brand new. Clients are often concerned with being pigeonholed based on their “Day 1” decisions. A common question I am asked during a design session is “Can I manually enter this reconciliation today, but create new feeds to automatically load the data tomorrow?” The answer is a resounding YES, and it provides clear added value to the next phase of any ARCS (or ARM) project. It can be a viable project strategy to set up reconciliations using an Account Analysis format on “Day 1” and change to a Balance Comparison format when automated data loads are built on “Day 100.”

[Screenshot 5a: Reconciliation 100-1000 is set up with a Balance Comparison format in Sep 2017.*]

[Screenshot 5b: The previous period’s reconciliation can be viewed in the Prior Reconciliations tab.*]

[Screenshot 5c: Reconciliation 100-1000 was previously set up with an Account Analysis format in Aug 2017. The format of a profile can be changed while maintaining the Prior Reconciliations link.*]

Depending on how this change is made, it is even possible to keep the modified reconciliation “linked” to the previously completed reconciliations even though the Format has changed, such as in Screenshots 5a – 5c. The ease with which ARCS allows you to change Reconciliation Methods (via Formats) gives you the flexibility to not bite off more than you can chew in the beginning of a project.

Changing Reconciliation Methods is often related to new integrations. Moving from the manual “fat fingering” of data to directly loading general ledger and sub ledger balances through Financial Data Management Enterprise Edition (FDMEE) or Data Management combined with the inbuilt auto-reconciliation tools can bring a “quality of life” change for end users as well as added confidence in the data’s integrity. It is always a best practice to pull data from the source. Creating the integration from the general ledger is typically part of the initial scope. The usual candidates for building additional feeds after the first project phase are the sub ledgers related to fixed assets, accounts receivables, and accounts payables. However, the most “bang for your buck” as it relates to what integrations to build depends on your line of business and specific company requirements.*

*Note that adding multiple general ledger feeds introduces additional complexities beyond the scope of this article. Please consult with your Oracle partner before adding to your application.

In some cases, the greatest efficiencies in your existing reconciliation process are gained by utilizing the power of ARCS Transaction Matching. This module is better suited to handle massive data volumes at a transactional level. As an example, instead of reconciling the balance sheet’s intercompany balances in ARCS Reconciliation Compliance only at the end of the month, the process could be enhanced by performing daily matching in ARCS Transaction Matching to clear up issues in real time as they arise. This simplifies the month-end reconciliation. ARCS Transaction Matching is a powerful supplement to an existing reconciliation system and continues to receive special attention from Oracle, as seen with the major release of new functionality in Patch 1805.

Just as there are many ways your company can change, ARCS can be modified to match your needs even in a live application. However, sometimes changes are more fundamental than a bit of tweaking, such as in an acquisition or the introduction of a new, company-wide general ledger. Or, perhaps, you are just not satisfied with the solution design. Join me in the next post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, as we discuss the dangerous topic of redesign in ARCS – what is possible…and what it costs.

*Screenshots taken from the patch 1806 release.

New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons

This post follows last week’s post Modularity in Account Reconciliation Cloud Service (ARCS): No Mistakes from “Day 1” to “Day 100.”

Out-of-the-box, ARCS makes it easy to “oh, and this!” when adding new scope. The obvious example is monthly maintenance. Reconciliation Administrators and Power Users can build new Profiles to deploy for future months (or even the current month) with relative ease. With the “Copy” feature, previously created Profiles can serve as ready-to-use templates and reduce the manual effort involved in building a Profile from scratch.

[Screenshot 1: The Copy function from the Actions drop-down list can be used to duplicate existing Profiles*]

Copying existing Profiles, as seen in Screenshot 1, is intuitive, built-in functionality. This makes ARCS “Quick Starts” a popular project option when tight on a budget – the Partner will be contracted to create a limited subset of Profiles and the Client can then use these as a starting point to build out the rest, saving on the Build Phase effort.

Another common post-project addition is Custom Attributes. As companies become more familiar with how their end users utilize the tool, new Custom Attributes can be included for reporting purposes (such as filtering or sorting in dashboards), providing information, or collecting feedback. Beyond the three system attributes of Process, Account Type, and Risk Rating, typical Custom Attributes include source system names, supplemental detail such as cost center or department, or even more dynamic fields such as auto-populating metadata descriptions. Furthermore, where these are placed within a reconciliation changes the nature of what detail is being provided or collected. Custom Attributes can be placed at a reconciliation’s summary level, on each individual transaction, and even on the specific Action Plans within each transaction. Additionally, these can be inherited from a Format or set for individual Profiles. What information is useful or relevant to end users will change depending on the granularity.

[Screenshot 2: Custom Attribute on the Summary tab*]

[Screenshot 3: Custom Attribute on a Transaction*]

[Screenshot 4: Custom Attribute on an Action Plan*]

The variety of locations within the reconciliation to place these Custom Attributes, as seen in Screenshots 2 – 4, and the ease with which they can be added give your company the flexibility to determine ‘what’ and ‘where’ information should be presented.

ARCS provides a plethora of tools to grow the application with your company and add on to your “Day 1” implementation. But what if you like what you have built and just want to tweak it? Perhaps you want to move from “fat fingering” to fully integrating with your ERP source systems? The next post, Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning, discusses how ARCS can be modularly modified, keeping what you have…but better.

*Screenshots taken from the patch 1806 release.

Modularity in Account Reconciliation Cloud Service (ARCS): No Mistakes from “Day 1” to “Day 100”

Modularity. My initial experience with this concept was during the build of my first computer. There is a great, omnipresent dread that consumes people who share this hobby – imagine this scenario (or nightmare, rather!): you have just invested significant time, energy, and finances to create the perfect machine – only to have it rendered obsolete the next month by changing technology that is incompatible with your swanky new rig! The warring decision of function today versus future-proofing for tomorrow is a constant struggle for all tech lovers (or tech survivors, as the case may be). So when a product is able to overcome this dilemma, it’s got my attention.


In my post A Safe Step into the Cloud: The Argument for Account Reconciliation Cloud Service (ARCS), I discussed the modular nature of ARCS as one of the key pillars that made the product an easy recommendation as a first step into the Cloud. For new projects, this is a comforting “safety cushion.” For existing applications, it means you are not stuck with what you have. Push your product to evolve with your needs and ensure that you are eking out every drop of value from your investment.

With ever-changing requirements, it is critical to know what tools are at your disposal. Some changes are straightforward; others…not so much. In this upcoming series of blog posts, we will discuss what it means for ARCS to be a modular solution and explore the four main ways in which this manifests:

  1. New scope
  2. Modifications
  3. Redesign
  4. Automation

Over the next few weeks, we will be tweaking, tuning, tearing down, and putting the application back together to see how there can be no mistakes with modularity.

View the next post in this series:  New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons

An Exploration of the EDMCS REST API

Recently my team and I had the opportunity to implement Oracle’s newest offering – Enterprise Data Management Cloud Service (EDMCS). EDMCS, for those of you who are not familiar, provides a cloud-based solution for managing master data (also referred to as metadata) across the organization. Some like to refer to EDMCS as Data Relationship Management (DRM) in the Cloud, but the truth is, EDMCS is not DRM in the Cloud.

EDMCS is a completely new vision of what master data management can and should be. The architect of this new cloud offering is the same person who founded Razza Solutions, the company that developed the product now known as DRM. That is important to know because it ensures that the best of what DRM has to offer is brought forward. But, more importantly, it ensures that the learnings and the wish list of capabilities that DRM should have are at the forefront of the developers’ minds.

Ok, now let’s get back to the fun stuff. In the 18.05 patch for EDMCS, the REST API (v1) was exposed for public usage. The documentation for the REST API can be found here:

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/rest-endpoints.html

As I highlighted in the previous post Troubleshooting Cloud Data Management Metadata Load Errors, I had developed an automation routine to upload EDMCS extracts to both PBCS and FCCS using FDMEE and Cloud Data Management.  We had been eagerly awaiting the REST API for EDMCS to finalize this automation routine and provide a true end-to-end process that can be scheduled or initialized via a single action.

Let’s take a quick look back at the automation routine developed for this customer. After the metadata has been exported to a flat file from EDMCS, the automation would upload a copy to the PBCS and FCCS pods, launch Cloud Data Management data load rules which would process the EDMCS metadata extracts, run a restructure of the database after all dimensions had been loaded, and then send a status email alerting the administrator of the result.  While elegant, I considered this to be incomplete.

Automation, in my view, is a process that can be executed without user interaction. While an automation routine certainly has parameters that must be generally maintained, once those parameters are set/updated, the automation cycle should not be dependent on user input or action.  In the aforementioned solution, we were beholden to the fact that EDMCS exports had to be run interactively; however, with the introduction of the publicly exposed REST API in the 18.05 EDMCS patch, we are now able to automate the extract of metadata from EDMCS.  That means we can finally complete our fully automated, end-to-end solution for loading metadata.  Let’s review the EDMCS REST API and how we did it.

The REST API for EDMCS is structured similarly to other Oracle EPM REST APIs. By this, I mean that multiple REST calls may need to be executed to achieve a functional result. For example, when executing a Cloud Data Management data load rule via the Data Management REST API, the actual execution of the data load rule is handled by a POST call to the jobs function with the required payload (e.g. DLR name, start period, etc.). This call is just one portion of a functional requirement. To achieve an actual data load, a file may need to be uploaded to the cloud, the data load rule initialized, and then the status of the data load rule retrieved. To achieve this functional result, three unique REST API executions would need to occur.
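To make that multi-call pattern concrete, here is a minimal Python sketch of launching a Cloud Data Management data load rule via the Data Management REST API. The pod URL, credentials, and rule name are placeholders, and the payload and response field values should be validated against the Data Management REST documentation for your pod:

```python
import requests

BASE = "https://mypod.oraclecloud.com"   # hypothetical EPM Cloud pod URL
AUTH = ("domain.user", "password")       # basic auth shown for illustration only

# POST to the Data Management jobs resource to execute a data load rule.
payload = {
    "jobType": "DATARULE",
    "jobName": "EDMCS_ACCOUNT_LOAD",     # hypothetical data load rule name
    "startPeriod": "Jun-18",
    "endPeriod": "Jun-18",
    "importMode": "REPLACE",
    "exportMode": "STORE_DATA",
}
job = requests.post(f"{BASE}/aif/rest/V1/jobs", json=payload, auth=AUTH).json()

# The POST only initializes the rule; a follow-up GET retrieves its status.
status = requests.get(f"{BASE}/aif/rest/V1/jobs/{job['jobId']}", auth=AUTH).json()
print(status["status"])
```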

To export metadata from EDMCS to a flat file using the REST API, the following needs to be executed:

  1. Get the dimension information for the EDMCS application from which metadata will be exported
  2. Execute an export of the dimension(s)
  3. Determine the status of the export
  4. Download the export to a flat file

Let’s explore each of these in a little more detail. First, we need to get the dimension IDs for the application from which we will be downloading metadata.  This is accessed from the applications function.

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/op-v1-applications-get.html

When executing this function, the JSON object returned includes all applications that exist in EDMCS (including those that are archived). So the JSON needs to be iterated to find the record for the application from which metadata needs to be exported. In this case, the name of the application is unique and can be used to locate the appropriate record. Next, we need to query that record to get the actual dimension ID, which is used in subsequent calls to actually export the dimension.
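As a rough Python sketch of this first step (the pod URL, credentials, and application/dimension names are placeholders, and the JSON field names are illustrative – verify them against the actual response from your pod):

```python
import requests

BASE = "https://myedmcs.oraclecloud.com/epm/rest/v1"  # hypothetical EDMCS pod URL
AUTH = ("svc_integration", "password")                # basic auth for illustration

# Retrieve all applications (including archived ones) and find ours by name.
apps = requests.get(f"{BASE}/applications", auth=AUTH).json()
app = next(a for a in apps["items"]                   # illustrative field names
           if a["name"] == "Corporate Master Data")   # hypothetical app name

# Pull the dimension id for the dimension we want to export.
dim = next(d for d in app["dimensions"] if d["name"] == "Account")
dimension_id = dim["id"]
```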

Great, now we have the dimension ID. Next, we need to execute the REST API call to export the dimension.


https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/op-v1-dimensions-dimensionid-export-download-post.html

You will notice that when you access this POST method, the dimension ID from the previous step is required:

/epm/rest/v1/dimensions/{dimensionId}/export/download

The JSON object returned from this execution contains minimal information. It simply provides the URL for the next required REST API call, which reports the status of the export.
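Continuing the sketch (same BASE and AUTH as above; the empty payload and the response field names are assumptions – check the endpoint documentation for the exact request and response shapes):

```python
# Kick off the export for the dimension. The response is minimal – essentially
# a link to the job-run resource that reports the status of the export.
resp = requests.post(
    f"{BASE}/dimensions/{dimension_id}/export/download",
    json={},                               # export options, if any, go here
    auth=AUTH,
).json()
job_run_url = resp["links"][0]["href"]     # illustrative – verify on your pod
```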


With this information, we can check the status of the export using the jobRuns function:

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/op-v1-jobruns-jobrunid-get.html

The JSON object returned here provides the status of the export invoked in the prior step, as well as a URL to the actual file to download – our last step in the process.
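A polling loop over the jobRuns resource might look like this (again, the status value and field names are assumptions to be verified against the documentation):

```python
import time

# Poll the job-run resource until the export finishes, then grab the file URL.
while True:
    job_run = requests.get(job_run_url, auth=AUTH).json()
    if job_run["status"] != "IN_PROGRESS":   # illustrative status value
        break
    time.sleep(5)                            # wait politely between polls
file_url = job_run["links"][0]["href"]       # URL of the temp file to download
```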


Once the export job is complete, the files can be streamed using the URL provided by the REST execution in the prior step.

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/op-v1-files-temp-fileid-get.html
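The final step streams that temporary file to disk; a minimal sketch (the output file name is arbitrary):

```python
# Stream the completed export to a local flat file.
with requests.get(file_url, auth=AUTH, stream=True) as r:
    r.raise_for_status()
    with open("Account_export.csv", "wb") as f:   # hypothetical output name
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
```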

And there you have it, a fully automated solution to download metadata to flat files from EDMCS. Those files are then provided to the existing automation routine and our end-to-end process is truly complete.

And for my next trick…let’s explore some of the different REST API tools that are available to help you in your journey with the EPM REST APIs.

 

Troubleshooting Cloud Data Management Metadata Load Errors

In my last post, I highlighted a solution that was recently deployed for a customer that leveraged Enterprise Data Management Cloud Service (EDMCS), Financial Data Quality Management Enterprise Edition (FDMEE), and Cloud Data Management (CDM) to create an automated metadata integration process for both Planning and Budgeting Cloud Service (PBCS) and Financial Close and Consolidation Cloud Service (FCCS). In this post, I want to take a bit of a deeper dive into the technical build and share some important learnings.

Cloud Data Management introduced the ability to load metadata from a flat file to the Oracle EPM Cloud Services in the 17.11 patch. This functionality provides customers the ability to leverage a common platform for loading both data and metadata within the Cloud.  Equally important, CDM allows metadata to be transformed using its familiar mapping functionality.

As noted, this customer deployed both PBCS and FCCS. Within the PBCS application, four plan types are active while FCCS has the default two plan types.  A design decision was made for EDMCS to create a single custom application type that would store the metadata for both cloud applications.  This decision was not reached without significant thought as well as counsel with Oracle development.  While the pros and cons of the decision are outside the scope of this post, the choice to use a custom application registration in EDMCS ensured that metadata was input a single time but still fed to both cloud applications.  As a result of the EDMCS design decision, a single metadata file (per dimension) was supplied with properties necessary to support each plan type.

CDM leverages its 23 “dimensions” to store metadata information for processing. Exactly like data, metadata is imported using an import format into the CDM relational repository.  Each field from a metadata file is aligned to a CDM dimension field.  The CDM Account dimension always represents the target application member name and the CDM Entity dimension represents the parent of the member.  All other fields can be aligned to any of the remaining 21 dimensions.  CDM Attribute dimensions can be utilized in the import and mapping process but are not available for exporting to the cloud application.  This becomes an important constraint especially in a multi-plan type deployment.  These 21 fields can be used to store any of the properties required to successfully load metadata to the target plan type.

Let’s consider this case study for a moment. The PBCS application has four plan types. If a process were built to load all plan types from a single CDM data load rule, then with 21 fields shared across four plan types, we would not be able to have more than five plan-type-specific attributes or properties – there simply would not be enough CDM fields/dimensions to store the relevant information. This leads to an important design approach. Instead of a single CDM data load rule to load all plan types, a data load rule was created for each plan type. This greatly increased the number of metadata properties and attributes that could be loaded by CDM and ensured that future growth could be accommodated without a redesign of the integration process.

It is important to understand that CDM utilizes the Planning Outline Load Utility (OLU) to actually perform the metadata load to the cloud application. The OLU loads metadata using merge (yes, Planning experts, I realize that I am not discovering fire), which is important to understand, especially when processing multiple metadata loads for a single application. When loading metadata, there are certain properties that are Application level. I like to think of these as being global. Additionally, there are plan-type-specific attributes that can align (or not align) to the application-level value/setting. I like to think of these as local.

Why is this important? Well, when loading a metadata file, if certain global properties are excluded from the file, the local properties (if specified) are used to default the global properties. Since metadata is loaded using merge, this only becomes problematic when a new member is being added to the outline.

In this particular example, an alternate hierarchy with shared members was specified in one of the plan types. The storage property of the alternates was obviously set as Shared; however, when attempting the metadata load, the following error was encountered:

A Base Member cannot be changed to a Shared Member.

After much investigation (details to follow), I discovered that the global property should also be included in the metadata load.

While CDM utilizes the OLU to load metadata, it does not provide as much verbosity in the error information as the PBCS web interface (which also uses OLU) when loading metadata. Below is an example of the error in the CDM process log.  As a tangent, I’d love to check the logs without needing to open a Service Request.  Maybe Oracle will build an enhancement that allows that in the future (hint, hint, wink, wink to my friends at Oracle).

[Image: the error as it appears in the CDM process log]

So where do I go from here? Well, what do we know about CDM loading metadata to the cloud application?  We know that CDM uses the OLU to load a flat file generated by CDM.  Bingo!  The metadata file output by CDM is a good starting point.  That file is in the Outbox of the CDM application and can be downloaded in several different ways – CDM Import process (get creative folks), CDM process details, or EPM Automate.  Now we have the metadata file and can test to determine if the error is caused by CDM or the metadata itself.  It’s all about ruling out variables.  So, we take the metadata file and import it manually within the PBCS web interface and are able to replicate the error.  But now we have an important new data point – the line number from the metadata file that is causing the error.

[Image: the same error in the PBCS web interface, including the offending line number from the metadata file]

Now that we have actionable information, we can review each property and start isolating and eliminating different variables. We determined that this error was only occurring for new alternate hierarchy parents being added to the outline.  As a test, we added the global storage property and voila, the metadata load completed successfully.  Face palm!
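To illustrate the difference, here is a hypothetical pair of metadata files (not the customer’s actual extract – the header names follow the Outline Load Utility convention but should be confirmed against your application). The failing file supplied only the plan type (local) storage property; the working file included the application-level (global) Data Storage property as well:

```text
Failing file – local storage property only:
Account, Parent, Data Storage (Plan1)
Alt_Sales, Total_Alt_Sales, shared

Working file – global Data Storage property included:
Account, Parent, Data Storage, Data Storage (Plan1)
Alt_Sales, Total_Alt_Sales, shared, shared
```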

Maybe this would have been obvious to folks with a lot of Planning experience, but there are plenty of folks learning the intricacies of Planning and Essbase (including our friends converting from HFM to FCCS), so I wanted to share a lesson learned in my journey of metadata integration using CDM.

CDM functionality for metadata represents two of the three primary operations of ETL. In my next post, we’ll dive deeper into how the extract component of ETL was accomplished to provide a seamless end-to-end ETL solution for metadata.

Patch Today! Don’t Delay! Best Reasons to Upgrade Your EPM System

Putting off that upgrade to 11.1.2.4? Cloud not whetting your appetite for patches? Patch today. Don’t delay!

“But we’re going to the Oracle EPM Cloud soon!” you say. You should maintain your patches anyway. With the recurring maintenance, updates, and patches available to the EPM Cloud products, expect the on-premise patches to contain similar updates. An upcoming conversion to Oracle EPM Cloud products may benefit from running the latest on-premise codelines.

If you have an existing on-premise installation of Oracle EPM System, be sure to maintain the latest EPM System Patch Set Updates every 3 to 6 months. Here are a few great reasons why:

New Features

Patches often contain reactive bug resolutions to known issues; however, we have also been seeing new functionality released in patches for 11.1.2.4.

You Own It

You already pay for it! As long as your Oracle Maintenance contract is current (very likely if you are reading this article), you’re already paying for access to patches. Why leave them unapplied? You are running legacy code when the latest version costs you nothing additional. Windows XP was a great OS, but we’ve got to keep up with the times.

Supportability

Maximize your success by reducing time to resolution on your issues. Should you submit a support request to the vendor, the first line of response to a ticket is often a question about current patch levels. Once provided, the subsequent reply frequently contains a recommendation to apply the latest Patch Set Updates (PSUs) to see if that fixes the issue. Annoying? Perhaps you’re a pessimist. Or have just been remiss with your patching. I’ve certainly changed my mind on the matter and now side with them. The reason? Supporting the latest codeline is more efficient and effective for the vendor. Your problem may have already been addressed in a code fix, and they can better and more quickly support you if they are troubleshooting the current release instead of legacy code.

Stability

In older versions, patches seemed to come out on a haphazard schedule. Over the last few years, Oracle has streamlined EPM System patch releases – typically releasing Patch Set Updates quarterly, which are different from Patch Set Exceptions (PSEs). A PSU is a grouping of PSEs – or a smaller number of more significant PSEs – that are regression tested collectively by the vendor and released under a single patch. We’ve gained a much higher degree of confidence with this bundled model. The organization of the release schedule and bug fixes is more dependable and greatly appreciated. The PSU model leaves less ambiguity about which patches to apply and brings greater stability to all customers.

Upgrade

Maybe it’s bigger than patching. Are you not on version 11.1.2.4 of your EPM System? Compliance with Enterprise IT requirements around browser version and operating systems is often impetus for an upgrade. But there are also plenty of compelling new software features, functions, conventions, and improvements in 11.1.2.4.

Operating System (OS) support for current platforms maximizes your investment and supportability. When 11.1.2.4 came out, many customers were forced to upgrade their older systems for compliance with the latest enterprise standards for server operating systems and/or client browser versions. Instead of being faced with an IT-mandated technology upgrade, an upgrade on the business’ schedule is preferred.

What Kind of Effort is Involved?

The comprehensive effort to bring a simple deployment (3-4 servers, no High Availability) up to the latest PSUs is typically less than a day per environment. That includes an analysis of existing patches, the patching itself as well as any prerequisites, and a post-check verification to confirm all patches applied are properly indicated in the corresponding inventories.

An initial patch application may take a little bit longer because there are often common prerequisites to address that don’t have to be handled with subsequent patching. There are also considerations like bringing WebLogic up to the latest patch level, as well as one-offs like the fixes for the Equifax-discovered vulnerabilities, that don’t happen frequently. Once you’ve got a solid base of primary critical patching, additional patching events are typically shorter.

Patching can be tricky. Documentation can often be ambiguous, whether it be an unintended omission or assumed knowledge based on implied experience with the product. Sometimes post-install instructions get skipped, or SQL statements do not get executed properly as part of the patch. Less experienced resources typically only patch the EPMSystem11R1 Oracle Home; however, did you know that Oracle’s ADF framework also has an OPatch directory under oracle_common? Those patches are often prerequisites. And what about Oracle Data Integrator (ODI) and Oracle HTTP Server (OHS)? They may also have applicable OPatches. Who knows what you’re missing? We do! Let’s button it up.

Contact us for more details.

Laser Tag for Cloud Analytics

A friendly game of laser tag between out-of-shape technology consultants became a small gold mine of analytics simply by combining the power of Essbase and the built-in data visualization features of Oracle Analytics Cloud (OAC)! As a “team building activity,” a group of Edgewater Ranzal consultants recently decided to play a thrilling children’s game of laser tag one evening.  At the finale of the four-game match, we were each handed a score card with individual match results and other details such as who we hit, who hit us, where we got hit, and hit percentage based on shots taken.  Winners gained immediate bragging rights, but for the losers, it served as proof that age really isn’t just a number (my lungs, my poor collapsing lungs).  BUT…we quickly decided that it would be fun to import this data into OAC to gain further insight about what just happened.

Analyzing Results in Essbase

Using Smart View, a comprehensive tool for accessing and integrating EPM and BI content from Microsoft Office products, we sent the data straight to Essbase (included in the OAC platform) from Excel, where we could then apply the power of Essbase to slice the data by dimensions and add calculated metrics. The dimensions selected were:

  • Metrics (e.g. score, hit %)
  • Game (e.g. Game 1, Game 2, Total)
  • Player
  • Player Hit
  • Target (e.g. front, back, shoulder)
  • Bonus (e.g. double points, rapid fire)

With Essbase’s rollup capability, dimensions can be sliced by any one item or at a “Total” level. For example, the Player dimension’s structure looks like this:

  • Players
    • Red Team
      • Red Team Player 1
      • Red Team Player 2
    • Blue Team
      • Blue Team Player 1
      • Blue Team Player 2

This provides instant score results by player, by “Total” team, or by everybody. Combined with another dimension like Player Hit, it’s easy to examine details like the number of times an individual player hit another player or another team in total. You can drill into “Red Team Player 1 shot Blue Team” or “Red Team Player 1 shot Blue Team Player 1” to see how many times a player shot a team or an individual player. A simple Smart View retrieval along the Player dimension shows scores by player and team, but the data is a little raw. On a simple data set such as this, it’s easy to pick out details, but with OAC, there is another way!

[Image: Smart View retrieval of scores by player and team]

Even More Insight with Oracle Analytics Cloud (OAC)

Using the data visualization features of OAC, it’s easy to build queries against the OAC Essbase cube to gain interesting insight into this friendly folly and, more importantly, answer the questions everybody had: what was the rate of friendly fire, and who shot whom? By building an initial pivot chart – simply dragging and dropping Essbase dimensions onto the canvas, including the game number, player, and score, and coloring by our Essbase metric “Bad Hits” (a calculated metric built in Essbase to show when a player hit a teammate) – we discovered who had poor aim…

[Image: pivot chart of scores by game and player, colored by the “Bad Hits” metric]

Dan from the Blue team immediately stands out, as do Kevin and Wayne from the Red team! This points us in the right direction, but we can easily toggle to another visualization that might offer even more insight into what went on. Using a couple of sunburst-type data visualizations, we can quickly tie together who was shooting and who was getting hit – filtered to the same team, weighted by score, and color coded by team.

[Image: sunburst visualizations of who shot whom within the same team, weighted by score]

It appears that Wayne and Kevin from the Red Team are pretty good at hitting teammates, but it is also now easy to conclude that Wayne really has it out for Kevin while Kevin is an equal opportunity shoot-you-in-the-back kind of teammate!

Reimagining the data as a scatter plot gives us a better look at the value of a player in relation to friendly fire. By dragging the “Score” Essbase metric into the size field of the chart, we can see correlations between friendly fire and hits on the other team. While Wayne might have had the highest number of friendly fire incidents, he also had the second highest score for the Red team. The data shows visually that Kevin had quite a few friendly fire incidents but didn’t score as much (it also shows results that allow one to infer that Seema was probably hiding in a corner throughout the entire game, but that’s a different blog post).

[Image: scatter plot of friendly fire incidents, sized by score]

What Can You Imagine with the Data Driving Your Business?

By combining the power of Essbase with the drag-and-drop analytic capabilities of Oracle Analytics Cloud, discovering trends and gaining insight is very easy and intuitive. Even in a simple and fun game of laser tag, results and trends are found that aren’t immediately obvious in Excel alone.  Imagine what it can do with the data that is driving your business!

With Oracle giving credits for a 30-day trial, getting started today with OAC is easy. Contact us for help!