We’ve Got You Covered: Producing Flat-File Extracts out of Cloud Data Management

As EPM Administrators or Implementation Specialists, we have all had that moment when someone comes to us and asks for the dreaded extract out of an Enterprise Performance Management (EPM) application.  Depending on the system combination (Hyperion Financial Management (HFM), Planning, etc.) and the file layout specifications, this can be tricky.  Layer in the concept of a Cloud application, and things have now gotten real!

In an on-premise installation of Financial Data Quality Management Enterprise Edition (FDMEE), we could use scripting within a “custom application” to build an end-to-end approach for delivering a flat-file extract for third-party consumption.  With the release of version 19.06, Oracle has further enhanced this concept and brought it to Cloud Data Management (CDM). The Cloud application now provides the ability to design and produce a text file for downstream consumption in Oracle Cloud products (PCMCS, EPBCS, FCCS).  WHOA!

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 1

The Setup

I recently busted out the functionality, and this is what I discovered: it’s crazy simple!

  1. Create a text file with your defined headers

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 2

  2. Create a target application and configure its settings

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 3

  3. Create an import format

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 4

  4. Create a Location & Data Load Rule (DLR)

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 5

  5. Create the desired Maps
  6. Run the Data Load Rule

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 6

It is that simple!  CDM produces a file that looks similar to this:

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 7
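In case the screenshot doesn’t render for you, a made-up example of such an extract might look like the snippet below. The actual layout follows whatever headers you defined in step 1; these column names are hypothetical:

    Account,Entity,Period,Amount
    4110,100,Jan-19,125000.00
    4110,110,Jan-19,98750.50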

It can PIVOT!?

As crazy as it sounds, it can even pivot the data!  I find this extremely helpful as it is a common request to have twelve months of data in column format.  CDM leverages the PIVOT command of the database for this process and creates the pivot file with ease and efficiency.
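As a purely conceptual illustration (CDM does this with the database’s PIVOT command, not Python, and the column names here are hypothetical), the long-to-wide reshaping looks like this:

    # Conceptual sketch of the pivot CDM performs in the database;
    # column names are illustrative, not CDM's actual table layout.
    import pandas as pd

    rows = pd.DataFrame({
        "Account": ["4110", "4110", "4110"],
        "Period":  ["Jan", "Feb", "Mar"],
        "Amount":  [100.0, 110.0, 120.0],
    })

    # One row per account with a column per month (three months shown)
    wide = rows.pivot_table(index="Account", columns="Period",
                            values="Amount", aggfunc="sum")
    print(wide[["Jan", "Feb", "Mar"]])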

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 8

What does it do behind the scenes?

Behind the scenes, CDM appears to run a standard import and validation of the data, but it leverages a different set of workflow instructions.  The process does not consider unmapped items, which are left as blank fields in the output.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 9

It also does not permanently store any data in the CDM repository unless you want it to.  The documentation can be easily misinterpreted because you will see “fish,” but no data is stored (more on this later).  A quick review of the process details log shows that all the work is done in the “tDataSeg_T” table.  This is the “temporary working” table of Data Management, and it is cleared between runs for optimal performance.  Since the data is never moved out of this table, it is never retained.  Even the export process that produces the output file pulls from the temporary table.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 10

A review of the documentation shows that there are three main supported processing types:

  • Simple (the option selected here) – Does all the work in tDataSeg_T and does not retain any data or archive maps. Be warned, though: it does retain the process details and “fish” status, which can look a bit strange.

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 11

  • Full No Archive – Data is retained in tDataSeg only after the import step. Data is deleted after the export.
  • Full – All data is retained. Full process flows are supported (check rules, drill down, etc.).

That’s great, but my file is stuck in the Cloud!

Not really…let’s think this through as a workflow process.  When using Cloud applications, we might have an automation wrapper or a larger workflow process.  If not, we are using the graphical user interface (GUI), and we can access the file in two ways:

  1. Data File Explorer
  2. Process Details -> Download File option

Wayne Paffhausen - Weve Got You Covered - 7-26-19 Image 12

If we are using a more automated approach, we just need to include additional steps (see the sketch after this list) to:

  1. Monitor the data load rule for completion
  2. Verify the status of that completion (do not proceed forward if it failed; do something different)
  3. Confirm that the file was created
  4. Download the file that was created
  5. Continue the automated routine
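Here is a minimal sketch of those five steps using EPM Automate from Python. It assumes EPM Automate is installed and on the PATH; the credentials, URL, rule name, periods, and outbox file path are hypothetical placeholders for your own:

    # Sketch: run/monitor a data load rule, verify status, download the file.
    # Rule name, periods, URL, and file path are placeholders.
    import subprocess
    import sys

    def epmautomate(*args):
        """Run an EPM Automate command; stop the routine if it fails."""
        result = subprocess.run(["epmautomate", *args])
        if result.returncode != 0:
            sys.exit(f"epmautomate {args[0]} failed with exit code {result.returncode}")

    epmautomate("login", "svc_integration", "password.epw",
                "https://your-pod.oraclecloud.com")
    epmautomate("rundatarule", "EXTRACT_RULE", "Jan-19", "Jan-19",
                "REPLACE", "STORE_DATA")               # steps 1-2
    epmautomate("downloadfile", "outbox/EXTRACT_RULE.dat")  # steps 3-4
    epmautomate("logout")
    # Step 5: hand the downloaded file to the rest of the automated routine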

In Summary…

It is simple to produce a file using Data Management in the EPM Cloud products.  This is a welcome change that further enhances the product lines by delivering on client needs.  It allows us to build a simplified Cloud solution that was previously possible only on-premise.

If you need more information or have questions about this topic, email us at infosolutions@alithya.com.  Subscribe to receive notifications about new posts.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  Youtube

Enterprise Data Management: Version 19.07 – Top 3 Added Features

The latest 19.07 release of Enterprise Data Management (EDM) contains a boatload, a plethora – no, make that oodles of new features (time to put Merriam-Webster away). The full release notes are documented here: EDM July 2019 Update

In this blog, I’d like to dive into 3 of my favorite features (so far) from this release.

#1: Create Request Items from Compare Results

I’m already using this feature at one of my current clients. I’ve always touted the “Compare” feature of EDM. It’s been one of EDM’s best features from the beginning and provides the ability to compare Missing Nodes, Relationship Differences, and Property Differences. The biggest gap?  Previously, when compare results were returned, there was no way to download or utilize the results in a request to resolve the differences. While you could always fix differences manually with drag-n-drop or direct property updates, when your compare results return dozens or hundreds of differences, that is not a reasonable option.

Well, fear no more!  You can now create a request load file directly from the compare results. And it works as easily as you might have hoped it would:

  1. Run a Compare between two viewpoints.
  2. When the differences are returned, click “New Request.”
  3. Notice that new icon? Click that bad boy and a request load file will be automatically generated.

Kevin Black - EDM V19.07 - 7-24-19 - Image 1

  4. Now you can see that the request file has been attached to the request and the request items are added to your “shopping cart.” Make any additional changes, submit your request, and your viewpoints are synchronized! Easy peasy.

Kevin Black - EDM V19.07 - 7-24-19 - Image 2

NOTE

What is interesting is to analyze the request file that is generated from the compare result. You will notice it contains UPDATE and PROP_UPDATE actions. Why those? Well, PROP_UPDATE is used for property differences returned by the compare, and UPDATE is used for missing node and relationship differences. The UPDATE command is quite sneaky and powerful. Not only will it update node properties, but, depending on whether the node exists and whether the hierarchy set allows Shared Nodes, it will also perform an ADD, INSERT, or MOVE. Pretty cool.

#2: Property Editing

This enhancement not only provides property editing capabilities that were not previously available, but it also allows these edits to be performed directly on the property without going through the App Registration wizard.

From the Properties card, Inspect the property you wish to edit. You’ll notice the Edit button is now enabled. From here, you can modify property default values, make the property editable or read-only, and modify the “Allowed Values” list if applicable.

No more stepping through the App Registration wizard to apply a simple property update!

Kevin Black - EDM V19.07 - 7-24-19 - Image 3

#3: Import/Export of Allowed Values

I’m so happy this feature is now available. My fingers, and my keyboard, thank you, Oracle!

From the same property editing Inspector dialog mentioned above, you can now modify the “Allowed Values” in a pick-list property without stepping through the “App Registration” wizard. Click the “+” sign to add new values manually. Not shown, but also available in the Actions menu, is the ability to delete or reorder existing list values.

But notice there is now an Import/Export capability. This utilizes a basic Excel file for mass upload/download of list values.

This will be a lifesaver for me at another project where I have Smart List and Attribute properties to build in EDM that contain dozens of list values.

Kevin Black - EDM V19.07 - 7-24-19 - Image 4

Below is a screenshot of the Excel file used to populate this property:

Kevin Black - EDM V19.07 - 7-24-19 - Image 5

Before I close, I’m going to cheat a bit and throw a shout-out to one more enhancement in version 19.07…

Honorable Mention: Inspector Dialog Sizing

While custom resizing of the inspector dialog isn’t possible yet, the larger Inspector dialog size certainly makes it easier to view your data chain objects and reduces the amount of scrolling required. This is useful, especially when you’re viewing or reordering a bunch of properties in a viewpoint or node type!

That’s it for now. Be sure to check out EDM version 19.07 if you haven’t already. It contains additional helpful enhancements beyond the few I’ve highlighted here. Stay tuned for more upcoming EDM blog posts. If you need more information or have questions about this topic, email us at infosolutions@alithya.com.  Subscribe to receive notifications about new posts.  Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.

Twitter  |  Linkedin  |  Facebook  |  Youtube

OPA! The Future of Cloud Integration – Important Updates Are Coming

Much to the chagrin of Product Management, I often abbreviate Cloud Data Management to CDM.  Why do they not like that I do this?  Well, there is a master data management tool for Customer data that, you can guess, also uses the same acronym.  While I understand the potential for confusion, since I’m telling you up front, there should be none when I use CDM throughout this post.

I recently had the opportunity to meet with Oracle Product Management and Development for FDMEE/CDM to get a preview of what’s coming to the product and offer feedback for additional functionality that would benefit the user community.  We generally get together about once a year; however, it’s been a bit longer than that since our last meeting, so I was excited to hear what interesting things Oracle’s been working on and what we may see in the product in the future.

Now any good Oracle roadmap update would not be complete without a safe harbor reminder.  What you read here is based on functionality that does not yet exist.  The planned features described may or may not ever be available in the application – at the sole discretion of Oracle. No buying decisions should be made based on the information contained in this post.

Ok, now that we have that out of the way, let’s get into the fun stuff.  There are a number of enhancements coming and planned, but today I am going to focus on two significant ones:  performance and ground to cloud integration.

Performance Enhancements

We’re all friends here, so we can be honest with each other.  CDM (and FDMEE) isn’t an ETL tool in the truest sense of the word. It is not designed to handle the massive data volumes that more traditional ETL tools can and do.  You might think to yourself, “Thanks for the info there, Tony, but we all know that.”  You wouldn’t be wrong, but I like to set the stage a bit.

If you know the history of FDMEE, you know that it was originally designed to integrate with Hyperion Enterprise and then HFM.  Essbase and Planning became targets later.  Integrating G/L data is far different from the more operational data that is often needed by targets like EPBCS and PCMCS.  While CDM (and FDMEE) can technically handle the volume of this more granular data, the performance of those integrations is sometimes less than optimal.  This dynamic has plagued users of CDM for years.  It has only been exacerbated when integrations are built without a deep understanding of how to tune CDM (and FDMEE) processes to achieve the highest level of performance within the constructs of the application. As CDM has grown in popularity (owing to the growth of Oracle EPM Cloud), the problem of performance has become more visible.

To address performance concerns, Oracle is planning to support three workflow methods:

  • Full – No change from legacy process
  • Full, No Archive – Same workflow as today, but data is deleted from the data table (tDataSeg) after a successful export.  This means the data table will contain fewer rows and should allow new rows to be added faster (inserts during the workflow process).  The downside of this method is that drill through is not available.
  • Simple – Same workflow as today, but data is never moved from the staging/processing table (tDataSeg_T) to the data storage table (tDataSeg).  This is the most expensive (in terms of time) action in the workflow process, so eliminating it will certainly improve performance. The downside is that data can never be viewed in the Workbench and Drill Through is not available.

Oracle has begun testing and has seen performance improvements in the range of 50% on data sets as large as 2 million rows.  Achieving that metric required the full complement of the new Data Integrations features (i.e., Expressions).  That said, this opens up a world of possibility for how CDM can potentially be used.

If you have integrations that are currently less than optimal in terms of performance, continue monitoring for this enhancement.  If you need assistance, feel free to reach out to us to connect with our team of data integration experts.

On-Premise Agent

Ground to cloud integration is one of the most important capabilities to consider when implementing Oracle EPM Cloud.  As the Oracle EPM Cloud has evolved, so too has the complexity of the solutions deployed within it, which has steadily increased the complexity of the integrations needed to support those solutions.  While integration with on-premises systems has always been supported through EPM Automate, this requires a flat file to be generated by the system from which data will be sourced. The file is then loaded to the cloud and processed by CDM.  This is very much a push approach to data integration.

The ability of the cloud to pull data from on-premises systems simply did not exist. For integrations with this requirement, FDMEE (or some other application) was needed. Well, as the old saying goes, the only thing constant is change.

Opa! is a common Greek expression of emotion, frequently used during celebrations.  Well, it’s time to celebrate because Oracle will soon (CY19) be introducing an on-premises agent (OPA) for CDM!

This agent will allow a workflow to be initiated from CDM that communicates back to the on-premises system, initializes an extract, and then uploads it to the cloud. The extract will be natively imported by CDM.  This approach is similar to how the FDMEE SAP adaptor currently works.  From an end user’s perspective, you click Import on the Data Load Workbench and, after some time, data appears in the application. What’s happening in the background is that the adaptor is initializing an extract from SAP and writing the results to a flat file, which is then imported by the application. OPA will function in an almost identical way.

OPA is a lightweight Java utility that requires no additional software (other than Java) to be installed on local systems. It will support both Microsoft Windows and Linux operating systems. Like all Oracle on-premises utilities (e.g., EPM Automate), password encryption will be supported. The only ports required to be opened are 80 (HTTP) or 443 (HTTPS).  A customer can then use an externally facing web server to redirect to an internal port for the agent to receive the request.  This is needed only if the customer wants to run the agent on a port other than 80 or 443 and does not want to open that port on their enterprise firewall.  If the customer wants to run the agent on port 80 or 443 and either of those ports is open, then no firewall action is required.

The on-premises agent will have native support for Oracle EBS and PeopleSoft GL – meaning the queries are prebuilt by Oracle.  Additionally, OPA will support connecting to on-premises relational data sources.  Currently, Oracle, SQL Server, and MySQL drivers are bundled natively, but additional drivers can be deployed as needed, meaning systems such as Teradata will be able to be leveraged as data sources.

OPA will also provide the ability to execute scripts (currently planned for Java, but discussions for Groovy and Jython are in flight) before and after the on-premises extract process.  This is similar to how the BefImport and AftImport event scripts are currently used in FDMEE.  This will allow the agent to perform pre- and post-processing, such as running a stored procedure to populate a data view from which CDM will source data.

The pre and post events of OPA really open up a world of opportunity and lay the foundation for CDM to support scripting.  How, you might ask?  In v1.0, OPA is intended to provide a mechanism to load on-premises data to the cloud.  But in theory, CDM could make a call to OPA at the normal workflow events (of FDMEE) and, instead of waiting for a data file, simply wait for an on-premises script to return an execution code.  This construct would eliminate the security concerns that prevented scripting from being deployed in CDM, as the scripts would execute locally instead of in the cloud.

The OPA framework is really a game changer and will greatly enhance the capability of CDM to provide Oracle EPM Cloud customers a true “all cloud” deployment.  I am thrilled and can’t wait to get my hands on OPA for beta testing.  I’ll share my updates once I get through testing over the next couple of months.  I’ll also be updating the white paper I authored back in December of 2017 once OPA is released to the general public.  Stay tuned folks and feel free to let out a little exclamation about these exciting coming enhancements…OPA!

Key Features of EDMCS 18.10

Usually, when October rolls around each year, there are three things I am very excited about:

  1. The start of the NHL season
  2. Watching one of my favorite scary movies (Halloween ’78)
  3. Eating leftover Halloween candy for the next six months because we bought 200 pounds of it at Costco and only had 5 trick-or-treaters show up at our door

But this year, add #4 to the list: The October 18.10 release of Enterprise Data Management Cloud Service (EDMCS)! Trust me, this is a monster of a release (pun intended). This is the biggest release since 18.07 and here are some of the highlights:

  • New packaged adapter for Oracle Financials Cloud GL
  • Property Inheritance
  • Add Related Nodes Across Viewpoints
  • Download Viewpoint in Request Context
  • Request Load File Summary Statistics
  • Level Property
  • Metadata Deletion
  • Object Details Popup

Here are more details and insights on a few of these features:

Oracle Financials Cloud GL Adapter

This is a major adapter in terms of functionality. And like the packaged adapters for Planning and Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS), you can create an EDMCS application for Oracle Financials Cloud GL using a standard wizard interface, identify a connected application or use a file interface, and reap the benefits of several built-in validations.

The registration process includes steps to identify basic settings (e.g. Active Languages, Active Trees, Multiple Active Tree Versions, and Max Depth), register multiple Segments and Trees, and add/modify/remove Financial Categories. From reviewing the Oracle EDMCS Administrator Guide, I identified at least 16 built-in validations for EDMCS applications based on the Oracle Financials Cloud GL Adapter. This is certainly a packaged adapter with a lot of “meat on its bones.”

Property Inheritance

If you love Oracle Data Relationship Management (DRM), you likely used the inheritance features of that product extensively – whether the global, inheriting property definition or functions like ParentPropValue(). Well, now you have property inheritance in EDMCS.

Property inheritance is set during the Application Registration process by modifying custom properties within your dimension node types. The choices are “None” or “Positional,” which work as you would expect. Inherited property values can be overwritten in a request for exception cases. The other feature I like is that inheritance supports Shared members for relationship-level properties, allowing a shared member to have a different inherited property value than its primary member.

NOTE: Property inheritance is only available for EDMCS Custom applications. It is not available for EDMCS applications based on the PBCS, EPBCS, or Financials Cloud GL adapters.

Request Load File Summary Statistics

This nice little feature gives you immediate statistics and feedback as you load a Request File into EDMCS. The number of rows processed, loaded, and skipped are identified, and you can open the request file attached to the in-flight request to see details on why rows were skipped. This all occurs before you submit the request and provides helpful, proactive input as to the changes that will be processed in your request.

Key Features of EDMCS Image 1

Metadata Deletion

With 18.10, you can now delete custom properties and custom applications. This makes me happy, as I’m not ashamed to admit that as I learned EDMCS, I made plenty of mistakes in creating applications or properties that I could not undo. Oh, the shame! Instead, I had to resort to archiving the application to partially hide my guilt. But now you can delete those unwanted custom properties and applications. Hopefully, future releases will allow intelligent deletion of other “mistakes” involving data chain objects like node sets, hierarchy sets, and node types that are archived and no longer used/needed.

Object Details Popup

Another minor but helpful feature: hover your mouse over a viewpoint name/label, and a popup window will display the application and dimension being used in the viewpoint. This is helpful for keeping your bearings when you start to use maintenance views that span multiple EDMCS applications and dimensions.

Key Features of EDMCS Image 2

Conclusion

As you can see, 18.10 is a significant release and arguably the single largest release in terms of new features since EDMCS was introduced in January 2018.

The biggest feature? Definitely the new adapter for Oracle Financials Cloud GL applications. I recommend reviewing the Oracle EDMCS Administration Guide to understand the full power that comes with this adapter. It is quite interesting.

The feature I will use right away? Property inheritance. With my current project involving multiple Custom EDMCS applications, property inheritance is a welcome feature that will significantly ease the maintenance of key properties throughout my maintenance views.

I’d love to hear any insights and feedback from you as we continue this crazy journey called EDMCS. And stay tuned for future blogs discussing new EDMCS functionality and lessons learned from current projects!

Missed our earlier blogs on EDMCS, Cloud Data Management, and REST API? Be sure to check them out.

Using Subscriptions with EDMCS

As an earlier blog mentioned, the 18.07 release of Enterprise Data Management Cloud Service (EDMCS) delivered one eagerly anticipated piece of functionality: Subscriptions! And do not fear – these subscriptions are useful and do not involve a 1-year subscription to the Fruit of the Month Club (not that there’s anything wrong with that).

This blog post dives deeper into this new functionality, describes how it works, and highlights some lessons learned from utilizing Subscriptions with a current project involving multiple EDMCS Custom applications supporting multiple Profitability and Cost Management Cloud Service (PCMCS) applications.

Why Are Subscriptions Important?

Subscriptions are a huge step towards true “mastering” of enterprise data assets within a single master data Cloud platform. With EDMCS, it is important to build deployment-specific applications configured to the dimensionality requirements of the target applications to most effectively use the packaged adapters, validations, and integration capabilities. But in many cases, you also need to share common hierarchies across applications and avoid duplicative (that’s my big word for today) maintenance. After all, why have a master data management tool if you still must perform maintenance in multiple places? That’s just silly.

The answer to this dilemma is Subscriptions. By implementing Subscriptions, requests submitted to a primary viewpoint will automatically generate parallel subscription requests to subscribing viewpoints, synchronizing your hierarchy changes across EDMCS applications.

Note

This comment is important: “automatically generate parallel subscription requests.” EDMCS will not update a target, or subscribing, viewpoint behind the scenes with no visibility or audit trail of what has occurred. A parallel Subscription request will be generated along with the Interactive request and will be visible in the Requests window, along with the full audit trail and details that you find in an Interactive request. Even better, the Subscription request will generate an email and attach a Request File of the changes.

Nerdy Details

Views and Viewpoints

The first thing to really think about is the View and Viewpoint design of your EDMCS applications. Subscriptions are defined at the Viewpoint level, so you need to identify the source and target viewpoint for your business situation. With my current project, I have multiple EDMCS applications supporting multiple PCMCS applications. While the dimensionality is similar across the applications, the hierarchies vary, especially with the alternate hierarchies. So, it has been important to isolate the “common” or shared structures that should be synchronized across applications into their own viewpoint so that a subscription mechanism can be created.

Node Type Converters

You will likely need to create a node type converter. If the source and target viewpoints do not share a common node type, you must create a node type converter for subscriptions to work. In my situation, I had already created node type converters since I wanted to compare common structures across EDMCS applications, so the foundation was there to readily implement subscriptions.

Permissions

To create a Subscription, the creator must have (at a minimum) View Browser permission to the source view, View Owner permission to the target view, and Data Manager permission to the target application.

The Subscription assignee (this is the user who will “submit” the subscription request) must have (at a minimum) View Browser permission to the source view and Data Manager permission to the target application.

Creating a Subscription

Once the foundation is in place in terms of viewpoints, node type converters, and permissions, the actual creation of a subscription is easy.

Inspect the target viewpoint (the viewpoint that is to receive the changes from a source viewpoint via subscription), navigate to the Subscriptions tab, and click Edit. From there you can select the source viewpoint, the request assignee, and enable Auto Submit if needed. Save the subscription and you are all set.

  • Currently, there is no capability to edit an existing subscription. You must delete and add a new subscription to effect a change.
  • Any validation errors for your subscription will appear on this dialog as well. These are documented nicely in the Oracle EDMCS administration guide.

Using Subscriptions with EDMCS Image 1

Auto-Submit and Email Notifications

Emails will be generated and sent to the Request Assignee, whether Auto-Submit is enabled or not. The email will include details such as the original request #, the subscription request #, and how many request items were processed or skipped.

Using Subscriptions with EDMCS Image 2

Note

  • Remember, the subscription request will have a Request File attached to it. View the request file attachment to see details on why specific request items were skipped.
  • The request file is not attached to the email itself, only to the request in EDMCS.

Lessons Learned

As I mentioned earlier, the foundation is important to making subscriptions work. And it all boils down to design and ensuring the building blocks of that foundation are in place:

Design, Design, Design!

  • The importance of dimension, view, and viewpoint design cannot be overstated. For each dimension, evaluate the primary and alternate hierarchy content and identify what will be shared across dimensions or applications and what will be unique to each dimension and application.
  • Based on that analysis, carefully design your viewpoints to enable subscriptions across EDMCS applications for hierarchies that truly need “mastering.”
  • As early as possible, identify the EDMCS user population along with permission levels for applications and views. This is important to identify the appropriate “Request Assignee” for your Subscriptions. I recommend creating a security matrix identifying each user and the permissions each will have.
  • Without a clear and well-thought-out design, you will find yourself constantly re-doing your views and viewpoints, which, in turn, will cause constant rework of your subscriptions. The “measure twice, cut once” adage certainly applies here!
  • I am a big proponent of standard, consistent naming conventions to improve the usability and end user experience. The same holds true for Subscriptions. Consider using a standard naming convention for your viewpoints so it is clear which viewpoints have a subscription. It’s not obvious – unless you Inspect the viewpoint – that a subscription exists.
    • One approach I’ve been using is to name my source and target viewpoints identically with a special tag or symbol at the end of the target viewpoint name to indicate a subscription is present. I’m sure there are other and probably better ideas, but I find the visual cue to be helpful.
    • Perhaps in the future, Oracle will display subscription details when you hover over a viewpoint name (hint hint).

Node Type Converters

  • Ensure you have node type converters in place.
  • Make sure your node type converters are mapping all required properties.
    • I ran into an issue where updates to one property in my source viewpoint were not being applied to my target viewpoint via subscription requests, but all other property updates worked fine. The reason? I had recently modified my App Registration and added this property to a dimension’s node type. But my node type converter had already been created and wasn’t mapping or recognizing the new property. Once I updated my node type converter, the problem was solved.

Troubleshooting

  • The request files attached to subscription requests are a valuable troubleshooting tool. Status codes and error messages are included in these Excel files and are extremely helpful for determining why your request was not auto-submitted.
  • Inspect the Subscriptions on your viewpoints. Any validation issues will be displayed and are easily addressed. Typical Subscription validation errors include:
    • The request assignee no longer has the correct permission levels
    • The viewpoint is no longer active
    • A node type converter is missing

Conclusion

I have been looking forward to the subscription functionality in EDMCS and am pleased with it so far. Subscriptions are easy to configure, can be configured to auto-submit if desired, and generate emails to remind the requester a request has occurred and to act if the request was not submitted or request items were skipped. EDMCS Subscriptions are a big step forward to enabling true mastering of your enterprise data management assets!

PCMCS…Yeah, FDMEE Can Do That!

Oracle Profitability and Cost Management Cloud Service and Oracle Financial Data Quality Management Enterprise Edition Working Together Better

Over the last year, we have been fielding, positioning, and aligning more with Oracle’s new Cloud products. Some of the most common questions we are asked are:

  1. Has Edgewater Ranzal done that before?
  2. What “gotchas” have you encountered in your implementations and how have you addressed them?
  3. What unique offerings do you bring?

These are all smart questions to ask your implementation partner because the answers provide insight into their relevant experience.

Has Edgewater Ranzal done that before?

Edgewater Ranzal is an Oracle PCMCS thought leader and collaborates with Oracle as a Platinum partner to enhance PCMCS with continued development. To date, we’ve completed nearly 20 PCMCS (Cloud) implementations and almost 80 Oracle Hyperion Profitability and Cost Management (HPCM – on-premise) implementations spanning multiple continents, time zones, and industries. Our clients gladly provide references for us, which is a testament to our success and abilities. Additionally, we frequently have repeat clients and team up with numerous clients to present at various conferences to share their successes.

As a thought leader in the industry and for PCMCS, we sponsor multiple initiatives that deliver implementation accelerators, test the latest product enhancements prior to their release, and work in tandem with Oracle to enhance the capabilities of PCMCS.

Our Product Management team comprises several individuals. Specifically for PCMCS, Alecs Mlynarzek is the Product Manager and has published the following blog: The Oracle Profitability and Cost Management Solution: An Introduction and Differentiators.  I am the Product Manager for Data Integration and FDMEE with several published blog posts related to FDMEE.

Now let’s explore some of the data integration challenges one might unexpectedly encounter and the intellectual property (IP) Ranzal offers to mitigate these and other data integration challenges that lurk.

What gotchas have you encountered in your implementations and how do you mitigate them?

We could go into great depth when detailing the PROs for using FDMEE with PCMCS…but it is much more beneficial to instead share some of the other less obvious discoveries made. Note that we work directly and continuously with Oracle to improve the product offering.

  • Extracting data via FDMEE data-sync is challenging. The data cube size and configuration settings of PCMCS have threshold limits – 5,000,000 records and a 1GB file size – both of which are quite often reached. As a result, we have developed a custom solution for the data-sync routine.
  • Loading large datasets directly into PCMCS via DM (Cloud-based Data Management) can exhibit performance problems due to the server resources available in the Cloud. Functionality in on-premise FDMEE (scripting, Group-By, etc.) helps reduce the number of records going into the Cloud and therefore provides a performance gain.
  • Patching to the latest FDMEE patch set is crucial. Cloud applications (PCMCS, FCCS, E/PBCS) update monthly. As a result, we need to consistently check/monitor for FDMEE patches. These patches help ensure that canned integrations from Oracle are top-notch.

FDMEE_PCMCS Image 1

  • Executing two or more jobs concurrently via EPMAutomate is quite troublesome due to the workflows needed and how EPMAutomate is designed. As a result, we have invested considerable time into cURL and RESTful routines. We discovered that the login/logout commands are tied to the machine, not the user-process, so any logout from another executing run logs out all sessions.

FDMEE_PCMCS Image 2

  • The use of EPMAutomate is sometimes difficult. It requires a toolset on a PC – a “JumpBox” – or on-premise EPM servers. It also requires the use of .BAT files or other scripted means. By using FDMEE, the natural ease of the GUI improves the end-user experience.
  • Loading data in parallel via FDMEE or DM can cause Essbase load rule contention due to how the Essbase load rules are automatically generated by the system. Oracle has made every effort to resolve this before the next Cloud release. Stay tuned… this may be resolved in the next maintenance cycle of PCMCS (18.10) and then on-premise in patch-set update 230.
  • We all know that folks (mainly consultants) are always looking to work around issues encountered and come up with creative ways to build/deliver new software solutions. But the real question that needs to be asked is: Should we? Since FDMEE has most of the solutions already packaged up, it is the best tool for the job. The value that FDMEE brings far outweighs any home-grown solution.

What unique offerings do you bring?

At Edgewater Ranzal, we have started to take some of our on-premise framework and adapt it for PCMCS. Some of the key benefits and highlights we provide are:

  • To combat the complications with loading data via FDMEE caused by FDMEE’s inability to execute PCMCS clears out-of-the-box, we have added this functionality to the Ranzal IP catalog and can deploy it consistently for our clients. This is done via the RESTful functionality of PCMCS. Some of the items we have developed using REST are:
    • Import/export mappings
    • Execute data load rules or batch jobs from 3rd party schedulers
    • Refresh metadata in the Cloud
    • Augment EPMAutomate for enhanced flexibility
    • Execute business rules/clear POV commands as part of the FDMEE workflow
    • Execute stored procedures (PL/SQL) against DBaaS (see below)
    • Enhanced validation framework (see below)
  • We have redeveloped our Essbase Enhanced Validate to function with the PCMCS Cloud application. FDMEE on-premise can now validate all the mapped data prior to loading. This is great for making sure data is accurate before loading.

FDMEE_PCMCS Image 3

  • The Edgewater Ranzal toolkit for FDMEE includes the ability to connect to other Cloud offerings for data movements, including DBaaS and OAC.

FDMEE_PCMCS Image 4

Can FDMEE do that…and should FDMEE do that?

Yes, you should use FDMEE to load to PCMCS, and it is out-of-the-box functionality! As you can see, there are a lot of added benefits compared to DM (a feature comparison with FDMEE will be discussed in a later blog and white paper).  The current release of FDMEE, v11.1.2.4.220, provides product functionality enhancements and greater stability for integrations with most Cloud products.  Suffice it to say, having Python scripting available and server-side processing for large files will greatly enhance your performance experience.

FDMEE_PCMCS Image 5

Contact us at info@ranzal.com with questions about this product or its capabilities.

Catching up with EDMCS

Last time, in the Wonderful World of Enterprise Data Management Cloud Service (EDMCS), we discussed initial impressions of this exciting new Oracle Cloud product and highlighted some early functionality enhancements.

But do you realize how much functionality has been added to EDMCS since its initial release in January 2018? The short list is impressive:

  • Enhanced node alignment/location in side-by-side viewpoint compares
  • Exposed REST API operations including dimension imports/exports and request creation/submission
  • Enhanced searching across members (name and descriptions) and data objects
  • Lifecycle management of data objects
  • Incremental imports
  • Viewpoint download from selected node

Furthermore, in the areas of REST API and metadata integrations, Tony Scalese, Vice President at Edgewater Ranzal and Oracle ACE, has written several blogs. These posts were written from the perspective of hands-on, real-world experience gained by working with one of three customers accepted into the Oracle EDMCS Early Adopter program.

In this post, I’d like to highlight another feature that was recently added to EDMCS: enhanced request load files.

Enhanced Request Load Files

In the initial release, EDMCS provided a mechanism to perform bulk updates to EDMCS hierarchies – the Excel request load file. While the feature immediately had some advantages over its distant cousin in Data Relationship Management (DRM), action scripts, there were limitations. Primarily, EDMCS would only recognize the first tab or worksheet in an Excel file.

Well, that has been fixed! Request load files can now contain multiple worksheets, and EDMCS will recognize all of them (provided the worksheet names match your viewpoint names, of course). Additionally, EDMCS will automatically select all valid worksheets to load into EDMCS when loading a request file. This makes it very easy to download viewpoints to Excel and build a request file containing updates for multiple viewpoints to bulk upload at one time.

This also means you need to be careful! Since EDMCS auto-selects any matching worksheet name, if you are not paying attention, you could accidentally load outdated requests from a worksheet. But you can still delete any unwanted request items prior to submitting the request, if you catch them first.

Catching Up on EDMCS 1

While you could always load multiple request files in a single request since the initial release of EDMCS, this feature is a nice usability and productivity enhancement. It works great for situations such as adding a node to a primary hierarchy/viewpoint and inserting it into an alternate hierarchy/viewpoint, all from the same request – as sketched below.
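As a rough sketch of that exact scenario – one workbook, one worksheet per viewpoint – using Python and openpyxl (the worksheet names must match your viewpoint names; the column headers, viewpoint names, and member names here are illustrative, so match them to your actual request file layout):

    # Build a single request file with a worksheet per viewpoint so EDMCS
    # auto-selects both on load. Names and headers are illustrative.
    from openpyxl import Workbook

    wb = Workbook()

    # Worksheet 1: add the new node to the primary viewpoint
    primary = wb.active
    primary.title = "Account Primary"  # must match the viewpoint name
    primary.append(["Action", "Name", "Parent"])
    primary.append(["ADD", "A1234", "A1000"])

    # Worksheet 2: insert the same node into the alternate viewpoint
    alternate = wb.create_sheet("Account Alternate")
    alternate.append(["Action", "Name", "Parent"])
    alternate.append(["INSERT", "A1234", "ALT_A1000"])

    wb.save("request_load.xlsx")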

Conclusion (and a teaser)

While EDMCS is the new kid on the block in the Oracle EPM cloud space, it’s exciting to see how it’s quickly closing the gap with new functionality being added regularly! REST API operations, enhanced request files, and the other enhancements mentioned above show how far EDMCS has come in just 6 months.

But wait, there’s more!

The 18.07 release of EDMCS looks to be a HUGE release chock full of new features, including one I am especially excited for: subscriptions!

Look for more blog posts coming soon to discuss the subscription functionality and utilizing EDMCS for a Profitability and Cost Management Cloud Service (PCMCS) implementation.

An Exploration of the EDMCS REST API

Recently, my team and I had the opportunity to implement Oracle’s newest offering – Enterprise Data Management Cloud Service (EDMCS). EDMCS, for those of you who are not familiar, provides a cloud-based solution for managing master data (also referred to as metadata) across the organization.  Some like to refer to EDMCS as Data Relationship Management (DRM) in the Cloud, but the truth is, EDMCS is not DRM in the Cloud.

EDMCS is a completely new vision of what master data management can and should be. The architect of this new cloud offering is the same person who founded Razza Solutions, the company that developed the product now known as DRM.  That is important to know because it ensures that the best of what DRM has to offer is brought forward.  But, more importantly, it ensures that the learnings and the wish list of capabilities that DRM should have had are at the forefront of the developers’ minds.

Ok, now let’s get back to fun stuff. In the 18.05 patch for EDMCS, the REST API (v1) was exposed for public usage.  The documentation for the REST API can be found here:

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/rest-endpoints.html

As I highlighted in the previous post Troubleshooting Cloud Data Management Metadata Load Errors, I had developed an automation routine to upload EDMCS extracts to both PBCS and FCCS using FDMEE and Cloud Data Management.  We had been eagerly awaiting the REST API for EDMCS to finalize this automation routine and provide a true end-to-end process that can be scheduled or initialized via a single action.

Let’s take a quick look back at the automation routine developed for this customer. After the metadata has been exported to a flat file from EDMCS, the automation would upload a copy to the PBCS and FCCS pods, launch Cloud Data Management data load rules to process the EDMCS metadata extracts, run a restructure of the database after all dimensions had been loaded, and then send a status email alerting the administrator of the result.  While elegant, I considered this to be incomplete.

Automation, in my view, is a process that can be executed without user interaction. While an automation routine certainly has parameters that must be generally maintained, once those parameters are set/updated, the automation cycle should not be dependent on user input or action.  In the aforementioned solution, we were beholden to the fact that EDMCS exports had to be run interactively; however, with the introduction of the publicly exposed REST API in the 18.05 EDMCS patch, we are now able to automate the extract of metadata from EDMCS.  That means we can finally complete our fully automated, end-to-end solution for loading metadata.  Let’s review the EDMCS REST API and how we did it.

The REST API for EDMCS is structured similarly to other Oracle EPM REST APIs. By this, I mean that multiple REST commands may need to be executed to achieve a functional result.  For example, when executing a Cloud Data Management data load rule via the Data Management REST API, the actual execution of the data load rule is handled by a POST call to the jobs function with the required payload (e.g. DLR name, start period, etc.).  This call is just one portion of a functional requirement.  To achieve an actual data load, a file may need to be uploaded to the cloud, the data load rule initialized, and then the status of the data load rule retrieved.  To achieve this functional result, three unique REST API executions would need to occur.
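To make that multi-call pattern concrete, here is a hedged sketch of the data load rule portion using the Data Management REST API. The base URL, credentials, rule name, periods, and the exact response field names are placeholders/assumptions; check the REST API documentation for your pod:

    # Sketch: initialize a Data Management data load rule, then poll status.
    # URL, credentials, payload values, and response fields are placeholders.
    import time
    import requests

    BASE = "https://your-pod.oraclecloud.com/aif/rest/V1"
    AUTH = ("svc_integration", "password")

    payload = {
        "jobType": "DATARULE",
        "jobName": "MY_DATA_LOAD_RULE",
        "startPeriod": "Jan-19",
        "endPeriod": "Jan-19",
        "importMode": "REPLACE",
        "exportMode": "STORE_DATA",
        "fileName": "",
    }
    job = requests.post(f"{BASE}/jobs", json=payload, auth=AUTH).json()

    # Poll the jobs function until Data Management reports completion
    status = requests.get(f"{BASE}/jobs/{job['jobId']}", auth=AUTH).json()
    while status.get("jobStatus") == "RUNNING":
        time.sleep(15)
        status = requests.get(f"{BASE}/jobs/{job['jobId']}", auth=AUTH).json()
    print(status.get("jobStatus"))  # e.g. SUCCESS or FAILED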

To export metadata from EDMCS to a flat file using the REST API, the following needs to be executed:

  1. Get the dimension information for the EDMCS application from which metadata will be exported
  2. Execute an export of the dimension(s)
  3. Determine the status of export
  4. Download the export to a flat file

Let’s explore each of these in a little more detail. First, we need to get the dimension IDs for the application from which we will be downloading metadata.  This is accessed from the applications function.

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/op-v1-applications-get.html

When executing this function, the JSON object return includes all applications that exist in EDMCS (including those archived). So the JSON needs to be iterated to find the record that relates to the application from which metadata needs to be exported.  In this case, the name of the application is unique and can be used to locate the appropriate record.  Next, we need to query the JSON object to get the actual dimension id (circled in red).  The dimension ID is used in subsequent calls to actually export the dimension.

Great, now we have the dimension ID. Next, we need to execute the REST API call to export the dimension.

Automated Metadata 1.docx

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/op-v1-dimensions-dimensionid-export-download-post.html

You will notice that when you access this POST method, the dimension ID from the previous step is required:

/epm/rest/v1/dimensions/{dimensionId}/export/download

The JSON object returned from this execution contains minimal information. It simply provides the URL to the next required REST API execution, which will provide the status of the export.

Automated Metadata 2.docx

With this information, we can check the status of the export using the jobRuns function:

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/op-v1-jobruns-jobrunid-get.html

The JSON object returned here provides us the status of the export invoked in the prior step (in yellow) as well as a URL to the actual file to download, which is our last step in the process.

Automated Metadata 3.docx

Once the export job is complete, the files can be streamed using the URL provided by the REST execution in the prior step.

https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/edmra/op-v1-files-temp-fileid-get.html

And there you have it, a fully automated solution to download metadata to flat files from EDMCS. Those files are then provided to the existing automation routine and our end-to-end process is truly complete.
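Putting the four calls together, here is a minimal Python sketch of the full flow. The application and dimension names are hypothetical, and the JSON field names shown are illustrative; the endpoint paths come from the documentation linked above:

    # Sketch of the four-step EDMCS metadata export described above.
    # Application/dimension names and JSON field names are illustrative.
    import time
    import requests

    BASE = "https://your-edmcs-pod.oraclecloud.com/epm/rest/v1"
    AUTH = ("svc_integration", "password")

    # 1. Get the dimension ID from the applications function
    apps = requests.get(f"{BASE}/applications", auth=AUTH).json()
    app = next(a for a in apps["items"] if a["name"] == "MyApplication")
    dim_id = next(d["id"] for d in app["dimensions"] if d["name"] == "Account")

    # 2. Execute the export of the dimension
    export = requests.post(f"{BASE}/dimensions/{dim_id}/export/download",
                           auth=AUTH).json()
    job_url = export["links"][0]["href"]  # URL to the jobRuns function

    # 3. Poll the jobRun until the export completes
    job = requests.get(job_url, auth=AUTH).json()
    while job.get("status") == "RUNNING":
        time.sleep(10)
        job = requests.get(job_url, auth=AUTH).json()

    # 4. Stream the temp file to disk for the downstream automation routine
    file_url = job["links"][0]["href"]  # points at /files/temp/{fileId}
    with requests.get(file_url, auth=AUTH, stream=True) as resp:
        with open("Account.csv", "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)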

And for my next trick…let’s explore some of the different REST API tools that are available to help you in your journey with the EPM REST APIs.

 

Troubleshooting Cloud Data Management Metadata Load Errors

In my last post, I highlighted a solution that was recently deployed for a customer that leveraged Enterprise Data Management Cloud Service (EDMCS), Financial Data Quality Management Enterprise Edition (FDMEE), and Cloud Data Management (CDM) to create an automated metadata integration process for both Planning and Budgeting Cloud Service (PBCS) and Financial Close and Consolidation Cloud Service (FCCS). In this post, I want to take a bit of a deeper dive into the technical build and share some important learnings.

Cloud Data Management introduced the ability to load metadata from a flat file to the Oracle EPM Cloud Services in the 17.11 patch. This functionality provides customers the ability to leverage a common platform for loading both data and metadata within the Cloud.  Equally important, CDM allows metadata to be transformed using its familiar mapping functionality.

As noted, this customer deployed both PBCS and FCCS. Within the PBCS application, four plan types are active while FCCS has the default two plan types.  A design decision was made for EDMCS to create a single custom application type that would store the metadata for both cloud applications.  This decision was not reached without significant thought as well as counsel with Oracle development.  While the pros and cons of the decision are outside the scope of this post, the choice to use a custom application registration in EDMCS ensured that metadata was input a single time but still fed to both cloud applications.  As a result of the EDMCS design decision, a single metadata file (per dimension) was supplied with properties necessary to support each plan type.

CDM leverages its 23 “dimensions” to store metadata information for processing. Exactly like data, metadata is imported using an import format into the CDM relational repository.  Each field from a metadata file is aligned to a CDM dimension field.  The CDM Account dimension always represents the target application member name, and the CDM Entity dimension represents the parent of the member.  All other fields can be aligned to any of the remaining 21 dimensions.  CDM Attribute dimensions can be utilized in the import and mapping process but are not available for exporting to the cloud application.  This becomes an important constraint, especially in a multi-plan type deployment.  These 21 fields can be used to store any of the properties required to successfully load metadata to the target plan type.

Let’s consider this case study for a moment. The PBCS application has four plan types.  If a process were built to load all plan types from a single CDM data load rule, then we would not be able to have more than five plan type-specific attributes or properties per plan type because we would not have enough CDM fields/dimensions to store the relevant information.  This leads to an important design approach.  Instead of a single CDM data load rule to load all plan types, a data load rule was created for each plan type.  This greatly increased the number of metadata properties and attributes that could be loaded by CDM and ensured that future growth could be accommodated without a redesign of the integration process.

It is important to understand that CDM utilizes the Planning Outline Load Utility (OLU) to actually perform the metadata load to the cloud application. The OLU loads metadata using merge (yes, Planning experts, I realize that I am not discovering fire), which is important to understand, especially when processing multiple metadata loads for a single application.  When loading metadata, there are certain properties that are application level.  I like to think of these as global.  Additionally, there are plan type-specific attributes that can align (or not align) to the application-level value/setting.  I like to think of these as local.

Why is this important? Well, when loading a metadata file, if certain global properties are excluded from the metadata load file, the local properties (if specified) are used to default the global properties. Since metadata is loaded using merge, this only becomes problematic when a new member is being added to the outline.

In this particular example, an alternate hierarchy with shared members was specified in one of the plan types. The storage property of the alternates was obviously set as Shared; however, when attempting the metadata load, the following error was encountered:

A Base Member cannot be changed to a Shared Member.

After much investigation (details to follow), I discovered that the global property should also be included in the metadata load.
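To make the fix concrete, here is an approximate sketch of the relevant columns in the metadata load file. OLU header names vary by dimension and release, so treat these as illustrative; the key point is that the global Data Storage column is present alongside the plan type-specific one:

    Parent,Member,Data Storage,Data Storage (Plan1)
    Alt_Total,Existing_Member,Shared,Shared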

While CDM utilizes the OLU to load metadata, it does not provide as much verbosity in the error information as the PBCS web interface (which also uses OLU) when loading metadata. Below is an example of the error in the CDM process log.  As a tangent, I’d love to check the logs without needing to open a Service Request.  Maybe Oracle will build an enhancement that allows that in the future (hint, hint, wink, wink to my friends at Oracle).

Baha Mar - Error Handling 1

So where do I go from here? Well, what do we know about CDM loading metadata to the cloud application?  We know that CDM uses the OLU to load a flat file generated by CDM.  Bingo!  The metadata file output by CDM is a good starting point.  That file is in the Outbox of the CDM application and can be downloaded in several different ways – CDM Import process (get creative folks), CDM process details, or EPM Automate.  Now we have the metadata file and can test to determine if the error is caused by CDM or the metadata itself.  It’s all about ruling out variables.  So, we take the metadata file and import it manually within the PBCS web interface and are able to replicate the error.  But now we have an important new data point – the line number from the metadata file that is causing the error.

Baha Mar - Error Handling 2

Now that we have actionable information, we can review each property and start isolating and eliminating different variables. We determined that this error was only occurring for new alternate hierarchy parents being added to the outline.  As a test, we added the global storage property and voila, the metadata load completed successfully.  Face palm!

Maybe this would have been obvious to folks with a lot of Planning experience, but there are plenty of folks learning the intricacies of Planning and Essbase (including our friends converting from HFM to FCCS), so I wanted to share a lesson learned in my journey of metadata integration using CDM.

CDM functionality for metadata represents two of the three primary operations of ETL. In my next post, we’ll dive deeper into how the extract component of ETL was accomplished to provide a seamless end-to-end ETL solution for metadata.

Laser Tag for Cloud Analytics

A friendly game of laser tag between out-of-shape technology consultants became a small gold mine of analytics simply by combining the power of Essbase and the built-in data visualization features of Oracle Analytics Cloud (OAC)! As a “team building activity,” a group of Edgewater Ranzal consultants recently decided to play a thrilling children’s game of laser tag one evening.  At the finale of the four-game match, we were each handed a score card with individual match results and other details such as who we hit, who hit us, where we got hit, and hit percentage based on shots taken.  Winners gained immediate bragging rights, but for the losers, it served as proof that age really isn’t just a number (my lungs, my poor collapsing lungs).  BUT…we quickly decided that it would be fun to import this data into OAC to gain further insight into what just happened.

Analyzing Results in Essbase

Using Smart View, a comprehensive tool for accessing and integrating EPM and BI content from Microsoft Office products, we sent the data straight to Essbase (included in the OAC platform) from Excel, where we could then apply the power of Essbase to slice the data by dimensions and add calculated metrics. The dimensions selected were:

  • Metrics (e.g. score, hit %)
  • Game (e.g. Game 1, Game 2, Total)
  • Player
  • Player Hit
  • Target (e.g. front, back, shoulder)
  • Bonus (e.g. double points, rapid fire)

With Essbase’s rollup capability, dimensions can be sliced by any one item or at a “Total” level. For example, the Player dimension’s structure looks like this:

  • Players
    • Red Team
      • Red Team Player 1
      • Red Team Player 2
    • Blue Team
      • Blue Team Player 1
      • Blue Team Player 2

This provides instant score results by player, by “Total” team, or by everybody. Combined with another dimension like Player Hit, it’s easy to examine details like the number of times an individual player hit another player or another team in total. You can drill in to “Red Team Player 1 shot Blue Team” or “Red Team Player 1 shot Blue Team Player 1” to see how many times a player shot a team or an individual player. A simple Smart View retrieval along the Player dimension shows scores by player and team, but the data is a little raw. On a simple data set such as this, it’s easy to pick out details, but with OAC, there is another way!

Laser Tag 1

Even More Insight with Oracle Analytics Cloud (OAC)

Using the data visualization features of OAC, it’s easy to build queries against the OAC Essbase cube to gain interesting insight into this friendly folly and, more importantly, answer the questions everybody had: what was the rate of friendly fire, and who shot who? Building an initial pivot chart by simply dragging and dropping Essbase dimensions onto the canvas – including the game number, player, and score – and coloring by our Essbase metric “Bad Hits” (a calculated metric built in Essbase to show when a player hit a teammate), we discovered who had poor aim…

Laser Tag 2

Dan from the Blue team immediately stands out, as do Kevin and Wayne from the Red team!  This points us in the right direction, but we can easily toggle to another visualization that might offer even more insight into what went on. Using a couple of sunburst-type data visualizations, we can quickly tie together who was shooting and who was getting hit – filtered by the same team and then weighted by the score (and also color coded by team color).

Laser Tag 3

It appears that Wayne and Kevin from the Red Team are pretty good at hitting teammates, but it is also now easy to conclude that Wayne really has it out for Kevin while Kevin is an equal opportunity shoot-you-in-the-back kind of teammate!

Reimagining the data as a scatter plot gives us a better look at the value of a player in relation to friendly fire. By dragging the “Score” Essbase metric into the size field of the chart, correlations are discovered between friendly fire and hits to the other team.  While Wayne might have had the highest number of friendly fire incidents, he also had the second highest score on the Red team.  The data shows visually that Kevin had quite a few friendly fire incidents, but he didn’t score as much (it also shows results that allow one to infer that Seema was probably hiding in a corner throughout the entire game, but that’s a different blog post).

Laser Tag 4

What Can You Imagine with the Data Driving Your Business?

By combining the power of Essbase with the drag-and-drop analytic capabilities of Oracle Analytics Cloud, discovering trends and gaining insight is very easy and intuitive. Even in a simple and fun game of laser tag, results and trends are found that aren’t immediately obvious in Excel alone.  Imagine what it can do with the data that is driving your business!

With Oracle giving credits for a 30-day trial, getting started today with OAC is easy. Contact us for help!