Much to the chagrin of Product Management, I often abbreviate Cloud Data Management to CDM. Why do they not like that I do this? Well, there is a master data management tool for Customer data that, as you can guess, uses the same acronym. While I understand the potential confusion, since I’m telling you up front, there should be no confusion when I use CDM throughout this post.
I recently had the opportunity to meet with Oracle Product Management and Development for FDMEE/CDM to get a preview of what’s coming to the product and offer feedback for additional functionality that would benefit the user community. We generally get together about once a year; however, it’s been a bit longer than that since our last meeting, so I was excited to hear what interesting things Oracle’s been working on and what we may see in the product in the future.
Now any good Oracle roadmap update would not be complete without a safe harbor reminder. What you read here is based on functionality that does not yet exist. The planned features described may or may not ever be available in the application – at the sole discretion of Oracle. No buying decisions should be made based on the information contained in this post.
Ok, now that we have that out of the way, let’s get into the fun stuff. There are a number of enhancements coming and planned, but today I am going to focus on two significant ones: performance and ground to cloud integration.
We’re all friends here, so we can be honest with each other. CDM (and FDMEE) isn’t an ETL tool in the truest sense of the term. It is not designed to handle the massive data volumes that more traditional ETL tools can and do. You might think to yourself, “Thanks for the info there, Tony, but we all know that,” and you wouldn’t be wrong, but I like to set the stage a bit.
If you know the history of FDMEE, you know that it was originally designed to integrate with Hyperion Enterprise and then HFM. Essbase and Planning became targets later. Integrating G/L data is far different from handling the more operational data that is often needed by targets like EPBCS and PCMCS. While CDM (and FDMEE) can technically handle the volume of this more granular data, the performance of those integrations is sometimes less than optimal. This dynamic has plagued users of CDM for years, and it has only been exacerbated when integrations are built without a deep understanding of how to tune CDM (and FDMEE) processes to achieve the highest level of performance within the constructs of the application. As CDM has grown in popularity (owing to the growth of Oracle EPM Cloud), the problem of performance has become more visible.
To address performance concerns, Oracle is planning to support three workflow methods:
- Full – No change from legacy process
- Full, No Archive – Same workflow as today, but data is deleted from the data table (tDataSeg) after a successful export. This means the data table will contain fewer rows, which should allow new rows to be added faster (inserts during the workflow process). The downside of this method is that drill through is not available.
- Simple – Same workflow as today, but data is never moved from the staging/processing table (tDataSeg_T) to the data storage table (tDataSeg). This is the most expensive (in terms of time) action in the workflow process, so eliminating it will certainly improve performance. The downside is that data can never be viewed in the Workbench and drill through is not available.
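To make the trade-offs concrete, here is a rough sketch of what each method means for the data table and for drill through. This is purely illustrative (only the table names tDataSeg_T and tDataSeg come from the product; the logic is a simplification of the planned behavior, not Oracle's implementation):

```python
# Hypothetical sketch of the three planned workflow methods. Only the table
# names (tDataSeg_T, tDataSeg) come from the post; everything else is a
# simplified illustration of the documented trade-offs.

def run_workflow(method, staging_rows):
    """Simulate where rows end up and whether drill through survives."""
    data_table = []  # stands in for tDataSeg (storage / drill-through source)
    if method == "full":
        data_table = list(staging_rows)   # staging -> tDataSeg (the slow copy)
        drill_through = True
    elif method == "full_no_archive":
        data_table = list(staging_rows)   # copied and exported as today...
        data_table.clear()                # ...then deleted after a good export
        drill_through = False
    elif method == "simple":
        # rows never leave tDataSeg_T, so the expensive copy is skipped entirely
        drill_through = False
    else:
        raise ValueError(f"unknown method: {method}")
    return {"tDataSeg_rows": len(data_table), "drill_through": drill_through}
```

For example, `run_workflow("simple", range(1000))` leaves zero rows in the storage table and reports drill through as unavailable, while `"full"` keeps all rows and retains drill through.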
Oracle has begun testing and has seen performance improvements in the range of 50% on data sets as large as 2 million rows. Achieving that metric required the full complement of the new Data Integrations features (i.e., Expressions) to be utilized. That said, this opens up a world of possibility for how CDM can potentially be used.
If you have integrations that are currently less than optimal in terms of performance, continue monitoring for this enhancement. If you need assistance, feel free to reach out to us to connect with our team of data integration experts.
Ground to cloud integration is one of the most important capabilities to consider when implementing Oracle EPM Cloud. As the Oracle EPM Cloud has evolved, so too has the complexity of the solutions deployed within it, which in turn has steadily increased the complexity of the integrations needed to support those solutions. While integration with on-premises systems has always been supported through EPM Automate, this requires a flat file to be generated by the system from which data will be sourced. The file is then loaded to the cloud and processed by CDM. This is very much a push approach to data integration.
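That push pattern typically boils down to three EPM Automate steps: log in, upload the extract, then run the data load rule. The command names (login, uploadfile, rundatarule, logout) are real EPM Automate commands, but every value below (user, file, rule, periods, folder) is a placeholder:

```python
# Sketch of the classic "push" pattern via EPM Automate. The command names
# are real EPM Automate commands; all argument values are placeholders.

def build_push_commands(user, password_file, url, extract_file, rule,
                        start_period, end_period):
    """Return the EPM Automate command sequence for a file-based push load."""
    return [
        f"epmautomate login {user} {password_file} {url}",
        f"epmautomate uploadfile {extract_file} inbox",   # stage the G/L extract
        f"epmautomate rundatarule {rule} {start_period} {end_period} "
        f"REPLACE STORE_DATA {extract_file}",             # import + export modes
        "epmautomate logout",
    ]
```

The key point is the ordering: the source system has to produce the file first, and only then can the cloud be told to process it — the cloud never reaches out to the source.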
The ability of the cloud to pull data from on-premises systems simply did not exist. For integrations with this requirement, FDMEE (or some other application) was needed. Well, as the old saying goes, the only thing constant is change.
Opa! – a common Greek emotional expression, frequently used during celebrations. Well, it’s time to celebrate, because Oracle will soon (CY19) be introducing an on-premises agent (OPA) for CDM!
This agent will allow a workflow to be initiated from CDM, communicate back to the on-premises system, initialize an extract, and then upload the results to the cloud. The extract will be natively imported by CDM. This approach is similar to how the FDMEE SAP adaptor currently works. From an end user’s perspective, you click Import on the Data Load Workbench and, after some time, data appears in the application. What’s happening in the background is that the adaptor is initializing an extract from SAP and writing the results to a flat file, which is then imported by the application. OPA will function in an almost identical way.
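Based on that description, the agent's job on each Import could look something like the sketch below. To be clear, OPA's actual API is not public; every function and field name here is invented purely to illustrate the pull flow (CDM initiates, the agent extracts locally, then uploads the file back):

```python
# Hypothetical sketch of the OPA pull flow described above. All names
# (handle_import_request, run_query, upload, request fields) are invented
# for illustration; the real agent API has not been published.
import csv
import io

def handle_import_request(request, run_query, upload):
    """Handle one CDM-initiated import: extract locally, upload the result."""
    rows = run_query(request["query"], request["period"])  # hit the local source
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Account", "Entity", "Amount"])       # illustrative columns
    writer.writerows(rows)
    upload(request["target_file"], buf.getvalue())         # send extract to cloud
    return {"status": "SUCCESS", "rows": len(rows)}
```

The inversion versus EPM Automate is the point: the request originates in the cloud, and the flat file is produced on demand rather than pre-staged by the source system.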
OPA is a lightweight Java utility, installed on local systems, that requires no additional software (other than Java). It will support both Microsoft Windows and Linux operating systems. Like all Oracle on-premises utilities (e.g., EPM Automate), password encryption will be supported. The only ports that need to be opened are 80 (HTTP) or 443 (HTTPS). A customer can then use an externally facing web server to redirect to an internal port for the agent to receive the request. This is true only if the customer wants to run the agent on a port other than 80 or 443 and does not want to open that port on the enterprise firewall. If the customer wants to run the agent on port 80 or 443 and either of those ports is open, then no firewall action is required.
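The port scenarios above reduce to a simple decision, which can be summed up in a few lines (illustrative only; this just restates the options from the paragraph, it is not part of the agent):

```python
# Illustrative-only decision helper restating the firewall scenarios above.

def firewall_action(agent_port, port_already_open):
    """Return what (if anything) a customer must do to make the agent reachable."""
    if agent_port in (80, 443):
        if port_already_open:
            return "none"                        # standard port, already open
        return "open port on firewall"
    # non-standard agent port: front it with an externally facing web
    # server on 80/443 that redirects to the internal agent port
    return "redirect via web server on 80/443"
```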
The on-premises agent will have native support for Oracle EBS and PeopleSoft GL – meaning the queries are prebuilt by Oracle. Additionally, OPA will support connecting to on-premises relational data sources. Oracle, SQL Server, and MySQL drivers are currently bundled natively, but additional drivers can be deployed as needed, meaning systems such as Teradata will also be able to serve as data sources.
OPA will also provide the ability to execute scripts (currently planned for Java, though discussions for Groovy and Jython are in flight) before and after the on-premises extract process. This is similar to how the BefImport and AftImport event scripts are currently used in FDMEE. This will allow the agent to perform pre- and post-processing, such as running a stored procedure to populate a data view from which CDM will source data.
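In the spirit of BefImport/AftImport, the event model could be as simple as wrapping the extract in pre and post callbacks. Again, this is a hypothetical sketch — the hook mechanism and names below are invented, not the announced design:

```python
# Hypothetical pre/post event wrapper in the spirit of FDMEE's
# BefImport/AftImport event scripts; the hook API is invented for illustration.

def run_extract_with_events(pre_hooks, extract, post_hooks):
    """Fire pre hooks, run the extract, then fire post hooks on the result."""
    for hook in pre_hooks:
        hook()              # e.g. call a stored procedure to refresh a data view
    rows = extract()        # the on-premises extract itself
    for hook in post_hooks:
        hook(rows)          # e.g. validate or archive the extracted rows
    return rows
```

The pre hook is where something like the stored-procedure example from the paragraph above would run, guaranteeing the view is populated before CDM sources data from it.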
The pre and post events of OPA really open up a world of opportunity and lay the foundation for CDM to support scripting. How, you might ask? In v1.0, OPA is intended to provide a mechanism to load on-premises data to the cloud. But in theory, CDM could make a call to OPA at the normal workflow events (of FDMEE) and, instead of waiting for a data file, simply wait for an on-premises script to return an execution code. This construct would eliminate the security concerns that prevented scripting from being deployed in CDM, as the scripts would execute locally instead of in the cloud.
The OPA framework is really a game changer and will greatly enhance the capability of CDM to provide Oracle EPM Cloud customers a true “all cloud” deployment. I am thrilled and can’t wait to get my hands on OPA for beta testing. I’ll share my updates once I get through testing over the next couple of months. I’ll also be updating the white paper I authored back in December of 2017 once OPA is released to the general public. Stay tuned folks and feel free to let out a little exclamation about these exciting coming enhancements…OPA!