Alithya Leverages the Power of Oracle Hyperion FDMEE

One of the biggest challenges for every organization today is providing reliable data for a clear business outlook. This essential activity is more critical than ever now that the options for hosting data are increasingly varied, with scenarios involving on-site hosting, Cloud, and hybrid solutions. There are, however, solutions that allow companies to navigate efficiently and seamlessly among these different hosting options. Alithya Group (NASDAQ: ALYA, TSX: ALYA) (“Alithya”) is well positioned to advise its clients on this topic.

Efficient management of data requires solid know-how.

As companies attempt to develop long-term guidance in this area, Alithya ensures that its clients’ data hosted in different environments continues to be used effectively. Alithya’s Data Governance and Integration practice includes Data Integration specialists who help free up client resources by leveraging FDMEE for data validation, and who help clients maximize FDMEE through Alithya’s financial data application review offering.

Alithya’s Tony Scalese published a book providing deeper understanding of FDMEE.

Banking on the numerous mandates entrusted to it by its clients as a market-leading provider of Oracle Enterprise Performance Management Platform solutions, the company leverages the power of Oracle Hyperion Financial Data Quality Management, Enterprise Edition (FDMEE) to help organizations enhance the quality of internal controls and reporting processes. The extensive Alithya team specializing in these FDMEE solutions has among its ranks a widely recognized expert in the market, Tony Scalese, VP of Technology at Alithya and Oracle ACE, who published The Definitive Guide to Oracle FDMEE [Second Edition] in May 2019.

Connecting current on-premise and future Cloud solutions.

“As thought leaders, we are committed to providing essential resources to help clients enhance the quality of internal controls and reporting processes,” stated Chris Churchill, Senior Vice President at Alithya. “Our Data Governance and Integration practice aligns offerings with best practices and includes a team of dedicated experts as well as some of the most comprehensive resources in the industry.”

Sharing real-world FDMEE deployment strategies.

Tony Scalese’s keen interest in data integration, and his desire to share his knowledge with as many interested parties as possible, led him to publish books on Oracle FDMEE. After a very successful first edition in 2016, he has just launched the second edition of The Definitive Guide to Oracle FDMEE. Since many organizations are now considering or have begun migrating to the Cloud, the book provides a deeper understanding of FDMEE by covering topics such as batch automation, Cloud and hybrid integration, and data synchronization between EPM products.

“FDMEE can integrate not only with on-premise applications, but also Oracle EPM Software as a Service (SaaS) Cloud Service offering,” says Tony. “It provides the foundation for Cloud Data Management and Integrations which are embedded in each of the EPM Cloud Services.  A deep understanding of FDMEE ensures that integrations built on-premise or in the Cloud function well and stand the test of time.”

Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Traceability

Traceability is the buzzword in any regulated industry. Being able to prove the numbers is crucial to all businesses, but it can be very time consuming and complex for companies that operate across multiple and diverse lines of business with a large pool of Channels, Services, Customers, or Products. Shared Services implementations require a clear understanding of the flow of costs.

Where is this cost coming from?

Why have I been charged so much more this month for the same service compared to last month?

These questions should be easy to answer. Unfortunately, not all profitability analysis technologies are able to support a quick turnaround for providing the required level of detail.

PCMCS has more than one option to easily provide much-needed answers.

The Rule Balancing report is one of numerous out-of-the-box (OOTB) features included with an Oracle Cloud Service subscription able to support data traceability and transparency. For more details about the type of information the report provides and to learn the ease with which it can be set up for your application, review this comprehensive blog post.

Besides Rule Balancing reports, PCMCS OOTB features support transparency within allocations and/or profitability models with Traceability maps.

The focus of the current post is how to access, build, and use Traceability maps.

The order in which I am covering the PCMCS OOTB features is directly related to the Intelligence menu options available in PCMCS.  As a recap, the 6 menu options are listed below:

  1. Analysis Views (How to create them, customize them and use them here)
  2. Scatter Analysis (Setup and configuration covered here)
  3. Profit Curves (Usage and features covered here)
  4. Traceability
  5. Queries
  6. Key Performance Indicators

The contents of this blog are based on the standard Bikes (BksML30) demo application, so you can follow the step-by-step details without having to go through an app setup from scratch. You can load and deploy this application directly from your PCMCS instance in a couple of clicks via the Application menu, using the + / Create button.

Traceability – Intro

The traceability maps, whether in PCMCS or in on-premise HPCM, allow users to graphically visualize the allocation flow. A chosen business segment can be traced through the allocation steps, either backwards or forwards, starting from a predefined point. The maps depict a data point either flowing into the selection of members chosen by an end user to troubleshoot or flowing out of that selection into subsequent allocation steps.

Alex Mlynarzek - Traceability - 5-21-19 - Image 1

Alex Mlynarzek - Traceability - 5-21-19 - Image 2

Traceability is a great tool for troubleshooting specific intersections of detailed data, such as base level accounts against a specific department. However, when there is a need to identify patterns or troubleshoot allocation results at a higher level, the Traceability maps in Standard Profitability (the first on-premise version of the Profitability module) are not geared to handle such requests. In order to perform a high-level analysis in Standard Profitability models, users would have to revert to Smart View or Financial Reports.

Being able to trace data at a summarized level of detail is the key difference between traceability in Management Ledger applications and traceability in Standard Profitability. Management Ledger allows end users to select the level within the hierarchy where they desire to launch or generate traceability, whether base level or otherwise.

Traceability – Setup

The starting point of any traceability map in Management Ledger is Model Views.  If you are interested in learning how to build and use Model Views, spend a few minutes reviewing this prior post.

List of steps necessary to launch a traceability report in Management Ledger applications:

  1. Select a valid Point Of View (POV). The POV must contain data in order to display any traceability results.
  2. Choose a prebuilt Model View – example: IT Support Activities.
  3. Select a tracing dimension which will represent the detail that is the focus of your analysis (Accounts, Departments, Entities, Business Units, Segments, etc.). The selected tracing dimension determines the focus or scope of your analysis and will be the one dimension displayed at base level detail or at any other generation within the hierarchy.
  4. Trace Forward and Use Generation Selection boxes are selected by default. Not selecting “Trace Forward” allows users to perform a “Trace Backward” action; in other words, to figure out how the model arrived at a data value for a selected intersection, rather than how a data value was allocated out from that intersection to other recipients.

Alex Mlynarzek - Traceability - 5-21-19 - Image 3

A report with the “Use Generation Selection” filter disabled will display the data at the base level for the Trace Dimension (in this example, Entity).

Note: If a message is received indicating the Flash Player version is not up-to-date, check that pop-ups are enabled on the page to allow the download of the required update.

Alex Mlynarzek - Traceability - 5-21-19 - Image 4

Alex Mlynarzek - Traceability - 5-21-19 - Image 5

If the traceability report does not generate any results, check that the allocation rules completed successfully for the referenced POV. Alternatively, if the POV calculation is successful but data is not displaying on the Trace Screen, check that the application variables are correctly set up for Current Year, Period, and Scenario. Also ensure the Account dimension maps are specified in the Dimension Settings screen.

Traceability – Display Options and Filters

Traceability screens have 5 display options:

  1. Vertical (Top Down)
  2. Horizontal (Left to Right)
  3. Tree
  4. Radial
  5. Circle

Within the traceability analysis, users can focus on a single rule. The tracing dimension in the previous example is Entity. The tracing dimension is the focus of the traceability reports – following how data was allocated into or out of a base level Entity.

Alex Mlynarzek - Traceability - 5-21-19 - Image 6

To isolate a specific rule and separate it into a standalone diagram, press Shift+Enter or select the graphical option at the top of the Rule ID box.

Alex Mlynarzek - Traceability - 5-21-19 - Image 7

End users have the choice of displaying the aliases/descriptions of the Entities rather than the code member names. If aliases have not been uploaded in the metadata of the application, then the report will still reference the member name codes, regardless of this choice.

The following traceability report displays how operating expenses are reallocated/redistributed from each support entity (IT, Facilities, etc.) to the production entities using the predefined driver configurations referenced in the Rule box.

Alex Mlynarzek - Traceability - 5-21-19 - Image 8

Select the “Trace Forward” filter and keep constant all other prior selections in the initial traceability screen to display IT Support Activity charge out.

Alex Mlynarzek - Traceability - 5-21-19 - Image 9

The “forward tracing” of IT allocations represents how data is allocated out to consuming departments such as Finance, Marketing, Outside Sales, Assembly, etc.  Remember the focus of the trace screen depends on the “Tracing dimension” selected. In this example, Entity was the tracing dimension.

The top box, R0009, shows the Rule Name relevant for IT allocations, the ruleset reference, the Driver used to allocate data to Targets – in this case, Desktop Laptop Users, regardless of Activity performed (NoActivity reference) – as well as the amount/dollar value of the allocation: Allocation Out 1,338,000.

Users have the flexibility to allocate data partially (to allocate only a % of the total value instead of 100%). That is what the Contribution % reference in the R0009 box represents. In this rule, the administrator/rule designer decided to fully allocate the IT cost to the consuming department instead of allocating it partially. Therefore, the 100% reference is displayed.
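
To make the driver-and-contribution mechanics concrete, here is a minimal, hypothetical Python sketch of how a source cost pool is spread to target departments in proportion to a driver (such as desktop/laptop user counts) and then scaled by a contribution percentage. The department names, driver counts, and dollar values below are invented for illustration and are not taken from the Bikes demo application.

```python
# Hypothetical illustration of driver-based allocation with a contribution %.
# All values are invented; they do not come from the BksML30 demo application.

source_cost = 1_338_000            # cost pool sitting on the IT support entity
contribution_pct = 1.00            # 100% = allocate the full amount out

# Driver: desktop/laptop user counts per consuming department (made up)
driver = {"Finance": 40, "Marketing": 25, "Outside Sales": 60, "Assembly": 75}

total_driver = sum(driver.values())
allocation = {
    dept: source_cost * contribution_pct * units / total_driver
    for dept, units in driver.items()
}

for dept, amount in allocation.items():
    print(f"{dept:>14}: {amount:>12,.2f}")
print(f"{'Allocated out':>14}: {sum(allocation.values()):>12,.2f}")
```

Setting contribution_pct to, say, 0.60 would leave 40% of the cost on the source entity, which is what a partial allocation rule does.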

In the case of the Bikes ML (Management Ledger) application, the Entity dimension has 4 generations. When talking about generations, the larger number, in this case number 4, represents the lowest level of detail. Generation 0 represents the Dimension name; Generation 1 represents the first set of children; Generation 2 represents the Children of Children, etc.
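
As a quick illustration of that numbering, the hypothetical Python sketch below walks a small Entity-style hierarchy and prints the generation of each member, with the dimension name at generation 0 and the base members at generation 4. The member names are invented and do not reflect the actual Bikes hierarchy.

```python
# Hypothetical hierarchy used only to illustrate generation numbering.
hierarchy = {
    "Entity":             ["Total Entity"],           # dimension name = generation 0
    "Total Entity":       ["Support", "Production"],  # generation 1
    "Support":            ["Corporate Services"],     # generation 2
    "Corporate Services": ["IT", "Facilities"],       # generation 3 -> base members at 4
    "Production":         ["Plants"],
    "Plants":             ["Assembly", "Finishing"],
}

def print_generations(member, generation=0):
    print(f"Generation {generation}: {member}")
    for child in hierarchy.get(member, []):
        print_generations(child, generation + 1)

print_generations("Entity")
# Generation 0 is the dimension name (Entity); generation 4 holds the base
# level members (IT, Facilities, Assembly, Finishing), the lowest level of detail.
```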

Below is a radial display of the contribution charge out at base Entity level when no generation selection was made prior to launching the traceability report:

Alex Mlynarzek - Traceability - 5-21-19 - Image 10

We can see in this diagram how much each Target Department was charged for their IT bill.  The contribution from the IT department to each target is displayed as a %.

Change the generation reference from 4 to 3. The lower the generation number, the more summarized the detail. The change of Generation reference will result in a summarization of the members of the Entity dimension to one level higher than seen previously.

Alex Mlynarzek - Traceability - 5-21-19 - Image 11

Notice how there is no longer an Entity breakdown at base level as we had in the previous screen when Generation 4 was selected, and the contribution percentages have been summarized to display the contribution % at a node level.

In situations where a dimension has many levels within the hierarchies or an increased volume of base level members, the generation selection proves useful as it allows users to group data sets and display them in the same diagram without compromising the level of detail.

Traceability – Customization

As mentioned at the beginning of this post, PCMCS comes with several features to support traceability and troubleshooting, one of these features being the Rule Balancing report. In situations where the traceability maps are insufficient to support a meaningful conversation regarding bill out values, and a deeper dive into an individual rule is necessary, the Rule Balancing report covers such a request.

While the traceability report has evolved in comparison to the Standard Profitability model, its usage is limited to situations where there is a need to troubleshoot specific data points while also having a visual representation as support.

The most common alternative to graphical traceability reports is ad hoc reporting in Smart View, either built from scratch or launched via the Rule Balancing report (described in detail in a previous post).

Conclusion on OOTB features: Traceability

Business segment profitability analysis represents the analysis of operations and profitability of individual segments (e.g., Lines of Business, Products, Channels, Customers, Services) within a company. Business segment reporting requires all costs to be divided into one of two categories: direct/traceable costs or indirect/non-traceable costs.

In PCMCS, all costs are transparent and fully traceable. An indirect cost value can easily be traced throughout the flow of the allocation model all the way down to the business segment being analyzed. The indirect allocated volume can be explained through step-by-step analysis, high level traceability maps, and OOTB reports listing out the rules impacting the distribution of such cost.

Using a combination of Model Views, Rule Balancing reports, Traceability analysis, and Smart View ad hoc retrievals, there should be no doubt regarding the source of a data value within PCMCS. Metric data validation – situations where the intersections for each metric are customized to such an extent that building a Rule Balancing report or an individual Model View is neither efficient nor effective – is mostly performed via Smart View.

In a nutshell, traceability provides significant benefits:

  • users can trace both revenue and cost based on predefined model views.
  • traceability can flow forward or backward from a starting point.
  • users can review the final contribution % (driver details are not displayed on this screen).
  • users can toggle between different display options and focus on specific rules for focused analysis.

Subscribe to our mailing list to receive updates for new blog posts related to PCMCS Queries, KPIs, Model Validation, System Reports, Data Integration using Cloud Data Management, as well as the OOTB Application Backup and Restore functionality.

Is there a PCMCS-related topic that you would like to see covered in more depth?  Email us at infoSolutions@alithya.com.

Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Profit Curves

Welcome back to this series of blog posts covering out-of-the-box (OOTB) features of Profitability and Cost Management Cloud Service (PCMCS). There is a need within the Oracle Cloud client community to discover what can be achieved with the tools provided when subscribing to one or more Oracle Cloud Services. A lack of awareness of the features included with your subscription is an unmeasured cost and a missed opportunity to gain much needed insight without further spend.

PCMCS applications – whether built for Fully Allocated P&L Solutions, Transfer Pricing, Shared Services Allocations or Customer/Product Profitability – have OOTB reporting capabilities available via the Intelligence menu that offer insight into allocation models with reduced effort. Here, we’ll explore how to set up, configure, and use such features and fully leverage the functionality that is included in the Oracle Cloud subscription cost.

The order in which I am covering the OOTB features is directly related to the Intelligence menu options available in PCMCS.  The 6 menu options are:

  1. Analysis Views (learn how to create, customize, and use them here)
  2. Scatter Analysis (discover how to set up and configure them here)
  3. Profit Curves (this blog post focuses on Profit Curves)
  4. Traceability
  5. Queries
  6. Key Performance Indicators

The content of this blog is based on the standard Bikes (BksML30) demo application, so you can follow the step-by-step information without having to go through an app setup from scratch. You can load and deploy this application directly from your PCMCS instance through a couple of clicks via the Application menu using the + / Create button.

 

Profit Curves – What Are They?

If you are looking for a graphical representation of the concentration of your profit by Customer, Product, Channel, or Fund, look no further than the Profit Curves section in PCMCS. Profit Curves, also referred to as Whale Curves, are used to identify which cluster of Customers, Channels, or Products generates the most profit. Profit Curves display a graphical representation of the relationship between economic profit and the quantity of output sold.
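
Conceptually, a whale curve is built by ranking segments from most to least profitable and plotting cumulative profit; the hump of the curve shows how a small cluster of customers or products can generate more profit than the company keeps in total. The short Python sketch below illustrates that construction with invented customer figures; it is not data pulled from PCMCS.

```python
# Illustrative whale (profit) curve: rank customers by profit, then accumulate.
# The profit figures are invented for demonstration purposes only.
customer_profit = {
    "Cust A": 500_000, "Cust B": 300_000, "Cust C": 150_000,
    "Cust D": 20_000,  "Cust E": -80_000, "Cust F": -190_000,
}

ranked = sorted(customer_profit.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for rank, (customer, profit) in enumerate(ranked, start=1):
    cumulative += profit
    print(f"{rank}. {customer:7} profit {profit:>9,}  cumulative {cumulative:>9,}")

# The cumulative column peaks before the last customer: the most profitable few
# customers earn more than the company keeps overall, and the loss-makers erode it.
```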

The details of the profit or net income split by unit/service/customer displayed in a Profit Curve identify issues with:

  • expansion of a production line
  • breadth of services that may have a negative impact on profit
  • onerous clients consuming numerous resources without justifying the cost for the profit gained from their engagement
  • potential costing issues of “over” or “under” costing products (for example, overburdening a product or product line inappropriately);  a cost study should be performed to determine the appropriate allocation
  • pricing

Information illustrated with a Profit Curve can be enlightening and help to put the focus on specific customers, products, or channels where the greatest profit attention is needed, indicating situations where a few products, services, or clients create enough profit to maintain the rest of the company’s offering. Profit Curves are key to strategic decision making, especially when dealing with competing projects and limited resources.

During one of my recent PCMCS implementations, a Profit Curve proved valuable when the client’s staple product, advocated as being its best and most profitable, was discovered to be the least profitable after the implementation of an accurate cost allocation methodology in PCM!

The easy-to-follow Profit Curve provides the foundational insight needed to rapidly shift gears across product lines, ensuring alignment of management decisions backed up by real information.

 

Building a Profit Curve

There are several Profit Curves available in the Demo application BksML30. In order to build a Profit Curve, there must be a corresponding Analysis View that can be leveraged as the basis for data selection. See a step-by-step guide on how to build an Analysis View here.

Analysis Views can contain multiple references to Measures and/or Accounts; however, a Profit Curve built on a View analyzes and displays only one measure at a time. Users can define names for the X and Y axes to add clarity for consumers of the Profit Curve information.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 1.png

Here is an example of a Profit Curve:

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 2

The curve displays a listing of Net Income generated by Customer.

From a Quarter-to-Date perspective (the Period selected at the top of the View), this Profit Curve indicates that all customers are profitable.  That may raise questions about whether or not the overhead is allocated appropriately or an even spread is used, thus skewing the results.

Note: Data in the BksML30 model at the time this Profit Curve was generated was calculated only for January, confirming the Profit Curve display, as the profit by customer distribution was evened out at Quarter-to-Date level.

The details of each customer/product/channel/segment and how much net income each is generating can be reviewed in the Category Analysis section. From a cost management and process improvement point of view, the right side is the most important.  This side generally represents customers/products/channels with a negative profit or that cost the company money.  While these customers/products/channels can’t always be eliminated, they can be watched and reviewed for pricing changes.

Using a PCMCS Profit Curve

There are options to filter data by the POV dimension, Period, or by metrics tied to Customers. For example, we can exclude from the analysis any Customers with Operating Expenses that are considered marginal. After defining the required filters, we can refresh the Profit Curve and review the newly generated pie charts.  Filters can be added to all available metrics and can be stacked up to generate any custom report.

Below is an example of the same “All Customers” Profit Curve, limited to January and with a selection of all Customers who had a Net Income smaller than 1 positive unit (USD or the currency defined in the PCM model), thereby highlighting Customers creating losses.
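
The filter logic itself is simple to reason about: keep only the customers whose Net Income falls below one currency unit, then count them and sum their losses, which is what the Details section shown further below summarizes. Here is a hedged Python sketch of that logic with made-up numbers:

```python
# Illustrative version of the "Net Income < 1" filter; the data is invented.
net_income_by_customer = {
    "Cust 001": 2_500, "Cust 002": -1_200, "Cust 003": -750, "Cust 004": 90,
}

unprofitable = {cust: ni for cust, ni in net_income_by_customer.items() if ni < 1}

print(f"Unprofitable customers: {len(unprofitable)}")
print(f"Accumulated loss: {sum(unprofitable.values()):,}")
```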

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 3

In the Details section of the Profit Curve, there is a count of 886 customers with a Net Income smaller than 1.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 4

100% of the customers analyzed based on the specified criteria are unprofitable. The “Actual Profit” in this Details section can be translated into “Actual Loss” as the total accumulated value across the 886 customers is US$ -1,148,670.

If there are doubts regarding the data intersection for the remaining dimensions in the PCM model such as Product or Entity, we can analyze related information through the configuration icon located next to the “Add Filter” menu. These selections are predefined in the Analysis View that was used during the creation of the Profit Curve, and you will not be able to modify them unless you modify the underlying View.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 5

If questions are raised during the analysis on the Profit Curve screen and a list of details by Customer is requested, we have the option to launch a report from the “Analysis Links” menu under the Category section.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 6

A report in the following format will be generated to display the Customer detail records along with all the other settings defined in the Analysis View.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 7

This report can be exported in .xls format (“Export to Excel” option), and it represents a base level data dump report, in column format, containing multiple generations and references to attribute dimensions.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 8

Note: When launching this report, users must check that the parameters have transitioned correctly from the previous screen. The Period parameter, which is saved as Quarter-to-Date on the original Analysis View used in the Profit Curve diagram, will override any other selection made during run-time analysis. If there is a need to revert to a specific month before launching the Export to Excel, users will have to make this update in the Filter/POV area and perform a data Refresh.

We can make changes to the Analysis View to add further details (for example, Cost of Goods).

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 9

For the 886 customers that are not profitable, we can dive deeper into their Cost of Goods data, Operating Expenses, or analyze whether or not the products sold are so heavily discounted that they no longer generate a margin.

 

Pie Charts Related to PCM Profit Curves

 

We can further analyze the resulting Profit Curve data by using the predefined categories tied to the Attribute dimensions available in the PCMCS application and referenced in the underlying Analysis View; the results are displayed in the adjacent Pie Chart.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 10

The available categories to display the Pie Chart data for the Profit Curve chosen are the following:

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 11

When selecting the Region category/attribute, we learn that the Southeast area contains 26.07% of all the unprofitable customers.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 12

If we change the focus of the Category from All Customers to the Top 10% most unprofitable customers, by Amount as well as by Number of Customers, the following information is displayed:

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 13
Alex Mlynarzek - Profit Curves - 4-18-19 - Image 14

The Pie Chart reveals that the Southeast region has the highest number of unprofitable customers both by Number of Customers as well as by Total Amount/Loss.

When adding a filter based on Customer Generation 3, which distinguishes between Department Stores and Specialty Retailers, it looks like 87.64% of the Top 10% most unprofitable customers are from Department Stores.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 15

A look at the 4th generation in the Customer dimension, where we can analyze the split of the losses at the Customer level, indicates that one store is responsible for 65.17% of all losses within the top 10% most unprofitable Customers.

Alex Mlynarzek - Profit Curves - 4-18-19 - Image 16

The Pie Chart is the only artifact that is refreshed based on the selections of the Category Analysis menu, while the Profit Curve remains constant based on the selections in the POV and filter criteria.

While all users of PCMCS can generate/launch Profit Curve reports and export their associated Analysis Views, the PCMCS administrator must update the requesting user’s permissions before that user can create and set up a Profit Curve report. As with all Intelligence screens within PCMCS, the Viewer role allows the use of these artifacts, not their creation or setup.

Concluding Thoughts About OOTB Features: Profit Curves

If you have been following the posts in this blog series, you’ve become aware of the dashboarding opportunities at your disposal with a PCM subscription. The listing of PCMCS OOTB features is a good starting point for comparing any other profitability and cost management tools on the market, regardless of vendor and technology employed.

Creating insightful dashboards is now at end users’ fingertips, no longer involving complex requirements-gathering processes and iterations between different display options. PCMCS users have the ability to build and customize their own dashboards. As a result, IT staff is no longer burdened with reporting requests or artifact migration between environments.

Subscribe to our mailing list for updates on the next blog post covering Traceability, Queries, and KPIs. Don’t think the PCMCS OOTB features blog series will stop at the Intelligence menu options! There is more to come on Model Validation, System Reports used for maintenance and troubleshooting, Integration with Cloud Data Management, and the Application Backup and Restore functionality. All this and more will be covered in future blog posts, so watch this space for updates.  If there is a PCMCS-related topic that you would like to see covered in more depth, email us at infosolutions@alithya.com.

EDMCS and Data Governance – Part 3

Welcome to Part 3 – the finale – of the blog series “EDMCS and Data Governance!”

Part 1 provides an introduction and primer for data governance workflows in Enterprise Data Management Cloud Service (EDMCS), which were introduced in the 19.02 release.

Part 2 discusses Workflow Stages in greater detail and dives into the brains of EDMCS workflows – the Approval Policy. Approval policies at different levels of the data chain are explained, and we conclude by building a sample workflow at the dimension level.

In Part 3, I’ll attempt to tie a bow around everything and offer some parting thoughts.

Recap

As I continue to explore and learn about collaborative workflows in EDMCS, these are the key points that come to mind:

  • Emphasize the Fundamentals – No matter what tool you are using, People and Process are extremely important in any data governance solution along with strong executive sponsorship and robust change management.
  • Build the Foundation – get the client comfortable with the tool and content before you introduce workflows. A strong foundation (your applications, dimensions, views, and viewpoints) is needed before you start the plumbing and wiring (workflows).
  • Brush up on Security – I haven’t discussed security extensively in this blog series, but the Oracle EDMCS User Guide does a nice job describing security requirements for assigning and approving workflow requests. Note that security enhancements have been introduced along with workflows. A new “Submitter” permission is now available to go along with Owner, Data Manager, and Browser. And permissions can be assigned at the Application, Dimension, Hierarchy Set, and Node Type levels.
  • Ponder the Approval Policy – this is the most interesting one to me. As we discussed in Part 2, approval policies can be defined at 4 points in the data chain (see Figure 1). With the inheritance and inter-dependencies of approval policies across the data chain along with the actions each policy can govern, it is critical to efficiently design your approval policies up front.

o   For example:

  • Suppose your client requires a final “audit” type of approval across the board for any type of request for any dimension. Or they always require an upfront “gatekeeper” type of approval to make sure the request is justified and complete before it continues down the approval chain. These would be good candidates for an approval policy at the Application level. And it would avoid having to define duplicative approval policies at lower levels in the data chain.
  • Will your application contain dimensions that do not need data governance workflows? Then Application level approval policies should be avoided.
  • Say you want to limit and govern the actions of a specific group so it can only work with existing nodes (insert, remove, update). An approval policy at the Hierarchy Set level is probably best.

o   Overall, I believe approval policies at the dimension level are a good place to start. Then as the workflows evolve and requirements become more clear, you can determine if there are common factors across all dimension approval policies that can be consolidated at a higher level (Application level approval policy), or if there are specific subsets of actions that need to be broken out to a lower level (Node Type or Hierarchy Set level approval policy).

o   All of which brings up another interesting point: effective approval policy design directly ties into effective viewpoint design. Think about it – you can define the set of Allowed Actions (Add, Insert, Move, etc.) at a Viewpoint level. Which means what? Special-purpose maintenance views are likely required to support certain approval policies, especially those at the Node Type or Hierarchy Set levels.

Figure 1 – Approval Policies and Data Chain

EDMCS and Data Governance – Part 3 - Image 1

How do EDMCS Workflows Compare with DRM/DRG?

I was reluctant to include this section at first because in general, I don’t like comparing Data Relationship Management (DRM) and EDMCS. Yes, they are both master data management tools and yes, they do share some common concepts and terminology. But overall, the two products are so different in terms of philosophy, deployment design, and underlying architecture that I think comparing the products is often less than helpful.

However, with data governance and collaborative workflows, I feel there is enough commonality that it is worth highlighting a few items. So here goes:

For each topic below, the DRM/DRG behavior is listed first, followed by the EDMCS behavior.

Workflow Design
  • DRM/DRG: Based on workflow models and workflow tasks; tasks are linked to specific actions (Add Leaf, Add Limb, Insert, Move, etc.)
  • EDMCS: Based on Approval Policies; the approval policy level (Application, Dimension, Node Type, Hierarchy Set) determines the context and scope of the actions governed

Workflow Stages
  • DRM/DRG: Use a Submit stage, a Commit stage, and optionally, one or more Enrich and/or Approve stages
  • EDMCS: Use a Submit stage and an (implied) Commit stage; approval policies determine the approval stages (sequential vs. parallel, # of approvers); requests can be re-assigned for collaboration prior to Submit

User Interface (UI)
  • DRM/DRG: Form-based design
  • EDMCS: No forms; requesters and approvers interact directly with the viewpoints

Approval Options
  • DRM/DRG: Support Approve, Reject, and Push Back; support comments, narrative, and attachments
  • EDMCS: Support Approve, Reject, and Push Back; support comments, narrative, and attachments

Escalations
  • DRM/DRG: Requests can be escalated based on defined intervals
  • EDMCS: Requests can be escalated based on defined intervals

Separation of Duties
  • DRM/DRG: Workflows can be configured to prevent a submitter from approving their own request
  • EDMCS: Workflows can be configured to prevent a submitter from approving their own request

Email Notifications
  • DRM/DRG: Generates email notifications
  • EDMCS: Generates email notifications

Other
  • DRM/DRG: Supports conditional workflows; supports splitting of requests based on pre-defined criteria
  • EDMCS: Not yet supported

I’m curious if Oracle will introduce a form-based UI for workflows. Part of me would very much like to see that so that you can present a clean user interface to the approvers, hide unnecessary details, and display special instructions and messages, but part of me does not. One of my favorite features of EDMCS is the visual highlighting of pending request changes and the “shopping cart” of request items that are displayed prior to submitting a request. I would hate to lose that by going with a forms-based workflow UI, but perhaps there is a solution that combines the best of both worlds. 

Conclusion

Well that’s it, an initial look at workflows and approval policies in EDMCS. I’m excited to see how this functionality evolves and expands over time. Talk to you next time!

And don’t forget to follow me on Twitter (@kblackEPM) and check out these links for more information:

EDMCS and Data Governance – Part 2

Welcome to Part 2 of the blog series “EDMCS and Data Governance!”

Part 1 provides an introduction and primer for data governance workflows in Oracle Enterprise Data Management Cloud Service (EDMCS), which were introduced in the 19.02 release. This exciting feature addresses a major gap in EDMCS as the product continues to rapidly evolve and mature.

In Part 2, we dive into the details of how to configure workflows. This process revolves around the concept of an “approval policy.” Interestingly, approval policies can be configured at different points of the EDMCS data chain and cascade or inherit to affect downstream points of the data chain.

Workflow Stages

Before we dive into approval policies, let’s discuss EDMCS workflow stages a bit more. They are similar in concept to Data Relationship Governance (DRG) workflow stages. See Figure 1 for an overview:

Figure 1 – EDMCS Workflow Stages

EDMCS and Data Governance – Part 2 - Image 1
  1. Submit (or Assign) Request – A request is initially created as you do today. But wait…there’s more! You can Submit the request to immediately move the request into the Approve stage OR you can Assign the request to colleagues to collaborate on the request together. When the request is ready, it is submitted to move to the Approve stage.
  2. Approve Request – The approver(s) have 3 choices:
    • Approve – the request is approved and moves forward (thanks Captain Obvious!).
    • Push Back – like DRG, the request is pushed back to the submitter for clarification or changes, who then updates and resubmits the request.
    • Reject – like DRG, the request is denied and closed. Think of “reject” as the RAID of the data governance world – it kills requests dead.
  3. Commit Request – once fully approved, the request is auto-committed and closed. EDMCS has now been updated.

Approval Policies

Now for approval policies. Approval policies can be configured at 4 levels:

  1. Application
  2. Dimension
  3. Node Type
  4. Hierarchy Set

It is important to note that each data chain object can contain one, and only one, approval policy. However, approval policies have a cascading impact so that multiple approval policies can work in concert to govern and control exactly what you want. Yes, you heard that right:  Approval Policy Inheritance – it’s not just for properties anymore!

The types of actions governed by an approval policy depend on the data chain object it is configured with – see figure 2 below:

Figure 2 – Approval Policies and Data Chain

EDMCS and Data Governance – Part 2 - Image 2

As you can see, policies defined at the Application or Dimension level govern all actions (add, delete, insert, remove, move, etc.) while policies defined at the Node Type or Hierarchy Set level govern a subset of actions. Why is this important? Because it means you need to carefully design what types of actions you want to govern and who will perform them. If I define an approval policy at the Hierarchy Set level and then submit a request that Adds 3 accounts, how many approvers are required for the request? A big ZERO! Since I requested “add” actions and only have an approval policy at the Hierarchy Set level, no applicable approval policy exists to govern the request.
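
To make that scoping behavior easier to reason about, here is a small, purely illustrative Python sketch of the rule described above: Application- and Dimension-level policies govern every action, while Node Type and Hierarchy Set policies govern only a subset, so a request whose actions fall outside every defined policy requires no approvals. This is a conceptual model, not the actual EDMCS engine, and the exact action sets per level are simplified assumptions.

```python
# Conceptual model of approval-policy scope, based on the behavior described above.
# The exact actions governed by each level are simplified assumptions for the sketch.
POLICY_SCOPE = {
    "Application":   {"add", "delete", "insert", "remove", "move", "update"},
    "Dimension":     {"add", "delete", "insert", "remove", "move", "update"},
    "Node Type":     {"add", "delete", "update"},          # assumption
    "Hierarchy Set": {"insert", "remove", "move"},         # assumption
}

def applicable_policies(defined_policies, requested_actions):
    """Return the defined policy levels whose scope covers any requested action."""
    return [level for level in defined_policies
            if POLICY_SCOPE[level] & set(requested_actions)]

# Only a Hierarchy Set policy exists, and the request just adds members:
print(applicable_policies(["Hierarchy Set"], {"add"}))   # [] -> zero approvers required
print(applicable_policies(["Hierarchy Set"], {"move"}))  # ['Hierarchy Set'] -> governed
```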

Putting It All Together

Let’s walk through an example.

  1. Define Approval Policy

First, I will define an approval policy for the Account dimension. To do this, Inspect either the application or default viewpoint and access the Account dimension from the Definition tab. From there, click the Policies tab.

Here you will see the Approval policy for the Account dimension. Click on the Approval link to inspect the approval policy.

EDMCS and Data Governance – Part 2 - Image 3

The General tab will display basic information about the approval policy. You can edit the approval policy name and description if necessary.

EDMCS and Data Governance – Part 2 - Image 4

The Definition tab is where the magic happens. Select Edit to update the following parameters:

  • Enabled – click this check box to enable the approval policy.
  • Approval Method – select Serial or Parallel.
  • One Approval Per Group – if using Serial approvals, this will automatically be set to “True.” If using Parallel approvals, you can select one approval per group or define a Total Required # of approvers.
  • Include Submitter – enable this to allow the submitter to also be an approver (the submitter’s approval will be automatically granted). If “separation of duties” is required for your company, do not enable this.
  • Reminder Notification – the # of days that will elapse before reminder emails are sent.
  • Approval Escalation – the # of times a reminder occurs before an escalation email will be sent.
  • Approval Groups – select user(s) and/or group(s) to be included in the approval process. When using Parallel approvals, the order of approval groups does not matter. When using Serial approvals, the order of approval groups does matter – you need to list the approval groups in the order that approvals should be executed.

With my example approval policy, I am using serial approvals, 2 approval groups (a Planning group and GL group), a reminder interval of 5 days, and an escalation interval of 2 reminders.
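
As a compact way to reason about those settings, the example policy can be summarized as a plain data structure. The snippet below is just an illustrative Python representation of the options listed above; it is not a file format or API payload that EDMCS actually consumes.

```python
# Illustrative summary of the example Account approval policy; not an EDMCS payload.
account_approval_policy = {
    "enabled": True,
    "approval_method": "Serial",          # approvals execute in the listed order
    "one_approval_per_group": True,       # forced to True for Serial approvals
    "include_submitter": False,           # preserves separation of duties
    "reminder_notification_days": 5,      # days before a reminder email is sent
    "approval_escalation_reminders": 2,   # reminders before an escalation email is sent
    "approval_groups": ["Planning Approvers", "GL Approvers"],  # order matters for Serial
}
```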

EDMCS and Data Governance – Part 2 - Image 5

  2. Submit Request

Now we’re cooking with gas. It’s time to submit a request. I will submit a request to my default Account viewpoint that includes 1 add, 1 property update, and 1 move. Here is the request in Draft status:

EDMCS and Data Governance – Part 2 - Image 6

Did you notice something new? Look at the Actions button next to Submit. This is where you can assign the request to another user and collaborate with them to finish up the request.

EDMCS and Data Governance – Part 2 - Image 7

EDMCS and Data Governance – Part 2 - Image 8

  3. Approve the Request

After the request is submitted, it is considered “in flight” because it has been submitted, but not yet approved/committed. And look! EDMCS now offers a nice Activity page on the home screen displaying the status of various workflow requests:

EDMCS and Data Governance – Part 2 - Image 9

First, the users in the Planning Approvers group will receive an email notifying them that they have been “invited to approve a request” (it’s very polite):

EDMCS and Data Governance – Part 2 - Image 10

As mentioned earlier, an approver has 3 choices: Approve, Reject, or Push Back. Reject and Push Back are available under the Actions dropdown. Here are the dialog windows that will be displayed for those actions (note the comment field is required):

EDMCS and Data Governance – Part 2 - Image 11

Otherwise, the approver will click the Approve button and see this:

EDMCS and Data Governance – Part 2 - Image 12

And then the same process will continue with the GL Approvers group since I am using Serial approvals. Once again, an approver can reject, push back, or approve. Once approved, the request is committed and closed.

Congratulations! You have now completed your very first data governance workflow request in EDMCS!

Conclusion

This blog post should be useful in providing more details and clarity on workflows, workflow stages, and approval policies. In the third and final post for this series, I’ll offer a recap and some closing thoughts. Talk to you then.

Read the next post in this EDMCS blog series:  EDMCS and Data Governance – Part 3

And don’t forget to follow me on Twitter (@kblackEPM) and check out these links for more information:

OPA! The Future of Cloud Integration – Important Updates Are Coming

Much to the chagrin of Product Management, I often abbreviate Cloud Data Management to CDM.  Why do they not like that I do this?  Well there is a master data management tool for Customer data that you can guess also uses the same acronym.  While I understand the potential confusion, since I’m telling you up front, there should be no confusion when I use CDM throughout this post.

I recently had the opportunity to meet with Oracle Product Management and Development for FDMEE/CDM to get a preview of what’s coming to the product and offer feedback for additional functionality that would benefit the user community.  We generally get together about once a year; however, it’s been a bit longer than that since our last meeting, so I was excited to hear what interesting things Oracle’s been working on and what we may see in the product in the future.

Now any good Oracle roadmap update would not be complete without a safe harbor reminder.  What you read here is based on functionality that does not yet exist.  The planned features described may or may not ever be available in the application – at the sole discretion of Oracle. No buying decisions should be made based on the information contained in this post.

Ok, now that we have that out of the way, let’s get into the fun stuff.  There are a number of enhancements coming and planned, but today I am going to focus on two significant ones:  performance and ground to cloud integration.

Performance Enhancements

We’re all friends here, so we can be honest with each other.  CDM (and FDMEE) isn’t an ETL tool in the truest sense of the word. It is not designed to handle the massive data volumes that more traditional ETL tools can and do.  You might think to yourself, “Thanks for the info there, Tony, but we all know that,” and you wouldn’t be wrong, but I like to set the stage a bit.

If you know the history of FDMEE, you know that it was originally designed to integrate with Hyperion Enterprise and then HFM.  Essbase and Planning became targets later.  Integrating G/L data is far different from integrating the more operational data that is often needed by targets like EPBCS and PCMCS.  While CDM (and FDMEE) can technically handle the volume of this more granular data, the performance of those integrations is sometimes less than optimal.  This dynamic has plagued users of CDM for years.  It has only been exacerbated when integrations are built without a deep understanding of how to tune CDM (and FDMEE) processes to achieve the highest level of performance within the constructs of the application. As CDM has grown in popularity (owing to the growth of Oracle EPM Cloud), the problem of performance has become more visible.

To address performance concerns, Oracle is planning to support 3 workflow methods:

  • Full – No change from legacy process
  • Full, No Archive – Same workflow as today, but data is deleted from the data table (tDataSeg) after a successful export.  This means the data table will contain fewer rows and should allow new rows to be added faster (inserts during the workflow process).  The downside of this method is that drill through is not available.
  • Simple – Same workflow as today, but data is never moved from the staging/processing table (tDataSeg_T) to the data storage table (tDataSeg).  This is the most expensive (in terms of time) action in the workflow process, so eliminating it will certainly improve performance. The downside is that data can never be viewed in the Workbench and Drill Through is not available.

Oracle has begun testing and has seen performance improvements in the range of 50% in data sets as large as 2 million rows.  Achieving that improvement required using the full complement of the new Data Integrations features (i.e., Expressions). That said, this opens up a world of possibility for how CDM can potentially be used.

If you have integrations that are currently less than optimal in terms of performance, continue monitoring for this enhancement.  If you need assistance, feel free to reach out to us to connect with our team of data integration experts.

On-Premise Agent

Ground to cloud integration is one of the most important capabilities to consider when implementing Oracle EPM Cloud.  As the Oracle EPM Cloud has evolved, so too has the complexity of the solutions deployed within it, which has steadily increased the complexity of the integrations needed to support those solutions.  While integration with on-premises systems has always been supported through EPM Automate, this approach requires a flat file to be generated by the system from which data will be sourced. The file is then loaded to the cloud and processed by CDM.  This is very much a push approach to data integration.

The ability of the cloud to pull data from on-premises systems simply did not exist. For integrations with this requirement, FDMEE (or some other application) was needed. Well as the old saying goes, the only thing constant is change.

Opa! – a common Greek emotional expression. It is frequently used during celebrations.  Well it’s time to celebrate because Oracle will soon (CY19) be introducing an on-premises agent (OPA) for CDM!

This agent will allow a workflow to be initiated from CDM, communicate back to the on-premises systems, initialize and then upload an extract to the cloud. The extract will be natively imported by CDM.  This approach is similar to how the FDMEE SAP adaptor currently works.  From an end user perspective, they click Import on the Data Load Workbench and after some time, data appears in the application. What’s happening in the background is that the adaptor is initializing an extract from SAP and writing the results to a flat file which is then imported by the application. OPA will function in an almost identical way.

OPA is a lightweight Java utility that requires no additional software (other than Java) and will be installed on local systems. It will support both Microsoft Windows and Linux operating systems. Like all Oracle on-premises utilities (e.g., EPM Automate), password encryption will be supported. The only ports required to be open are 80 (HTTP) or 443 (HTTPS).  A customer can use an externally facing web server to redirect to an internal port for the agent to receive the request; this is needed only if the customer wants to run the agent on a port other than 80 or 443 and does not want to open that port on the enterprise firewall.  If the customer runs the agent on port 80 or 443 and either of those ports is already open, then no firewall action is required.

The on-premises agent will have native support for Oracle EBS and PeopleSoft GL – meaning the queries are prebuilt by Oracle.  Additionally, OPA will support connecting to on-premises relational data sources.  Currently, Oracle, SQL Server, and MySQL drivers are bundled natively, but additional drivers can be deployed as needed, meaning systems such as Teradata can also be leveraged as data sources.

OPA will also provide the ability to execute scripts (currently planned for Java, but discussions for Groovy and Jython are in flight) before and after the on-premises extract process.  This is similar to how the BefImport and AftImport event scripts are currently used in FDMEE.  This will allow the agent to perform pre- and post-processing, such as running a stored procedure to populate a data view from which CDM will source data.
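
As a sketch of that pattern, the hypothetical pre-extract script below calls a stored procedure to refresh a staging view before the agent runs its extract query. The connection details, package, and procedure names are placeholders, and the actual hook names and scripting interface that OPA will expose were not final at the time of writing; this simply illustrates the idea, assuming a Python runtime and the cx_Oracle driver are available on the agent host.

```python
# Hypothetical pre-extract hook: refresh a staging view before the agent queries it.
# Connection details and the procedure name are placeholders, not real objects.
import cx_Oracle  # assumes the Oracle client and cx_Oracle driver are installed

def before_extract():
    conn = cx_Oracle.connect("integration_user", "********", "dbhost:1521/ORCLPDB1")
    try:
        cur = conn.cursor()
        # Populate the view/table that the agent's extract query will read from.
        cur.callproc("pkg_epm_integration.refresh_gl_staging")
        conn.commit()
        cur.close()
    finally:
        conn.close()

if __name__ == "__main__":
    before_extract()
```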

The pre- and post-events of OPA really open up a world of opportunity and lay the foundation for CDM to support scripting.  How, you might ask?  In v1.0, OPA is intended to provide a mechanism to load on-premises data to the cloud.  But in theory, CDM could make a call to OPA at the normal workflow events (of FDMEE) and, instead of waiting for a data file, simply wait for an on-premises script to return an execution code.  This construct would eliminate the security concerns that prevented scripting from being deployed in CDM, as the scripts would execute locally instead of on the cloud.

The OPA framework is really a game changer and will greatly enhance the capability of CDM to provide Oracle EPM Cloud customers a true “all cloud” deployment.  I am thrilled and can’t wait to get my hands on OPA for beta testing.  I’ll share my updates once I get through testing over the next couple of months.  I’ll also be updating the white paper I authored back in December of 2017 once OPA is released to the general public.  Stay tuned folks and feel free to let out a little exclamation about these exciting coming enhancements…OPA!

EDMCS and Data Governance – Part 1

Ahh… February. An interesting month with a variety of happenings. From the significant – Black History Month and Presidents’ Day, to the exciting – the Super Bowl…well, sometimes. From the romantic – Valentine’s Day, to the silly – that tenacious groundhog trying to find his shadow…AGAIN. Not to mention that Spring is just around the corner and brings us the glorious event known as “March Madness!”

Why am I babbling about February? <segue> Because it is also the month that introduced Data Governance and Collaborative Workflows with the release of Enterprise Data Management Cloud Service (EDMCS) v19.02. </segue>

As we continue this journey to Enterprise Performance Management (EPM) Cloud, the addition of Data Governance to EDMCS is a major step forward, especially for those of us who have worked with the classic on-premise solutions (Data Relationship Management (DRM) and Data Relationship Governance (DRG)) and who have been awaiting a similar offering in EDMCS to support our Cloud clients. From what I’ve seen so far, a major gap between DRM/DRG and EDMCS has been addressed with this release.

In this blog series, I’d like to further explore Data Governance in EDMCS. At a high level, this is how I see this series unfolding:

  • Part 1 will provide the foundation, background, and basic concepts for EDMCS and Data Governance
  • Part 2 will get more into the “techy” stuff and dive deeper into Approval Policies and Security
  • Part 3 will provide a recap and closing thoughts/lessons learned

So, with that said, onto Part 1…

Prerequisites

Before diving head first into configuring Data Governance and collaborative workflows in EDMCS, there are a few things to consider.

  • Don’t forget people and process. I’m a big believer that people and process are just as important as (and usually much more important than) the tool. Please refer to this blog post for a quick read on this: The Data Governance Triple Crown.

I believe the same tenets apply to EDMCS and that it’s important to start thinking about a formal data governance program that includes a charter, executive sponsorship, roles & responsibilities, metrics, and much more. Data Governance can be a challenging cultural shift for many organizations, one that requires strong change management to handle the inevitable resistance. This is where a formal data governance framework can help.

  • Establish the foundation. As with building a house, it’s important to lay a solid foundation before you install the wiring and plumbing. Build your EDMCS application(s) and dimensions, and populate your primary and alternate hierarchies first. Get the client comfortable with the tool and the content. Then you can start to layer in the workflows.
  • Start to identify the “who” (e.g., the people involved and the roles they will play): Who will be submitting requests? Who will be approving? Who will do both?
  • Start to think about the “what.” What applications/dimensions/hierarchies will be governed? What are the use cases and typical scenarios that require data governance? Start to collaboratively mock up and storyboard some typical workflows with the client to visualize how the workflows will function. And don’t try to build a workflow for every possible scenario. Start with the big hitters and low hanging fruit first. You can always add more workflows later.

What’s Included in EDMCS Workflows?

Are you wondering what EDMCS includes as far as data governance functionality? In summary, EDMCS supports:

  • Two types of roles – submitters and approvers
  • Separation of duties – workflows can be configured to prevent submitters from approving their own requests
  • The “four eyes” principle: EDMCS data governance adheres to the principle that requests must be approved by at least two people
  • Default application views and maintenance views: workflows can work with both types of views
  • Subscriptions: workflows can be triggered by Subscription requests
  • Email-based notifications
  • Serial and Parallel approvals:
    • Serial approval means a sequential order of approvals is required. For example, Approver #2 can’t approve until Approver #1 approves, Approver #3 can’t approve until Approver #2 approves, and so on.
    • Parallel approval means the approvals can occur in any order and at the same time.
    • With either method, all approvals must occur before the request is committed.
  • Configuration of Reminder and Escalation intervals
  • Multiple Workflow Stages:
    • Submit – initiate the request and add/edit/delete line items in the request. Note that with the 19.02 release, you can also attach documents and insert comments at the line item level. These enhancements are helpful for attaching policies, supporting details, and other documentation related to the workflow request.
    • Approve – similar to DRG, an approver can approve, push back, or reject a request. Pushing back will send the request back to the submitter for additional changes. Rejecting will close the request and end the workflow.
    • Commit (implied) – once the request is fully approved, it is committed, hierarchies are updated, and the request history can be viewed like any other request.
  • Approval Policies – this is really the brains of how workflows are configured in EDMCS, and the next blog post covers this in greater detail. But here is a screenshot of the Approval Policy screen showing the available options:

Kevin Black - EDMCS and Data Governance - Part 1 - 3-8-19 Image 1

Conclusion

I hope you found this blog post helpful as an introduction to EDMCS and data governance, and that you will keep reading as the rest of the series is posted. Please contact me with any questions and comments!

And don’t forget to follow me on Twitter (@kblackEPM) and to check out and subscribe to my blog (https://ranzal.blog/author/kblackranzal/), along with the blogs authored by my very talented colleagues at Alithya (https://ranzal.blog/).

Read the next post in this EDMCS blog series:  EDMCS and Data Governance – Part 2

Interested in better understanding EDMCS, the RESTful API, and Cloud Data Management? Be sure to check these excellent blog posts by Tony Scalese, aka FDM Guru: https://ranzal.blog/author/ascalese/

Looking for an outstanding resource for all things master data-related and more? Look no further!  https://datarestless.com/

Oracle Announces Removal of Support for Transport Layer Security Protocol 1.0 and 1.1; How Does that Affect Me?

Oracle has announced that as of May 3, 2019, the use of Transport Layer Security Protocols 1.0 and 1.1 will no longer be supported.  Communications to Cloud products will only be supported with TLS1.2.

The announcement was made in the following February What’s New communications from Oracle:

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 1

The WHAT has come; now WHO is affected?

There are many ways to connect to the Cloud. To better understand the impact, let’s break down the more popular ways of connecting, all of which rely on a common technology for their connection, HTTPS:

  1. EPMAutomate
  2. Web Browsers
  3. cURL / PowerShell
  4. Financial Data Quality Management, Enterprise Edition (FDMEE)

EPMAutomate is pretty much a done deal.  If there are issues or fixes needed, Oracle will release an update to go along with the Cloud deployment.  Keep an eye on the What’s New pages, as well as on the notification displayed when running EPMAutomate itself.

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 2

Recent versions of both Internet Explorer and Firefox support TLS1.2 out-of-the-box.  It might not be enabled based on IT policies, but the functionality is present and easy to check.

Internet Explorer > Tools > Internet Options > Advanced

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 3

Firefox > about:config > security.tls.version

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 4a

In security.tls.version, a value of 1 corresponds to TLS 1.0, 2 to TLS 1.1, 3 to TLS 1.2, and 4 to TLS 1.3.

Now, if you have ventured out into custom scripting and fully embraced REST (EPMAutomate doesn’t count in this situation), then cURL and PowerShell might need some tweaks as well.  This is the real reason Oracle has started to outline and share this information with the end-user community.

As a result, these solutions will need to be updated and retested.  For this purpose, Oracle has stated that you can request early, via Oracle Support, a TLS1.2-only POD for testing.  I highly recommend this, as it has provided some great insight for Alithya.  We were also able to pass along our findings to Oracle early to help streamline the patching process of FDMEE; more on this later.

cURL scripts will need to be updated to use the "--tlsv1.2" option when they are invoked.

For PowerShell, you will need to add the following line in your scripts:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

The thing that really got me excited: FDMEE!

The last topic that Oracle mentions applies if you use FDMEE on-premise.  If you are like me, an FDMEE fanatic, then you’ll know that this caused all the triggers in my brain to start firing.  Everything I do in FDMEE will need to be tested to make sure it complies and works.  The things I use in my daily activities are:

  1. JSON-based RESTful API calls in Jython scripts (a minimal sketch of such a call appears after this list)
  2. Target Application Registrations to Cloud Applications
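
To make this concrete, here is a minimal Jython sketch of the kind of JSON-based RESTful call an FDMEE custom script might make. The URL, endpoint, and credentials below are illustrative placeholders rather than values from an actual implementation; the point is that, against a TLS1.2-only POD, an unpatched FDMEE stack typically fails right where the HTTPS connection is opened.

# Minimal, illustrative Jython sketch - replace the URL and credentials with your own
import base64
from java.net import URL

# Hypothetical EPM Cloud REST endpoint and service account (placeholders)
cloudUrl  = URL("https://your-pod.oraclecloud.com/interop/rest")
authToken = base64.b64encode("identitydomain.serviceaccount:password")

# Open the HTTPS connection and attach basic authentication
conn = cloudUrl.openConnection()
conn.setRequestMethod("GET")
conn.setRequestProperty("Authorization", "Basic " + authToken)

# On a stack still limited to TLS 1.0/1.1, this is typically where an SSL handshake
# exception surfaces instead of an HTTP status code
print "HTTP status: " + str(conn.getResponseCode())

Once the patches described later in this post are applied, the same call returns an HTTP status rather than a handshake error.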

I quickly shot off an Oracle Support ticket to get myself a TLS1.2 POD.  Oracle responded in a relatively short time and stated that my POD was ready, and I had it for roughly two weeks of testing.  Without any changes to my virtual lab, I attempted to connect to see what would happen.  Sure enough, I received an error:

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 5

To get around this error, I also imported an LCM of a previous Cloud application and checked what a set of custom Jython scripts using the JSON/RESTful API would produce; I received similar errors:

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 6

…as well as the out-of-the-box Refresh Metadata & Refresh Members options:

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 7

I confirmed with my colleagues in Development that this is the expected result when TLS is not at the right level and the appropriate patches have not yet been applied and configured.  Knowing this, I also tested with the TLS1.2 option disabled in my browser and received the same result.  Now that I knew I had a good starting point, I was off to the races to figure out how to continue.

Unfortunately, the links that Oracle provided in the What’s New announcement appear to be broken and not public.  As a result, I had to create an SR to gain access to the information.  After receiving the documents and doing some light reading, I was able to formulate a patch strategy, apply the necessary patches, apply the registry updates, and test again.

This time I was able to run successful tests of both FDMEE scripts and Oracle adaptor connections to the Cloud.

Wayne Paffhausen - Transport Layer Security Protocol Support Removal - 2-28-19 Image 8

Great… Now what do we do?

Patching the environments was not always an easy task.  It took quite a bit of time to complete as there were multiple products that needed updates.  Most of the products that needed updating were not standard EPM components (HFM, Planning, etc.):  WebLogic, JRockit, JDK, OHS, etc. all needed to be updated, and because these are the building blocks on which the EPM suite runs, they introduced update dependencies into the EPM products we use.

Oracle has stated that this goes into effect on May 3rd, which is right around the corner.  Alithya, an Oracle Platinum Partner, is here to help you assess your current EPM installation and build that patch plan.

Even if you don’t use the Cloud today but are thinking about moving to the Cloud at some point, it is important to make sure your environment is ready and that you have the necessary support.

For more information, contact us at infosolutions@alithya.com.

Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Analysis Views and Scatter Analysis

PCMCS Out-of-the-Box (OOTB) Features:  2. Intelligence and Dashboarding – Analysis Views and Scatter Analysis

Two teams of consultants with similar amounts of experience and prestige both guarantee that they can perform an application implementation to the highest quality: one at a higher cost but in a shorter timeframe, and the other at a lower cost but over a longer timeframe.  All other considerations being equal, should I save money, or should I save time?

A few days ago, I released my first blog post on PCMCS, covering Rule Balancing reports usage and customization. This post builds on that first post to cover intelligence capabilities, some of which are only available in the Cloud version of the PCM software.

There are 6 menu options when accessing the Intelligence menu within PCMCS.

  1.  Analysis Views
  2.  Scatter Analysis
  3.  Profit Curves
  4.  Traceability
  5.  Queries
  6.  Key Performance Indicators

This post covers the first two menu options to explain how to set up Analysis Views and how to use Scatter Analysis.

Analysis Views

Analysis Views are the first set of reports available to end users within the PCMCS user interface.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 7

These views represent a way to predefine and save intersections of members for future review.  The selections within Analysis Views are open to all dimensions within the PCMCS application at various levels within the hierarchies. This is the first step you need to take towards building or defining a dashboard for your PCMCS application.

If you cannot create or edit an analysis view, then you need to reach out to your PCMCS administrator in order to review and adjust your security settings.

The example Analysis Views for this post are based on the “Demo Bikes” application, BksML30, which can be deployed with a few clicks in your PCMCS instance.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 8

A data slice is a combination of rows and columns along with the page selection, which, in this case, is the Period dimension.

Any dimension that is not specified in any of the 3 areas (row, column, page) will be read at top level and will be displayed in the settings menu.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 9

The Add Filter section allows you to filter the columns based on specific numerical values. In this case, the columns are represented by the Product dimension selections.

To create an analysis view, click on the plus (+) sign on the main menu. The three tabs displayed will allow you to define a name and description as well as the setup for row and column dimensions. You cannot select more than one dimension for either rows or columns.

Within the Row dimension selection, you can leverage different formulas applicable to the hierarchies within PCM such as Children of member, Member and children, Level 0 descendants, etc.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 10

Columns do not have options for member formulas beyond the usage of User preferences.

The row dimension will allow you to display further information such as generation or level details. For example, for the Product dimension, we can display the generation 3 and 4 information alongside the level 0 members, allowing us to expand our analysis to different product categories, or types.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 11

Selecting new members within the Analysis Views will not impact the original data definition. If you choose to display data for any month other than the one that was set up and saved in the Analysis View, you can do so because the Page parameter is open to end user modifications. If, however, you want to update and store a selection change within the Analysis View, you must perform such an update via the Edit menu instead of simply selecting a new parameter on the screen in view mode.

You may need to utilize the concept of period ranges when using Analysis Views in order to dynamically reference specific members of your Period dimension.

Defining a current period for the application is mandatory in order to be able to create formulas dependent on time. This action is available via the Application menu by selecting the Edit application option and navigating to the tab called Dimension settings. Here is where you can define the current Period and the Current Year for your PCMCS application.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 13

These settings will be applied when using the “Single…” or “Current” selection options within Analysis Views. The Single (-1) Level 0 selection represents, in this case, the month of May, since the current Period selection for the PCMCS application is June. The Single (-1) Level 1 selection returns Q1, since June is in Q2.

Scatter Analysis

Scatter Analysis graphs will compare one member’s values against another member’s values. The two members selected must be within the same dimension. Your PCMCS Demo application may not have any sample Scatter Analysis graphs. However, you can create one by leveraging the Analysis Views at your disposal.

You can launch Analysis Views from within Scatter graphs.

Note that saved Scatter Analysis cannot be reused or referenced in dashboards. You should use this section to create graphs for ad-hoc use outside of the dashboarding capability.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 14

If you need to include a Scatter Analysis within your dashboards, there is a corresponding item for it in the list of available components when you create a dashboard.

You can select an existing Analysis view, but you must reselect your X-axis and Y-axis dimension references.

Alex Mlynarzek - Analysis Views and Scatter Analysis - 2-28-19 - Image 15

Conclusion:  PCMCS Intelligence – Analysis Views and Scatter Analysis

While there are many alternative reporting solutions to use in conjunction with PCMCS applications, assuming that both time and money are of the essence in any project implementation, it is safe to conclude that using the PCMCS OOTB reporting features is both cost-effective and efficient. The Intelligence screens shared in this post are included in the PCMCS subscription cost, and any end user of a PCMCS application with the right level of access can take charge and build the desired reports, saving them in a location accessible to their peers, while spending no time on iterations of reporting requirements and data validations.

The PCMCS OOTB reporting features support not only troubleshooting, but also detailed analysis and reporting within one screen.  Such capabilities should not be ignored as they will surely add meaningful insight into finance teams’ day-to-day use of PCMCS.

If you need advice and guidance on how to leverage the PCMCS reporting capabilities for existing or future applications, reach out to our team of PCMCS experts at infosolutions@alithya.com.

The remaining Intelligence menus will be covered in subsequent posts over the next few weeks. If you are interested, subscribe to receive notifications of such posts.

Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Rule Balancing Reports

PCMCS Out-of-the-Box (OOTB) Features:  1. Rule Balancing Reports

The other day, I was thinking about the time when I studied Finance, and specifically about a course regarding Interest and how it represents the value of Time. What is the cost, or value, of one’s time? Is it high, resulting in a higher interest rate per period, or is it low, resulting in a lower interest rate per period? How much time am I willing to spend working in order to get that new car? How much time do I have before that competitor outruns me and snatches that market share from me?

This was how I started thinking about various out-of-the-box (OOTB) features. Such features are often key in deciding whether to acquire a software/service/product because the one resource that we constantly complain about not having enough of is “time.”

You are now reading the first blog post on OOTB features in PCMCS, covering one of the most used reports for data analysis as well as for troubleshooting profitability calculation results. At the end of this blog post, you should know what Rule Balancing reports are, where to find them, how to use them, and also how to further expand them with minimal time and effort invested.

What are Rule Balancing Reports?

Rule Balancing reports provide quick insight into the validity of the application results. These reports are powerful OOTB artifacts that can be further configured to cater to any custom application requirements in order to support validation of calculation results as well as contribution analysis and traceability.

The PCMCS OOTB Rule balancing report is initially based on a Default Model View with a standard selection of upper level members for each dimension. Starting from this Default Model View, the administrators or users of the PCMCS applications can perform a deep dive analysis on more granular intersections and configure detailed reports for a ruleset or a group of rulesets they choose to investigate.

The Default Rule Balancing report is available as soon as the application has been deployed, and it can be accessed via the Main Navigator menu found under the Manage section.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 1

I will be using the default BksML30 application to demonstrate the capabilities of the Rule Balancing reports. If you have loaded your sample application and cannot see any results in the Rule Balancing reports, check that you have run your end-to-end calculations for any given POV from the Manage Calculation menu. The POV I have chosen for this demonstration is FY16, January, Actual Scenario.

As you open the Rule Balancing menu, the Default Model View is the only view available when you initially set up your application and your allocation rules. Any other Rule Validation reports that you see within the Demo application besides the Default Model View have been built and configured outside of the out-of-the-box list of features.

What are PCMCS Model Views?

A Model View represents a predefined data slice within the PCMCS application; consider the model views as a set of selections of members for each dimension that displays only the relevant data points for a required intersection.

Rule Balancing Report Example

After running the entire set of allocation rules within the Demo BksML30 application, the Rule Balancing report should look like this:

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 2

The description of each rule selected will be displayed along with the rule number. The rules will be displayed in the order that they were launched following the user-defined sequencing, regardless of the actual Rule Number/Rule ID that has been assigned.

  • The “Input” column enables users to confirm that what was loaded into the application matches the expected values received from the source system.
  • The Allocation In and Allocation Out columns validate the allocations performed by the application from both a balance perspective (Allocation In should be equal and opposite to Allocation Out) and a numeric one.  The balance aspect is particularly of interest when allocations are executed with custom calculation rules.  In these cases, two separate rules are typically required, one for the “credit out” and one for the “debit in.”  As such, there is a greater risk that the formulas for the outbound and inbound values will not produce amounts equal and opposite in total, thereby causing an undesired imbalance.  In these situations, the Allocation In and Allocation Out values are shown on two separate rows, and they quickly illustrate to the user the success of their calculations.

Rule Balancing and Smart View Ad Hoc Reports

Any highlighted data point/data value in the Balancing screen will allow you to further investigate the allocation step through a Smart View ad hoc report. These hyperlinks represent pre-built/pre-defined queries that point directly to the Essbase database, allowing you to further expand the analysis of a selected data point.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 3

When you click on the highlighted number, a Smart View link will be downloaded to your workstation.

As an example, you can see what the detail for Net Change looks like for the Custom calculation rule R0001 – Utilities Expense Adjustment in a Linked report in Smart View.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 4

The column headers for the Rule Balancing report will list the relevant Balance dimension members. If there are members that are not populated, these will be automatically filtered out of the view. You can choose to display them by selecting View -> Columns and tagging the members you would like to display on your report – whether they have data or not.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 5

For further information on what each of these Balance dimension members represent, check out my blog post on Demystifying the Balance dimension in PCMCS.

You can view and edit the model view definition in the collapsed area between the POV and the Balancing report.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 6

The Input data on this customized Model View is pertinent only to Operating Expenses rather than the entire pool of data. This is the reason that the total USD value may be different from data displayed on the Default Model View report.

You can perform ad hoc edits to the Model View as you are using it, but none of the newly made selections will be stored. If you want to apply permanent changes to a specific Model View selection, you will have to edit the Model View in the corresponding menu.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 7

Your Model Views can be defined in the same order of operations as your allocations, or you can choose to create Model Views that are more detailed and dive deeper into a custom grouping of rules, regardless of the ruleset to which they might belong. The only dimensions displayed in your Model View selection are the Business dimensions. POV, Balance, Rule, and Attribute dimensions are not represented and therefore are not open for selection. The data points you define in the Model View will apply to all relevant rule IDs that generated the new cells.

Enhancing and Customizing Your Rule Balancing Reports

In the Demo BksML30 application, there are several standard Rule Balancing reports split by Ruleset, while others are named “Trace.” The Trace Model Views are built to support point troubleshooting of allocation areas that are either complex or open to high variation during each run.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 8

If you want to use the Rule Balancing report values outside of the ad hoc capacity, you can export the report to XLS, but remember that such an export will not represent a Smart View report – it will simply be a listing of the information presented on the Rule Balancing screen, as some members displayed here do not have a direct equivalent in the application (Running Remainder, Running Balance). This export option can be found in the Actions menu under Export to Excel, or by selecting the button in the below screen capture.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 10

A new workbook called RuleBalance is downloaded, and the entire set of data displayed on your screen is available in XLS.

Alex Mlynarzek - PCM Rule Balancing - 2-7-19 - Image 9

PCMCS Rule Balancing Drawbacks

Rule Balancing does not allow filtering based on Attributes, UDAs, or Names.

Rule Balancing hyperlinks open Smart View tabs called Linked View, and any new selection of a link within the Rule Balancing report will overwrite the contents of the existing tab. If you start developing a report by using Rule Balancing, remember to always rename the tab in case you want to kick off another report for a secondary data point within the same workbook.

Common Issues When Using Rule Balancing Reports

“Rule Balancing Report Links Don’t Work”

Your workstation must have Smart View installed before using the hyperlink feature within PCMCS. The latest Smart View version is available for download through the Navigator main menu under the Installations section.  For more guidance on generic EPM product patching, read the blog post Patch Today! Don’t Delay!

When selecting a hyperlink in the Rule Balancing report, you should be able to see that a download has started. As you click on the downloaded content, a new Excel tab will open, and you will be prompted to enter your Cloud credentials in order to have access to the requested data point intersections. If you do not have Excel open at the time you are accessing the downloaded content, the prompt to enter your Cloud credentials may not appear on the screen.

“I Can’t See Any Data in the PCMCS Rule Balancing Report.”

If data is not displayed on the screen, you are looking at one of the following situations:

  1. There is no data loaded and/or calculated for the POV at the intersections you have defined in the Rule Balancing report. Check your job console to see if such tasks have been triggered and completed successfully.
  2. Your security setup is restricting you from seeing any data values. Reach out to your administrator to adjust data grants or application access.
  3. (This used to happen occasionally during on-premise implementations.) If your Business dimensions are tagged as Label Only, check that the first child contains values. You may be able to see data at base-level intersections within your application, yet the Rule Balancing report shows no values due to the Dimension Type, Member Storage, or Aggregation operators you have defined in the metadata.

“I Can’t Create a PCMCS Model View.”

This restriction is based on provisioning. Reach out to your PCMCS Administrator for assistance with your profile or settings.

Rule Balancing Wrap Up

Rule Balancing reports are easy to set up and use.  They retrieve data quickly, are accessible to all application users through the same menu, and they should be the first stop during a model run to quickly identify if there were any issues with data allocations.

Because Rule Balancing is a fast reporting tool with a predefined OOTB template, it is one of the most commonly used troubleshooting reports for PCMCS and can be leveraged for quick balance checks. It is also a mechanism for quick report building at a detailed Rule level, a faster alternative to reading the Rule definition and manually replicating the intersections in a Smart View report.  Because these reports are system generated and their hyperlinks are based on the application and rule setup, there is no room for manual error when building validations.

Save precious time by leveraging the PCMCS OOTB functionality. The next post in this series covers Intelligence screens – Analysis Views and Scatter Analysis.  If you have further questions on the usage of Balancing Reports within PCMCS, please reach out to our team of PCMCS experts at infosolutions@alithya.com.