EDMCS and Data Governance – Part 3

Welcome to Part 3 – the finale – of the blog series “EDMCS and Data Governance!”

Part 1 provides an introduction and primer for data governance workflows in Enterprise Data Management Cloud Service (EDMCS), a feature introduced in the 19.02 release.

Part 2 discusses Workflow Stages in greater detail and dives into the brains of EDMCS workflows – the Approval Policy. Approval policies at different levels of the data chain are explained, and we conclude by building a sample workflow at the dimension level.

In Part 3, I’ll attempt to tie a bow around everything and offer some parting thoughts.

Recap

As I continue to explore and learn about collaborative workflows in EDMCS, these are the key points that come to mind:

  • Emphasize the Fundamentals – No matter what tool you are using, People and Process are extremely important in any data governance solution along with strong executive sponsorship and robust change management.
  • Build the Foundation – get the client comfortable with the tool and content before you introduce workflows. A strong foundation (your applications, dimensions, views, and viewpoints) is needed before you start the plumbing and wiring (workflows).
  • Brush up on Security – I haven’t discussed security extensively in this blog series, but the Oracle EDMCS User Guide does a nice job describing security requirements for assigning and approving workflow requests. Note that security enhancements have been introduced along with workflows. A new “Submitter” permission is now available to go along with Owner, Data Manager, and Browser. And permissions can be assigned at the Application, Dimension, Hierarchy Set, and Node Type levels.
  • Ponder the Approval Policy – this is the most interesting one to me. As we discussed in Part 2, approval policies can be defined at 4 points in the data chain (see Figure 1). With the inheritance and inter-dependencies of approval policies across the data chain along with the actions each policy can govern, it is critical to efficiently design your approval policies up front.

◦ For example:

  • Suppose your client requires a final “audit” type of approval across the board for any type of request for any dimension. Or they always require an upfront “gatekeeper” type of approval to make sure the request is justified and complete before it continues down the approval chain. These would be good candidates for an approval policy at the Application level. And it would avoid having to define duplicative approval policies at lower levels in the data chain.
  • Will your application contain dimensions that do not need data governance workflows? Then Application level approval policies should be avoided.
  • Say you want to limit and govern the actions of a specific group so it can only work with existing nodes (insert, remove, update). An approval policy at the Hierarchy Set level is probably best.

◦ Overall, I believe approval policies at the dimension level are a good place to start. Then as the workflows evolve and requirements become more clear, you can determine if there are common factors across all dimension approval policies that can be consolidated at a higher level (Application level approval policy), or if there are specific subsets of actions that need to be broken out to a lower level (Node Type or Hierarchy Set level approval policy).

◦ All of which brings up another interesting point: effective approval policy design directly ties into effective viewpoint design. Think about it – you can define the set of Allowed Actions (Add, Insert, Move, etc.) at a Viewpoint level. Which means what? Special-purpose maintenance views are likely required to support certain approval policies, especially those at the Node Type or Hierarchy Set levels.

Figure 1 – Approval Policies and Data Chain

EDMCS and Data Governance – Part 3 - Image 1

How do EDMCS Workflows Compare with DRM/DRG?

I was reluctant to include this section at first because, in general, I don’t like comparing Data Relationship Management (DRM) and EDMCS. Yes, they are both master data management tools and yes, they do share some common concepts and terminology. But overall, the two products are so different in terms of philosophy, deployment design, and underlying architecture that I think comparing the products is often less than helpful.

However, with data governance and collaborative workflows, I feel there is enough commonality that it is worth highlighting a few items. So here goes:

| Topic | DRM/DRG | EDMCS |
| --- | --- | --- |
| Workflow Design | Based on workflow models and workflow tasks. Tasks are linked to specific actions (Add Leaf, Add Limb, Insert, Move, etc.). | Based on approval policies. The approval policy level (Application, Dimension, Node Type, Hierarchy Set) determines the context and scope of actions governed. |
| Workflow Stages | Uses a Submit stage, a Commit stage, and optionally one or more Enrich and/or Approve stages. | Uses a Submit stage and an (implied) Commit stage. Approval policies determine the approval stages (serial vs. parallel, number of approvers). Requests can be re-assigned for collaboration prior to Submit. |
| User Interface (UI) | Form-based design. | No forms – requesters and approvers interact directly with the viewpoints. |
| Approval Options | Supports Approve, Reject, and Push Back. Supports comments, narrative, and attachments. | Supports Approve, Reject, and Push Back. Supports comments, narrative, and attachments. |
| Escalations | Requests can be escalated based on defined intervals. | Requests can be escalated based on defined intervals. |
| Separation of Duties | Workflows can be configured to prevent a submitter from approving their own request. | Workflows can be configured to prevent a submitter from approving their own request. |
| Email Notifications | Generates email notifications. | Generates email notifications. |
| Other | Supports conditional workflows and splitting of requests based on pre-defined criteria. | Not yet supported. |

I’m curious if Oracle will introduce a form-based UI for workflows. Part of me would very much like to see that so that you can present a clean user interface to the approvers, hide unnecessary details, and display special instructions and messages, but part of me does not. One of my favorite features of EDMCS is the visual highlighting of pending request changes and the “shopping cart” of request items that are displayed prior to submitting a request. I would hate to lose that by going with a forms-based workflow UI, but perhaps there is a solution that combines the best of both worlds. 

Conclusion

Well that’s it, an initial look at workflows and approval policies in EDMCS. I’m excited to see how this functionality evolves and expands over time. Talk to you next time!

And don’t forget to follow me on Twitter (@kblackEPM) and check out these links for more information:

EDMCS and Data Governance – Part 2

Welcome to Part 2 of the blog series “EDMCS and Data Governance!”

Part 1 provides an introduction and primer for data governance workflows in Oracle Enterprise Data Management Cloud Service (EDMCS), introduced in the 19.02 release. This exciting feature addresses a major gap in EDMCS as the product continues to rapidly evolve and mature.

In Part 2, we dive into the details of how to configure workflows. This process revolves around the concept of an “approval policy.” Interestingly, approval policies can be configured at different points of the EDMCS data chain and cascade or inherit to affect downstream points of the data chain.

Workflow Stages

Before we dive into approval policies, let’s discuss EDMCS workflow stages a bit more. They are similar in concept to Data Relationship Governance (DRG) workflow stages. See Figure 1 for an overview:

Figure 1 – EDMCS Workflow Stages

EDMCS and Data Governance – Part 2 - Image 1
  1. Submit (or Assign) Request – A request is initially created as you do today. But wait…there’s more! You can Submit the request to immediately move the request into the Approve stage OR you can Assign the request to colleagues to collaborate on the request together. When the request is ready, it is submitted to move to the Approve stage.
  2. Approve Request – The approver(s) have 3 choices:
    • Approve – the request is approved and moves forward (thanks Captain Obvious!).
    • Push Back – like DRG, the request is pushed back to the submitter for clarification or changes; the submitter then updates and resubmits the request.
    • Reject – like DRG, the request is denied and closed. Think of “reject” as the RAID of the data governance world – it kills requests dead.
  3. Commit Request – once fully approved, the request is auto-committed and closed. EDMCS has now been updated.
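For readers who think in code, here is a minimal sketch of the request lifecycle described above. The state names mirror the stages in Figure 1; the class itself is purely illustrative and is not part of EDMCS.

```python
# Illustrative sketch of the EDMCS request lifecycle described above.
# State names mirror Figure 1; the class is not an EDMCS API.

class Request:
    def __init__(self):
        self.status = "Draft"

    def assign(self, colleague):
        # Collaborate on the request before submitting; it stays editable.
        assert self.status == "Draft"
        self.assignee = colleague

    def submit(self):
        assert self.status == "Draft"
        self.status = "Submitted"       # moves to the Approve stage

    def approve(self, is_final_approver=True):
        assert self.status == "Submitted"
        if is_final_approver:
            self.status = "Committed"   # auto-committed and closed once fully approved

    def push_back(self):
        assert self.status == "Submitted"
        self.status = "Draft"           # returned to the submitter to update and resubmit

    def reject(self):
        assert self.status == "Submitted"
        self.status = "Closed"          # denied and closed
```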

Approval Policies

Now for approval policies. Approval policies can be configured at 4 levels:

  1. Application
  2. Dimension
  3. Node Type
  4. Hierarchy Set

It is important to note that each data chain object can contain one, and only one, approval policy. However, approval policies have a cascading impact so that multiple approval policies can work in concert to govern and control exactly what you want. Yes, you heard that right:  Approval Policy Inheritance – it’s not just for properties anymore!

The types of actions governed by an approval policy depend on the data chain object it is configured with – see Figure 2 below:

Figure 2 – Approval Policies and Data Chain

EDMCS and Data Governance – Part 2 - Image 2

As you can see, policies defined at the Application or Dimension level govern all actions (add, delete, insert, remove, move, etc.) while policies defined at the Node Type or Hierarchy Set level govern a subset of actions. Why is this important? Because it means you need to carefully design what types of actions you want to govern and who will perform them. If I define an approval policy at the Hierarchy Set level and then submit a request that Adds 3 accounts, how many approvers are required for the request? A big ZERO! Since I requested “add” actions and only have an approval policy at the Hierarchy Set level, no applicable approval policy exists to govern the request.
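To make that “big ZERO” example concrete, here is a hedged sketch of the idea in Python. The Application and Dimension scopes (all actions) and the Hierarchy Set subset (insert, remove, update) come from the discussion above; the Node Type subset shown is an assumption for illustration only – check Figure 2 for the actual list.

```python
# Illustrative only - not EDMCS logic. Which approval policy levels would
# govern a given request action?

ALL_ACTIONS = None  # None means the policy governs every action type

POLICY_SCOPE = {
    "Application":   ALL_ACTIONS,
    "Dimension":     ALL_ACTIONS,
    "Node Type":     {"add", "delete"},               # assumed subset - verify against Figure 2
    "Hierarchy Set": {"insert", "remove", "update"},  # per the Hierarchy Set example above
}

def applicable_policies(defined_policy_levels, action):
    """Return the policy levels that would pick up a request action."""
    hits = []
    for level in defined_policy_levels:
        scope = POLICY_SCOPE[level]
        if scope is ALL_ACTIONS or action in scope:
            hits.append(level)
    return hits

# Only a Hierarchy Set policy exists and the request adds accounts:
print(applicable_policies(["Hierarchy Set"], "add"))   # -> [] ... a big ZERO
```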

Putting It All Together

Let’s walk through an example.

  1. Define Approval Policy

First, I will define an approval policy for the Account dimension. To do this, Inspect either the application or default viewpoint and access the Account dimension from the Definition tab. From there, click the Policies tab.

Here you will see the Approval policy for the Account dimension. Click on the Approval link to inspect the approval policy.

EDMCS and Data Governance – Part 2 - Image 3

The General tab will display basic information about the approval policy. You can edit the approval policy name and description if necessary.

EDMCS and Data Governance – Part 2 - Image 4

The Definition tab is where the magic happens. Select Edit to update the following parameters:

  • Enabled – click this check box to enable the approval policy.
  • Approval Method – select Serial or Parallel.
  • One Approval Per Group – if using Serial approvals, this will automatically be set to “True.” If using Parallel approvals, you can select one approval per group or define a Total Required # of approvers.
  • Include Submitter – enable this to allow the submitter to also be an approver (the submitter’s approval will be automatically granted). If “separation of duties” is required for your company, do not enable this.
  • Reminder Notification – the # of days that will elapse before reminder emails are sent.
  • Approval Escalation – the # of times a reminder occurs before an escalation email will be sent.
  • Approval Groups – select user(s) and/or group(s) to be included in the approval process. When using Parallel approvals, the order of approval groups does not matter. When using Serial approvals, the order of approval groups does matter – you need to list the approval groups in the order that approvals should be executed.

With my example approval policy, I am using serial approvals, 2 approval groups (a Planning group and GL group), a reminder interval of 5 days, and an escalation interval of 2 reminders.

EDMCS and Data Governance – Part 2 - Image 5
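For reference, the example policy above can also be summarized as a simple data structure. The field names below paraphrase the Definition tab parameters; this is not an EDMCS API payload.

```python
# The example Account dimension approval policy, expressed as plain data.
# Field names paraphrase the Definition tab; not an EDMCS API payload.

account_approval_policy = {
    "enabled": True,
    "approval_method": "Serial",
    "one_approval_per_group": True,       # automatically True for Serial approvals
    "include_submitter": False,           # keep separation of duties
    "reminder_notification_days": 5,      # days before a reminder email is sent
    "approval_escalation_reminders": 2,   # reminders before an escalation email
    "approval_groups": [                  # order matters for Serial approvals
        "Planning Approvers",
        "GL Approvers",
    ],
}
```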

  2. Submit Request

Now we’re cooking with gas. It’s time to submit a request. I will submit a request to my default Account viewpoint that includes 1 add, 1 property update, and 1 move. Here is the request in Draft status:

EDMCS and Data Governance – Part 2 - Image 6

Did you notice something new? Look at the Actions button next to Submit. This is where you can assign the request to another user and collaborate with them to finish up the request.

EDMCS and Data Governance – Part 2 - Image 7

EDMCS and Data Governance – Part 2 - Image 8

  3. Approve the Request

After the request is submitted, it is considered “in flight” because it has been submitted, but not yet approved/committed. And look! EDMCS now offers a nice Activity page on the home screen displaying the status of various workflow requests:

EDMCS and Data Governance – Part 2 - Image 9

First, the users in the Planning Approvers group will receive an email notifying them that they have been “invited to approve a request” (it’s very polite):

EDMCS and Data Governance – Part 2 - Image 10

As mentioned earlier, an approver has 3 choices: Approve, Reject, or Push Back. Reject and Push Back are available under the Actions dropdown. Here are the dialog windows that will be displayed for those actions (note the comment field is required):

EDMCS and Data Governance – Part 2 - Image 11

Otherwise, the approver will click the Approve button and see this:

EDMCS and Data Governance – Part 2 - Image 12

And then the same process will continue with the GL Approvers group since I am using Serial approvals. Once again, an approver can reject, push back, or approve. Once approved, the request is committed and closed.

Congratulations! You have now completed your very first data governance workflow request in EDMCS!

Conclusion

Hopefully, this blog post provides more detail and clarity on workflows, workflow stages, and approval policies. In the third and final post of this series, I’ll offer a recap and some closing thoughts. Talk to you then.

And don’t forget to follow me on Twitter (@kblackEPM) and check out these links for more information:

Implementing Zero Based Budgeting: Setting Up Your Environment

The previous post – Implementing Zero-Based Budgeting: The Requirements – outlined two key components of a successful zero-based budgeting program: a culture change and a centralized system. We recommended creating a centralized system with Oracle Planning and Budgeting Cloud Service (PBCS)/Enterprise Planning and Budgeting Cloud Service (EPBCS) because of the many advantages it provides, such as an environment with data depth.

Even with a zero-based budgeting blueprint, many companies are still hesitant to go “all in” thinking that a zero-based budgeting program implementation requires too much time and resources. The introduction of Cloud services such as Oracle PBCS/EPBCS makes the implementation of a centralized financial system easier than ever, greatly reducing the barrier to entry.

This final post in the series shows how the power of a PBCS/EPBCS environment can be used to achieve the greatest success with a newly implemented zero-based budgeting program.

How Can PBCS/EPBCS Environments Enhance the ZBB Experience?

There are four key ways to gain the most from a PBCS or EPBCS environment, including the setup of targets and accountability metrics that offer more meaningful data and greater transparency when making budgeting decisions.

Clients are often given target-setting goals in management meetings or over the phone, but we demonstrate for them how to integrate these into their budgeting systems. On numerous occasions, Alithya has been contracted to implement target setting where leadership sets growth targets and the system flows the revenue down by service, product line, etc. In turn, analysts match the underlying details.

Not surprisingly, this is a common request because target setting has been a long-time tradition during the budget process. By setting up this target-setting process in PBCS/EPBCS, an offline process moves online and is woven into the overall budgeting system process. Combining that with the zero-based budgeting mantra allows targets to be set and provides analysts with their needed baseline. Moreover, analysis can be done on departments that take the typical “reduce expenses by 10%” approach to achieve the target number instead of taking the more insightful zero-based budget journey. Yes, target setting in a centralized system is easier, but the benefit of a centralized system is the ability to see how teams react to the new target. Did they take the traditional “reduce budget percentages to fit the numbers” approach, or did they look at their budget as a whole, analyze each line item, and question the numbers organically?

After targets are set and the budget is approved, we watch to see whether the projected cost savings come to fruition. A centralized system allows capital projects or initiatives to be tracked to help systematically measure the expenditures of cost-savings activities found during the zero-based budget discovery. This provides a clear picture of what each department is doing and holds them more accountable for project decisions. It is an achievement to complete a zero-based budget “diet,” but holding teams accountable brings them to the next level of the zero-based budget “lifestyle.”

In essence, this new budgeting environment provides better insight into data – insight that ultimately allows savings to be found more effectively. For example, if you want to see the cost of direct materials, this centralized system can be set up to capture the costs in order to analyze and keep track of the different KPIs that reduce or increase overall costs.

Another example of how this works is segmenting employee costs such as travel. Instead of having a run rate of 10% of direct labor or travel costs, determine what job or tasks required that travel and use this KPI to negotiate travel expenses and further drive down costs. Essentially, use PBCS/EPBCS as a tool to capture KPIs (e.g. travel costs by job), determine the best use of travel dollars, and – more importantly – negotiate with vendors on key travel.

Lastly, a budgeting environment provides clarity to help teams make better-informed decisions about future initiatives. With the ability to see all of the underlying data points in a single location, it is possible to identify past sales and marketing campaigns and expenditures that led to profitable customers. Therefore, zero-based budgeting teams that took the initiative to build the sales and marketing cost-benefit analysis from the ground up are able to dedicate more resources (e.g. dollars, people, etc.) to winning strategies. This is in contrast to the traditional budgeting approach of a “10% rate of marketing spend year-over-year” that often masks the winning and, more importantly, losing marketing initiatives. Moreover, such planning and availability of different data points helps draw key inferences that allow sales and marketing teams to be more successful.

Summary 

Utilizing a Cloud service such as Oracle PBCS/EPBCS makes it easier for companies to implement a centralized system and achieve success with a zero-based budgeting program. PBCS/EPBCS environments can and should be set up in a way that enhances the zero-based budgeting experience. This is achieved by integrating target setting goals and establishing accountability metrics that allow a deeper dive into budget data while providing greater transparency to make better informed decisions.

To learn more about zero-based budgeting best practices and to get professional help with your Oracle PBCS/EPBCS environments, feel free to contact our team of experts.

Oracle’s ARCS Patch 1812 and Patch 1811 Review: Gazing into the Crystal Ball

Peruse the Account Reconciliation Cloud Service (ARCS) forums on Oracle’s Cloud Customer Connect and you’ll notice a theme: Transaction Matching. Questions, comments, and critiques have been flooding in from across companies and industries, clients and consultants alike. Combine this with Oracle’s game-changing announcement of the EPM Cloud price simplification plan teased for 2019 – that is, the strategic move to strictly sell bundled EPM Cloud products in the near future (more on this another time – it’s a doozy) – and the changes released for ARCS in Patches 1811 and 1812 could not have come at a more opportune time. Furthermore, these changes provide a sneak peek into Oracle’s crystal ball of what’s to come.

The WHAT has come: Changes for ARCS at the end of 2018

The most important change to ARCS from the 2018 season finale, Patch 1812, is the shift from having separate reconciliations between Transaction Matching and Reconciliation Compliance to one standardized use of Profiles. This is configured through the new reconciliation methods provided in Formats (Balance Comparison with Transaction Matching, Account Analysis with Transaction Matching, and Transaction Matching only).

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 1

The implication is that Transaction Matching reconciliations receive all the benefits that previously only Reconciliation Compliance enjoyed, including but not limited to: bulk uploads/updates to Profiles and reconciliations, access to new Workflow options such as Reviewers and Teams, and detailed filtering options including the more hidden statistical metrics (such as attributes related to count, etc.). It is important to note, though, that these new features will almost exclusively relate to new reconciliations using one of the two ‘*with Transaction Matching’ format options, as seen below. Still, the opportunity for clever design is there.

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 2

Oracle’s ARCS Patch 1812 and Patch 1811 Review - Gazing into the Crystal Ball Image 3

Furthermore, to support this change, Periods are now shared between the two feature sets, and reconciliations performed in Transaction Matching now utilize the period-end balances loaded to Reconciliation Compliance. While historically there have been business processes put in place to ensure that the balance loaded to Transaction Matching equaled the balance loaded for the month-end reconciliation in Reconciliation Compliance, Patch 1812 ensures that a system process governs the data’s integrity – certainly a more reassuring thought.

Two additional under-the-radar features introduced in Patch 1812 are (1) the ability to have Workflow that includes multiple members while not requiring an order precedence to the work and (2) the option to now have end-users approve their own re-assignments, reducing the administrative bottleneck. These changes provide value-add functionality that demonstrate Oracle’s willingness to listen to customer feedback even during these more “stuffed” patches.

The last item to mention was actually included in Patch 1811. In Transaction Matching, a text file can now be generated with the transactions or adjustments from the tool which can then be uploaded to the ERP source systems as a journal adjustment. This has been an ongoing request, and I am happy to see it finally actualized.

The WHAT does it mean: Implications and Expectations for ARCS in 2019

Transaction Matching’s relative strength to its competitors is becoming increasingly apparent, as Oracle continues to shore up areas in need of support while also providing updates that show a sensitivity to market demand. The move to unify Transaction Matching and Reconciliation Compliance is not a new idea, as Patch 1805 made apparent with the uniting of the two UIs (and much more – see Oracle Product Management’s webinar update here), but nonetheless is a bold one that I anticipate will pay dividends. The automatic conversion of Transaction Matching reconciliations to Profiles is a nice touch too, making the transition an easier pill to swallow for skeptical clients who I am sure were not eager to pay expensive consulting fees for this. Even smaller changes such as providing a space for strictly manual matching (i.e. without Auto Match rules; a Patch 1811 change) demonstrate ARCS’ commitment to be an approachable and modular product that grows with your company – a benefit I have consistently touted in the past and expect to continue touting in the future. More details about the benefits of ARCS are shared in the posts A Safe Step into the Cloud: The Argument for Account Reconciliation Cloud Service (ARCS) and Modularity in Account Reconciliation Cloud Service (ARCS): No Mistakes from “Day 1” to “Day 100”.

Changes continue to come to ARCS that only slowly trickle, if at all, down to Account Reconciliation Manager (ARM). This was true for the Variance Analysis reconciliation method which arrived in May 2017 for ARCS, but not until Dec 2017 for ARM, and it is a fair guess that this will be true for the aforementioned “All Preparers” and “All Reviewers” workflow options and end-user re-assignment configuration setting. Combine this with more and more dollars being invested in Transaction Matching compared to Reconciliation Compliance (from where I’m looking, anyway), and the message is clear on who the favorite is in the Oracle product family. While ARM contains strong functionality as an on-premise option, expect the functionality gap to increase compared to its Cloud counterpart.

Lastly, the inclusion of a journal adjustment export out of Transaction Matching is a combo solution: a “we can do that too” response to product competitor Blackline’s existing functionality as well as a demonstration of Oracle’s willingness to think outside of the product. This highlights ARCS’ flexibility as a tool capable of being used within other processes. In fact, the Oracle EPM Cloud ecosystem is one of ARCS’ biggest strengths over its competitors. I would love to see this journaling ability out of Reconciliation Compliance as well, which would extend the functionality to most ARCS clients. Regardless, this is a step in the right direction.

This post has been cross-posted on the #DataRestless blog site – read it here and other Oracle-related posts as well.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing

Oracle’s Profitability and Cost Management Cloud Service (PCMCS) provides a powerful service for allocating General Ledger profits and costs.  Recently, we worked with a banking industry client to provide a model that calculates profitability at a Product/Channel level while maintaining Account level detail.  We accomplished this through a framework we refer to as Micro-Costing where detailed profits and costs are calculated in a database using rates developed at the summary level in PCMCS.  Alithya began development of this framework in 2016 to meet a functional gap in PCMCS and provide a common framework that can be used either on-premise or in the Cloud.

To highlight the capabilities of Micro-Costing, I will use the solution deployed at our banking client as a specific example.  The following table describes the two layers where profits and costs are provided:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 1

 Definitions:

  • Product – a loan or deposit offering. Examples of a loan are an auto loan or credit card; examples of a deposit are a savings account or a checking account.
  • Origination Channel – where the account was originated.
  • Service Channel – where the financial or transactional cost or profit occurs or is assigned.
  • Customer – a legal entity responsible for accounts; for example, a person with both a home loan and a savings account.
  • Customer Account – a product that is assigned to a customer.
  • Financial Costs and Profits – the cost or profit of servicing a loan or deposit for a customer; for example, interest paid on a savings account.
  • Transactional Costs and Profits – the cost or profit of interacting with a customer; for example, the cost of an ATM transaction.

A simple way of thinking about the client’s business model:

  • Origination channels offer Products
  • Products are assigned to Customers as Customer Accounts
  • Customer Accounts are used by Customers through Service Channels

The generation of an Account level profit or cost is a C = A*B calculation where

  • A is the driver
  • B is the rate of a driven value
  • C is the driven value (profit or cost)

An example is:

ATM Expense = ATM Transaction Count * ATM Expense Rate
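As a minimal sketch of that calculation, the snippet below applies a PCMCS-derived rate (B) to a Customer Account level driver (A) to produce the driven value (C). All names and values are illustrative, not the client’s actual data.

```python
# Minimal sketch of the Micro-Costing C = A * B calculation.
# Rates come from PCMCS at the summary level; drivers live at the
# Customer Account level. All names and values are illustrative.

rates = {
    # (product, service channel, driven value) -> rate calculated in PCMCS
    ("Checking", "ATM", "ATM Expense"): 1.25,
}

drivers = [
    {"customer_account": "CA-1001", "product": "Checking",
     "service_channel": "ATM", "driver": "ATM Transaction Count", "value": 12},
]

# Driven Value Lookup: each driven value references the driver used to calculate it
DRIVEN_VALUE_FOR_DRIVER = {"ATM Transaction Count": "ATM Expense"}

def calculate_driven_values(drivers, rates):
    results = []
    for d in drivers:
        driven_value = DRIVEN_VALUE_FOR_DRIVER[d["driver"]]
        rate = rates[(d["product"], d["service_channel"], driven_value)]
        results.append({
            "customer_account": d["customer_account"],
            "driven_value": driven_value,
            "amount": d["value"] * rate,   # C = A * B
        })
    return results

print(calculate_driven_values(drivers, rates))
# -> ATM Expense of 15.0 for customer account CA-1001
```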

Micro-Costing Diagrams

Data Model

This summarizes the data model deployed.

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 2

STAGING – Contains transient data.

OPERATIONAL DATA STORE (ODS) – Persists the operational data with minimal transformation.  Dimensional integrity is not enforced, but validation jobs are available for validating stored data regarding rules and dimensional integrity.

WAREHOUSE-STAR – Persists the drivers, the rates, and the calculated profits and costs at the Customer Account level.  The Driver Lookup and Driven Value Lookup functions are used to define the drivers and driven values so that the addition of a driver or driven value is a configuration activity for an administrator rather than a coding activity.

Data Integration

A high-level summary of the data flows as deployed:

PCM Micro-Costing, a Framework for Detailed Profitability and Costing - Image 3

The source data is broken down into 3 types:

  1. General Ledger
  2. Operational Data
  3. Metadata

Data Integration uses interim flat files to maintain flexibility regarding the source data by establishing an API via the flat files without requiring knowledge of the source systems.  This allows for the introduction of source data that comes from 3rd parties not available for automated extraction from the source.

The operational data includes both Customer Account financial information and transactional activities or fees.  Product and Channel references are provided along with this information:

  • 1 million+ Customer Accounts
  • Approximately 6 million transactions per month

Some transactional drivers represent an activity that cannot be associated with a specific Customer Account; for example, a new loan application.  Proxy Customer Accounts for each product are generated to provide a place for these activities.

Additionally, although not graphically displayed in the above diagram, Branch level drivers are directly fed into the PCM Model, examples of which are Branch square footage and number of branch employees.  These drivers were used for non-Customer Account PCM costs and profits.

All batch processing is built using SQL Server Integration Services, based upon an agreement with the client regarding preferred tool sets, with SQL Server as the selected database. The framework is transferable to other integration tools and databases, including Hadoop, and Alithya has performed in-house solutioning in preparation for using the Micro-Costing framework with larger clients.

The data integration is as follows:

  1. Set POV
  2. Update metadata and stage
  3. Stage financial and transactional information
  4. Validate staged data and reprocess as necessary
  5. Load staged data to ODS and then to Star
  6. Upload PCMCS with GL and drivers
  7. Process allocations in PCMCS
  8. Download rates
  9. Run A*B calculations for each Customer Account and populate profit and cost table
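A step-gated sketch of this flow is shown below; each step only proceeds once its check passes, which is what allows a single step to be re-run without restarting the whole cycle. The no-op functions are placeholders rather than the deployed SSIS packages or PCMCS jobs.

```python
# Step-gated sketch of the monthly integration flow above. The lambdas are
# placeholders for the deployed SSIS packages and PCMCS jobs; each returns
# True when the step (and its validation) succeeds.

steps = [
    ("Set POV",                                lambda: True),
    ("Update metadata and stage",              lambda: True),
    ("Stage financial and transactional data", lambda: True),
    ("Validate staged data",                   lambda: True),  # re-run staging if this fails
    ("Load staged data to ODS and Star",       lambda: True),
    ("Upload GL and drivers to PCMCS",         lambda: True),
    ("Process allocations in PCMCS",           lambda: True),
    ("Download rates",                         lambda: True),
    ("Run A*B calcs per Customer Account",     lambda: True),
]

for name, run_step in steps:
    if not run_step():
        # Stop here so the administrator can correct the data and re-run this step
        raise RuntimeError(f"Step failed, reprocess before continuing: {name}")
    print(f"Completed: {name}")
```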

Key Design Principles

The following design principles were focused on during development of the Micro-Costing framework.  These principles facilitate an easy-to-use and easy-to-maintain solution as deployed for our client.

  • Dimensional synchronization between the Micro-Costing warehouse and PCM
  • Validation checks as close to the original data as possible
  • Configurable drivers and driven values

Dimensional Synchronization

All dimensional mapping must occur prior to the warehouse star schema; otherwise, it is not possible to perform the Micro-Costing A*B calculations to derive profit and cost detail. This has an impact on any deployment that uses FDMEE or Cloud Data Manager, as they cannot perform additional mappings during upload to the cube.

Dimensional Synchronization includes a Point of View: Year, Period, Scenario, and Version to allow for loading multiple sets of drivers during a month, and for transfer of ‘what-if’ rates back to the Customer Account level, if desired.

Validation Checks

Validation kick-outs and checks occur as early in the data integration process as possible, with a “simple” validation during staging and a “complex” validation during generation of the fact information in the warehouse. This allows the administrator to catch quality issues while adding minimal overall process duration. The data integration process is broken into a series of steps that allows for validation review, and a step can be re-run before moving on to the next one. This principle held up in deployment, ensuring that time wasn’t wasted running later processes with invalid data; the result was an improved overall process and a significant reduction in the number of days required to produce profit and cost analysis for a given month. A lesson learned during the initial roll-out was that our client had not previously required rigorous validation of the drivers at the Customer Account level and had to develop new techniques for validating the source information to ensure accuracy.
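A hedged sketch of the “simple” staging validation is shown below: kick out driver records that reference unknown members or carry bad values before they reach the warehouse star. The member lists and field names are examples only, not the client’s actual rules.

```python
# Illustrative "simple" staging validation: kick out driver records that
# reference unknown dimension members or carry bad values. Member lists and
# field names are examples only.

VALID_PRODUCTS = {"Checking", "Savings", "Auto Loan"}
VALID_CHANNELS = {"Branch", "ATM", "Online"}

def validate_staged_drivers(rows):
    clean, kick_outs = [], []
    for row in rows:
        errors = []
        if row["product"] not in VALID_PRODUCTS:
            errors.append(f"Unknown product: {row['product']}")
        if row["service_channel"] not in VALID_CHANNELS:
            errors.append(f"Unknown service channel: {row['service_channel']}")
        if row["value"] < 0:
            errors.append("Negative driver value")
        (kick_outs if errors else clean).append((row, errors))
    return clean, kick_outs

staged = [
    {"customer_account": "CA-1001", "product": "Checking",
     "service_channel": "ATM", "driver": "ATM Transaction Count", "value": 12},
    {"customer_account": "CA-2002", "product": "Cheking",   # typo -> kicked out
     "service_channel": "ATM", "driver": "ATM Transaction Count", "value": 3},
]

clean, kick_outs = validate_staged_drivers(staged)
print(f"{len(clean)} clean rows, {len(kick_outs)} kick-outs")
```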

Configurable Driver and Driven Values

A key feature of Oracle’s PCM applications is configurability, and the Micro-Costing framework is built to provide an easy-to-maintain solution that allows for rapid addition of drivers and driven values without the administrator having to manually update the tables and views required to manage the transformation and persistence of data.  This was accomplished by defining the drivers and driven values in tables and providing stored procedures for maintaining the tables and views.

The process for adding a new driver and driven value is very straightforward:

  1. Backup the database and the PCM cube.
  2. Update the source feeds to include the new activity or fee.
  3. Update the activity to Driver Lookup and Driven Value Lookup tables with the new values.  *Note: The driven value record references the driver for the A*B calculation.
  4. Execute the “Update Costing Tables and Views” stored procedure. *Note: removing a driver or driven value does not modify the tables.
  5. Update HPCM Account dimension for the new driver and driven value.
  6. Update HPCM rules to use the new driver and allocate expenses to the new driven value, and calculate the rate for the new driven value.
  7. Run the entire data integration process for the POV, and review results.
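As a rough illustration of steps 3 and 4, the snippet below inserts the new lookup rows and then calls the update stored procedure. The connection string, table names, and column names are assumptions; only the “Update Costing Tables and Views” procedure is named in the process above, and its exact object name in the database may differ.

```python
# Hypothetical sketch of steps 3-4: register a new driver/driven value pair in
# the lookup tables, then regenerate the costing tables and views. Connection
# details, table names, and column names are assumptions; only the "Update
# Costing Tables and Views" procedure is named in the process above.

import pyodbc

conn = pyodbc.connect("DSN=MicroCosting;Trusted_Connection=yes")  # assumed DSN
cur = conn.cursor()

cur.execute(
    "INSERT INTO DriverLookup (ActivityCode, DriverCode) VALUES (?, ?)",
    ("NEW_ACTIVITY", "NewDriver"))
cur.execute(
    "INSERT INTO DrivenValueLookup (DrivenValueCode, DriverCode) VALUES (?, ?)",
    ("NewDrivenValue", "NewDriver"))   # the driven value references its driver

cur.execute("{CALL UpdateCostingTablesAndViews}")  # assumed procedure name
conn.commit()
```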

Key Benefits

The successful deployment of the solution provides the following key business benefits:

  • An improved ability to provide Product/Channel level costs and profits.
  • Reduced monthly cycle time and effort. The prior data integration process was disjointed and required a large amount of effort to produce results.
  • Drill-through capability to Customer Account level drivers, profits, and costs allows for root cause analysis of Channel and Product Costs.
  • Aggregation along other dimensional paths. Starting at the Customer Account level allows for aggregation along Customer attributes such as zip-code or credit score, providing new insights and enhanced executive decision making.  A follow-on project to use the Customer Account level data in OAC is currently being assessed.

Additionally, the following benefits to the administrative team are realized:

  • Model flexibility. The configuration of an additional driver and driven value in Micro-Costing takes fewer than 15 minutes.
  • Operational Data Store (ODS) and Warehouse. This allows for future projects to use a common curated source of information.  This was a pot sweetener for our client who was dissatisfied with its prior warehouse, but needed a business reason to refresh.  The prior warehouse lacked the following items that were addressed in the new ODS and warehouse:
    • Explicit mappings such as Activity Code to Driver Code that are controlled by the business
    • 3rd party data from partners and industry sources
    • Consolidation of financial and transactional information into Customer Account level facts
    • Hashing of Personally Identifiable Information (PII) for account security
  • Easy troubleshooting, validation, and auditing capabilities with PCM. Errors or mismatches in profit or cost at the Product/Channel level can be reduced to either rule definition mistakes or driver data entry mistakes. Finding out where the issue is and correcting it with a few clicks has a positive impact on the overall analysis and maintenance effort.

Final Thoughts

Alithya has developed a Micro-Costing framework that allows an integrated view of profits and costs at both a summary and detailed level.  This framework is successfully deployed at a banking industry client to provide a superior solution.

The framework is deployable either on-premise or in the Cloud and is applicable to other industries such as:

  • Patient encounters in Healthcare
  • Claims in Insurance
  • SKUs in Retail
  • Subcomponents in Manufacturing

…or anywhere the allocations occur at a summary level with drivers aggregated from a detail level.

 

Retro Reboot #1: Set It & Forget It – Scheduling FDMEE Tasks

As with most nostalgic items, reboots are the next best thing. From video game consoles to television shows, they are all getting a modern facelift and a new prime-time seat on television. I have jumped on that bandwagon to revitalize a previous post authored by Tony Scalese: Set it & Forget It – Scheduling FDM Tasks.

As with most reboots, there must be flair and alluring content to capture old and new audiences. Since Oracle Financial Data Quality Management Enterprise Edition (FDMEE) has been in the Enterprise Performance Management (EPM) space for a while and has moved into the Cloud, this is a great time for its reboot!

Oh Great…A Reboot. Now What?

Scheduling tasks in FDMEE has never been easier. Oracle provides several ways to do this for a variety of out-of-the-box activities.  Is there a report that you want to run and email every hour?  Or how about a script that needs to run hourly?  Or maybe batch-automation every 15 minutes?  No worries!  FDMEE can handle all of that with out-of-the-box functionality.

Let us pause for a moment and determine what is needed to make this happen:

  1. Is there a business case and justification for what is about to be scheduled?
  2. Who benefits and how will they be notified of the results?
  3. Is there a defined frequency for which the activity must take place?

Getting Started

First, understand that scheduling for FDMEE is built directly into the Graphical User Interface (GUI) anywhere you see the “SCHEDULE” button. Unlike its FDM predecessor, which required an independent utility to be installed and configured, having scheduling available via the web removes some complexity.

A word of caution:  while this screen allows items to be scheduled, there isn’t a screen that shows “what has been” scheduled.  To do that, access to the Oracle Data Integrator (ODI) is needed, but more on this later.

Initially, the screen shows the types of schedules that can be created and their relevant inputs.

Retro Reboot Screen Shot 1

Below is a reference guide to outline FDMEE’s scheduling capabilities.

| Schedule Type | Inputs | Notes / Examples |
| --- | --- | --- |
| Simple | TimeZone, Date, HH:MM:SS, AM/PM | Single run based on the specified inputs. Example: run on 08/02/2018 at 11 AM. |
| Hourly | TimeZone, MM:SS | Repeatable run at the specified MM:SS time. Example: run every hour at the 22-minute mark. |
| Daily | TimeZone, HH:MM:SS, AM/PM | Every day at the specified time. Example: run every day at 11 AM. |
| Weekly | TimeZone, Day of the Week, HH:MM:SS, AM/PM | Every specified day at the specified time. Example: run every Monday through Friday at 11 AM. |
| Monthly (day of month) | TimeZone, Date, HH:MM:SS, AM/PM | Specified day at the specified time. Example: run on the 2nd day of every month at 11 AM. |
| Monthly (week day) | TimeZone, Iteration, Weekday, HH:MM:SS, AM/PM | Specified interval and weekday at the specified time. Example: run every third Tuesday at 11 AM. |

Why Does the Job Run Under My UserID?

That is because the system runs the job under the credentials of the user who created the schedule. What can go wrong with that, right?! Well, if that user no longer exists or the password is changed, the existing jobs will no longer run.

The following considerations should be observed:

  1. Dedicate a service account, not used by any employee, for server/automation actions.
  2. This account can be a “native” user; since the account is only used internally for EPM products, having a domain account is not needed.
  3. Non-expiry passwords are best.

 It is Scheduled…Now What?

After the item is scheduled, what really happens? The action executes at the scheduled time!  Actions can easily be monitored via the FDMEE Process Details screen.  Now all the possibilities of scheduling the following can be explored:

  1. Data Load Rules
  2. Script Executions
  3. Batch Executions
  4. Report Executions

Also, as mentioned earlier, there is no way to see what has been scheduled inside of FDMEE. That information can be retrieved in a few ways. The easiest way to see what is scheduled is to use the ODI Studio.

The ODI Studio provides details as seen in the screen shot below:

Retro Reboot Screen Shot 2

Any scheduled tasks will be listed under “All Schedules.” Simply double click them to obtain details related to that task.

Retro Reboot Screen Shot 3

Another effective option is to write a custom report that displays the information. My previous blog post, Easy Value with FDMEE Reports, provides further details of FDMEE report options and their value. This allows a user-friendly report of scheduled tasks to be generated on demand.

Seriously … What Now?

By now, you may have noticed from the previous blog post Scheduling FDM Tasks – A Second Option by Tony Scalese that the upsShell process is quite handy. It allows other tools to control the FDM jobs…maybe through a corporate scheduler. Now that most organizations have a corporate scheduler, the new FDMEE options below are worth learning:

| Command | Purpose |
| --- | --- |
| Executescript.bat / .sh | Executes an FDMEE Custom Script |
| Importmapping.bat / .sh | Executes an import of maps from a text file |
| Loaddata.bat / .sh | Executes a Data Load Rule |
| Loadhrdata.bat / .sh | Executes an HR Data Load Rule |
| Loadmetadata.bat / .sh | Executes a Metadata Load Rule |
| Runbatch.bat / .sh | Executes a defined Batch |
| Runreport.bat / .sh | Executes a defined Report |

*All files are stored in the EPM_ORACLE_HOME\products\FinancialDataQuality\bin\

In the example below, the command, when launched, executes a Data Load Rule for Jan-2012 thru Mar-2012:

Retro Reboot Screen Shot 4
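If the corporate scheduler prefers to shell out through a wrapper rather than call the .bat file directly, a hedged sketch might look like the following. The argument list is a placeholder only – take the exact loaddata parameters from the FDMEE documentation for your release.

```python
# Hedged sketch of launching loaddata.bat from a scheduler wrapper. The
# arguments below are placeholders; the real parameter list comes from the
# FDMEE documentation for the loaddata utility.

import subprocess

# Per the note above, the utilities live under EPM_ORACLE_HOME\products\FinancialDataQuality\bin
FDMEE_BIN = r"<EPM_ORACLE_HOME>\products\FinancialDataQuality\bin"   # substitute your path

args = [
    rf"{FDMEE_BIN}\loaddata.bat",
    "<user>", "<password_or_file>",   # placeholders
    "<data_load_rule_name>",
    "<import_export_options>",        # placeholder for the rule's mode flags
    "Jan-2012", "Mar-2012",           # period range from the example above
]

completed = subprocess.run(args, capture_output=True, text=True)
print(completed.returncode)
print(completed.stdout)
```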

There still must be a better solution…right? Things to overcome:

  1. What happens if the scheduler is Windows-based and the server is Linux?
  2. How does a separate scheduling server communicate with EPM? Does it have to be installed on each EPM Server?
  3. How can we monitor and get details of a job once it is kicked off?

What Happens if You Don’t Want to Run the .BAT/.SH Files?

You’re in luck! With the introduction of new functionality to FDMEE, RESTful APIs are also now available.  With the RESTful APIs, not only can you execute a job, but you can also loop and monitor for the results.  This enhances the previous .BAT/.SH file routines and provides a cleaner and more elegant solution.

| Command | Purpose |
| --- | --- |
| Running Data Rules | Execute a Data Load Rule |
| Running Batch Rules | Execute a Batch Definition |
| Import Data Mapping | Import Maps |
| Export Data Mapping | Export Maps |
| Execute Reports | Execute a Report |

*URL construct: https://<SERVICE_NAME>/aif/rest/V1

The below example is just querying for a process:

Retro Reboot Screen Shot 5
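For comparison, here is a hedged sketch of the same idea from Python: kick off a data load rule and then poll for the process status. The endpoint and payload follow the general shape of the FDMEE/Data Management REST API, but treat the field names as assumptions and confirm them against the Oracle REST API documentation for your release.

```python
# Hedged sketch: run a data load rule via the FDMEE/Data Management REST API
# and poll the resulting job. Confirm the endpoint and payload field names
# against the Oracle REST API documentation; all values are placeholders.

import time
import requests

BASE_URL = "https://<SERVICE_NAME>/aif/rest/V1"   # URL construct from the note above
AUTH = ("<user>", "<password>")                    # placeholders

payload = {
    "jobType": "DATARULE",
    "jobName": "<data_load_rule_name>",
    "startPeriod": "Jan-2012",
    "endPeriod": "Mar-2012",
    "importMode": "REPLACE",
    "exportMode": "STORE_DATA",
}

job = requests.post(f"{BASE_URL}/jobs", json=payload, auth=AUTH).json()
job_id = job["jobId"]                              # field name is an assumption

# Loop and monitor for the result instead of fire-and-forget
while True:
    status = requests.get(f"{BASE_URL}/jobs/{job_id}", auth=AUTH).json()
    if status.get("jobStatus") not in ("RUNNING", None):
        print(status.get("jobStatus"), status.get("logFileName"))
        break
    time.sleep(30)
```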

The Future…

As Oracle moves forward to enhance the RESTful APIs, many doors continue to open for FDMEE and tool scheduling. At Edgewater Ranzal, we fully embrace the RESTful concept and evolve our solutions to utilize this functionality.  The result is improved support and flexibility of FDMEE and the future of Oracle Cloud products.

Contact us at info@ranzal.com with questions about this product or its capabilities.

Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest

In the previous post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, I showed you how to rebuild ARCS down to the Profile Segments to speed things up. This time we’re slowing everything down…

So grab a glass of wine and throw on your Marvin Gaye vinyl because we’re getting it on with automation. Oh yeahhh…

The sexiest topic of account reconciliations (didn’t think you’d ever see that sentence, did ya?) consistently revolves around automation. Yes, ARCS provides a central repository. Yes, ARCS is auditable. YES, ARCS shows a traceable workflow throughout the reconciliation cycle. All of these features are highly useful and absolutely a prerequisite to an enterprise worthy solution, but if you want to really grab people’s attention in a design session, start talking about the things they won’t have to do. ARCS provides both out-of-the-box functionality as well as customizable tools that help preparers  focus on high-importance reconciliations rather than spending time on low value-add or monotonous items.

Automation occurs in two areas: outside of ARCS (e.g. data feeds) and within ARCS (e.g. auto reconciliations and rules). Setting up the former enhances the latter. Either Cloud Data Management (CDM) or Financial Data Quality Management Enterprise Edition (FDMEE) can be used to load data to ARCS, albeit in different manners, but how this is accomplished is beyond the scope of this post. This data can be sourced from a variety of general ledgers and sub ledgers/subsystems including Financials Cloud, E-Business Suite (EBS), PeopleSoft, JD Edwards, and even *gasp* Excel (…if we have to…). By automating these data feeds directly from the source, management can be confident in the validity of the data (e.g. accuracy, no manual intervention or “massaging,” live, etc.) and, with scheduling, administrators have one less task (or several) to worry about. The latest application data is up-to-date by the time the office doors open. Additionally, data refreshes can occur multiple times throughout the reconciliation cycle without concern for loss of work. ARCS will only update reconciliations with differences from the last data load and will change the workflow status if data has been modified and needs to be looked at again.

Within ARCS, the “bread and butter” for gaining efficiencies in the reconciliation cycle is through utilizing the out-of-the-box auto reconciliation method property on the Profiles. This will set the conditions under which the reconciliation will automatically change the workflow status to “closed,” allowing preparers to focus on the remaining “open” reconciliations that require attention. Which conditions are available for selection depends on the Format type. Furthermore, this field can be easily updated after-the-fact. Using the Actions pane, this property can be updated to a mass of Profiles based on custom filtering.

Automation in ARCS 1

[Screenshot 10a: The “Set Attribute…” functionality from the Actions pane is a powerful tool that can be used to make mass updates from the user interface.]

 

Automation in ARCS 2

[Screenshot 10b: In this example, the “Set Attribute…” functionality can be used to make updates to the Auto Reconciliation Method property for all Profiles, selected Profiles, or Profiles that fit customized criteria.]

The “Set Attribute” functionality is a powerful tool for making changes across multiple Profiles within the ARCS user interface. In many instances, this is a preferable alternative to extracting the Profiles to a text file to modify offline. Screenshots 10a – 10b show how it can be used to update the Auto Reconciliation Method attribute specifically, but there are a plethora of other attributes that can be updated in this manner.

The last puzzle piece to the trinity of automation is customized rules. Similar to custom attributes, rules can be added in a variety of places within your reconciliations to further enhance and streamline the process for both end-users and application administrators. Attributes, formats, profiles, and even specific transaction types (ex. on Subsystem Adjustments, but not on Source System Adjustments) can contain separate sets of rules.

Automation in ARCS 3

[Screenshot 11a: Rules can be added at a Format level.]

 

Automation in ARCS 4

[Screenshot 11b: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a Format level.]

 

Automation in ARCS 5

[Screenshot 11c: Rules can be added at a specific transaction type. In this screenshot, any rules created here would only affect Subsystem Adjustments and would not affect System Adjustments.]

 

Automation in ARCS 6

[Screenshot 11d: Different Rule resolutions will be available depending on where the rule is created. This screenshot, for example, shows the options for rules created at a specific transaction type level.]

Thus, rules can be used for anything from sweeping, application-wide changes down to differences at a transaction-by-transaction basis, as seen in Screenshots 11a – 11d. If ARCS is a suit, then rules are custom tailoring; they are made to fit your company’s specific needs.

The most common rule I see relates to Auto-Submission (as opposed to Auto Reconciliation). The out-of-the-box auto reconciliation methods previously discussed are set on Profiles and can be used to “close” a reconciliation for the period if the criteria are met. However, sometimes a reconciliation still needs review – for example, if it is considered higher risk or only during certain periods in the fiscal year. Customized rules can dynamically determine which reconciliations can skip the preparer and be assigned directly to the reviewer, and which are clear to be automatically “closed” for the month (e.g. without approval by a preparer or reviewer). Tailoring rules in this manner still helps the preparers reduce their workload while giving management the confidence that the higher priority reconciliations are being reviewed – the best of both worlds!

No Mistakes with Modularity from “Day 1” to “Day 100”
So, there you have it: the four main manifestations of ARCS’ modularity. While nothing will replace proper planning, ARCS does not permanently punish any application decisions you (or your partner) have made in the past. The tool is able to grow with your company and accommodate your needs as they arise. There’s no reason to pick “today” or “tomorrow” – have them both.

Am I right? Am I off my rocker? You tell me! Answer in the comments below if ARCS (or ARM! We haven’t forgotten you…) has been able to accommodate the changes that come with your company’s growth.

If you like what you’ve read, please consider sharing this article through social media. And let me know in the comments what topic(s) you would like to see covered in future posts.

*Screenshots taken from the patch 1806 release.

Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up

We talked about adding new scope in New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons and modifying your application inside (i.e. changing reconciliation methods) and outside of ARCS (i.e. new data feeds) in Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning.

Today, we’re going to tear it down and rebuild from the ground up.

Let me start with this: redesign IS possible. ARCS does not permanently punish any design decisions made on “Day 1”…but not all changes are equal in complexity, nor can all changes be made without consequence. A successful implementation ensures that the application design is sound for today and that a well-laid roadmap is in place for tomorrow. Many “one-off” changes can be made directly to a deployed reconciliation (i.e. only within a single period) or permanently going forward (i.e. to the profile). The “catch” is the key properties set on a profile or reconciliation – the Account ID. The Account ID represents the granularity at which the reconciliation is being performed, such as [Business Unit]-[Account] or [Entity]-[Natural Account]-[Subaccount].

ARCS From the Ground Up 1

[Screenshot 6: The Account ID is a unique identifier for the reconciliation.]

The Account ID is fundamental to the reconciliation, as indicated by the asterisks (i.e. “*”) in Screenshot 6. Changing it in any way will break the Prior Reconciliation “link” with previously completed instances of the reconciliation.

But let’s push that idea one step further – what if I want to change the key properties themselves – that is to say – change the actual Profile Segments? The Profile Segments determine the name (ex. from “Company” to “Business Unit”), number (ex. from 2 to 3 segments), and even type of values (ex. setting up the Business Unit segment to always be an integer) that are viable for use when setting up an Account ID. Therefore, if this was set up incorrectly or if the granularity at which reconciliations are performed has changed since the initial implementation, then redesigning the Profile Segments may become a requirement.
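As a purely illustrative sketch, the snippet below shows how Profile Segments constrain an Account ID – the segment names, count, and value types described above. It is not an ARCS API.

```python
# Illustrative only: Profile Segments constrain the Account ID's name, number,
# and value types, as described above. Not an ARCS API.

PROFILE_SEGMENTS = [
    {"name": "Business Unit", "type": "Integer"},
    {"name": "Account",       "type": "Text"},
]

def build_account_id(values):
    if len(values) != len(PROFILE_SEGMENTS):
        raise ValueError("Value count must match the number of Profile Segments")
    for segment, value in zip(PROFILE_SEGMENTS, values):
        if segment["type"] == "Integer" and not str(value).isdigit():
            raise ValueError(f"{segment['name']} must be an integer")
    return "-".join(str(v) for v in values)

print(build_account_id([100, "1000"]))   # -> "100-1000"
```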

ARCS even makes this type of redesign possible, but at a cost. An administrator needs to first delete all Profiles; only then will the application allow a modification to the Profile Segments in the Configuration card.

ARCS From the Ground Up 2

[Screenshot 7a: Unable to modify the Name of Profile Segment 1, which is currently named “Company.” The field appears grayed out. This is because Profiles are currently using these Profile Segments.]

ARCS From the Ground Up 3

[Screenshot 7b: After removing the Profiles, Profile Segment 1 is now able to be modified. In the example, Profile Segment 1 is renamed to “Business Unit.”]

While Screenshots 7a & 7b show that this is possible, there are repercussions. Similar to changing the Account IDs, this change will break any links to previously completed reconciliations. Additionally, any existing mappings in outside integration solutions such as Cloud Data Manager or FDMEE, or references to Profile Segments in customized attributes or rules, may be affected. This type of redesign should only be done after carefully considering all options.

Other common questions relate to redesigning an attribute, typically the system attributes such as Process or Account Type. This is a straightforward change as it relates to updating the property on the Profiles; however, it is important to note that any reference to any existing artifact (i.e. an artifact can be a format, a custom attribute, an attribute member, etc.) within ARCS will prevent the deletion of said artifact. As an example, if the Account Type structure requires redesigning, but there is a reference to any of the members (such as in a historical period), then these members cannot be deleted without first removing the references. This can be tedious when there are multiple years of reconciliations to consider.

ARCS From the Ground Up 4

[Screenshot 8: When trying to remove the Custom Attribute named “PLACE CUSTOM ATTRIBUTE HERE,” ARCS prevents this deletion and cites which artifact is using the Custom Attribute. In this example, the Bank Reconciliation format is using this Custom Attribute – thus, it cannot be deleted.]

Unlike many system messages, ARCS actually provides useful troubleshooting information as seen in Screenshot 8. However, it still may not be worth it to you to retroactively make this change. A recommendation is to “archive” artifacts that will not be used going forward by renaming them with “Old” or “Hist,” then create a separate artifact to use going forward.

ARCS From the Ground Up 5

[Screenshot 9: A work-around to deleting previously used artifacts is to rename them and then use a new artifact going forward. In this example, the suffix “- Old” is added to this Custom Attribute to indicate that it is no longer in use.]

Previous uses of the artifact, such as in completed reconciliations, will update to reflect the name change. In the example in Screenshot 9, the custom attribute in historical periods will display the “- Old” suffix, signaling to ARCS administrators that it was used historically but is no longer in use.
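
The “archive” approach boils down to a simple rule: never delete an artifact that anything still references; rename it and create a replacement instead. The Python sketch below is illustrative only (the artifact and reference structures are invented for this example), but it mirrors the behavior ARCS enforces in Screenshots 8 and 9.

# Illustrative only. Models the "rename, don't delete" pattern ARCS pushes you
# toward when an artifact is still referenced somewhere in the application.
def retire_artifact(name, artifacts, references, suffix=" - Old"):
    """Delete the artifact if nothing references it; otherwise archive it by renaming."""
    if name not in artifacts:
        raise KeyError(f"Unknown artifact: {name}")
    if any(name in refs for refs in references.values()):
        archived = name + suffix
        artifacts[archived] = artifacts.pop(name)   # historical uses now show the new name
        return f"'{name}' is still referenced; archived as '{archived}'"
    del artifacts[name]
    return f"'{name}' deleted"

artifacts = {"PLACE CUSTOM ATTRIBUTE HERE": {"type": "Custom Attribute"}}
references = {"Bank Reconciliation format": {"PLACE CUSTOM ATTRIBUTE HERE"}}
print(retire_artifact("PLACE CUSTOM ATTRIBUTE HERE", artifacts, references))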

ARCS is a flexible solution that allows nearly any change to be made, though the effort and complexity will vary. While sound design can prevent many issues, it should be a comfort to know there is “wiggle room” if requirements change in the future.

Join me in the last post of the ARCS modularity series – a real crowd pleaser: Automation in Account Reconciliation Cloud Service (ARCS): At Its Finest

*Screenshots taken from the patch 1806 release.

Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning

In the last post, New Scope in Account Reconciliation Cloud Service (ARCS): Add-Ons, we discussed how ARCS makes it easy to add scope to your existing application and scale your solution. However, not all changes are brand new. Clients are often concerned about being pigeonholed by their “Day 1” decisions. A common question I am asked during design sessions is, “Can I manually enter this reconciliation today, but create new feeds to automatically load the data tomorrow?” The answer is a resounding YES, and it provides clear added value for the next phase of any ARCS (or ARM) project. It can be a viable project strategy to set up reconciliations using an Account Analysis format on “Day 1” and change to a Balance Comparison format when automated data loads are built on “Day 100.”

[Screenshot 5a: Reconciliation 100-1000 is set up with a Balance Comparison format in Sep 2017.*]

[Screenshot 5b: The previous period’s reconciliation can be viewed in the Prior Reconciliations tab.*]

[Screenshot 5c: Reconciliation 100-1000 was previously set up with an Account Analysis format in Aug 2017. The format of a profile can be changed while maintaining the Prior Reconciliations link.*]

Depending on how this change is made, it is even possible to keep the modified reconciliation “linked” to previously completed reconciliations despite the change in Format, as shown in Screenshots 5a through 5c. The ease with which ARCS allows you to change reconciliation methods (via Formats) gives you the flexibility not to bite off more than you can chew at the beginning of a project.

Changing reconciliation methods is often tied to new integrations. Moving from the manual “fat fingering” of data to directly loading general ledger and subledger balances through Financial Data Management Enterprise Edition (FDMEE) or Data Management, combined with the built-in auto-reconciliation tools, brings a “quality of life” improvement for end users as well as added confidence in the data’s integrity. It is always a best practice to pull data from the source. Creating the integration from the general ledger is typically part of the initial scope. The usual candidates for additional feeds after the first project phase are the subledgers for fixed assets, accounts receivable, and accounts payable. Ultimately, which integrations deliver the most “bang for your buck” depends on your line of business and specific company requirements.*

*Note that adding multiple general ledger feeds introduces additional complexities beyond the scope of this article. Please consult with your Oracle partner before adding to your application.
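
To illustrate what the built-in auto-reconciliation adds once balances are loaded from the source, here is a conceptual Python sketch of one common rule type: auto-reconcile an account when the source-system and subsystem balances agree within a tolerance. The account numbers, balances, and the 1.00 threshold are assumptions made for this example, not product configuration.

# Conceptual sketch of a balance-comparison auto-reconciliation rule.
# The data and the 1.00 tolerance are invented for illustration.
def auto_reconcile(source_balance, subsystem_balance, tolerance=1.00):
    """Return True when the difference is within tolerance, so no preparer action is needed."""
    return abs(source_balance - subsystem_balance) <= tolerance

loaded_balances = {
    "100-1000": (250_000.00, 250_000.00),   # GL balance vs. subledger balance
    "100-2000": (98_400.00, 97_150.00),
}

for account, (gl, subledger) in loaded_balances.items():
    status = "auto-reconciled" if auto_reconcile(gl, subledger) else "requires preparer review"
    print(f"{account}: {status}")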

In some cases, the greatest efficiency gains in your existing reconciliation process come from the power of ARCS Transaction Matching. This module is better suited to handling massive data volumes at the transactional level. For example, instead of reconciling the balance sheet’s intercompany balances only in ARCS Reconciliation Compliance at the end of the month, you could perform daily matching in ARCS Transaction Matching to clear up issues in real time as they arise, which simplifies the month-end reconciliation. ARCS Transaction Matching is a powerful supplement to an existing reconciliation system and continues to receive special attention from Oracle, as seen with the major release of new functionality in Patch 1805.
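
As a rough illustration of why daily matching lightens the month-end load, the sketch below pairs intercompany transactions from two entities on a shared invoice ID and amount; whatever fails to match is all that remains to investigate. This is a conceptual Python example using invented data, not Transaction Matching’s actual engine or match-rule syntax.

# Conceptual one-to-one matching pass, not the actual Transaction Matching engine.
# Transactions are matched on (invoice_id, amount); leftovers become the exceptions
# that would still need attention at month end.
def match_daily(side_a, side_b):
    """Match transactions by (invoice_id, amount); return matches and unmatched items."""
    index = {(t["invoice_id"], t["amount"]): t for t in side_b}
    matches, exceptions = [], []
    for txn in side_a:
        key = (txn["invoice_id"], txn["amount"])
        if key in index:
            matches.append((txn, index.pop(key)))
        else:
            exceptions.append(txn)
    exceptions.extend(index.values())           # unmatched items from the other side
    return matches, exceptions

entity_a = [{"invoice_id": "IC-001", "amount": 500.00},
            {"invoice_id": "IC-002", "amount": 125.00}]
entity_b = [{"invoice_id": "IC-001", "amount": 500.00},
            {"invoice_id": "IC-003", "amount": 80.00}]

matched, open_items = match_daily(entity_a, entity_b)
print(len(matched), "matched;", len(open_items), "exceptions to clear")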

Just as your company can change in many ways, ARCS can be modified to match your needs, even in a live application. Sometimes, however, changes are more fundamental than a bit of tweaking, such as an acquisition or the introduction of a new, company-wide general ledger. Or perhaps you are simply not satisfied with the solution design. Join me in the next post as we discuss the dangerous topic of redesign in ARCS: what is possible…and what it costs.

In the next post, Redesign in Account Reconciliation Cloud Service (ARCS): From the Ground Up, learn how redesign IS possible in ARCS.

*Screenshots taken from the patch 1806 release.

Data Governance in the Cloud: An Integrated Strategy; A Unified Solution

Are you tasked with making organizational decisions that have placed you in a major dilemma? As a decision-maker in today’s fast-paced economy, you must wonder how you can cut costs, improve the bottom line, and still maintain the data quality necessary to make strategic decisions.

Take heart because it IS possible to achieve a balance of on-premise and off-premise Enterprise Performance Management (EPM) software while maintaining integrity and control of your data to provide the quality and data assurance needed for success – AND benefit financially from new Cloud technologies.

Success comes from understanding what each data track requires and creating an integration strategy, consisting of the necessary business processes and software tools, that delivers consistency and integrity for your strategic EPM data.

Past trends called for tight on-premise coupling of all EPM software to achieve the best results. That strategy required maintaining a large hardware and software infrastructure, plus the personnel to keep everything running smoothly. The new Cloud “POD” subscriptions are geared toward reducing those high infrastructure costs, which is a clear financial benefit. As with most things, though, moving to Cloud technology has a consequence: POD technology can create isolated silos of information. Fortunately, there is a straightforward resolution. The key to overcoming this limitation is understanding what each component offers and demands, and creating an integration strategy to bridge the gap.

If you are interested in learning how to bring the various pieces together as a unified solution, or if your organization plans to migrate to the EPM Cloud platform in the future, this whitepaper defines a process to pre-build the integration strategy and make moving to the Cloud easier, with reduced migration time.

Download our whitepaper: Data Relationship Management (DRM) for Cloud-Based Technologies: Using DRM for Data Governance in the Cloud