Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Queries

Welcome back to the Profitability and Cost Management out-of-the-box features series!

Here, you’ll gain the insight needed to fully leverage the features bundled with an Enterprise Cloud Subscription that includes Profitability and Cost Management. The focus of this post is PCM queries – artifacts that represent or extract data in an easily consumable format.

By the end of this blog post, you should be familiar with the following topics:

  1. What is a PCM Query?
  2. Query Use Cases
  3. How to Launch Queries in PCM
  4. Query Options
  5. Data Extract Format
  6. Common Warnings and Errors
  7. Alternate Uses of PCM Queries

*The contents of this blog post are based on the standard Bikes (BkML30) application. The PCM Demo Bikes application can be deployed from the PCM landing page via the “Create a Sample Application” button (available from version 19.06 onwards).

1. What is a PCM Query? 

PCM Queries are predefined data retrieval statements whose execution mechanics resemble Smart View retrievals.

Queries can be launched in one of three ways:

  1. Via the PCM Graphical User Interface (GUI)
  2. Automatically through EPM Automate/REST API commands
  3. Within Dashboards and Intelligence analysis reports (covered in greater detail in this previous post).

2. Query Use Cases

Queries are versatile artifacts that have a list of use cases limited only by the user’s imagination. The most common use cases are:

  1. Data Validation – leveraged both for input data and for post-allocation results. Queries can be created and stored in a PCMCS instance. Their definition is similar to a Smart View query, with Columns, Rows, and Point of View (POV) selections. More details are found in the Query Options section. PCM Queries have drill-through capability – applicable only to base-level queries – leveraging the Cloud Data Management functionality.
  2. Driver/Adjustment Data Entry Template – while queries do not rise to the capabilities of a PBCS/EPBCS web data entry form, they manage to solve the issue of “directional intersections” in an elegant manner. By defining the base level intersections where driver data should reside and storing that query definition, users avoid the need for offline sheets for data entry.
  3. Refined Data Clear Selection – queries can be leveraged to trigger narrow or specific data clears aimed at replacing partial data sets. During a clear POV action, users can select a predefined query to restrict the clear scope. This feature optimizes data loads, enabling users to restrict the replacement of input data to only the intersections that need to be replaced. Think of it as a predefined FIX statement or a predefined tuple.
  4. Simplified Task Lists – an example of this capability is explained in detail within the “Alternate Uses” section of this blog.
  5. Journal Entry or Data Warehouse export – formatted data exports that can be leveraged as Journals within a GL submission process, in .csv extracts, without any custom formatting functionality like header, footer, record count, date/time stamp, etc.

3. How to Launch PCM Queries 

There are several Graphical User Interfaces (GUI) as well as automation options to launch queries.

1.  Intelligence Menu Section – clicking the query name opens a Smart View connection within an Excel session, prompting users to enter their Cloud credentials. If the Excel session is not terminated, credentials will persist for all subsequent query launches.
[Screenshot: Intelligence menu section]

From the Actions button, users can also launch a direct .csv export of each query. The exported file will be placed within the File Explorer section and is available for download. Users can define the number of decimals they choose to extract – up to a maximum of 7 – and whether or not they choose to perform a base-level export or an aggregated data export.

2.  Manage Queries Section – this menu includes all the capabilities found within the Intelligence menu section along with the ability to edit, delete, or create new queries.

3.  EPM Automate Command – to launch a query and generate a .csv file in an automated manner, execute the following command:

epmautomate exportqueryresults APPLICATION_NAME fileName=FILE_NAME [queryName=QUERY_NAME] [exportOnlyLevel0Flg=true]

The generated .csv file is very similar to a data warehouse extract file, with all dimensions displayed in columns and separated by a space delimiter.

By omitting the queryName parameter, the automation will execute a full base-level data extract of the PCM application in the native ASO format (non-columnar, optimized for a native ASO data load).
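For reference, below is a minimal end-to-end sketch of an automated extract. The URL, user, password, application name, query name, and file name are placeholders only – replace them with values from your own instance:

epmautomate login serviceAdmin MyPassword https://pcmcs-mytenant.pbcs.us2.oraclecloud.com
epmautomate exportqueryresults BksML30 fileName=ProfitabilityProduct.csv queryName="Profitability – Product" exportOnlyLevel0Flg=true
epmautomate downloadfile ProfitabilityProduct.csv
epmautomate logout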

Alternatively, if the query must be used in a targeted data clear, the request can be launched via automation:

epmautomate clearpov APPLICATION_NAME POV_NAME [queryName=QUERY_NAME] PARAMETER=VALUE [stringDelimiter="DELIMITER"]

Example:

epmautomate clearpov BksML 2019_Jan_Actual queryName=BksML_2019_Jan_clear_query isManageRule=false isInputData=false isAllocatedValues=false isAdjustmentValues=false stringDelimiter="_"

When using targeted (query-based) data clears, no other parameters can be enabled, such as isManageRule, isInputData, isAllocatedValues, or isAdjustmentValues.

4.  REST API Command – just like EPM Automate, the REST API is used for automation (lights-out processing); in fact, EPM Automate leverages the REST API in the background. A full comparison of the two is beyond the scope of this post; however, one of the main differences is the enhanced logging level available with the REST API, which is why implementation partners may favor REST over EPM Automate.

https://<SERVICE_NAME>-<TENANT_NAME>.<SERVICE_TYPE>.<dcX>.oraclecloud.com/epm/rest/v1/applications/Ex3F3/jobs/exportQueryResultsJob

{"queryName": "Profitability – Product", "fileName": "ProfitabilityProduct2019.txt", "exportOnlyLevel0Flg": "true"}
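For reference, the same export job can be submitted with a generic REST client such as curl. The sketch below assumes basic authentication in the identitydomain.username format; the credentials and service URL are placeholders:

curl -X POST \
  -u mydomain.epm_user:MyPassword \
  -H "Content-Type: application/json" \
  -d '{"queryName":"Profitability – Product","fileName":"ProfitabilityProduct2019.txt","exportOnlyLevel0Flg":"true"}' \
  https://<SERVICE_NAME>-<TENANT_NAME>.<SERVICE_TYPE>.<dcX>.oraclecloud.com/epm/rest/v1/applications/Ex3F3/jobs/exportQueryResultsJob

The call is asynchronous; the response typically includes job details that can be polled to confirm the export completed before downloading the file.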

The syntax for a targeted data clear is the following:

https://<SERVICE_NAME>-<TENANT_NAME>.<SERVICE_TYPE>.<dcX>.oraclecloud.com/epm/rest/{api_version}/applications/{application}/povs/{povGroupMember}/jobs/clearPOVJob

{"isInputData": "true", "queryName": "myQueryName", "stringDelimiter": "_"}

4. Query Options 

PCM has a few display and data extract options that can be stored with the query, and a few more that can be selected at run time via the GUI or through automation scripts. The settings can be separated into two categories:

1. Optional Query-Stored Settings

Option 1: Use Aliases – if this option is not selected, member names will be displayed instead.

Option 2: Suppress Missing data during execution – if not selected, rows containing only missing data are returned as well.

Option 3: Include Attribute Dimensions.

Option 4: Order of columns (ignored when a granularity override is selected at run time).

2. Mandatory Query-Stored Settings

Option 5: Column/Row Selection

Each dimension reference must indicate whether it is to be used in the Row, Column, or Point of View (POV). It is possible to save queries with no POV reference, the only mandatory selections being those of Columns and Rows.

Any dimension member selection marked as POV will be displayed either in the POV menu/floating box within Smart View or as the header record content if the POV box is disabled in Smart View.

[Screenshot: Mandatory Query-Stored Settings]

The following screenshot shows the alternative, with the POV box disabled. Users can still see a representation of all dimensions that were part of the POV box when it was enabled.

[Screenshot: Mandatory Query-Stored Settings – POV box disabled]

The selection of Row, Column, and POV will be bypassed during data extracts, whether launched through the menu or via the EPM Automate or REST API commands. All data extracts will list out the members referenced in the selection for each dimension followed by a single data column.

3. Optional GUI Run-Time Settings:

Option 6: Export only level-0 data. This will force the query to produce base-level data intersections for all members where a base level has not already been selected in the query. Depending on the size and granularity of the data, the query can take anywhere from 30 seconds to several hours. Create multiple queries to support larger data extracts, or define the right level of granularity required for the target system, to avoid slow extracts or even failures when exceeding the 5-million-record limit.

Option 7:  Rounding precision – extends to a maximum of 7 decimals.

[Screenshot: Optional GUI Run-Time Settings]

Generating data with an increased number of decimals should be paired with the application's decimal configuration; there is no point in generating a data extract with 5 decimals when the application is configured to support only 2. This configuration option is available in the Application menu and can be revisited and updated at any point in time.

If neither of these two optional GUI run-time settings is selected, the report will pull the level of granularity established within the query, whether set up at base-level or at aggregated-level intersections.

4. Optional Automation Settings: 

Option 8: Changing the precision of data extracts when launching queries via EPM Automate or REST API can be achieved via the parameter roundingPrecision with values ranging from (-6) to 7. By default, the EPM Automate exportqueryresults will extract data values with 2 decimal characters. Consider whether or not extracting data with multiple decimals is required, especially if the application Allocation Precision parameter has not been set to higher than the standard value of 2 decimals.
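As a hypothetical illustration (the application, query, and file names are placeholders), the precision override would simply be appended to the export command:

epmautomate exportqueryresults BksML30 fileName=ProfitabilityProduct.csv queryName="Profitability – Product" roundingPrecision=4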

[Screenshot: Option 8]

5. Data Extract Format 

The Smart View query extract format will stay constant regardless of the menu from which it is launched. The .csv file format, however, has a few variances depending on the options selected either during build or during run time.

The .csv format file generated will lose references to POV/Column/Row. As mentioned previously, the resulting file will look like a data warehouse extract – very similar to what can be achieved via an export script within a Planning Cloud Business Rule or Essbase export calc script.

If end users choose to perform a base-level extract override during run time or through Automation commands, the .csv extract will lose the predefined order of the dimensions setup in the query definition.

Regardless of the level of the data extracted (upper or base level), all members within a .csv file extract will be enclosed in double quotes. The delimiter will be “tab” and cannot be overridden or replaced from within the PCM GUI.
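As a purely illustrative example (the member names below are made up, and the wide gaps stand in for tab characters), a single record in such an extract would resemble the following, with the data value in the last column:

"FY19"    "Jan"    "Actual"    "Net Income"    "Bikes Unlimited"    125.1234567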

This is an example of the query “Profitability – MultiDimension” that is available with the Demo Bikes model. The query extract was launched via the GUI with 7 decimals and with no granularity override (no base level extract option selected at run-time):

[Screenshot: Data Extract Format – no granularity override]

This is an example of the same query “Profitability – MultiDimension”– launched with 7 decimals and base-level members selection/override at run-time:

[Screenshot: Data Extract Format – base-level override]

Comparing the two screenshots above, the order of the columns is clearly altered. This is due to the base-level override selected during run time, and it is an important detail in case the .csv file must be used as a data feed to an external system.

6. Common Warnings and Errors 

Although this section does not represent an exhaustive list of errors, it covers the most common query-related issues a user may encounter along with the corresponding solution.

Warning message: “Query has invalid members. Save the Query to permanently remove the invalid references. To validate the entire Point of View, go to Model Validation without saving.”

[Screenshot: invalid members warning]

This is a generic message that can indicate either a warning or a true error.

A potential cause for this warning message: the row selections reference top-of-the-house (so-called Generation 0) members – in other words, the dimension name itself. While such queries may run and produce results, the warning will pop up every time the query is launched via the GUI.

In order to fix this warning, the row selection must be made on any other member or subset of members that is not referencing the Generation 0 / top-of-the-house member.

A second cause for the same warning message is a true error: a member referenced in the query has either been renamed or removed from the application. In this case, the query is pointing to a Generation 0 / top-of-the-house member, but that selection was not intentional. In most cases when this warning occurs, the obsolete member name reference was automatically removed and either replaced with the top level of the corresponding dimension or simply left blank:

[Screenshot: second cause for the same warning message]

If the user wants to validate which reference was removed due to a metadata update, there is an option to run Model Validation* reports on queries to find out more details:

[Screenshot: Model Validation]

*More details on the Model Validation tabs and options will be covered in a future blog post.

In this example, the member STAT1201 has been renamed as STAT120. Because the query references STAT1201, the user is prompted to renew the Account reference selection within the query.

CAUTION: if the user receives this warning and saves the query before launching the Model Validation report, the previous member selection reference (which is now obsolete) is removed and replaced with the Dimension top member. This means that short of restoring the query from a prior snapshot, the user will no longer have a prior reference of the member that has been removed.

Error message: “The number of query result cells exceeds the limit set by the QUERYRESULTLIMIT.”

[Screenshot: QUERYRESULTLIMIT error]

Reference the advice given in Option 6 (Optional GUI Run-Time Settings section) to reduce the size of your query. This query limit cannot be manually updated by end users or administrators of the Profitability application.

7. Alternate Uses of PCM Queries 

One of the long-awaited features in PCM is the ability to create forms, menus, and task lists like those within Planning and Budgeting Cloud applications. In the absence of such features (which should be coming in future updates), queries can become an easy-to-use alternative. In the prior sections, we explored how queries can be leveraged as data entry guidance mechanisms (like forms), as a data extract tool, and as a narrow-scope data removal tool. The one feature not discussed yet is the Task List alternative.

By creating predefined queries and listing them in a specific sequence, administrators can dictate the order of operations, either for the setup/data load/pre-allocated values or for the validation of a PCM model after the allocation process is completed. The listing below is very similar to the concept of Task Lists in PBCS/EPBCS, and while PCM administrators cannot customize this list by user ID, it still offers the step-by-step guidance that an end user may find useful.

[Screenshot: Alternate Uses of PCM Queries – query list used as a task list]
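As a hypothetical illustration (these query names are invented, not part of the Bikes model), a set of queries could be named so that the alphabetical order mirrors the process flow:

01 Load - Validate Driver Input
02 Load - Enter Manual Adjustments
03 Alloc - Review Post-Allocation Results
04 Export - GL Journal Extract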

A few tips to keep in mind when leveraging queries in a Task list format:

  1. Query names have a limited number of characters. The limit is clearly enforced when editing the name of an existing query. Users can exceed the character limit when building a new query from scratch, but this results in an ADF interface error message; it is most likely a bug that will be addressed in future releases.
  2. The order of the queries is based on the name – descending or ascending. There is no option to customize query order (similar to up/down arrows that allow us to move tasks in Planning or PBCS). This restriction forces the naming convention to be similar to the above example.
  3. There is no option to restrict access at the query level. Security restrictions/data grants can be set up for each user to restrict access to the data within the PCM app, but there is no way to disable or hide a query or list of queries for specific users.

8. Conclusion on PCMCS Queries 

PCM Queries offer a wide array of functionalities that enable users to interact with PCM data throughout the entire processing cycle.

Queries can be used as a data entry form to define data entry templates for drivers or adjustments when launched via Smart View. By storing the intersections where data is expected to be entered in a query, end users don’t have to worry about sending values to the incorrect intersection. Once the query is saved, it can be leveraged multiple times. References via formulas or hierarchy relationships (Parent, Descendant, Level 0, etc.) will dynamically build the latest metadata selection with each query launch, eliminating the risk of not submitting a driver value because an intersection was not displayed in a Smart View selection.

Queries can also be leveraged as a data export mechanism for target systems. The .csv extract at base level or upper level, in column format, can be consumed by most, if not all, ETL tools. There are certain restrictions when using queries for data export, such as the number of records that can be extracted at one time, as well as considerations when using dynamic member references. In such cases, the ASO Essbase reporting leading practices must be dusted off and put to good use. What is common sense in the ASO Essbase world should be a good benchmark in the PCM world as well.

Predefined queries are also a good use of resources when troubleshooting allocation results or analyzing data. While PCM queries do not have prebuilt intelligence capability to call out month-on-month differences – a feature that is present in Account Reconciliation Cloud Service – they can support validation and troubleshooting efforts after the PCMCS allocation process is completed.

The “Out-of-the-Box” series is slowly closing the list of items available in the PCM Intelligence screens, but we are not done yet with all that PCM has to offer!

Keep a close eye on this space for future posts on Cloud Data Management, Model Validation, and Backup and Restore features within PCMCS.

For comments, questions, or suggestions for future topics, please reach out to us at infosolutions@alithya.com. Subscribe to receive notifications about new posts on Cloud updates and other Oracle Cloud Services such as Planning and Budgeting, Financial Consolidation, Account Reconciliation, and Enterprise Data Management. Follow Alithya on social media for the latest information about EPM, ERP, and Analytics solutions to meet your business needs.


Out-of-the-Box Features: Profitability and Cost Management Cloud Service (PCMCS) – Intelligence and Dashboarding: Traceability

Traceability is the buzzword in any regulated industry. Being able to prove the numbers is crucial to all businesses, but it can be very time consuming and complex for companies that operate across multiple and diverse lines of business with a large pool of Channels, Services, Customers, or Products. Shared Services implementations require a clear understanding of the flow of costs.

Where is this cost coming from?

Why have I been charged so much more this month for the same service compared to last month?

These questions should be easy to answer. Unfortunately, not all profitability analysis technologies are able to support a quick turnaround for providing the required level of detail.

PCMCS has more than one option to easily provide much-needed answers.

The Rule Balancing report is one of numerous out-of-the-box (OOTB) features included with an Oracle Cloud Service subscription able to support data traceability and transparency. For more details about the type of information the report provides and to learn the ease with which it can be set up for your application, review this comprehensive blog post.

Besides Rule Balancing reports, PCMCS OOTB features support transparency within allocations and/or profitability models with Traceability maps.

The focus of the current post is how to access, build, and use Traceability maps.

The order in which I am covering the PCMCS OOTB features is directly related to the Intelligence menu options available in PCMCS.  As a recap, the 6 menu options are listed below:

  1. Analysis Views (how to create, customize, and use them is covered here)
  2. Scatter Analysis (setup and configuration covered here)
  3. Profit Curves (usage and features covered here)
  4. Traceability
  5. Queries
  6. Key Performance Indicators

The contents of this blog are based on the standard Bikes (BkML30) demo application, so you can follow the step-by-step details without having to go through an app setup from scratch. You can load and deploy this application directly from your PCMCS Instance through a couple of clicks via the Application menu using the + / Create button.

Traceability – Intro

The traceability maps, whether in PCMCS or in on-premise HPCM, allow users to graphically visualize the allocation flow. A chosen business segment can be traced through the allocation steps, either backwards or forwards, starting from a predefined point. The map is made up of images of a data point either flowing into the selection of members chosen by an end user to troubleshoot, or flowing out of that selection into subsequent allocation steps.

[Screenshots: Traceability maps – Images 1 and 2]

Traceability is a great tool for troubleshooting specific intersections of detailed data such as base level accounts against a specific department. However, when there is a need to identify patterns or troubleshoot allocation results at a higher level, the Standard Profitability (the first on-premise version of the Profitability module) Traceability maps are not geared to handle such requests. In order to perform a high-level analysis in Standard Profitability models, users would have to revert to Smart View or Financial Reports.

Being able to trace data at a summarized level of detail is the key difference between traceability in Management Ledger applications and traceability in Standard Profitability. Management Ledger allows end users to select the level within the hierarchy where they desire to launch or generate traceability, whether base level or otherwise.

Traceability – Setup

The starting point of any traceability map in Management Ledger is Model Views.  If you are interested in learning how to build and use Model Views, spend a few minutes reviewing this prior post.

List of steps necessary to launch a traceability report in Management Ledger applications:

  1. Select a valid Point Of View (POV). The POV must contain data in order to display any traceability results.
  2. Choose a prebuilt Model View – example: IT Support Activities.
  3. Select a tracing dimension which will represent the detail that is the focus of your analysis (Accounts, Departments, Entities, Business Units, Segments, etc). The selected tracing dimension determines the focus or scope of your analysis and will be the one dimension that is displayed at base level detail or any other generation within the hierarchy.
  4. Trace Forward and Use Generation Selection boxes are selected by default. Not selecting “Trace Forward” allows users to perform a “Trace Backward” action; in other words, figure out how the model arrived at a data value for a selected intersection, rather than how a data value was allocated out from that intersection to other recipients.

[Screenshot: Traceability – Image 3]

A report with the “Use Generation Selection” filter disabled will display the data at the base level for the Trace Dimension (in this example, Entity).

Note: If a message is received indicating the Flash Player version is not up-to-date, check that pop-ups are enabled on the page to allow the download of the required update.

[Screenshots: Traceability – Images 4 and 5]

If the traceability report does not generate any results, check that the allocation rules were successfully completed for the referenced POV. Alternatively, if the POV calculation is successful but data is not displaying on the Trace screen, check that the application variables are correctly set up for Current Year, Period, and Scenario. Also ensure the Account dimension maps are specified in the Dimension Settings screen.

Traceability – Display Options and Filters

Traceability screens have 5 display options:

  1. Vertical (Top Down)
  2. Horizontal (Left to Right)
  3. Tree
  4. Radial
  5. Circle

Within the traceability analysis, users can focus on a single rule. The tracing dimension in the previous example is Entity. The tracing dimension is the focus of the traceability reports – following how data was allocated into or out of a base level Entity.

[Screenshot: Traceability – Image 6]

To isolate a specific rule and separate it into a standalone diagram, press Shift+Enter or select the graphical option at the top of the Rule ID box.

[Screenshot: Traceability – Image 7]

End users have the choice of displaying the aliases/descriptions of the Entities rather than the code member names. If aliases have not been uploaded in the metadata of the application, then the report will still reference the member name codes, regardless of this choice.

The following traceability report will display how operating expenses are reallocated/redistributed from each support entity (IT, Facilities, etc.) to the production entities using the predefined driver configurations referenced in the Rule box.

[Screenshot: Traceability – Image 8]

Select the “Trace Forward” filter and keep constant all other prior selections in the initial traceability screen to display IT Support Activity charge out.

[Screenshot: Traceability – Image 9]

The “forward tracing” of IT allocations represents how data is allocated out to consuming departments such as Finance, Marketing, Outside Sales, Assembly, etc.  Remember the focus of the trace screen depends on the “Tracing dimension” selected. In this example, Entity was the tracing dimension.

The top box, R0009, shows the Rule Name relevant for IT allocations, the ruleset reference, and the driver used to allocate data to targets – in this case, Desktop Laptop Users, regardless of the activity performed (the NoActivity reference) – as well as the dollar value of the allocation: Allocation Out 1,338,000.

Users have the flexibility to allocate data partially (to allocate only a % of the total value instead of 100%). That is what the Contribution % reference in the R0009 box represents. In this rule, the administrator/rule designer decided to fully allocate the IT cost to the consuming department instead of allocating it partially. Therefore, the 100% reference is displayed.

In the case of the Bikes ML (Management Ledger) application, the Entity dimension has 4 generations. When talking about generations, the larger number, in this case number 4, represents the lowest level of detail. Generation 0 represents the Dimension name; Generation 1 represents the first set of children; Generation 2 represents the Children of Children, etc.

Below is a radial display of the contribution charge out at base Entity level when no generation selection was made prior to launching the traceability report:

[Screenshot: Traceability – Image 10, radial display]

We can see in this diagram how much each Target Department was charged for their IT bill.  The contribution from the IT department to each target is displayed as a %.

Change the generation reference from 4 to 3. The lower the generation number, the more summarized the detail. The change of generation reference will result in a summarization of the members of the Entity dimension to one level higher than seen previously.

[Screenshot: Traceability – Image 11]

Notice how there is no longer an Entity breakdown at base level as we had in the previous screen when Generation 4 was selected, and the contribution percentages have been summarized to display the contribution % at a node level.

In situations where a dimension has many levels within the hierarchies or an increased volume of base level members, the generation selection proves useful as it allows users to group data sets and display them in the same diagram without compromising the level of detail.

Traceability – Customization

As mentioned at the beginning of this post, PCMCS comes with several features to support traceability and troubleshooting, one of these features being the Rule Balancing report. In situations where the traceability maps are insufficient to support a meaningful conversation regarding bill out values, and a deeper dive into an individual rule is necessary, the Rule Balancing report covers such a request.

While the traceability report has evolved in comparison to the Standard Profitability model, its usage is limited to situations where there is a need to troubleshoot specific data points while also having a visual representation as support.

The most common alternative to graphical traceability reports are ad hoc reports in Smart View, either built from scratch or launched via the Rule Balancing report (described in detail in a previous post).

Conclusion on OOTB features: Traceability

Business segment profitability analysis represents the analysis of operations and profitability of individual segments (e.g. Lines of Business, Products, Channels, Customers, Services) within a company. Business segment reporting requires all costs to be divided into one of two categories: direct/traceable costs or indirect/non-traceable costs.

In PCMCS, all costs are transparent and fully traceable. An indirect cost value can easily be traced throughout the flow of the allocation model all the way down to the business segment being analyzed. The indirect allocated volume can be explained through step-by-step analysis, high level traceability maps, and OOTB reports listing out the rules impacting the distribution of such cost.

Using a combination of Model Views, Rule Balancing reports, Traceability analysis, and Smart View ad hoc retrievals, there should be no doubt regarding the source of a data value within PCMCS. Metric data validation – situations where the intersections for each metric are customized to such an extent that building a Rule Balancing report or an individual Model View is neither efficient nor effective – is mostly performed via Smart View.

In a nutshell, traceability provides significant benefits:

  • users can trace both revenue and cost based on predefined model views.
  • traceability can flow forward or backward from a starting point.
  • users can review the final contribution % (driver details are not displayed on this screen).
  • users can toggle between different display options and focus on specific rules for focused analysis.

Subscribe to our mailing list to receive updates for new blog posts related to PCMCS Queries, KPIs, Model Validation, System Reports, Data Integration using Cloud Data Management, as well as the OOTB Application Backup and Restore functionality.

Is there a PCMCS-related topic that you would like to see covered in more depth?  Email us at infoSolutions@alithya.com.

Using Data Visualization and Usability to enhance end user reporting – Part 4: Tying it all together

Now that the foundations have been set in my last three posts, in this final post I’ll share how we can create reports, leveraging:

• Standard definitions and metrics
• The understanding of how users will consume data and interact with the system

To effectively create reports, make sure to follow these key best practices:

1. Reduce the data presented by focusing on the important information. For example, rather than showing two lines for revenue actuals and revenue budget, try showing one for the difference. Users can identify trends much more quickly when there are fewer objects to focus on.

2. Concentrate on important data and consolidate it into chunks. If you have two charts, use the same color for revenue on both of them. This makes it easier to interpret and see trends between them.

3. Remove non-data items, especially the images, unnecessary lines and graphics. This helps the user focus on the actual data, so they can see trends and information rather than clutter.

Here is an example of two reports with the same data. The first provides a table with various colors, bold fonts, and lines. The second report highlights the important areas/regions; your eyes are immediately drawn to those areas needing attention. The second table allows the user to draw accurate conclusions more effectively and in a much shorter timeframe.

These are some general practices which can be applied in most cases and will give users a much more positive experience with your reporting system. If you need help making sense of your reporting requirements, creating a coherent reporting strategy or implementing enterprise reporting, please contact us at info@ranzal.com.

Using Data Visualization and Usability to Enhance End User Reporting – Part 3: The Balance between Data and Visual Appeal

In part three of my blog series, I’ll provide an overview of the important balance between data and visual appeal when creating reports, including some of the latest research and findings.

Many users believe that once you have the metrics in place and understand what data users want, the next step is to create the reports.

In reality, a lot of thought and a careful eye are required when making design considerations to create charts, grids and tables that convey the details in the simplest terms for user understanding. The right design choices enable users to see easily the trend, outliers, or items needing attention.

Many people think that the more data they can cram in, the better. However, studies have shown that the average person can only store 6 chunks of information at a time.  Depending on how flashy and distracting your graphics and marketing logos are, you may have already used up half of your brain’s capacity, without getting to any reports or dashboards.

Graphic overload may make one consider removing all distracting graphics, highlights, bolding, and visual clutter to show the data – novel concept, right?

But this is not the solution. Many visualization studies and much research done over the past century have uncovered that eliminating graphics altogether is not the answer to this dilemma.

In fact, there are several leading experts on this topic, including three key people who are leading the charge against clutter and visual distraction, cheering for more measured and thoughtful chart and dashboard visual design. These individuals are:

  • Edward R. Tufte
  • Colin Ware
  • Stephen Few

All three have published several books explaining how we interpret visual data, including what makes our eyes drawn to color and form, and what aids understanding. Their work also explains “chart junk” – a term first coined by Tufte in 1983. Tufte defines “chart junk” simply as:

“Conventional graphic paraphernalia routinely added to every display that passes by: over-busy grid lines and excess ticks, redundant representations of the simplest data, the debris of computer plotting, and many of the devices generating design variation.”

The key concept of “chart junk” leads into another of Tufte’s mantras, the “data-ink ratio.” The idea here is that by minimizing the non-data ink you are maximizing the data ink. In other words, you can achieve the ideal balance of data and design by removing borders, underlines, shading, and other ink elements which don’t convey any message.

There are a lot of available resources out there on this topic by these authors and others.

Stay tuned for my final blog post, in which I will demonstrate how to effectively put these concepts into practice when creating reports.

Using Data Visualization and Usability to Enhance End User Reporting – Part 2: Usability

In this second part of my blog series, I’ll be looking at usability and what it really means for report design.

Usability takes a step back and looks at the interactions users have with reports. This includes how users actually use the reports, what they do next, and where they go. If users refer to another report to compare values or look at trends, consider condensing those reports into a single report, or even creating a dashboard report with key metrics. This way, users have a clear vision of what they need – what Oracle calls “actionable insight.” From there, designers can provide users with guided navigation paths based on where they actually go today.

With improved usability, users can review an initial report and easily pull up additional reports, possibly from a different system or by logging into the general ledger/order entry system to find the detail behind the values/volumes. With careful design, this functionality can be built into reporting and planning applications, to provide a single interface and simplify the user interactions.

Here is a real-world example of how improved usability can benefit users on a daily basis: often, a user will open a web browser and an item is highlighted as a clickable link. Normally, clicking the link opens it in the same window, causing you to lose the original site you visited; clicking the back button can also lose the first site. With improved usability, clicking on a link would open a new pop-up window, so that, when finished, users can choose which windows to close and return to the original window.

The challenge with achieving improved usability is that many organizations lack visibility into how users actually use reports, especially with users spread all over the world. One possible solution is for organizations to ask users about their daily activities. The issue here is that users are often uncomfortable discussing what they do and where they go online. Companies can overcome this challenge by holding sessions where they can ask leading questions, including why users feel uncomfortable sharing their daily activities. These types of sessions can help organizations uncover the root causes/issues, giving them the insight to delve deeper and understand what lies behind the report request.

One common scenario where you could apply this approach is when users ask for a full P&L for their business units so they can compare results and call anyone over budget. By having a session to understand the users’ specific needs and daily activities, organizations can instead produce a dashboard that highlights the discrepancies by region. With this dashboard, there is no need to compare and analyze; users can open the dashboard and see the indicators with the click of a button. Users can drill down for more information while placing that call!

In conclusion, improved usability means helping users get to the answer quicker, without having to do a lot of unnecessary steps. The old adage is true – KISS – Keep It Simple Stupid!

Using Data Visualization and Usability to Enhance End User Reporting – Part 1: Introduction

Throughout my experience on client engagements, I’ve encountered a common issue: reports. In Part 1 of my blog I will address the reporting challenge, highlight some key benefits of standardized reporting, and outline an approach for implementing a standard enterprise reporting system.

On some engagements, clients want to reproduce the same old reports they had in the previous system, assuming that if users were content before, they would be happy with the same reports going forward. This is not the case. On a recent project, the client was in the midst of replacing a 10-year-old system; several business users were not happy at the prospect of continuing with reports that were created a decade ago! Other clients think that because the finance team has been getting the same reports for the past few years, that’s all they need. This is another incorrect assumption. In many cases these reports are not being utilized, and users may even be using Excel to manipulate and turn the reports into something more useful. So why would you give users the same old reports when, with a bit of foresight and planning, you can give them reports that enhance the way they do business and actually make their jobs easier?

Some of the key benefits of an enterprise reporting system are:

  • Single version of the truth – everyone has the same revenue/COGS/opex numbers
  • Analysts have more time to analyze data and trends rather than consolidate data to make reports

And how do you enhance reporting and deliver added value to users?

To provide users with the necessary reports, it actually takes a multi-disciplinary approach focusing on usability and data visualization. This assumes you have created the back-end databases with appropriate structures to support your reporting needs. It’s also critical to have a single definition for accounts and key metrics. This makes a big difference in reporting and in getting everyone aligned to the single version of the truth you are about to create.

In Part 2 of my blog, I’ll look at usability and what it actually means for report design.