Laser Tag for Cloud Analytics

A friendly game of laser tag between out-of-shape technology consultants became a small gold mine of analytics simply by combining the power of Essbase and the built-in data visualization features of Oracle Analytics Cloud (OAC)! As a “team building activity,” a group of Edgewater Ranzal consultants recently decided to play a thrilling children’s game of laser tag one evening.  At the finale of the four-game match, we were each handed a score card with individual match results and other details such as who we hit, who hit us, where we got hit, and hit percentage based on shots taken.  Winners gained immediate bragging rights, but for the losers, it served as proof that age really isn’t just a number (my lungs, my poor collapsing lungs).  BUT…we quickly decided that it would be fun to import this data into OAC to gain further insight about what just happened.

Analyzing Results in Essbase

Using Smart View, a comprehensive tool for accessing and integrating EPM and BI content from Microsoft Office products, we sent the data straight to Essbase (included in the OAC platform) from Excel, where we could then apply the power of Essbase to slice the data by dimensions and add calculated metrics. The dimensions selected were:

  • Metrics (e.g. score, hit %)
  • Game (e.g. Game 1, Game 2, Total)
  • Player
  • Player Hit
  • Target (e.g. front, back, shoulder)
  • Bonus (e.g. double points, rapid fire)

With Essbase’s rollup capability, dimensions can be sliced by any one item or at a “Total” level. For example, the Player dimension’s structure looks like this:

  • Players
    • Red Team
      • Red Team Player 1
      • Red Team Player 2
    • Blue Team
      • Blue Team Player 1
      • Blue Team Player 2

This provides instant score results by player, by “Total” team, or by everybody. Combined with another dimension like Player Hit, it’s easy to examine details such as the number of times an individual player hit another player or another team in total: you can drill from “Red Team Player 1 shot Blue Team” down to “Red Team Player 1 shot Blue Team Player 1” to see how many times one player shot another. A simple Smart View retrieval along the Player dimension shows scores by player and team, but the data is a little raw. On a simple data set such as this, it’s easy to pick out details, but with OAC, there is another way!
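
Because this is a standard Essbase cube, the same questions can also be asked in MDX. As a quick illustrative sketch (the application, database, and member names here are assumptions, not the actual outline), a query like this would return how many times each Red Team player hit each Blue Team player:

  /* Hypothetical MDX sketch; app/database and member names are illustrative */
  SELECT
    {[Player Hit].[Blue Team].Children} ON COLUMNS,
    {[Player].[Red Team].Children} ON ROWS
  FROM LaserTag.LaserTag
  WHERE ([Metrics].[Hits], [Game].[Total]);

Dimensions left out of the query (Target, Bonus) would simply default to their top members.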

[Screenshot: Smart View retrieval of scores by player and team]

Even More Insight with Oracle Analytics Cloud (OAC)

Using the data visualization features of OAC, it’s easy to build queries against the OAC Essbase cube to gain interesting insight into this friendly folly and, more importantly, answer the questions everybody had: what was the rate of friendly fire, and who shot whom? We built an initial pivot chart by simply dragging and dropping Essbase dimensions onto the canvas (game number, player, and score) and coloring by our Essbase metric “Bad Hits,” a calculated metric built in Essbase to show when a player hit a teammate. It quickly revealed who had poor aim…
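
The original Bad Hits formula isn’t reproduced here, but as a hedged sketch of how such a metric could be modeled (member names are again illustrative, and this assumes one team hierarchy under both Player and Player Hit), an MDX member formula along these lines would return the hits a player landed on his own team:

  /* Illustrative sketch only, not the actual formula from the cube:
     hits where the player hit belongs to the shooter's own team */
  IIF(IsAncestor([Player].[Red Team], [Player].CurrentMember),
      ([Metrics].[Hits], [Player Hit].[Red Team]),
      ([Metrics].[Hits], [Player Hit].[Blue Team]))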

[Screenshot: pivot chart of scores by game and player, colored by Bad Hits]

Dan from the Blue team immediately stands out, as do Kevin and Wayne from the Red team!  This points us in the right direction, but we can easily toggle to another visualization that might offer even more insight into what went on. Using a couple of sunburst-style data visualizations, we can quickly tie together who was shooting and who was getting hit, filtered to the same team, weighted by score, and color coded by team.

[Screenshot: sunburst visualizations of shooters and their targets]

It appears that Wayne and Kevin from the Red Team are pretty good at hitting teammates, but it is also now easy to conclude that Wayne really has it out for Kevin while Kevin is an equal opportunity shoot-you-in-the-back kind of teammate!

Reimagining the data as a scatter plot gives us a better look at the value of a player in relation to friendly fire. By dragging the “Score” Essbase metric into the size field of the chart, correlations are discovered between friendly fire and hits to the other team.  While Wayne might have had the highest number of friendly fire incidents, he also had the second highest score for the Red team.  The data shows visually that Kevin had quite a few friendly fire incidents, but he didn’t score as much (it also shows results that allow one to infer that Seema was probably hiding in a corner throughout the entire game, but that’s a different blog post).

[Screenshot: scatter plot of friendly fire incidents against hits, sized by score]

What Can You Imagine with the Data Driving Your Business?

By combining the power of Essbase with the drag-and-drop analytic capabilities of Oracle Analytics Cloud, discovering trends and gaining insight is very easy and intuitive. Even in a simple and fun game of laser tag, results and trends are found that aren’t immediately obvious in Excel alone.  Imagine what it can do with the data that is driving your business!

With Oracle giving credits for a 30-day trial, getting started today with OAC is easy. Contact us for help!

The True Power of Oracle’s Enterprise Planning Suite Unleashed at POET: A Case Study

“If you are going to change the world, you need a system to help get you there. For us, it was… about strategic opportunities. POET was at a crossroads. We needed a system that we could grow with and that could grow with us.”
Lezlee Herdina, Director of FP&A, POET

A privately held corporation headquartered in Sioux Falls, South Dakota, POET LLC is a U.S. biofuel company that specializes in the creation of bioethanol. The 1,900-employee company produces 1.8 billion gallons of ethanol annually and has been granted 90 patents in the U.S. and abroad.

In this webinar, Edgewater Ranzal’s Managing Director and HSF Practice Director Ryan Meester speaks with Lezlee Herdina, POET’s Director of FP&A, to give us a behind-the-scenes look into POET’s Enterprise Planning Solution journey, from realizing that significant change was needed to an extensive evaluation process to the ultimate solution and, finally, to the company’s enduring vision going forward.

The Right Tool for the Right Job (RTRJ)

While Lezlee and the POET team were open to the insights and recommendations generated by their Ranzal analysts, they also had some specific goals in mind from the outset:

  • Provide seamless integration of financial and operational data
  • Create a platform for process improvement, including greater automation of monthly processes to improve efficiency and increase time for value-add analysis
  • Achieve better communication, including ease of reporting
  • Reduce reliance on Excel models and the associated version control issues
  • Improve data governance, with a clear data model and common definitions to facilitate planning and reporting processes
  • Integrate operational and financial dashboards for performance measurement
  • Use scenario analysis to drive M&A and strategic business decisions
  • Increase emphasis on the cash perspective
  • Understand when to use Smart View, Financial Reports, and Oracle Business Intelligence Enterprise Edition (OBIEE), and take full advantage of the strengths of each reporting tool
  • Keep it simple, and trust in the higher-level nature of HSF
  • Recognize the importance of the user experience
  • Ensure that data integrations are seamless for the end user

To learn more about the POET team’s initial and ongoing business challenges, the lessons learned, and the ultimate results, view a recording of the True Power of Oracle’s Enterprise Planning Suite Unleashed at POET webinar.

To learn more about our Enterprise Planning solutions, visit www.ranz.al/epbcs-webinars

Missed the webinar? View Recording Here.

Automating Enterprise Planning with EPBCS: A Case Study Featuring Sims Metal Management

In using Enterprise Planning & Budgeting Cloud Service (EPBCS) to support annual budgeting and forecasting processes, organizations are choosing solutions that allow them to leverage the financials, projects, capital, and workforce business processes necessary to provide a driver-based solution that links expected intake to revenues and costs. In turn, they are able to more efficiently produce integrated income statements, balance sheets, and cash flow statements.

Featuring Jim Clark of Sims Metal Management, Our Special Guest

Our August 16, 2017 webinar, featuring Jim Clark, Group Manager of FP&A at Sims Metal Management, takes a detailed look at how one organization automated enterprise planning to streamline processes and produce better results.

Within a real-world scenario, this means that whether using EPBCS out of the box or as a “hybrid” of OOTB with customized extensions, companies like Sims are able to adjust sales forecasts—throughout the year and through sales cycles—to better match the actual costs and needs in areas such as raw materials and labor.

A Better Approach To Performance Management

Using this integrated approach to Performance Management, companies are, in effect, bringing actual performance numbers, on a monthly basis, into their models.

As a result, changes and adjustments can be fine-tuned and incorporated into the mix.  Forecasts can be based more on actual numbers and less on assumptions, thus leading to a balance sheet that matches projections. From a planning perspective, companies can be more nimble and, ultimately, create their models with greater accuracy.

Whether you are participating live or via a recording, this webinar will illustrate how organizations like Sims are leveraging EPBCS in ways that allow them to: 

  • Gain insight to increase efficiency and improve outcomes
  • Better understand how organizations like yours can make standardization and centralization a top priority
  • See how an integrated solution works not just in theory, but actually in practice
  • Follow the processes to results that include improved accuracy and increased efficiency across the enterprise

For More Information

No matter where your team or your organization is along your EPBCS journey, this webinar is certain to provide you with valuable insight and context that can help you to implement changes that lead to greater efficiency and a more streamlined forecasting process overall.

Register for our “Automating Enterprise Planning with EPBCS: A Case Study Featuring Sims Metal Management” webinar:

Missed the webinar? View Recording Here.


Undocumented Data Export Feature in Oracle Hyperion PBCS (Planning and Budgeting Cloud Service)

In response to companies looking for more decentralized services with less IT overhead, Oracle has launched the Planning and Budgeting Cloud Service (PBCS). PBCS is a hosted version of the Oracle Hyperion Planning and Data Management/Integration (FDMEE) tools with a particular focus on a completely online-based interface. For additional information on PBCS, please click HERE.

From a functional perspective, this is an ideal situation: to have near-full capabilities of an on-premise solution without the infrastructure maintenance concerns. Practically, though, there are some holes to fill as Oracle perfects and grows the solution.

One of the main areas for concern has been the integration of data into and out of PBCS. Data Management (a version of FDMEE) is the recommended tool for loading flat-file data into the system, and data can also be loaded directly to Essbase with perfectly formatted files. Getting files out of the system, on the other hand, has not been so straightforward. Without access to the Essbase server, exporting files proves impractical. Companies often need data exports from Essbase for backups, integrations into other systems, or for review. PBCS does not seem to have a native method for extracting Level Zero (Lv0) data on a regular basis so that it can easily be copied out of the system and used elsewhere.

Despite this, the DATAEXPORT command still exists in the PBCS world. How, then, could it be used to get a needed file?

It actually begins as it would in a normal on-premise application: by creating a Business Rule to do a data export. This can be done manually, but it is recommended to use the System Template to make sure everything is set up correctly.

[Screenshot: Business Rule System Template for a data export]

When setting up the location to export the file to, it should be set up as:

 “/u03/lcm/[File_Name.txt]”

[Screenshot: setting the export file location]
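
Under the covers, this kind of business rule is essentially a calc script built around the DATAEXPORT command. A minimal sketch is shown below; the SET options, the FIX members, and the file name are illustrative assumptions, not the exact rule generated by the template:

  /* Minimal DATAEXPORT sketch; options, FIX members, and file name are illustrative */
  SET DATAEXPORTOPTIONS
  {
    DataExportLevel "LEVEL0";     /* export level-zero data only */
    DataExportDynamicCalc OFF;    /* skip dynamically calculated values */
  };
  FIX ("Actual", "FY15")          /* hypothetical slice to export */
    DATAEXPORT "File" "," "/u03/lcm/Level0_Export.txt" "#Mi";
  ENDFIX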

When this is done, a user can then navigate over to the Inbox/Outbox Explorer and see the file in there:

[Screenshots: the export file listed in the Inbox/Outbox Explorer]

And that is really all there is to it! With a business rule in place, the entire process can be automated using EPMAutomate (EPMAutomate and recommendations for an automation engine/methodology will be discussed in a later post) and a batch scripting client to do a process that (see the sketch after this list):

  • Deletes the old file
  • Runs the business rule to do the data export
  • Copies the file off of PBCS to a local location
  • Pushes the file to any other needed location
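
A hedged sketch of that batch is below. EPM Automate commands and arguments vary between releases (in particular, verify that your version supports runbusinessrule), and the rule name, file name, and URL are the hypothetical ones used above:

  rem Hypothetical sketch; verify commands against your EPM Automate release
  call epmautomate login %PBCS_USER% %PBCS_PASSWORD% https://planning-mydomain.pbcs.oraclecloud.com mydomain
  rem 1. Delete the old file from the cloud instance
  call epmautomate deletefile Level0_Export.txt
  rem 2. Run the business rule that performs the data export
  call epmautomate runbusinessrule ExportLevel0
  rem 3. Copy the file off of PBCS to a local location
  call epmautomate downloadfile Level0_Export.txt
  rem 4. Push the file to any other needed location (share, FTP, etc.)
  copy Level0_Export.txt \\fileserver\archive\
  call epmautomate logout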

The one important thing to note is that as of PBCS 11.1.2.3.606 (April 2015 patch), all files in the Inbox/Outbox Explorer — along with any files in Application Management (LCM) — that are older than two months will be automatically deleted. As such, if these files are being kept for archive purposes, they must be backed up offline in order to be preserved.

When FDM Isn’t an Option…Using Essbase to Map Data

There are times when you do not have the option of using FDM to do large data mapping exercises prior to loading data into Essbase. There are many techniques for handling large data mappings in Essbase; I have used the technique outlined here several times for large mappings, and it continues to exceed my expectations from a performance and repeatability perspective.

Typically, without FDM or some other ETL process, we would simply use an Essbase load rule to do a “mapping” or a replace. However, there are times when you need to do a mapping based on multiple members. For example: if account = x and cost center = y, then change account to z.

Let’s first start with the dimensionality that is in play based on the example below: Time, Scenario, Type, NOU, Account, Total Hospital, and Charge Code

Dimension        Type     Members in Dimension   Members Stored
Time             Dense    395                    380
Scenario         Dense    13                     6
Type             Sparse   4                      4
NOU              Sparse   25                     18
Account          Sparse   888                    454
Total Hospital   Sparse   5523                   2103
Charge Code      Sparse   39385                  39385

You then need to identify the logic of where the mapping takes place.  I want to keep the mapping data segregated from all other data, so I load it to a Mapping scenario (Act_Map).  I load a value of ‘1’ to the appropriate intersection, always at level 0.  Since the mapping applies to all Period detail, I load it to a BegBalance member.  The client then updates this mapping file on a go-forward basis as new mapping combinations arise.

Here is a sample of what the mapping file looks like that gets loaded into Essbase:
NOU   STATUS   Revised DEPT   ACCT #   CDM       Data
SLJ   IP       2CC            2        0012013   1
SLJ   IP       2CC            2        0012021   1
SLJ   IP       2CC            2        0012062   1

Here is what it looks like on a retrieve: for 4410CC->2600427->IP->67->SVM there is a value of 1, as there is for 4410CC->2600435->IP->67->SVM.

[Screenshot: Smart View retrieval of the mapping scenario]

The next step in the process is to load the actual data that ultimately needs to be mapped. I load this data based on the detail and dimensionality I have, again at level 0.  In my experience, the data is missing a level of detail (GL account for project-based planning, Unit/Stat for charge master detail, etc.), so it gets loaded to a specific “No_Dimension” or generic member via a load rule. Again, I load this data to a separate scenario as well (Act_Load).

In the example below you will see I am loading Account detail (67 & 68 in the above screenshot) to the Stat_Load member. The data comes across missing the account detail.

[Screenshot: data loaded to the Stat_Load member]

The final step is to calculate the Actuals scenario based on the two scenarios above; a sketch of what that calculation might look like follows the screenshot below. You will see that after we run the calculation, Current Yr Actuals is calculated correctly: the data resides where it should.

[Screenshot: calculated Current Yr Actuals results]
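
The original calc script isn’t shown here, but a minimal sketch of the mapping step might look like the following, assuming the member names used in the examples above and that Stat_Load is the generic Account member the raw data was loaded to:

  /* Illustrative sketch of the mapping calculation; member names assumed */
  SET CREATEBLOCKONEQ ON;
  FIX (@LEVMBRS("Type",0), @LEVMBRS("NOU",0), @LEVMBRS("Total Hospital",0),
       @LEVMBRS("Charge Code",0), @LEVMBRS("Account",0), @LEVMBRS("Time",0))
    /* For each level-0 intersection, pull the value loaded to the generic
       Stat_Load member and keep it only where the Act_Map flag is 1 */
    "Current Yr Actuals" = "Act_Load"->"Stat_Load" * "Act_Map"->"BegBalance";
  ENDFIX

Because the mapping flag is 1 only at the proper account for each combination, multiplying by it effectively re-homes the loaded value to the right account and leaves every other intersection #Missing.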

Keeping all the data segregated in different scenarios allows you to easily clear data should anything be wrong with one of the loads, thereby keeping the other datasets intact. This process runs on the entire year in less than 2 minutes and not only performs the calculation but also does an aggregation for the Current Yr Actuals.

Ushering in the New Era of Hyperion Strategic Finance

Welcome to the first installment of our new Hyperion Strategic Finance (HSF) blog series. Edgewater Ranzal’s HSF team has been working closely with Ranzal’s other Hyperion practices (HFM, FDM, Planning/Essbase, etc.) to home in on how HSF can be utilized to its full potential alongside the other product offerings.  As part of that process, we felt it was important to start a dialogue (this blog) to share some of our insights on various topics, ranging from new product release info to cutting-edge integration best practices.  We hope this series will be a good resource for you and your organization on your HSF journey.

Given the major changes to HSF in the 11.1.2.2 release, and the exciting product roadmap ahead of us, I thought it would be good to start the series by discussing “The New Era of Hyperion Strategic Finance.”

So…What’s new?

With the 11.1.2.2 release comes probably the most significant change to the product since it was acquired by Hyperion.  At the core of the change is a shift in the user interface from a traditional thick client to an Excel-based Smart View add-in.  The enhancement enables the end user to perform the majority of HSF modeling activities directly in an Excel workbook.  With this change, the legacy reporting in the HSF client has been REPLACED with Excel reporting via Smart View.  This means that those who choose to implement 11.1.2.2 will be required to use the Smart View-based reporting and/or export the data to an external database (e.g. Essbase) to utilize other reporting tools (e.g. Financial Reports, OBIEE).

For any current HSF users looking to upgrade, it is important to note that the upgrade to 11.1.2.2 automatically converts existing HSF reports into the Smart View format; however, existing charts/graphs will need to be rebuilt, and some formatting issues have been identified that may require re-work.  While this does introduce a big change for end users, it also presents a great opportunity by opening up native Excel functionality, such as Excel graphing, conditional formatting, highlighting sums, and the group/ungroup data feature. To view the full list of new features, you can look through the 11.1.2.2 readme, available here.

[Screenshot: the new Excel-based HSF Smart View interface]

Why make the change?

Some existing clients have asked me why this change was made in the first place.  The answer is really two-fold:

1. Tighter Integration with Hyperion Planning: If there has been one consistent theme throughout all of my meetings and conversations with Oracle’s HSF/Planning development team, it has been a desire to continuously improve the integration between Hyperion Planning and HSF.  The integration I’m referring to doesn’t stop at data; it also includes seamlessly integrating the end-user experience.  Everything from selling the tools as a combined solution (i.e. bundled pricing) to having both HSF and Planning users interact with the tools in a similar manner (i.e. Smart View) has been or will be addressed.  Below, I outline some additional planned roadmap items that continue this theme.

2. Reporting: All legacy HSF users who have spent time creating, formatting, and modifying HSF reports are well aware that reporting was NOT an area of strength prior to 11.1.2.2.  Both Oracle and our implementation team consistently receive requests to have HSF reporting operate “more like Excel.”  This move is a direct response to those requests, and it is definitely a big step in the right direction.

Current Challenges:

As with any major change or new release, there are going to be some growing pains, and the 11.1.2.2 release of HSF is no different.  We have been working closely with Oracle over the past couple of months to identify issues, make recommendations, and test fixes that have been applied.  The main point I want to make clear is that, from a functional perspective, the tool has the EXACT same capabilities.  You are simply changing the way you do things, not what can be done.  There is a bit of a learning curve to understand the new menu bar, shortcut keys, etc., but in general it still functions much like its predecessor, with the added benefit of the Excel look and feel.

A couple of things to look out for if you will be implementing AND going live prior to the 11.1.2.3 release:

1. Issues with large numbers of active reports:  We have experienced some performance issues when working with a file that has numerous reports (standard or freestyle) open at once.  This includes reduced speed of the check-out/check-in process, flickering upon calc/refresh, and occasional freezing of the application.  Currently, the product works more smoothly with just the accounts tab or 1-2 reports open; however, fixing this is at the top of Oracle’s priority list, and we expect it to be addressed sooner rather than later.

2. Renaming of Time Periods: You want to avoid renaming the default time periods in the entities (i.e. changing 2013 to FY13).  In the current release this can cause some calculation issues in the system regarding the funding routine.  Again, this has been identified by Oracle and is expected to be resolved in the next patch set or release.

3. Smart View Parity: There are some features that are not currently in the Smart View interface, which may require users to revert to the old HSF client.  These include the debt/depreciation schedulers, ECM/ACM, etc.  This means that the end user may have to jump between interfaces in certain instances.  The good news is that these features are expected to be added to the Smart View functionality in the next release (11.1.2.3) and can still be utilized in the traditional client if need be.  To see a full list of these items, look to the 11.1.2.3 roadmap column below.

4. With the change in technology come some changes to the infrastructure components, so it is important to discuss these requirements with your technical team to diagnose hardware and software needs before moving forward. It is also important to note that the upcoming 11.1.2.3 release will NOT be backward compatible with 11.1.2.2.  So, if you have a multiproduct implementation with integration components, you would need to upgrade the entire suite.

Product Direction / Road Map

[Screenshot: the HSF product roadmap]

The above picture is a snapshot of the HSF roadmap given to me by the Oracle product team.  You can see the current release (as of this blog post) on the far left, calendar year 2013’s scheduled release (11.1.2.3) in the middle, and the subsequent releases on the far right.  The focus this year is to truly stabilize the Smart View integration and incorporate all of the standard features of the thick client into the Smart View interface.  The exciting part is how the changes in 11.1.2.2 have set up the product for even more advancement in the future.  Post-11.1.2.3, a lot of the heavy lifting for HSF’s major changes will be complete.  This will allow the team to focus on true feature enhancements, like adding a monthly depreciation scheduler or addressing the concept of parent-level scenario modeling (possibly an idea for a future blog post!).

In addition, if you look at the future direction, in the Enterprise Readiness section, you will see items such as LRP Integration to Planning, Automated Data Loads, DRM support, etc. This represents a future state in which a user can manage both data and metadata in a consistent manner between HSF, Planning, and Essbase.  Imagine a world where you can have one Excel worksheet open with your HSF model and another with a LIVE connection to an HSF reporting cube (via Essbase) for ad hoc purposes!  Not only that, but a world where the Essbase cube is automatically updated as you make changes to the HSF model, meaning no manual metadata management between applications or tedious mappings that need to be maintained. This type of enhancement truly empowers the user to focus on the modeling aspects of HSF while allowing Essbase to shine for management reporting – truly using the right tool for the right job.

Our Recommendation

Overall, we are very excited about the direction Oracle is heading with HSF.  The Smart View capability in 11.1.2.2 is really just the first step in what we see as a continued effort to make the product better for its users.  With that in mind, we have begun multiple 11.1.2.2 implementations; however, it is understood in all cases that there will be some issues to work through, and there are no pending deadlines where these issues could put project success at risk.  In fact, the go-live dates are not expected to occur until a time when we believe 11.1.2.3 and/or an 11.1.2.2 patch set will be available.  Given that, we do recommend that all new HSF clients seriously consider starting with v11.1.2.2 while taking note of the challenges mentioned above.  This approach eliminates the need for your end users to learn two different user interfaces.  Additionally, if you are willing to give feedback, as an 11.1.2.2 client you will definitely have the ear of the Oracle team as it pertains to resolving any existing product issues, as well as requesting new enhancements for the future.  Oracle is very eager to make this release a success, and they truly value any input early adopters can provide.  So if you have the patience to work through some bumps in the road, and the time to resolve the issues you may encounter, I would definitely encourage giving the 11.1.2.2 release serious consideration.

I hope you found this information helpful.  We look forward to publishing many more posts in the future. In that vein, if you have any ideas or requests for blog topics, please feel free to leave them in the comments section or reach out to me directly at rmeester@ranzal.com, and we will make sure to address them in future posts.

About the Author

Ryan Meester is a Practice Director for the Strategic Planning Practice at Edgewater Ranzal.  His first encounter with HSF dates back to 2004 as a Consultant with Hyperion in the HSF practice.  After three years in that capacity, leading projects and assisting with business development efforts, Ryan co-founded Meridian Consulting International with two of his Hyperion colleagues, Andrew Starks and Ricardo Rasche.  At Meridian, Ryan, Andrew, and Ricardo focused exclusively on HSF implementation services until Meridian was acquired by Edgewater Ranzal in May of 2010.  This was a strategic acquisition for both Meridian and Ranzal. Both organizations were seeing more and more multiproduct implementations which required a broader EPM focus. The acquisition effectively rounded out Ranzal’s EPM service offering by adding HSF expertise to their repertoire.

Using Data Visualization and Usability to Enhance End User Reporting – Part 4: Tying It All Together

Now that the foundations have been set in my last three posts, in this final post I’ll share how we can create reports, leveraging:

• Standard definitions and metrics
• An understanding of how users will consume data and interact with the system

To effectively create reports, make sure to follow these key best practices:

1. Reduce the data presented by focusing on the important information. For example, rather than showing two lines for revenue actuals and revenue budget, try showing one for the difference. Users can identify trends much more quickly when there are fewer objects to focus on.

2. Concentrate on important data and consolidate it into chunks. If you have two charts, use the same color for revenue on both of them. This makes it easier to interpret and to see trends between them.

3. Remove non-data items, especially the images, unnecessary lines and graphics. This helps the user focus on the actual data, so they can see trends and information rather than clutter.

Here is an example of two reports with the same data. The first provides a table with various colors, bold fonts, and lines. The second highlights only the important areas/regions, so your eyes are immediately drawn to those areas needing attention. The second allows the user to draw accurate conclusions more effectively and in a much shorter timeframe.

These are some general practices which can be applied in most cases and will give users a much more positive experience with your reporting system. If you need help making sense of your reporting requirements, creating a coherent reporting strategy or implementing enterprise reporting, please contact us at info@ranzal.com.

Using Data Visualization and Usability to Enhance End User Reporting – Part 3: The Balance between Data and Visual Appeal

In part three of my blog series, I’ll provide an overview of the important balance between data and visual appeal when creating reports, including some of the latest research and findings.

Many users believe that once you have the metrics in place and understand what data users want, the next step is to create the reports.

In reality, a lot of thought and a careful eye are required when making design decisions to create charts, grids, and tables that convey the details in the simplest terms for user understanding. The right design choices enable users to easily see trends, outliers, or items needing attention.

Many people think that the more data they can cram in, the better. However, studies have shown that the average person can only hold about six chunks of information at a time.  Depending on how flashy and distracting your graphics and marketing logos are, you may have already used up half of your brain’s capacity without getting to any reports or dashboards.

Graphic overload may make one consider removing all distracting graphics, highlights, bolding, and visual clutter to show just the data – a novel concept, right?

But this is not the solution either. Plenty of visualization studies and research over the past century have shown that eliminating graphics altogether does not resolve this dilemma.

In fact, there are several leading experts on this topic, including three key people who are leading the charge against clutter and visual distraction, championing more measured and thoughtful chart and dashboard design. These individuals are:

  • Edward R. Tufte
  • Colin Ware
  • Stephen Few

All three have published several books explaining how we interpret visual data, including what draws our eyes to color and form, and what aids understanding. Their work also explains “chart junk” – a term first coined by Tufte in 1983. Tufte defines “chart junk” simply as:

“Conventional graphic paraphernalia routinely added to every display that passes by: over-busy grid lines and excess ticks, redundant representations of the simplest data, the debris of computer plotting, and many of the devices generating design variation.”

The key concept of “chart junk” leads into another of Tufte’s mantras, the “data-ink ratio.” The idea here is that by minimizing non-data ink you maximize the share of ink that actually conveys data.  In other words, you can achieve the ideal balance of data and design by removing borders, underlines, shading, and other ink elements which don’t convey any message.

There are a lot of available resources out there on this topic by these authors and others.

Stay tuned for my final blog post, in which I will demonstrate how to effectively put these concepts into practice when creating reports.

Ranzal UK are presenting at UKOUG 2012

I’m pleased to announce that Ranzal UK consultants are presenting papers on both days of the UKOUG Hyperion conference this year.  The conference takes place on October 23rd & 24th at the Park Plaza hotel in central London.

http://hyperion.ukoug.org/

On the Tuesday, my colleague Mark Drayton will be talking about a recent implementation of Hyperion Planning and HPCM in the session ‘HPCM and Hyperion Planning: A Cost Allocation Process’.  This will be especially interesting to those of you who want to see how Mark made use of batch updates to HPCM Stage 1 Drivers and Assignments.

On Wednesday, our Senior Consultant Alecs Mlynarzek will be delivering a presentation titled ‘Planning Integration with Task Lists’, which describes techniques we have used to enable users to launch integration tasks from the Task List UI within Hyperion Planning, a feature not available out of the box.

Also on the Wednesday, Ranzal’s UK Infrastructure Lead Dave Hogg and I will be presenting various innovative techniques that we have implemented for clients in order to meet technical challenges – this session is called ‘Guide to Using EAS to Maintain Batch Environment Variables’.

We really hope that you can come and attend one of our sessions – please come and say hello and we can tell you more about how we are growing, and the exciting projects that we are currently delivering.

Techniques for Creating, Loading, and Optimizing a Simple Essbase ASO Application

A couple of recent projects have required us to build an Essbase database to provide a subset of upstream system data to downstream consumer systems such as Hyperion Profitability and Cost Management (HPCM).  The process included dimension updates, data loads, and custom calculations. Essbase Aggregate Storage Option (ASO) was the chosen Essbase technology because we were potentially dealing with large data volumes, relatively simple hierarchy structures, and only a small number of custom calculations that could be easily modeled in MDX with minimal performance impact.

The principle was that an overnight batch would be used to completely rebuild the ASO cube each night, including any metadata restructures that were necessary, followed by a full reload of data.

The high-level process is as follows:

The starting point was to use a ‘stub’ application as a template for the metadata rebuild.  This is an ASO Essbase application with all dimension headers present, all POV dimensions present (Years, Periods, Scenarios, etc.), and all volatile hierarchies represented by their hierarchy headers only.  This ASO application serves as a “poor man’s MDM” which allows us to have application, dimension, and hierarchy properties all pre-set.  The main advantage of the stub outline is that it creates a natural defragmentation of the target ASO application, which improves query performance and reduces dimension build times to the minimum. This is analogous to a relational database where you want to ‘truncate’ tables and/or compress, as opposed to deleting and re-adding rows all the time, which leads to gradual growth.  A good tip is to order the dimension builds from smallest to largest in terms of volume.

A sample ‘Stub.otl’ outline looks something like the following. In this case, the stub outline is modeled after the new embedded Fusion G/L Essbase cube:

As can be seen, the volatile dimensions (Budget centre, Balancing Entity, Accounts, etc.) are each populated with a single hierarchy header (e.g. BE_dummy), whereas the static dimensions (AccountingPeriod, Balance Amount, etc.) are complete and will not be the subject of a dimension load in the MaxL.  Static dimensions which contain members with MDX member formulas will persist (although the formulas will not necessarily validate at this stage, as they may depend on members that have not yet been rebuilt).

The first part of the batch process is to use this Stub outline to replace the outline in the ‘user’ ASO cube (i.e. the cube that will be restructured and loaded with data).  The MaxL will clear the data and replace the .otl file in the user application with the .otl file from the ‘stub’ application.

A simplified version of the MaxL is as follows (normally passwords would be encrypted):
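
The original script was shown as an image; a minimal sketch of what it might look like follows. The application and database names (Stub.Stub as the template, Rpt.Rpt as the user cube) are illustrative, and the exact statements should be verified against the MaxL reference for your Essbase version:

  login admin 'password' on essbaseserver;
  /* hypothetical names: Stub.Stub is the template, Rpt.Rpt is the user cube */
  alter application Rpt disable connects;
  alter database Rpt.Rpt reset;              /* clear all data */
  alter system unload application Rpt;       /* release the outline file */
  alter object Stub.Stub.Stub of type outline force copy to Rpt.Rpt.Rpt;
  alter system load application Rpt;
  alter application Rpt enable connects;
  logout;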

This simply copies the Stub.otl file into the ‘user’ ASO cube database folder & names it with the target database name – it will be available as soon as the application is reloaded.

The next section in the MaxL would be a standard dimension build of those volatile dimensions – the primary consideration when building the hierarchies is that the ASO restrictions on hierarchies are met; otherwise the outline will not verify.  This is not covered here – we assume that incoming master data is pre-validated to meet these requirements, but a summary of the dimension rules for ASO is as follows:

  • ASO dimensions can contain hierarchies of two types – ‘Stored’ or ‘Dynamic’
  • A dimension must be tagged as ‘Multiple Hierarchies Enabled’ or ‘Dynamic’ if it contains two or more hierarchies
  • The first hierarchy in a dimension where ‘Multiple Hierarchies Enabled’ is specified must be defined as a ‘Stored’ hierarchy
  • Stored hierarchies are generally only additive, as they only allow the + or ~ consolidation operators
  • Dynamic hierarchies can contain any consolidation operators, and their members can contain formulas
  • For alternate hierarchies, where shared members may be required, a stored hierarchy can only contain one instance of a member (to avoid double counting), but subsequent stored hierarchies can contain members already defined in earlier stored hierarchies

Once metadata has been loaded, the data load can be carried out.
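
In MaxL, both the dimension build and the data load are standard import statements. A sketch, again with illustrative file, rules-file, and application names:

  /* hypothetical file, rules-file, and application names */
  import database Rpt.Rpt dimensions
    from server text data_file 'BudCtr.txt' using server rules_file 'BudCtr',
    from server text data_file 'Accts.txt' using server rules_file 'Accts'
    on error write to 'dimbuild.err';

  import database Rpt.Rpt data
    from server text data_file 'GLData.txt' using server rules_file 'GLLoad'
    on error write to 'dataload.err';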

Once this is complete, we have a fully loaded ASO cube, which we can retrieve data against using either SmartView or an Essbase report script (for example, when we are supplying filtered data to our downstream systems).

The example Smart View retrieve template below is a straightforward report with periods as columns and 550 rows of level 0 Budget Centres, with all other dimensions set as filters.

The Essbase application log shows that the above SmartView query took over 16 seconds to execute.  This report layout may or may not be representative of real-world queries and reports, but the object of the exercise here is to speed this up for intraday usage.

ASO databases do not use calculation scripts to consolidate the data, so the traditional BSO approach to consolidation cannot be used.  Instead, ASO dynamically calculates upper-level intersections, which results in much faster batch processing times but can lead to longer-than-necessary retrieval times.

What we can do to improve this situation is use the ‘Query Tracking’ facility in ASO to capture the nature of queries run against the ASO cube, and build retrieval statistics against it. These statistics can then be used to build aggregation views tailored to retrieval patterns in the business.

This relies on us having some predefined definitions of the kinds of queries that are likely to be run – SmartView report templates, Web Analysis pages & Financial Reports definitions will all be suitable.

In this example, we use the above SmartView template as a basis for creating an Essbase Report script as follows:
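
The original script was shown as an image; a minimal sketch of what such a report script might look like is below, using the illustrative dimension and member names from the stub outline (periods across the columns, level-0 Budget Centres down the rows, remaining dimensions defaulting to their top members as filters):

  <PAGE ("Scenario")
  "Actual"
  <COLUMN ("AccountingPeriod")
  <ICHILDREN "AccountingPeriod"
  <ROW ("Budget centre")
  <DIMBOTTOM "Budget centre"
  !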

This report mimics the SmartView template, and we use it during the overnight batch to capture the query characteristics using Query Tracking. One of the reasons to use report scripts is that the query designer (or the Spreadsheet Retrieval Wizard, if you are using a REALLY old version of the Excel Add-In) can save its output as a report script. MDX queries will have a similar effect.

The sequence of MaxL steps is as follows:

  • Switch on Query Tracking
  • Run one or more Essbase Report Script(s)
  • Run ‘execute aggregate process’ command to create aggregate views

The MaxL to accomplish this is as follows:
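
A sketch of those steps, using the same illustrative application and object names as above:

  alter database Rpt.Rpt enable query_tracking;

  /* run the representative report(s) so the query patterns are captured */
  export database Rpt.Rpt using server report_file 'SmpRpt' to data_file 'SmpRpt.txt';

  /* build aggregate views from the tracked queries,
     capped at 1.2x the size of the input-level data */
  execute aggregate process on database Rpt.Rpt
    stopping when total_size exceeds 1.2
    based on query_data;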

The ‘execute aggregate process’ command is issued with the ‘based on query_data’ option to tell Essbase to use the query patterns picked up by Query Tracking to build the aggregation views.  Essbase will build as many views as necessary until the ‘total_size’ limit is reached.  This limit may need tweaking so as to give the desired improvement in performance whilst also conserving disk space (which may get swallowed up with larger ASO cubes).  This particular example runs in a matter of seconds, but the addition of more sample reports needs to be managed to ensure that the batch run time does not exceed its window.  It should be noted that the aggregate process can be run without query tracking, but there are restrictions on which alternate hierarchies get processed, and the query-tracking approach is a very good technique when you are trying to improve performance on “alternate rollups”.

When this has been executed, users should see an improvement on query performance.

Our SmartView query was rerun, and the log file demonstrates the reduction in query time to less than 1 second:

This approach lends itself to situations where the ASO outline is likely to change frequently.  Changes in metadata mean that aggregation views created and saved in EAS cannot necessarily be reused – new level-0 members will not necessarily invalidate the aggregate views, but new upper-level members or restructured hierarchies definitely will.  The rationale is that the ASO aggregation engine constructs multiple “jump” points based on the levels of the hierarchy – to oversimplify in BSO terms, imagine level zero stored, level one as dynamic calc, level two stored, level three as dynamic calc, and level four stored.  In any instance, there would never be more than one level of dynamic calc.  I don’t know if this is still the case, but this may be why ASO cubes seem to handle symmetrical hierarchies a bit more easily than ragged ones – it makes the derivation of what should be pre-aggregated vs. calculated dynamically easier.