Alithya Leverages the Power of Oracle Hyperion FDMEE

One of the biggest challenges for every organization today is providing reliable data for a clear business outlook. This essential activity is more critical than ever now that the solutions for hosting data are increasingly varied, with multiple scenarios involving on-site hosting, Cloud, and hybrid solutions. However, there are solutions that allow companies to navigate efficiently and seamlessly amongst the different hosting options. Alithya Group (NASDAQ: ALYA, TSX: ALYA) (“Alithya”) is well positioned to advise its clients on this topic.

Efficient management of data requires solid know-how.

As companies attempt to develop long-term guidance in this area, Alithya ensures that its clients’ data hosted in different environments continues to be used effectively. Alithya’s Data Governance and Integration practice includes data integration specialists who help clients free up resources by leveraging FDMEE for data validation, and who maximize FDMEE through Alithya’s financial data application review offering.

Alithya’s Tony Scalese published a book providing deeper understanding of FDMEE.

Building on the numerous mandates Alithya Group has been entrusted with by its clients as a market-leading provider of Oracle Enterprise Performance Management Platform solutions, the company leverages the power of Oracle Hyperion Financial Data Quality Management, Enterprise Edition (FDMEE) to help organizations enhance the quality of internal controls and reporting processes. The extensive Alithya team specializing in these FDMEE solutions has among its ranks a widely recognized expert in the market, Tony Scalese, VP of Technology at Alithya and Oracle ACE, who published The Definitive Guide to Oracle FDMEE [Second Edition] in May 2019.

Connecting current on-premise and future Cloud solutions.

“As thought leaders, we are committed to providing essential resources to help clients enhance the quality of internal controls and reporting processes,” stated Chris Churchill, Senior Vice President at Alithya. “Our Data Governance and Integration practice aligns offerings with best practices and includes a team of dedicated experts as well as some of the most comprehensive resources in the industry.”

Sharing real-world FDMEE deployment strategies.

Tony Scalese’s deep interest in data integration, and his desire to share his knowledge with as many interested parties as possible, led him to publish books on Oracle FDMEE. After a very successful first edition in 2016, he has just launched the second edition of The Definitive Guide to Oracle FDMEE. Since many organizations are now considering or have begun migrating to the Cloud, the book provides a deeper understanding of FDMEE by informing readers about such topics as batch automation, Cloud & hybrid integration, and data synchronization between EPM products.

“FDMEE can integrate not only with on-premise applications, but also Oracle EPM Software as a Service (SaaS) Cloud Service offering,” says Tony. “It provides the foundation for Cloud Data Management and Integrations which are embedded in each of the EPM Cloud Services.  A deep understanding of FDMEE ensures that integrations built on-premise or in the Cloud function well and stand the test of time.”

Why Jython for FDMEE

Originally published here May 19, 2014 on ODTUG.com

Contributed by:
Tony Scalese, Integration Practice Director
Hyperion Certified
Oracle ACE
Edgewater Ranzal

In a recent release of the EPM stack, Oracle introduced Financial Data Quality Management, Enterprise Edition, or FDMEE. FDMEE was not entirely a new product but rather a rebranding of ERP Integrator (ERPi), which was released several years earlier. FDMEE actually represents the convergence of two products – FDM (now known as FDM Classic) and ERPi – and represents the best of both. Besides the obvious changes – a new user interface (UI) that is integrated into Workspace, Oracle Data Integrator (ODI) as its data engine, and direct integration with many Oracle and SAP ERP source systems – FDMEE introduced one rather significant change: Jython as its scripting language.

For organizations that have a significant investment in FDM Classic, a new scripting language likely represents one of the most daunting changes in the new data management tool known as FDMEE. Before we continue, let’s briefly talk about scripting in FDMEE. Customers face a choice with FDMEE scripting – VBScript or Jython. I have spoken with a number of customers who have asked, “Can’t I just stick with VBScript, because it’s very similar to the FDM scripting, which was basically VBScript?” The technical answer is, in most cases, yes, you likely could. The more thought-out answer is, “Have you considered what you are giving up by sticking with VBScript?” Well, that really isn’t an answer, is it?

Let’s take a moment to understand why I ask that question. Let’s consider at a high level the differences in these two languages. From Wikipedia: VBScript (Visual Basic Scripting Edition) is an Active Scripting language developed by Microsoft that is modeled on Visual Basic. It is designed as a “lightweight” language with a fast interpreter for use in a wide variety of Microsoft environments. Jython, successor of JPython, is an implementation of the Python programming language written in Java.

Take a moment to consider the Enterprise Performance Management (EPM) stack at Oracle. Have you noticed any trends over the past two to three years? In one release, Oracle rewrote the UI for HFM using the Oracle ADF framework. In another, Oracle removed all but the most basic functionality from the HFM Win32 client. In an upcoming release, HFM is planned to be platform agnostic, meaning it can run on virtually any operating system including Linux and UNIX. Have you heard about this nifty new appliance called Exalytics? My point in this trip down memory lane is that Oracle is clearly moving away from any reliance on Microsoft technology in its product stack. Any time I have said this, the question inevitably is asked: “Do you think we’ll still be able to run EPM on Windows servers?” My answer is a resounding YES. Oracle may not be the biggest fan of integrating Microsoft technology into its software solutions, but they are smart enough to understand that failing to support Windows as a server platform would lock them out of too large a share of the market. So breathe easily; I don’t see Oracle producing EPM software that can’t be deployed on Windows servers.

The EPM stack is moving toward becoming fully platform agnostic. Exalytics is, for those of you who are not familiar, an in-memory Linux or Solaris machine that delivers extreme performance for Business Intelligence and Enterprise Performance Management applications. At a very high level, these machines have an extraordinary amount of memory (RAM) that allows the entire database to be brought into memory. The result is incredible performance gains particularly for large applications.

There are at least two constants with technology. First, data continues to grow. The demand for more data to support business decisions is no exception. The other constant is that hardware continually improves while its cost continually drops. I can’t envision Exalytics being an exception to this. Today’s Exalytics machines often cost six figures, and that may not be an expense your organization can justify. However, in two to five years, your organization may require an Exalytics machine, and it may well be an expense you can justify.

Given this bit of background, let’s talk about why I firmly believe Jython is the better choice for your FDMEE application. As the EPM stack moves toward being platform agnostic, I believe that support for Microsoft technologies such as VBScript will slowly diminish. The application programming interface (API) will continue to be enriched for the Jython language set, while the API for VBScript will be less robust. Please keep in mind that this is just my prediction at this point. But Oracle is no different than any other organization that undertakes technology projects. They employ the same three levers that every project does – scope, time, and budget. As a customer of EPM, you have noticed the speed at which new releases have been deployed. To continue to support two sets of APIs within these accelerated development timelines will require one of the remaining levers to be “pulled.”

That leaves us the scope and budget levers. To maintain complete parity (scope) with a fixed timeline, the remaining lever is budget. Budget for any technology project is heavily correlated with headcount. Add more people to the project, and the cost goes up. As I said before, Oracle is no different than any other organization. Project costs must be justified. So the development head of FDMEE would need to justify to his senior management the need to add additional resources to support having two sets of APIs – one of which is specifically for Microsoft technologies. One can imagine how that conversation might go.

So we’re left with the scope lever. There are two APIs – one to support Jython (Python on Java) and a second to support VBScript (a Microsoft technology). Let’s not forget that Oracle owns Java. Which do you think wins? I hope that I have built a case to support my previous conjecture about the expected richness of the Jython API vs. the VBScript API.

Let’s say you believe my above prediction is wrong. That’s OK. Let’s focus on one key difference in these technologies – error handling. Throughout my years of developing scripts for FDM Classic, the recurring theme I heard from customers was: when a process fails, can the system alert me? The technical answer is, most likely. The practical answer is no. While in a VBScript routine I can leverage On Error Resume Next in conjunction with If Err.Number = 0, I would need to do this after every line of code, and that is simply not realistic. The best solution I have found is writing scripting operations to a log file that can then be reviewed to identify the point at which a script fails. While this approach has helped, it’s not nearly as elegant as true error handling like what is available in Jython.

Jython provides error handling through the use of the Except keyword. If you have ever written (not recorded) Excel VBA macros, you may be familiar with this functionality. In VBA, you would code On Error Goto ErrHandler and then have code within an ErrHandler section of the script that performs some operation in the event of an error. Within Jython, there is a similar, albeit more robust, concept with the Try – except keywords. For example:

def divide_numbers(x, y):
    try:
        return x / y
    except ZeroDivisionError:
        return 'You cannot divide by zero, try again'

In the above example, the except clause is used to handle division by zero. With Jython, you can have multiple except clauses to handle different anticipated failures in a process. Likewise, you can have a bare except clause as a catch-all for any unexpected failures, and a finally clause that runs whether or not an error occurs. A key piece of functionality with the except clause is the ability to capture the line in the script that caused the failure. This is a key improvement over VBScript.
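As a hedged sketch of those capabilities (the function name and messages are illustrative, not part of FDMEE's API), multiple except clauses plus traceback capture might look like:

```python
import traceback

def safe_divide(x, y):
    try:
        return x / y
    except ZeroDivisionError:
        # traceback.format_exc() reports the exact script line that failed
        return "Division failed:\n" + traceback.format_exc()
    except TypeError:
        return "Inputs must be numbers"
    finally:
        pass  # a finally block always runs, whether or not an error occurred
```

Each except clause targets a specific anticipated failure, while the captured traceback pinpoints the offending line – something VBScript's On Error Resume Next cannot match.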

We could continue to delve further into the technical details of why Jython is a more robust language, but when I think about this conversation in the context of, “Why do I want to use Jython instead of VBScript for my application?” I think the above arguments are compelling on their own. If you are interested in learning more about Jython scripting for FDMEE, please attend my session at Kscope14:

Jython Scripting in FDMEE: It’s Not that Scary on Tuesday, June 24, at 11:15 AM.

Can FDM Do That?

Most everyone who uses or has seen a demo of Oracle Hyperion Financial Data Quality Management (“FDM”) knows the basic functionality that FDM provides – mapping, data loading, data validation and financial statement certification.  But FDM is so much more than the basics. 

FDM is a very open architecture product readily supporting advanced customization.  As I highlighted in my last blog post, the workflow process (Import → Validate → Export → Check) can be fully automated and executed on a scheduled basis using either FDM’s internal scheduling component or any Windows batch-based scheduling tool that an organization prefers.  But that’s just the tip of the proverbial iceberg. 
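As a rough, hedged sketch (this is not FDM's actual batch API – `run_step` stands in for whatever invokes each workflow step, such as a Windows batch call), the automated workflow could be wrapped like this:

```python
# The four FDM workflow steps, executed in order; halt as soon as one fails.
STEPS = ["Import", "Validate", "Export", "Check"]

def run_workflow(run_step):
    """run_step is a callable taking a step name and returning 0 on success."""
    for step in STEPS:
        if run_step(step) != 0:
            raise RuntimeError("FDM workflow halted at step: " + step)
```

A scheduler – FDM's internal component or any Windows batch tool – would simply invoke this wrapper on the desired cadence.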

Any organization that has recently experienced a Hyperion product upgrade – for example, System 9 to Fusion or Enterprise to HFM – knows the pain of revalidating years of financial data.  This exercise can easily take weeks.  Not only is this process time consuming, it’s tedious and often prone to error.  More importantly, data validation can be one of the biggest risks to a project.  The need to improve seems obvious. 

To address this opportunity, we developed a custom solution that leverages HFM and FDM’s advanced features, custom scripts and batch loading. The benefits are substantial – literally tens of thousands of data points can be validated (and re-validated) in minutes – with 100% data accuracy.  This process is easily extendable not only to other Oracle/Hyperion products like Planning & Essbase but potentially to other data stores.

The benefits of this process may be obvious but let’s take a moment to think about them:

  • 100% Data Accuracy – How valuable is this to your organization in the current economic and financial market climate?  The cost of restated financials is far too great to fathom – potential for government fines, reduced shareholder equity and even loss of one’s job.
  • Shorten Implementation Timelines – How nice would it be for your project to come in on time or early?  Using this solution, you can realistically trim weeks if not months out of a project timeline. 
  • Reduced Implementation Costs – Let’s face it, in this economy, every dollar needs to pay dividends.  Whether you choose to leverage a consultant, temp, intern or internal resource to validate your data, there is a cost associated with it.  Reducing the time associated with this activity will reduce your project cost. 

I invite you to check back often as I’ll continue to discuss “outside the box” solutions that can add significant ROI to your FDM investment.   

Contributed by:
Tony Scalese, FDM Design Lead
Hyperion Certified Consultant – HFM
Ranzal & Associates

Adding Drill Through to your Hyperion Implementation – Part 2

So I have been told I blog with the frequency of a solar eclipse, and with respect to this two-part series, several interested parties via e-mail have indicated I have done the equivalent of a network TV season-ending episode – “just wait the whole summer, folks, for the answer to these questions”. Thanks to all of you who “gently” pinged me to remind me to complete the blog.


So for example, take the following view one might see at the bottom level (level zero, or base level members) of an Essbase cube:


Notice the accrual for Ranzal services that is visible, and multiple transactions are available at the weekly level. A further explosion into AP would reveal the identity of the vendors, and other relevant info looking something like this: 


So as can be seen from the above example, relevant data from multiple tables is required for the drill through view. For the accruals that are made, there is relevant information in the journal descriptions; for the AP system, there is relevant vendor detail. The solution to this drill through is the creation of a view (called VendorFact in the above schematic) that is a union of the two data sources, filtering on data source. The query might look something like this:

/**Begin pseudo-code***/

SELECT Company, Dept, FY, Per, Acct, Source, JournalID, JournalDesc as Desc,
       PostingDate as MyDate, '00000' as Vendor_ID, '00000' as Doc_Nbr,
       '0' as Line_Nbr, PostedBaseAmt as Amt
FROM   GL_Trans
WHERE  Source <> 'AP'

UNION ALL

SELECT Company, Dept, Year(Trans_Date) as FY, Month(Trans_Date) as Per, Acct,
       'AP' as Source, 'NONE' as JournalID, LineDesc as Desc, Trans_Date as MyDate,
       Vendor_ID, Doc_Nbr, Line_Nbr, Trans_Amt as Amt
FROM   AP_Trans

/**End pseudo-code***/

The following view might result:


With the required view in place, a drill through query could then be created in Hyperion Web Analysis, using the relevant GL chart fields (Company, Department, Fiscal Year, Fiscal_Period, and Account) as filters in the selection query as follows:

SELECT * from VendorFact
Where Company = [CompanyToken] and Dept = [DeptToken] and Account = [AccountToken] and FY = [FYToken] and Per = [Month]

Once this drill through query had been created, a drill-link could be added as a menu item on the Summary Expense analyzer view that was directly querying the Essbase database. When the user had drilled down to the bottom of the cube, the next action would open up the new Web Analysis report, pass the relevant parameters to the query, and display a relational grid like the one above.

To give you a flavor of what this might look like, check out the example below from another type of application…

Step 1 – Create the Summary “Essbase” View, and establish Drill Links to target detail report

  • The report is set up so that all dimensions except for currency must be selected at the lowest level. This is done as a sample and to constrain the report so that it returns only expected rows.
  • To drill through, first select data down to level 0 for all dimensions. Next double click the actual data cell you want to drill into.
  • The drill through report will be initialized with the parameters passed. If dimensions are not at lowest level, a “no data available” message will appear on the drill through.
  • Drill Link Options Configuration
    This is the configuration that points the report to the Drill Through Report. It is accessed by opening the report, right clicking in a cell, and selecting “Drill” and “Drill Link Options…”


  • The drill link options page pops up. You specify the name of the report you want to open, and what to pass as the “WHERE” clause to the next report.


Step 2 – Create the detailed Web Analysis view accessing the relational data

  • The drill through report is accessed by the FinPlan report based on the dimensionality that is picked. There’s a 10,000 row limit on drill-throughs, though any set of detailed transactions that can be drilled to should never exceed even a few hundred records. If this setting were not enabled, opening the drill through report directly would fail as too many records would be loaded.
  • SQL Query Configuration
    This screen is accessed by right clicking on the Drill Through Report Grid and selecting “Edit Query…”
  • The JDBC-ODBC bridge driver is used to connect back to the Hyperion data mart housing the drill through data. Drill through users will leverage a shared drill through connection. The row limit and fetch size are also configured here.


  • Click the “Query Builder” button in the screen above and another dialog box opens. This is where the drill through mappings are configured. Click Mappings and notice that all dimensions are configured to their drill through counterparts in the vw_GL_Transactions_DT view.


  • When executing the drill through, an output like this can be generated.


  • As a final note, in constructing the view, particular care must be paid to the structure of the member names in the Essbase outline. For example, core chart fields such as Departments and Accounts are often prefixed or suffixed to make them unique (e.g. instead of “000610” one might have “Dept_000610”). In constructing the drill through view, the member names must match those exactly as defined in Essbase in order for the filtering aspects of the queries to execute properly.
  • Other Drill Through Choices
    So one problem with Essbase is that it has multiple ways to do everything. For example, how many ways are there to load data? (Answer: For block storage cubes, flat file via rules file; sql interface with rules file; EIS data load via ODB connection; lock and send via Excel; lock and send via smartview; planning web forms; FDM adapter; DIM adapter; ODI apdater; HAL adapter; I am sure I missed a few). There are 2 command line automation interfaces for goodness sake (MaxL and Esscmd). Transaction drill through is no different. Approach wise, the one consistent thought is this: never drill back to the live transaction tables. Ok, so never is a pretty strong word, but let’s just say as a general rule of thumb, doing so is not a good idea for the following reasons: •
  • Your Essbase cube is a snapshot of data from a load. Querying a live database for open periods implies the sum of your details may not equal your account balances.
  • Performance – transaction databases are optimized for write-based operations and are highly normalized. As we have defined in the above example, creating denormalized views for specific queries will make your DBAs and end users happier.
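The member-name caveat above can be sketched as a pair of helper transforms (the `Dept_` prefix follows the document's own "Dept_000610" example; the function names are illustrative):

```python
def to_essbase_member(prefix, raw_value):
    # e.g. GL department "000610" becomes outline member "Dept_000610"
    return prefix + raw_value

def to_chart_field(prefix, member):
    # strip the prefix when passing drill-through filters back to the GL view
    return member[len(prefix):] if member.startswith(prefix) else member
```

Applying transforms like these consistently on both sides keeps the drill-through view's filters aligned with the Essbase outline.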

Now, technology gets better (both hardware and software), and with some of the new out-of-the-box items coming from Oracle, EPM apps are becoming more real time, so the rules above won’t always apply; for now, though, they are a good starting point.

In general when deciding on drill through options, you want to ask the following questions:

  • What products do I need to drill through from?
         o Web Analysis
         o Interactive Reporting
         o Financial Reports
         o Excel Add-In
         o Smart View
         o OBIEE
  • What is the type of content I want to drill through to?
         o Transactions in a relational database
         o Some other content served up via a URL (for example, very clever way to serve up document images)
  • What level of the source cube do I need to drill through from?
         o Either level zero, or the top of a dimension
         o Every level of a non-ragged hierarchy
         o Custom groups and ragged alternate rollups
  • What release of the software are you using (Fusion vs. 9 vs. 7.x)?

For example, if you want to enable drill through from every product, and need to support ragged or standard hierarchies, you are looking at an Essbase Integration Services or Essbase Studio solution. Ranzal has some creative techniques for enabling drill through via EIS on cubes not built with EIS (thanks to our handy outline extractor), which makes it useful for Planning. But once your concern over “newness” has abated, Essbase Studio allows you to enable drill through the right way – build your cube first, then add the drill through later on. Not the other way around.

On the other hand, if you are looking at something quick and dirty, a Web Analysis or IR view in the short term can get you where you need to be.

Either way, Oracle, particularly with its continued emphasis on OBIEE/Essbase integration, is looking to provide more out of the box options to enable drill through. For example, if you haven’t seen it, check out the Ranzal webinar on Financial Data Management; the latest releases are geared to provide drill back from Planning to FDM, and then ultimately if you load from Oracle Financials, drill all the way back to the ERP.

This blog was meant to give you an idea of options and approaches for enabling drill through on your Hyperion application. While it’s great to have options, remember the golden rule – when you make your choice, know WHY you chose that path. Design is basically the optimization of an approach against one set of requirements over another.

Until next time, good luck.

Authored by:
Mike Killeen – Ranzal and Associates

For more than eleven years, Mike Killeen has scoped, designed, and implemented Planning & Essbase applications for Ranzal & Associates, an award winning, preferred consulting partner of Oracle Hyperion. Currently serving as practice director, Mike provides guidance to Hyperion product management, Ranzal customers, and internal consulting resources alike on best practices related to the implementation of Hyperion System 9. Mike is a certified Hyperion Professional, with certifications in Essbase, Planning, and HFM. He graduated cum laude from Cornell University with a degree in engineering, and received his MBA from the University of Connecticut.

BI for the BlackBerry Crowd

During the recent COLLABORATE 2009 Conference, Ranzal was selected to present a session designed around showing how to use the reporting tools from either traditional Hyperion or Oracle (formerly Siebel Analytics) and deliver content to smart phones.  The session started off with an overview of smart phones, methods of delivery to those phones and some potential pitfalls and considerations, such as what if a user loses their phone? What kind of security policies need to be in place?  Then a couple quick demonstrations in Hyperion were given which included a few tips and tricks on formatting.  This was all done using Interactive Reporting and Workspace.  Lastly, a couple quick demos in OBIEE Answers were provided to the attendees.

The demos were done in Hyperion version 9, although the content was pertinent for version 11 as well.  Some of the common themes in the demonstrations were a focus on timely information (i.e., there is no need to send a monthly report to a cell phone) and a focus on exceptions instead of a whole data set (cell phone reporting should focus on a problem that needs attention rather than delivering a whole dashboard or report).

A copy of the presentation from COLLABORATE 2009 can be found at

Adding Drill Through to Your Hyperion Implementation

This is part one of a two part blog where I will discuss a general overview of how within the framework of a Hyperion Planning or Hyperion Essbase implementation, organizations may configure Hyperion to provide drill through to transaction details. In this first blog, I will discuss implementation background, and part two will provide information related to implementation approaches.

The final output of most Hyperion Planning implementations is the generation of a Financial Plan.  While various “sub-ledger” plan types may exist to support the generation of this plan (Revenues by Customer and Product; Salary Expense By Employee; Capital Expenditures by Asset Class), the fundamental level of granularity is usually equivalent to the various chart fields in the organization’s general ledger, summarized on a MTD basis.  For certain ERPs, this corresponds to the GL Account Balances table, or an aggregate of the transaction details. Depending on your ERP package, this can be one of the following:

  • Lawson – gl_trans
  • Oracle E-Business – gl_je_lines
  • Peoplesoft – PS_Ledger
  • JD Edwards – F0902
  • SAP R/3 –
    o Profit Center Accounting – GLPCT
    o Cost Center Accounting – COSP
  • Microsoft Great Plains – GL11110

For financial implementations, you are usually looking at the following standard chart fields:

  • Account
  • Fiscal Period
  • Fiscal Year
  • Currency
  • Legal Entity or Company
  • Management Chart Fields
    o Department or Cost Center
    o Product or Profit Center
    o Geography or Location

Inherent within the Hyperion environment is an ability to drill to a lower level of detail, usually to answer a question.  This type of functionality manifests itself in several forms, and the following terminology is used:

Drill Down:  Drilling from one level to another of more detail in one dimension of a cube (for example, expanding quarters into months)
Drill Up: Drill from one level to another of less detail in one dimension of a cube (for example, collapsing quarters into year)
Drill Thru: Drilling out of an Essbase cube into a RDBMS source system (for example, expanding months into days)
Drill Across: Drilling from one Essbase cube into another, normally to explode dimensionality that exists there (for example, on a MTD basis, while looking at Revenue in the Financial Cube, drilling across to expose Customer Detail in a Customer Cube)

The actual drill process from a user interface could range from simply expanding the rows on the user’s grid (as in a drill down) to launching a new grid that creates a new point of view, as in a drill-thru or drill-across operation.

Much like the relationship between Planning sub-types and the primary financial cube, there is a relationship between the General Ledger and the various sub-ledgers that feed it.  Primarily, detailed information is summarized prior to being posted. For example, individual employee payroll data may get aggregated by department and posted as one line into the ledger. Or Ship-To Customer and individual product SKUs may get aggregated into Channels and Profit Centers prior to being posted. In any event, when talking about exposing additional granularity in Financial cubes, it is critical to define what types of details the user wants to see, as there may be multiple steps in that process.  Because of this, it is very rare that a simple drill through into a general ledger will yield all of the results a user wants to see, since many of the results, while at least expanded from a monthly to a daily basis, will still be summarized across the other dimensions a user may wish to see (such as Vendor). This leads to the design of two new components:

  1. Delivery of additional operational cubes, that expose details (such as Employee, Customer, and Vendor) for Drill-Across operations
  2. Development of custom-data marts that create hybrid views integrating different levels of granularity of multiple data sources.

The overall schematic might look something like the workflow below:

 Hyperion Implementation


In the example above, a new vendor “view” or table has been created to support the drill through view requested by the user, in a top down fashion.  Notice that the table actually pulls from two sources:

  1. GL Trans – the general ledger transaction table where individual journals are posted
  2. The Accounts payable table, where individual transactions exist

The need for both tables occurs because accruals may be made to the accounts in question, and those entries are generally made directly in the GL Transaction table.  So for the detailed drill through table to tie to the financials, it must contain both elements. However, since the AP element exists in both (detailed in AP, summarized in GL), special care must be taken to avoid double-counting the entries that come back.  Since many GLs contain a field that identifies the source system (AP, AR, etc.), or have a journal mask to accomplish the same, this is usually readily available.
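A minimal sketch of that double-counting safeguard (the data and column names are invented for illustration): keep GL journal rows except those sourced from AP, then append the AP detail, so the combined detail still ties to the GL balance:

```python
gl_rows = [
    {"source": "AP", "acct": "6000", "amt": 500.0},  # summarized AP posting
    {"source": "GL", "acct": "6000", "amt": 120.0},  # manual accrual journal
]
ap_rows = [
    {"vendor": "V001", "acct": "6000", "amt": 300.0},
    {"vendor": "V002", "acct": "6000", "amt": 200.0},
]

# Exclude AP-sourced GL rows, then substitute the detailed AP transactions
drill_rows = [r for r in gl_rows if r["source"] != "AP"] + ap_rows

detail_total = sum(r["amt"] for r in drill_rows)
gl_balance = sum(r["amt"] for r in gl_rows)
```

The source-system filter does exactly what the VendorFact union view does: the AP detail replaces, rather than duplicates, the summarized AP posting.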

In my next blog (part 2), I will discuss practical techniques for implementing the above.

Mike Killeen – Ranzal and Associates


Can FDM reduce my monthly close cycle?

How many varieties of gold fish do you have?


With many of our clients, Oracle / Hyperion Financial Data Quality Management (FDM) is the preferred data loading mechanism for Hyperion Enterprise and Hyperion Financial Management (HFM) owing largely to its intuitive end-user interface. We find that FDM users will often refer to their progress in terms of how many gold fish they have. And no, these aren’t the Pepperidge Farm variety.

While the data load process is very straightforward, it does require a user to extract a trial balance from the general ledger, import the file into the FDM repository, fix any mapping issues and then export the data to Hyperion Enterprise/HFM. This means that a close cycle is only as timely as your least efficient business unit.

We have worked with clients to leverage the capabilities of FDM to provide an elegant solution intended to drive out inefficiencies in your company’s close cycle. FDM is configured to extract trial balance data directly from a general ledger, load it into FDM’s repository, map unmapped members to a suspense account/entity, and then load and consolidate in Enterprise/HFM. Leveraging Ranzal & Associates’ expertise with FDM, this solution can help your close cycle become more streamlined and managed by exception – potentially removing days from your existing close cycle.
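A minimal sketch of that map-by-exception idea (the mapping table and the suspense member name are invented for illustration):

```python
# Map source GL accounts to target members, routing anything unmapped to
# a suspense member so the load never stalls on a mapping gap.
SUSPENSE = "SUSPENSE"  # hypothetical suspense account/entity

account_map = {"4000": "Revenue", "5000": "COGS"}  # illustrative mappings

def map_member(source_member, mapping):
    # Fall back to suspense instead of failing, so the close proceeds
    # and the exceptions can be reviewed afterwards.
    return mapping.get(source_member, SUSPENSE)

trial_balance = [("4000", 1200.0), ("5000", -700.0), ("9999", -12.5)]
mapped = [(map_member(acct, account_map), amt) for acct, amt in trial_balance]
exceptions = [row for row in mapped if row[0] == SUSPENSE]
print(exceptions)  # [('SUSPENSE', -12.5)] -- only this balance needs review
```

The point of the pattern is that the load always completes; the close team then works only the suspense balances rather than chasing every business unit.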

Contributed by:
Tony Scalese, FDM Design Lead
Hyperion Certified Consultant – HFM
Ranzal & Associates

What is Temporal translation and how does HFM handle that?

Hyper-inflationary translation means you must use what is called the ‘Temporal’ method, as opposed to the common ‘Current’ method (which is available out of the box).

Under the temporal rate method, the objective is to measure each subsidiary transaction as though the transaction had been made by the parent. Monetary items (e.g. cash, receivables, inventories carried at market, payables, and long-term debt) are remeasured using the current exchange rate. Other items (e.g. prepaid expenses, inventories carried at cost, fixed assets, and stock) are remeasured using historical exchange rates.

The Temporal Method:

  • Monetary assets and liabilities (cash, liquid securities, accounts payable and receivable, debt) are converted at the current rate of exchange – the default rates in the system.
  • Nonmonetary assets and liabilities (fixed assets and inventory) are translated at historical rates, so no accounting capital gains or losses arise from these items – in HE, I would do this via USD overrides.
  • Income statement items are converted at the average exchange rate for the accounting period unless, as in the case of depreciation or cost of inventory sold, they are directly associated with nonmonetary items; in that case, the historical cost is used for the translation – same as above, using overrides.
  • Dividends and other distributions are converted at the current rate of exchange at the time they were paid.
  • Under the temporal rate method, the net gain does go into the consolidated income statement, but since no fluctuations in the value of fixed assets occur, the effect on net income is moderated. Because the temporal rate method uses different exchange rates for different account items, there is a problem in the consistency of the accounts. This is a rule you would add to impact the expenses – I have seen this applied to other expenses or other operating expenses; clients generally know where they want this impact.

I can’t imagine doing this with rates in HE. You would potentially need a rate for each entity.
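As a worked sketch of the remeasurement mechanics (accounts, amounts, and rates all invented for illustration), monetary items use the current rate while nonmonetary items keep their historical rates:

```python
# Temporal-method remeasurement sketch: monetary items are remeasured at
# the current rate; nonmonetary items keep the historical rate in effect
# when they were acquired. All figures are illustrative.
CURRENT_RATE = 2.0  # local currency units per USD at the balance-sheet date

accounts = [
    # (name, local amount, is_monetary, historical rate)
    ("Cash",         1000.0, True,  None),
    ("Receivables",   500.0, True,  None),
    ("Fixed assets", 3000.0, False, 1.5),  # rate when the assets were bought
    ("Inventory",     600.0, False, 1.2),  # carried at cost
]

def remeasure(amount, is_monetary, hist_rate):
    rate = CURRENT_RATE if is_monetary else hist_rate
    return amount / rate  # convert local currency into USD

usd = {name: round(remeasure(amt, mon, hist), 2)
       for name, amt, mon, hist in accounts}
print(usd)
```

Note how each nonmonetary line needs its own historical rate, which is exactly why per-entity (and per-acquisition) rate overrides become necessary.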

Contributed by:
Peter Fugere, Practice Director
HFM & HE Hyperion Certified
Ranzal & Associates

What are PVA and VAL? And how should I perform translation per GAAP in HFM?

There are two standard methods of translating an account: PVA and VAL.

  • PVA: This method retrieves the periodic value for an account and multiplies that by the exchange rate for that period to obtain the translated amount for the current period. The translated amount for the current period is then added to the prior period’s year-to-date (YTD) translated balance to calculate the YTD translated amount. The PVA method is usually applied to Income, Expense, and Flow type accounts.
  • VAL: This method retrieves the YTD value in an account and multiplies that by the exchange rate. The VAL method is usually applied to Asset, Liability, and Balance type accounts.
An example of PVA and VAL is as follows:

                                    JAN     FEB     MAR
    INCOME ACCT (YTD Amounts)       100     300     600
    Exchange Rate                   0.5     0.6     0.7
    Translated Amt After Consol     200     533     962

    The PVA method takes the periodic value in the INCOME ACCT and divides that by the exchange rate for that period. It then adds this result to the translated value from the prior period. It does not change the exchange rate or look to the prior period’s exchange rate at all. In the example above, the calculations are as follows:

    JAN: 100 / 0.5 = 200
    FEB: (300 – 100) = 200; 200 / 0.6 = 333; 333 + 200 (JAN) = 533
    MAR: (600 – 300) = 300; 300 / 0.7 = 429; 429 + 533 = 962

                                    JAN     FEB     MAR
    INCOME ACCT (YTD Amounts)       100     300     600
    Exchange Rate                   0.5     0.6     0.7
    Translated Amt After Consol     200     500     857

    Using the VAL method instead of PVA, the results would be:

    JAN: 100 / 0.5 = 200
    FEB: 300 / 0.6 = 500
    MAR: 600 / 0.7 = 857
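The two methods can be sketched in a few lines of Python (a sketch only, using the YTD balances and rates from the example, with rates applied by division to match the arithmetic above):

```python
# PVA vs. VAL translation of the example income account.
ytd = {"JAN": 100.0, "FEB": 300.0, "MAR": 600.0}
rates = {"JAN": 0.5, "FEB": 0.6, "MAR": 0.7}
months = ["JAN", "FEB", "MAR"]

def translate_val(ytd, rates):
    # VAL: translate the full YTD balance at each period's rate.
    return {m: round(ytd[m] / rates[m]) for m in months}

def translate_pva(ytd, rates):
    # PVA: translate only each period's movement, then accumulate.
    out, prior_ytd, running = {}, 0.0, 0.0
    for m in months:
        periodic = ytd[m] - prior_ytd
        running += periodic / rates[m]
        out[m] = round(running)
        prior_ytd = ytd[m]
    return out

print(translate_pva(ytd, rates))  # {'JAN': 200, 'FEB': 533, 'MAR': 962}
print(translate_val(ytd, rates))  # {'JAN': 200, 'FEB': 500, 'MAR': 857}
```

The difference is visible in FEB and MAR: VAL re-translates the entire balance at the latest rate, while PVA locks in each period’s movement at that period’s rate.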

    Contributed by:
    Peter Fugere, Practice Director
    HFM & HE Hyperion Certified
    Ranzal & Associates

    Common Essbase & Planning Problems & Resolutions: The Essbase Add-In

    Over the last 11 years of consulting, I have seen certain issues in the implementation of Planning and Essbase applications that are so common that I have decided to list them here, along with their resolutions, in the hopes of sparing you all some undue pain. Today we will focus on a set of issues that probably affects the greatest number of users – the Essbase Add-In. Personally, I think the native Excel Essbase Add-In is still one of the most compelling features of Essbase – which is probably why there hasn’t been a mass migration of older Essbase clients to SmartView (yet). In the words of Mr. Heston – “Not until they pry it from my cold dead hands…”. I sort of feel the same way – so with that in mind, here we go!

    Problem # 1 – More than 1 Instance of Excel Open
    The Essbase Add-In technically supports only one instance of Excel open at a given point in time. If you have ever had issues where you retrieve data but nothing comes back, there is a good shot that this is the cause. As a matter of fact, if anyone ever tells me they have ANY problem with the add-in, this is the first thing I tell them to look at.
    Since this is kind of tough to tell given how workbooks now appear as separate “programs” in your desktop’s taskbar, the clearest thing to do is to press <CTRL>-<ALT>-<DEL>, display Task Manager, click the Processes tab, and sort in ascending order by process name. If you see more than one Excel.exe process listed, the add-in may appear to be working, but you will get flaky results (for example, retrieving data and nothing returning). The solution: terminate one of the Excel processes.

    Problem # 2 – Conflicts with Other Add-Ins
    What other add-ins do you need besides Essbase? (Rhetorical question.) Well, how about SmartView, or PeopleSoft’s Excel add-in for reporting? Even a Hyperion Enterprise Retrieve… I’ve had success making the various add-ins work together, but some functionality (such as double-clicking) will cause conflicts between them. To resolve this, you can certainly load and unload the various add-ins as you need them, or, on the Essbase side, there is a great option made available in the 7.x release called “Limit to Connected Sheets” – it basically prevents Essbase from taking over your mouse buttons until you manually connect a spreadsheet. Note this also has the added benefit of giving Excel users their native double-click-to-edit-cell and right-click pick-from-list functionality back. An always-on setting for me…

    Problem # 3 – Where did my add-in go?
    This next problem I am bringing up only because it’s personally happened to me so many times. I call this the “I changed my mind” problem. What happens is the following:

    • You are performing an ad-hoc analysis in a spreadsheet against Essbase
    • You are done, and click X to close out Excel
    • You are prompted to Save Changes for any open workbooks, and instead of saying Yes or No, you click Cancel, which stops the closing of Excel
    • You are no longer able to run retrieval code or any add-in function

    What happened? Clicking X unloads the add-in immediately, despite the fact that, on exiting the workbook, you changed your mind.

    I have also seen this in a number of templates with VBA code – generally, if you start getting negative numbers as return codes, this is an indication of that problem: the add-in is not loaded.

    The solution is to go to your Tools->Add-Ins menu, deselect the add-in, and close the dialog; then go back to your Tools->Add-Ins menu and reselect the add-in, which will load it back up.

    Problem # 4 – Why are my retrievals so slow?
    Recently at ODTUG Kaleidoscope in New Orleans I gave a presentation on optimizing Essbase retrievals – I got a lot of great feedback from people, specifically related to a number of the tips not being your standard Essbase Common knowledge (you know, your classic IF on dense, FIX on sparse, which isn’t always true anyways). In any event, overall retrieval time is a combination of the server performance, the network performance, and the client performance. Assuming you are an end user, there isn’t much you can do about network or server performance, but there is a lot you can do about your own template configuration on the client, specifically including the following:

    • TIP – Minimize the Usage of EssCell

      I have seen former Enterprise users and even HFM users who work with the Essbase add-in fall in love with this Excel function that retrieves data into an individual cell in an Essbase grid. While OK for one or two values, each usage of this function executes a separate call to the Essbase server. You are better off doing a hidden retrieval sheet with one retrieval from Essbase, and then referring to the values from that sheet.


    • TIP – Minimize the Usage of Preserve Formulas

      When you preserve formulas on an Essbase retrieval, Essbase needs to check the value of each cell in the grid to see if a formula is there before retrieving it. On particularly large retrievals, this takes additional time.

    • An alternative technique is to make use of locking/unlocking cells, and protecting the worksheet. In the attached retrieval sample from Sample:Basic below, % of Sales is a calculated column with an Excel formula.


    • To “preserve formulas” without using the Essbase feature, the steps you would take would be as follows:

    Step 1 – Select all cells in the grid, and unlock the cells

    All cells in a spreadsheet by default are tagged as locked.

    Step 2 – Highlight the Columns or Rows with Formulas, and Only Lock Those

    Step 3 – Protect the Worksheet

    Now, all of the cells you want Essbase to retrieve into will be able to overwrite the values since the cells are not locked, and the cells with formulas will be protected.


    Note that for a retrieval this size, the above technique is really not necessary, but if you are using the cascade function, or doing batch workbook retrievals of spreadsheets with hundreds of rows, even a 4-5 second improvement is worthwhile, particularly when you amplify that over the course of multiple retrievals.

    • TIP – Minimize the Usage of “Unknown Members” in Retrieval Ranges

      In the example above, I added a calculated member called % of Sales. This is not an Essbase member, and as a result, Essbase will let me know about it.

    I can work around this annoying message by turning off the option “Display Unknown Members” in the Essbase Options dialog box.

    Now when I run my retrieval, I no longer get a message displayed on my client workstation. Problem solved! Well, it is for me, but all of those messages are still occurring – they just aren’t being written to the client workstation. They still appear in the Essbase server log.

    Imagine a spreadsheet with hundreds of labels that don’t belong – that’s hundreds of additional lines on every retrieval where the Essbase server is writing really useful messages to the log file instead of processing your query. For something this small, it’s not an issue, but on larger retrievals, I have seen this have an effect of a few seconds per retrieval.

    An option to address this issue is to make use of labels in the Excel grids.

    Now when the retrieval runs, because the value isn’t stored in a cell, problem solved on both client and server. This also has the added benefit of allowing you to keep “Display Unknown Members” on so in the event something happens that you do care about, say a member name getting renamed and blowing up your retrieval, you will be aware of it.

    • TIP – Turn off Enable Flashback

      If you are doing cascade retrieves or VBA batch retrievals in the Essbase add-in, this setting is unnecessary, because it uses memory to store only the previous operation, which won’t necessarily allow you to go back more than one sheet. In really early versions of the add-in, we saw lots of issues with both this and Preserve Formulas turned on.

    Closing Thoughts…

    In the next installment of this series, I will spend more time focusing on the # 1 thing you can do to improve retrieval performance, specifically changing the orientation of your retrievals themselves, and selective usage of attribute dimensions. That should be a blog by itself!

    Mike Killeen – Ranzal and Associates
