The Data Governance Triple Crown

A few weeks ago, those who follow horse racing witnessed a historic event. The racehorse Justify captured the Triple Crown by winning the Belmont Stakes following earlier victories in the Kentucky Derby and Preakness Stakes. Justify became only the 13th horse in history to capture the Triple Crown, and the second to do so in the last four years (American Pharoah captured the honor in 2015). Interesting side note: both Justify and American Pharoah were trained by Bob Baffert. Why does that matter? Because he's a fellow Arizona native and University of Arizona alumnus, that's why! Bear Down!

While it may be a stretch, the concept of a "triple crown" of sorts has been on my mind lately as it relates to the Oracle Enterprise Performance Management (EPM) projects I've been working on involving Oracle Data Relationship Management (DRM) and Data Relationship Governance (DRG). Many people are familiar with the DRG module of the DRM product, but when the tool is coupled with two other critical components, you are well on your way to capturing the Data Governance Triple Crown.

1.    Tool – Data Relationship Governance

As you may know, DRG is a module of the DRM product and provides a governance framework for maintaining your DRM master data. DRG includes functionality such as workflows, approvals, email notifications, and separation of duties (to prevent someone from approving their own requests). Workflows are often structured around dimension maintenance and may include requests like "Add Account," "Update Account," or "Move Account." The workflow then guides the requester to select tasks and complete fields on a data entry form. Once submitted, the request enters optional enrichment stages where additional detail and context are added before the request is finally committed and the relevant DRM structures are updated.

Here are just a few of the key features in DRG:

  • Requests can be entered interactively or via bulk upload files
  • Documents (such as supporting request documentation, emails, or policies) can be attached to requests
  • Comments/supporting narrative can be included
  • Requests can be pushed back to a prior stage, approved, or rejected
  • Requests can generate email notifications to approvers and/or participants in a workflow
  • Requests can include validations, calculated fields, and conditional criteria to enter or bypass specific stages in the workflow

While I could go on and on about DRG, I’ve noticed a DRG implementation is most effective when paired with two other components.

2.    Process – Data Governance Program

In my experience, DRG implementations are most successful when bundled into a broader data governance program. Data governance programs bring together the Tool (DRG), the People (data stewards, data specialists, data governance council), and the Process (process flows, metrics, and standards).

Key facets to an effective data governance program include:

  • Executive sponsorship
  • Data Governance Council
  • Clear Roles and Responsibilities
  • Standards (metrics, definitions, process flows)
  • Authority and Accountability

Data governance programs are not easy! The change management aspect of implementing effective data governance cannot be overstated. There will be natural resistance, pushback, and challenges to any type of change, and data governance initiatives are no exception. Data governance implementations require patience and perseverance, and at times, even a bit of the "carrot and stick" approach. As a result, we have found the following steps to be crucial to getting your data governance program off the ground:

    1. Define Charter Team and Responsibilities
    2. Define the Mission Statement
    3. Define the High-Level Scope
    4. Define the Terminology and Standards
    5. Define the Current State Overview
    6. Define the Future State Vision
    7. Define the Draft Phased Approach
    8. Prepare the Project Charter
    9. Present the Project Charter for Executive Approval
    10. Ensure Executive Support

While there is far more to a data governance program than can be covered in this blog, I hope you appreciate the importance of People and Process in a data governance initiative and do not focus only on the Tool.

3.    Integration – DRM to External Systems

The third and final component to effective data governance, after the Tool and the Process, is integration with external systems. This allows DRM to truly become the master data hub in your company's ecosystem and to systematically push master data (which could include trees/hierarchies, base members, mappings, or all of the above) to both upstream and downstream systems.

By leveraging DRM’s robust integration capabilities and adding in some custom SQL or ETL integration as needed, DRM can produce master data in various forms (flat files, SQL tables, web services, external commits) for consumption by external applications. And these integrations can be run on-demand or scheduled.
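To make the custom SQL flavor of that integration a little more concrete, here is a minimal sketch of a downstream job that reads a DRM hierarchy export over JDBC and writes it to a flat file for another system. The table and column names (DRM_ACCOUNT_EXPORT, PARENT_NODE, CHILD_NODE, DESCRIPTION) are purely illustrative – your DRM export book defines the actual structure and target – and the sketch assumes the Oracle JDBC driver is on the classpath.

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DrmExportReader {
    public static void main(String[] args) throws Exception {
        // Hypothetical staging table populated by a DRM export book; adjust to your schema.
        String sql = "SELECT PARENT_NODE, CHILD_NODE, DESCRIPTION "
                   + "FROM DRM_ACCOUNT_EXPORT ORDER BY PARENT_NODE";

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB", "drm_stage", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql);
             PrintWriter out = new PrintWriter("account_hierarchy.csv")) {

            out.println("Parent,Child,Description");
            while (rs.next()) {
                // One parent/child record per row for the downstream system to consume.
                out.printf("%s,%s,%s%n",
                    rs.getString("PARENT_NODE"),
                    rs.getString("CHILD_NODE"),
                    rs.getString("DESCRIPTION"));
            }
        }
    }
}

A job like this can be scheduled alongside the DRM export books themselves so the flat file is always produced from a fresh export.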

Summary

So there you have it. Three critical components to effective data governance: a good tool (DRG), a robust process (data governance program), and automated integration (with DRM as the hub).

Are any of these components effective in their own right? Certainly. Each adds value on its own and can be implemented standalone. But when all three components are implemented in conjunction, the whole is definitely greater than the sum of the parts. Each component presents its own set of challenges and requires close collaboration with both technical and business personnel at the customer. And executive sponsorship and buy-in are absolutely vital to managing and overcoming the inevitable change management challenges. It ain't easy, but like the saying goes, nothing worthwhile ever is, right?

I’d love to hear your thoughts on this topic along with any best practices, lessons learned, or battle scars earned along the way. Feel free to connect with me on LinkedIn or Twitter.

Cloud Data Management (CDM) and Financial Data Quality Management Enterprise Edition (FDMEE): A Case Study in Working Together

Why buy Financial Data Quality Management Enterprise Edition (FDMEE) when Cloud Data Management (CDM) is free?  As outlined in my recent white paper – FDMEE vs. Cloud Data Management – there are myriad factors that can drive the decision.  This blog post highlights how one customer gained a highly flexible and automated solution for data and master data management with an on-premise deployment of FDMEE in conjunction with Cloud Data Management.

This customer adopted a pure Cloud strategy as it relates to Enterprise Performance Management (EPM), procuring subscriptions to Planning and Budgeting Cloud Service (PBCS), Financial Consolidation and Close Cloud Service (FCCS), and Account Reconciliation Cloud Service (ARCS). A diverse business, the customer has many unique operational systems with varying formats and charts of accounts. So far, no reason why Cloud Data Management (CDM) can't handle this requirement, right? This is what CDM does – uses import formats and maps to consume and transform data – right? Sure, but with caveats. Notice that I used the word consume and not extract. CDM does not provide the ability to link with on-premise systems to extract data. Additionally, flat file data extracts that lack a consistent structure often cannot be natively consumed by CDM.

In this case, data needs to be loaded each day from numerous sources to support daily operational reporting.  The systems are a blend of on-premise, hosted, and Cloud applications.  The customer requirement dictated that any on-premise system should be connected directly to eliminate the need for a flat file extract to be generated daily.  Additionally, the hosted and Cloud applications are very industry specific and, in some cases, provided by very niche vendors.  The ability to modify extract formats was cost prohibitive or simply not supported.  As a result, several of these data feeds were not consumable by CDM without preprocessing/modification.

In light of the above requirements, the customer procured and deployed FDMEE on-premise. The power of FDMEE allowed us to deploy a solution that connects directly to multiple on-premise systems and also consumes the flat file extracts from hosted and Cloud applications, including Excel files (not in the required FDMEE/CDM format) and XML. Because FDMEE on-premise supports scripting, we were able to greatly enrich the data integration cycle with full end-to-end automation: FTP download of hosted data, detection of data mapped to members not yet in PBCS or FCCS, dynamic setting of substitution variables based on the processing day, running calculations in PBCS, and sending email status alerts to report the success or failure of each data load cycle.
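As one small example of that enrichment, here is a sketch of how the "set a substitution variable from the processing day" step might look as a standalone call. It is modeled on the Planning REST API resource for substitution variables as I recall it; the URL, API version, payload fields, application name (PLANAPP), and credentials are all assumptions to verify against your pod's documentation before use.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Base64;

public class SetCurrentMonthSubVar {
    public static void main(String[] args) throws Exception {
        // Derive the current period from the processing day, e.g. "Jun" for a June run.
        String currPeriod = LocalDate.now().format(DateTimeFormatter.ofPattern("MMM"));

        String auth = "Basic " + Base64.getEncoder()
            .encodeToString("domain.user@example.com:password".getBytes());

        // Illustrative endpoint and payload; confirm the path and fields for your release.
        String body = "{\"items\":[{\"name\":\"CurrMonth\",\"value\":\"" + currPeriod
            + "\",\"planType\":\"ALL\"}]}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://your-pod.oraclecloud.com/HyperionPlanning/rest/v3"
                + "/applications/PLANAPP/substitutionvariables"))
            .header("Authorization", auth)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode() + ": " + response.body());
    }
}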

Although I am a huge FDMEE advocate, I recognize the value of Cloud Data Management and the benefits it provides in a case like this one.  This customer was one of just three participants in the Oracle Enterprise Data Management Cloud Service (EDMCS) program, which means they were able to use the software before it was publicly available – that is, before general availability (GA).  To participate in this program, one must accept the absence of certain features and functions in the software.  In return, the program allows the customer (and partner) to give Oracle development and product management valuable input about the software and, in some ways, drive which features are prioritized on the product roadmap.

EDMCS currently lacks native connections to FCCS, but this will change over time.  So how does CDM help with loading metadata to FCCS?  In a recent update to CDM, Oracle included the ability to import a flat file into CDM and load metadata to a registered target application such as PBCS or FCCS.  John Goodwin gives a detailed overview of the technical setup.

FDMEE and CDM have come together in this case to provide a fully automated data integration process and an automated master data integration process.  Within EDMCS, a Custom application type was created.  The required properties for FCCS were built and attached to the multiple dimensions being mastered, and flat file exports were generated for FCCS.  We knew we were going to use CDM to manage the master data load process, but we had a decision to make – do we leverage EPM Automate or FDMEE as our automation hub?

We chose FDMEE.  Why?  Simply because a lot of automation assets had already been developed in FDMEE that could readily be reused for this process, including execution of EPM Automate commands, a framework for leveraging the REST API (for PBCS and FCCS), and email alerting.  Additionally, we found the capabilities of EPM Automate to be somewhat limited.

For example, when you execute a CDM data load rule from EPM Automate, the process ID associated with the execution is not returned.  Why is that important?  Because in the event of a failure, I’d want to download the process log and attach it to the email so the user has information to address the issue.  Could I use the ListFiles command of EPM Automate to get the process log? Possibly, but it doesn’t account for potential concurrency, and I am not doing my job as a consultant if I build a process that can’t handle concurrent operations.  For reasons such as these, we leveraged EPM Automate when possible and the REST API as needed, and we wrapped it all together with an FDMEE process that could be executed on a scheduled basis or on demand simply by using the Script Execution functionality.
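To illustrate why the REST route mattered to us, here is a minimal sketch of kicking off a Data Management (CDM) data load rule and then checking its status by ID. It assumes the Data Management REST jobs resource as I remember it (/aif/rest/V1/jobs); the rule name, periods, credentials, and hard-coded job ID are placeholders, and in a real script you would parse the ID out of the submit response with a JSON library.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class RunCdmDataRule {
    private static final String JOBS = "https://your-pod.oraclecloud.com/aif/rest/V1/jobs";
    private static final String AUTH = "Basic " + Base64.getEncoder()
        .encodeToString("domain.user@example.com:password".getBytes());
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // Submit the data load rule. Field names mirror the Data Management REST API
        // as best I recall; verify them against the documentation for your release.
        String body = "{\"jobType\":\"DATARULE\",\"jobName\":\"FCCS_METADATA_RULE\","
            + "\"startPeriod\":\"Jun-18\",\"endPeriod\":\"Jun-18\","
            + "\"importMode\":\"REPLACE\",\"exportMode\":\"STORE_DATA\",\"fileName\":\"\"}";

        HttpResponse<String> submit = CLIENT.send(HttpRequest.newBuilder()
                .uri(URI.create(JOBS))
                .header("Authorization", AUTH)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build(),
            HttpResponse.BodyHandlers.ofString());

        // Unlike the EPM Automate command, the REST response carries the process (job) ID,
        // which is what lets us fetch the matching process log if the load fails.
        System.out.println("Submit response: " + submit.body());

        String jobId = "12345"; // placeholder – parse this from the JSON response above
        HttpResponse<String> status = CLIENT.send(HttpRequest.newBuilder()
                .uri(URI.create(JOBS + "/" + jobId))
                .header("Authorization", AUTH)
                .GET()
                .build(),
            HttpResponse.BodyHandlers.ofString());
        System.out.println("Status response: " + status.body());
    }
}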

Let’s review the end-to-end solution.  In EDMCS, metadata is maintained for PBCS and FCCS.  The metadata is extracted to a flat file (.csv) after maintenance is completed and saved to a network folder.  From FDMEE, the master data integration process is initiated to upload the metadata files to FCCS and PBCS.  Cloud Data Management data load rules are initialized to process the metadata extracts.  In the event of an error, the CDM process log is downloaded.  Finally, an email is generated to alert the administrator of the data integration process status.
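Our orchestration for those steps lives in FDMEE scripts, but the shell-out pattern is easy to picture. Here is a simplified sketch in Java of how the upload and rule-execution steps might call EPM Automate; the executable path, credentials, file name, and rule name are placeholders, and the exact command arguments (login, uploadfile, rundatarule) should be confirmed against your EPM Automate version.

import java.io.IOException;
import java.util.Arrays;

public class MetadataLoadOrchestrator {

    // Run a single EPM Automate command and fail fast on a non-zero exit code.
    private static void run(String... command) throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command)
            .inheritIO() // stream EPM Automate output into the calling process log
            .start();
        if (process.waitFor() != 0) {
            throw new IllegalStateException("Command failed: " + Arrays.toString(command));
        }
    }

    public static void main(String[] args) throws Exception {
        String epmAutomate = "C:/Oracle/EPMAutomate/bin/epmautomate.bat"; // adjust per server
        String url = "https://your-pod.oraclecloud.com";

        // File and rule names are placeholders for the EDMCS metadata extract and the
        // CDM data load rule that processes it; some versions also take a destination
        // folder argument on uploadfile for the Data Management inbox.
        run(epmAutomate, "login", "domain.user@example.com", "password.epw", url);
        run(epmAutomate, "uploadfile", "Account_FCCS.csv");
        run(epmAutomate, "rundatarule", "FCCS_ACCOUNT_METADATA", "Jun-18", "Jun-18",
            "REPLACE", "STORE_DATA", "Account_FCCS.csv");
        run(epmAutomate, "logout");
    }
}

In our solution, the equivalent wrapper is also where the error branch downloads the CDM process log and fires the status email described above.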

There you have it – EDMCS, FDMEE, and CDM working in concert to provide a seamless and elegant solution to data and master data integration for a customer that adopted a Cloud EPM strategy.  If you want to learn how you can enhance your Oracle EPM integration processes, contact us and we’ll be happy to discuss your options.

Data Governance in the Cloud: An Integrated Strategy; A Unified Solution

Are you tasked with making organizational decisions that have placed you in a major dilemma? As a decision-maker in today’s fast-paced economy, you must wonder how you can cut costs, improve the bottom line, and still maintain the data quality necessary to make strategic decisions.

Take heart because it IS possible to achieve a balance of on-premise and off-premise Enterprise Performance Management (EPM) software while maintaining integrity and control of your data to provide the quality and data assurance needed for success – AND benefit financially from new Cloud technologies.

Success is a combination of understanding what each data track requires and creating an integration strategy consisting of the business processes and software tools needed to deliver consistency and integrity for your strategic EPM data.

Past trends called for a tight on-premise coupling of all EPM software to achieve the best results. This strategy required maintaining a large hardware and software infrastructure, along with the personnel to keep everything running smoothly. The new Cloud "POD" subscriptions are geared toward reducing those high infrastructure costs, which is a clear financial benefit. As with all things in life, though, there is a consequence of moving to Cloud technology: POD-based deployments can create isolated silos of information. Fortunately, there is a straightforward resolution. The key to overcoming this limitation is to understand what each component offers and demands, and to create an integration strategy that bridges the gap.

If you are interested in learning how to create this strategy and bring the various pieces together as a unified solution, or if your organization plans to migrate to the EPM Cloud platform in the future, this whitepaper helps define a process to pre-build the integration strategy and make moving to the Cloud easier, with reduced time to migrate.

Download our whitepaper: Data Relationship Management (DRM) for Cloud-Based Technologies:  Using DRM for Data Governance in the Cloud

Announcing PowerDrill for Oracle EID 3.1

If you had to distill what we at Ranzal's Big Data Practice do down to its essence, it's using technology to make accessing and managing your data more intuitive and more useful.  Often this takes the form of data modeling and integration, data visualization, or advice in picking the right technology for the problem at hand.

Sometimes, it’s a lot simpler than that.  Sometimes, it’s just giving users a shortcut or an easy way to do more with the tools they have.  Our latest offering, the PowerDrill for Oracle Endeca Information Discovery 3.1, is the quintessential example of this.

When dealing with large and diverse quantities of data, Oracle Endeca Studio is great for a lot of operations.  It enables open text search, offers data visualization, enriches data, surfaces all in-context attributes for slicing and dicing, and helps you find answers both high-level, say "Sales by Region," and low-level, like "my best/worst performing product."  But what about the middle ground?

For example, on our demo site, we have an application that allows users to explore publicly available data related to Parks and Recreation facilities in Chicago.  I'm able to navigate through the data, filter by the types of facilities available (Pools, Basketball Courts, Mini Golf, etc.), and see locations on a map – pretty basic exploration.

The Parks of Chicago

Now, let’s say I’m looking for parks that fit a certain set of criteria.  For example, let’s say I’m looking to organize a 3-on-3 basketball tournament somewhere in the city.  I can use my discovery application to very easily find parks that have at least 2 basketball courts.

Navigate By Courts


This leaves me with 80 potential parks that might be candidates for my tournament.  But let's say I live in the suburbs and I'm not all that familiar with the different neighborhoods of Chicago.  Wouldn't it be great to use other data sets to quickly and easily explore the areas surrounding these parks?  Enter the PowerDrill.

What You Can Do…

Last week, we announced general availability of our Advanced Visualization Framework (AVF) for Oracle Endeca Information Discovery.  We’ve received a lot of great feedback and we’re excited to see what our customers and partners can create and discover in a matter of days. Because the AVF is a framework, we’ve already gotten some questions and wanted to address some uncertainty around “what’s in the box”.  For example: Is it really that easy? What capabilities does it have? What are the out of the box visualizations I get with the framework?

Ease of Use

If you haven’t already registered and downloaded some of the documentation and cookbook, I’d encourage you to do so.  When we demoed the first version of the AVF at the Rittman Mead BI Forum in Atlanta this spring, we wrapped up the presentation with a simple “file diff” of a Ranzal AVF visualization.  It compared our AVF JavaScript and the corresponding “gallery entry” from the D3 site that we based it on.  In addition to allowing us to plug one of our favorite utilities (Beyond Compare 3), it illustrated just how little code you need to change to inject powerful JavaScript into the AVF and into OEID.

Capabilities

Talking about the framework is great, but the clearest way to show the capabilities of the AVF is by example.  So, let's take a deep dive into two of the visualizations we've been working on this week.

First up, and it's a mouthful, is our "micro-choropleth".  We started with a location-specific choropleth (follow the link for a textbook definition) centered around the City of Chicago.  Using the multitude of publicly available shape files for Chicago, the gist of this visualization is to display publicly available data at a micro level – in this case, crime statistics at a "Neighborhood" level.  It's completely interactive: it reacts to guided navigation, gives contextual information when you mouse over, and even gives you the details about individual events (i.e., crimes) when you click in.

Great stuff, but what if I don't want to know about crime in Chicago?  What if I want to track average length of stay in my hospital by where my patients reside?  Similar data, same concept – how can I transition this concept easily?  Our micro-choropleth has two key capabilities, both enabled by the framework, to account for this.  Not only does it allow my visualization to contain a number of different shape layers by default (JavaScript objects for USA state-by-state, USA states and counties, etc.), it also gives you the ability to add additional ones via Studio (no XML, no code).  Once I've added the new JavaScript file containing the data shape, I can simply set some configuration to load this totally different geographic data frame rather than Chicago.  I can then switch my geographic configuration (all enabled in my visualization's definition) to indicate that I'll be using zip codes rather than Chicago neighborhoods for my shapes.  Note that our health care data and medical notes are real, but we de-identify the data, leaving our "public data" at the zip code level of granularity.  From there, I simply change my query to hit population health data and calculate a different metric (length of stay in days), and I'm done!

That's a pretty "wholesale" change that just got knocked out in a matter of minutes.  It's even easier to make small tweaks.  For example, notice there are areas of "white" in my map that can look a little washed out.  These are areas (such as the U.S. Naval Observatory) that have zip codes but lack any permanent residents.  To increase the sharpness of my map, maybe I want to flip the line colors to black.  I can go into the Preferences area and edit CSS to my heart's content.  In this case, I'll flip the border class to "black" right through Studio (again, no cracking open the code) and see the changes occur right away.

The same form factor is valid for other visualizations that we've been working on.  The following visualization leverages a D3 force layout to show a node-link analysis between NFL skill position players (it's Fantasy Football season!) and the things they share in common (college attended, draft year, draft round, etc.).  Below, I've narrowed down my data (approximately 10 years' worth) by selecting some of the traditional powers in the SEC East and limiting to active players.  This is an example of one of our "template visualizations".  It shows you relationships and interesting information, but it is really intended to show what you can do with your data.  I don't think the visualization below will help you win your fantasy league, though it may help you answer a trivia question or two.

However, the true value is in realizing how this can be used in real data scenarios.  For example, picture a network of data related to intelligence gathering.  I can visualize people, say known terrorists, and the organizations they are affiliated with.  From there, I can see others who may be affiliated with those organizations in a variety of ways (family relations, telephone calls, emails).  The visualization is interactive; it lends itself to exploration through panning, scanning, and re-centering.  It can show all available detail about a given entity or relationship and provide focused detail when things get to be a bit of a jumble.

And again, the key is configuration and flexibility over coding.  The icons for each college are present on my web server but are driven entirely by the data, retrieved and rendered using the framework.  The color and behavior of my circles are configurable via CSS.

What’s In The Box?

So, you're seeing some of the great stuff we've been building inside our AVF.  Some of the visualizations are still in progress, and some of them are "proof of concept", but a lot of it is already packaged up and included.  We ship with visualizations for Box Plots, Donut Charts, an Animated Timeline (aka Health and Wealth of Nations), and our Tree Map.  In addition, we ship with almost a dozen code samples for other use cases that can give you a jump start on what you're trying to create.  This includes a US Choropleth (states and counties), a number of hierarchical and parent-child discovery visualizations, as well as a Sunburst chart.

We'll also be "refreshing the library" on a monthly basis with new visualizations and updates to existing ones.  These updates might range from simple demonstrations of best practices and design patterns to fully fledged, supported visualizations built by the Engineering team here in Chicago.  Our customers and partners who are using the framework can expect an update on that front around the first of the month.

As always, feedback and questions welcome at product [at] ranzal.com.

Leveraging Your Organization’s OBI Investment for Data Discovery

Coupling disparate data sets into meaningful "mashups" is a powerful way to test new hypotheses and ask new questions of your organization's data.  However, more often than not, the most valuable data in your organization has already been transformed and warehoused by IT in order to support the analytics needed to run the business.  Tools that neglect these IT-managed silos don't allow your organization to tell the most accurate story possible when pursuing its discovery initiatives.  Data discovery should not focus only on the new varieties of data that exist outside your data warehouse.  The value of social media and machine-generated data cannot be fully realized until it can be paired with the transactional data your organization already stockpiles.

Judging by the heavy investment in a new “self-service” theme in the recently released version 3.1 of Endeca Information Discovery, this truth has not been lost on Oracle.

Companies that are eager to get into the data discovery game, yet are afraid to walk away from the time and effort they’ve poured into their OBI solution, can breathe a little easier.  Oracle has made the proper strides in the Endeca product to incorporate OBI into the discovery experience.

And unlike other discovery products on the market today, the access to these IT-managed repositories (like OBI) is centrally managed.  By controlling access to the data and keeping all data “on the platform”, this centralized management allows IT to avoid the common “spreadmart” problem that plagues other discovery products.

Rather than explain how OBI has been introduced into the discovery experience, I figured I would show you.  Check out this short 4-minute demonstration, which illustrates how your organization can build its own data "mashups" leveraging the valuable data tied up in OBI.

[Embedded video demonstration]

Chances are that a handful of these tested hypotheses will unlock new ways to measure your business.  These new data mashups will warrant permanent applications that are made available to larger audiences within your organization.  The need for more permanent applications will require IT to “operationalize” your discovery application — introducing data updates, security, and properly sized hardware to support the application.

For these IT-provisioned applications, Oracle has also provided some tooling in Endeca to make the job more straightforward.  Specifically, when it comes to OBI, the product now boasts a wizard that will produce an Integrator project with all of the plumbing necessary to pull data tied up in OBI into a discovery application in minutes.  Check out this video to see how:

[Embedded video demonstration]

It is product investments like these that will allow organizations to realize the transformative effects data discovery can have on their business without having to ignore the substantial BI investments already in place.

As always, please direct any questions or comments to [at] ranzal.com.

Installment 2: Under the Hood of the EBS Endeca Integration

In my first post in the series, I promised to return with more in-depth goodness about the Endeca extensions for EBS.  The more I thought about writing on new topics, the more I was inclined to show the offering firsthand.

Thus, here we are.  I have opted to show you how the integration works “under the hood”, instead of boring you with my words.  This first screencast is still somewhat high-level, but aims to help illustrate how the two applications, Endeca and EBS, work together.

In future screencasts, I plan to dive deeper into the integration specifics around data, UI, and configuration.  I also plan to show how the out-of-the-box configuration can be tweaked to help maximize the value of the offering.

[Embedded screencast]

OEID 3.0 First Look — Text Enrichment & Whitespace

I recently spent some cycles building my first POC for a potential customer with OEID v3.0.  After running some of the unstructured data through the text enrichment component, I noticed something odd:


The charts I configured to group by those salient terms were displaying a "null" bucket.  This bucket was essentially collecting all records that were not tagged with a term.  After a bit of investigation, it seems this is expected behavior in v3.0 — the Endeca Server now treats empty, yet non-null, attributes as valid and houses them on the Endeca record.  Empty, yet non-null, attributes are common after employing some of the OOTB text enrichment capabilities in 3.0 (tagging, extraction, regex).  Thus, a best-practice treatment for this side effect is warranted.

The good news is that the workaround was very straightforward.

1) Add a “Reformatter” component to the .grf before the bulk loader with the same input and output metadata edge definition.  From the reformatter “Source” tab, select “Java Transform Wizard” and give your new transformation class a name like “removeWhitespaces”.  This will create a .java source file and a compiled .class file in your Integrator project’s ./trans directory (where Integrator expects your java source code to reside).


2) Provide the following Java logic in your new "removeWhitespaces" transformation class:

import org.jetel.component.DataRecordTransform;
import org.jetel.data.DataRecord;
import org.jetel.exception.TransformException;
import org.jetel.metadata.DataFieldType;

public class removeWhitespaces extends DataRecordTransform {

    @Override
    public int transform(DataRecord[] arg0, DataRecord[] arg1) throws TransformException {
        // arg0 holds the input records; arg1 holds the output records.
        for (int i = 0; i < arg0.length; i++) {
            DataRecord rec = arg0[i];
            for (int j = 0; j < rec.getNumFields(); j++) {
                // Only string fields can carry the empty, non-null values we want to scrub.
                if (rec.getField(j).getMetadata().getDataType().equals(DataFieldType.STRING)) {
                    if (rec.getField(j).getValue() == null
                            || rec.getField(j).getValue().equals("")
                            || rec.getField(j).getValue().toString().length() == 0) {
                        // Force empty strings back to null so the Endeca Server drops the attribute.
                        rec.getField(j).setValue(null);
                    }
                }
                // Copy the (possibly nulled) value through to the output record.
                arg1[i].getField(j).setValue(rec.getField(j).getValue());
            }
        }
        return 0;
    }
}

3) Make sure the name of this new class is specified in the "Transform class" input.  Rerun the .grf that loads your data and… profit!


We look forward to sharing more emerging OEID v3.0 best practices here… and hearing about your approaches as well.

Installment 1: E-Business Suite + Endeca Applications, A Product Marriage

E-Business Suite is dating whom?

Through subtle release announcements and YouTube teasers, Oracle is slowly starting to broadcast its latest E-Business Suite offering to the market.  With this new offering, officially called "E-Business Suite Endeca Applications", Oracle is making a push to address one of its users' most common complaints: it's too difficult to access, understand, and analyze information in E-Business Suite.

But before I delve into the value of the offering and the subtle details of the integration, it probably makes sense to frame this relationship a bit for our readers.  After all, most of our readers are Endeca enthusiasts first and may wonder why Ranzal is qualified to offer commentary on EBS in the first place.

Sparks flew.

Last year at this time, after Oracle's acquisition of Endeca, I found myself in Oracle Endeca's product management group on a team tasked with identifying "fit" and "complement" for the Endeca product elsewhere in Oracle's stack.  My team quickly identified several products and industry-specific solutions that would've benefited from incorporating Endeca's hybrid search and analytical database.  However, one product, above all others, seemed most likely to benefit from such an integration.  Like the foundation of many relationships, one product proved to have its greatest shortcoming offset by the other's greatest strength.  As you've probably already guessed, that product was "EBS" and that shortcoming was "information access".

While leading this product integration effort at Oracle, I found myself in awe of the wealth of tightly coupled "run-the-business" functions offered across EBS's 20 "pillars".  EBS is built on a home-grown "OA Framework" that allows for transaction- and workflow-based business functions.  What quickly became clear to me was that this purpose-built framework was excellent at delivering business functions (ERP, CRM, SCM, Procurement, you name it) in a repeatable and reliable fashion, but it fundamentally overlooked its users' need to access that same transactional information in the aggregate.  For years, EBS attempted to address this shortcoming by creating advanced search screens and clever menu "hops", but it never seemed to tackle the issue head-on.

I now pronounce you EBS and Endeca.


What started to take shape over my next 8 months at Oracle was the marriage of two products.  Unlike most marriages, however, this one was loosely coupled and non-intrusive.  While this may not work in life, it certainly makes for simpler, quicker and more cost-effective deployments.  Through additional installments, I plan to disclose the intimate details of this product marriage, the “gotchas” that could arise during deployment, and why Ranzal is uniquely positioned to help your company integrate this offering.

As with every marriage, there is dirt to be shared, but I am pleased to say I truly believe in this case the whole is greater than the sum of its parts.

More installments to come…

As always, please direct any questions or comments to ranzal.com.