Big Data Discovery – Custom Java Transformations Part 2

In a previous post, we walked through how to implement a custom Java transformation in Oracle Big Data Discovery.  While that post was more technical in nature, this follow-up post highlights a detailed use case for these transformations and illustrates how they can be used to augment an existing dataset.

Example: 2015 Chicago Mayoral Election Runoff

For this example we will be using data from the 2015 Mayoral Election Runoff in Chicago.  Incumbent Rahm Emanuel defeated challenger “Chuy” Garcia with 55.7% of the popular vote.  Results data from the election were compiled and matched up with Chicago communities, which were then subdivided by zip code.  A small sample of the data can be seen below:

Sample election data

In its original state, the data already offers some insight into the results of the election, but only at a high level.  By utilizing the custom transformations, it is possible to bring in additional data and find answers to more detailed questions.  For example, what impact did the wealth of Chicago’s communities have on their selection of a candidate?

One indicator of wealth within a community is the median sale price of homes in that area.  Naturally, the higher the price of homes, the wealthier the community tends to be.  Zillow provides an API that allows users to query for a variety of real estate and demographic information, including median sale price of homes.  Through the custom transformations, we can augment the existing election results with the data available through the API.

The structure of the custom transformation is exactly the same as the ‘Hello World’ example from our previous post.  The transformation is initiated in the BDD Custom Transform Editor with the command runExternalPlugin('ZillowAPI.groovy',zip). In this case, the custom groovy script is called ZillowAPI.groovy and the field being passed to the script is the zip code, zip.

The script then uses the zip to construct a string and convert it to the URL required to make the API call:

def zip = args[0]
String url = "http://www.zillow.com/webservice/GetDemographics.htm?zws-id=<ZILLOW_API_KEY>&zip=" + zip;
URL api_url = new URL(url);
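The rest of the script isn't shown here, but a minimal sketch of the full pluginExec() method might look like the following.  The regular expression, the element names it matches, and the choice to return an empty string on failure are all assumptions for illustration; a production script would parse the XML response with something like Groovy's XmlSlurper and follow the element names documented for the GetDemographics API.

def pluginExec(Object[] args) {
    def zip = args[0]
    String url = "http://www.zillow.com/webservice/GetDemographics.htm?zws-id=<ZILLOW_API_KEY>&zip=" + zip
    URL api_url = new URL(url)

    try {
        String xml = api_url.text  // fetch the raw XML response from the API

        // Pull the first <value> element that follows the "Median Sale Price" attribute.
        def matcher = xml =~ /(?s)<name>Median Sale Price<\/name>.*?<value[^>]*>([^<]+)<\/value>/
        return matcher.find() ? matcher.group(1) : ""
    } catch (Exception e) {
        // If the call fails for a given zip code, return an empty value
        // rather than failing the entire transform job.
        return ""
    }
}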

Once the transform script completes, the median_sale_price field is now accessible in BDD:

Updated data in BDD

Now that the additional data is available, we can use it to build some visualizations to help answer the question posed earlier.

Median Sale Price by Chicago Community – Created using the Ranzal Data Visualization Portlet*

Percentage for Chuy by Community – Created using the Ranzal Data Visualization Portlet*

The two choropleths above show the median sale price by community and the percentage of votes for “Chuy” by community.  Communities in the northeastern sections of the city tend to have the highest median sale prices, while communities in the western and southern sections tend to have lower prices.  For median sale price to be a strong indicator of how the communities voted, the map displaying votes for “Chuy” should show a similar pattern, with the communities grouping along the same northeast/southwest split.  However, the pattern is noticeably different, with votes for “Chuy” distributed across all sections of the map.

Bar-Line chart of Median Sale Price and Percent for Chuy

Looking at the median sale price in conjunction with the percentage of votes for “Chuy” provides an even clearer picture.  The bars in the chart above represent the median sale price of homes, and are sorted in descending order from left to right.  The line graph represents the percentage of votes for “Chuy” in each community.  If there were a connection between median sale price and the percentage of votes for “Chuy”, we’d expect the line to trend steadily up or down as sale price decreases.  However, the percentage of votes varies widely from community to community, and doesn’t seem to follow an obvious pattern in relation to median sale price.  This corresponds with the observations from the two choropleths above.

While these findings don’t provide a definitive answer to the initial question as to whether community wealth was a factor in the election results, they do suggest that median sale price is not a good indicator of how Chicago communities voted in the election.  More importantly, this example illustrates how easy it is to utilize custom Java transformations in BDD to answer detailed questions and get more out of your original dataset.

If you would like to learn more about Oracle Big Data Discovery and how it can help your organization, please contact us at info [at] ranzal.com or share your questions and comments with us below.


* – The Ranzal Data Visualization Portlet is a custom portlet developed by Ranzal and is not available out of the box in BDD.  If you would like more information on the portlet and its capabilities, please contact us and stay tuned for a future blog post that will cover the portlet in more detail.

Big Data Discovery – Custom Java Transformations Part 1

In our first post introducing Oracle Big Data Discovery, we highlighted the data transform capabilities of BDD.  The transform editor provides a variety of built-in functions for transforming datasets.  While these built-in functions are straightforward to use and don’t require any additional configuration, they are also limited to a predefined set of transformations.  Fortunately, for those looking for additional functionality during transform, it is possible to introduce custom transformations that can leverage external Java libraries by implementing a custom Groovy script.  The rest of this post will walk through the implementation of a basic example, and a subsequent post will go in depth with a few real-world use cases.

Create a Groovy script

The core component needed to implement a custom transform with external libraries is a Groovy script that defines the pluginExec() method.  Groovy is a programming language developed for the Java platform.  Details and documentation on the language can be found here.  For this basic example, we’ll begin by creating a file called CustomTransform.groovy and define a method, pluginExec(), which should take an object array, args, as an argument:

def pluginExec(Object[] args) {
    String input = args[0] //args[0] is the input field from the BDD Transform Editor 

    //Implement code to transform input in some way
    //The return of this method will be inserted into the transform field

    input.toUpperCase() //This example would return an upper cased version of input
}

pluginExec() will be applied to each record in the BDD dataset, and args[0] corresponds to the field to be transformed.  In the example script above, args[0] is assigned to the variable input and the toUpperCase() method is called on it.  This means that if this custom transformation is applied to a field called name, the value of name for each record will be returned upper cased (For example, “johnathon” => “JOHNATHON”).

Import Custom Java Library

Now that we’ve covered the basics of how the custom Groovy script works, we can augment the script with external Java libraries.  These libraries can be imported and implemented just as they would be in standard Java:

import com.oracle.endeca.transform.HelloWorld
    
def pluginExec(Object[] args) {
    String input = args[0] //Note that though the input variable is defined in this example, it is not used.  Defining input is not required.
    
    HelloWorld hw = new HelloWorld() //Create a new instance of the HelloWorld class defined in the imported library
    hw.testMe() //Call the testMe() method, which returns a string "Hello World"
}

In the example above, the HelloWorld class is imported.  A new instance of HelloWorld is assigned to the variable hw, and the testMe() method is called.  testMe() is designed to simply return the string “Hello World”.  Therefore, the expected output of this custom script is that the string “Hello World” will be inserted for each record in the transformed BDD dataset.  Now that the script has been created, it needs to be packaged up and added to the Spark class path so that it’s accessible during data processing.
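For reference, a hypothetical sketch of the imported class itself is shown below.  Its contents are an assumption based on the description above (a testMe() method that returns “Hello World”); it is written in Groovy here, but an equivalent plain Java class compiled into the jar works the same way.

package com.oracle.endeca.transform

// A minimal stand-in for the external class imported by the script above.
// Any compiled class placed on the Spark class path can be used the same way.
class HelloWorld {

    // Returns a fixed string; the transform script inserts this value for every record.
    String testMe() {
        return "Hello World"
    }
}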

Package the Groovy script into a jar

In order to utilize CustomTransform.groovy, it needs to be packaged into a .jar file.  It is important that the Groovy script be located at the root of the jar, so make sure that the file is not nested within any directories.  See below for an example of the file structure:

CustomTransform.jar
  |---CustomTransform.groovy
  |---AdditionalFile_1
  |---AdditionalFile_2
  ...

Note that additional files can be included in the jar as well.  These additional files can be referenced in CustomTransform.groovy if desired.  There are multiple ways to package up the file(s), but the simplest is to use the command line.  Navigate to the directory that contains CustomTransform.groovy and use the following command to package it up:

# jar cf <new_jar_name> <input_file_for_jar>
> jar cf CustomTransform.jar CustomTransform.groovy

Set up a custom lib location in Hadoop

CustomTransform.jar and any additional Java libraries that are imported by the Groovy script need to be added to all Spark nodes in your Hadoop cluster.  For simplicity, it is helpful to establish a standard location for all custom libraries that you want Spark to be able to access:

$ mkdir /opt/bdd/edp/lib/custom_lib

The /opt/bdd/edp/lib directory is the default location for the BDD data processing libraries used by Spark.  In this case, we’ve created a subdirectory, custom_lib, that will hold any additional libraries we want Spark to be able to use.

Once the directory has been created, use scp, WinSCP, MobaXterm, or some other utility to upload CustomTransform.jar and any additional libraries used by the Groovy script into the custom_lib directory.  The directory needs to be created on all Spark nodes, and the libraries need to be uploaded to all nodes as well.

Update sparkContext.properties on the BDD Server

The last step that needs to be completed before running the custom transformation is updating the sparkContext.properties file.  This step only needs to be completed the first time you create a custom transformation as long as the location of the custom_lib directory remains constant for each subsequent script.

Navigate to /localdisk/Oracle/Middleware/BDD<version>/dataprocessing/edp_cli/config on the BDD server and open the sparkContext.properties file for editing:

$ cd /localdisk/Oracle/Middleware/BDD1.0/dataprocessing/edp_cli/config
$ vim sparkContext.properties

The file should look something like this:

#########################################################
# Spark additional runtime properties, see
# https://spark.apache.org/docs/1.0.0/configuration.html
# for examples
#########################################################


Add an entry to the file to define the spark.executor.extraClassPath property.  The value for this property should be <path_to_custom_lib_directory>/*.  This will add everything in the custom_lib directory to the Spark class path.  The updated file should look like this:

#########################################################
# Spark additional runtime properties, see
# https://spark.apache.org/docs/1.0.0/configuration.html
# for examples
#########################################################

spark.executor.extraClassPath=/opt/bdd/edp/lib/custom_lib/*

It is important to note that if there is already an entry in sparkContext.properties for the spark.executor.extraClassPath property, any libraries referenced by that property should be moved to the custom_lib directory so they are still included in the Spark class path.

Run the custom transform

Now that the script has been created and added to the Spark class path, everything is in place to run the custom transform in BDD.  To try it out, open the Transform tab in BDD and click on the Show Transformation Editor button.  In this example, we are going to create a new field called custom with the type String:

Create new attribute

Now in the editor window, we need to reference the custom script:

Transform Editor

The runExternalPlugin() method is used to reference the custom script.  The first argument is the name of the Groovy script.  Note that the value above is 'CustomTransform.groovy' and not 'CustomTransform.jar'.  The second argument is the field to be passed as an input to the script (this is what gets assigned to args[0] in pluginExec()).  In the case of the “Hello World” example, the input isn’t used, so it doesn’t matter what field is passed here.  However, in the first example that returned an upper cased version of the input field, the script above would return an upper cased version of the key field.
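In text form, the expression entered into the editor for this example would look something like the line below (key is simply the field picked from the example dataset; since the script ignores its input, any field works):

runExternalPlugin('CustomTransform.groovy', key)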

One of the nice features of the built-in transform functions is that they make it possible to preview the transform changes before committing.  With these custom scripts, however, it isn’t possible to see the results of the transform before running the commit.  Clicking preview will just return blank results for all fields, as seen in the example below:

Example of custom transform preview

The last thing to do is click ‘Add to Script’ and then ‘Commit to Project’ to kick off the transformation process.  Below are the results of the transform.  As expected, a new custom field has been added to the data set and the value “Hello World” has been inserted for every record.

Transform results

This tutorial just hints at the possibilities of utilizing custom transformations with Groovy and external Java libraries in BDD.  Stay tuned for the second post on this subject, when we will go into detail with some real world use cases.

If you would like to learn more about Oracle Big Data Discovery and how it can help your organization, please contact us at info [at] ranzal.com or share your questions and comments with us below.

Bringing Data Discovery To Hadoop – Part 2

The most exciting thing about Oracle Big Data Discovery is its integration with all the latest tools in the Hadoop ecosystem. This includes Spark, which is rapidly supplanting MapReduce as the processing paradigm of choice on distributed architectures. BDD also makes clever use of the tried and tested Hive as a metadata layer, meaning it has a stable foundation on which to build its complex data processing operations.

In our first post of this series, we showcased some of BDD’s most handy features, from its streamlined UI to its very flexible data transformation abilities. In this post, we’ll delve a little deeper into BDD’s underlying mechanics and explain why we think the application might be a great solution for Hadoop users.

Hive

Much of the backbone for BDD’s data processing operations lies in Hive, which effectively acts as a robust metastore for BDD. While operations on the data itself are not performed using Hive functions (which currently run on MapReduce), Hive is a great way to store and retrieve information about the data: where it lives, what it looks like, and how it’s formatted.

For organizations that are already running data in Hive, the integration with BDD couldn’t be simpler. The application ships with a data processing tool that can automatically import databases and tables from Hive, all while keeping data types intact. The tool can also sync up with a Hive database so that when new tables are created a user can automatically work with that data in BDD. If a table is dropped, BDD deletes that particular data set from its index. Currently, the 1.0 version doesn’t support updates to existing Hive tables, but we hope to see that feature in an upcoming release.

BDD can also upload data to HDFS and create a new table with that data in Hive to work with. It does this whenever a user uploads a file through the UI. For example, here’s what we saw in Hive with the consumer complaints data set from the last post after BDD imported it:

Example of an auto-generated Hive table by BDD

This easy integration with Hive makes BDD a good option both for experienced Hadoop users who are already using Hive and for less technical users.

Spark

While Hive provides a solid foundation for BDD’s operations, Spark is the workhorse. All data processing operations are run through Spark, which allows BDD to analyze and transform data in-memory. This approach effectively sidesteps the launching of slower MapReduce jobs through Hive and gives the processing engine direct access to the data.

When a user commits a series of transforms to a data set via the BDD UI, those transforms are interpreted into a Groovy script that is then passed to Spark through an Oozie job. Here, we can see how some date strings are converted to datetime objects behind the scenes:

Auto-generated Groovy script converting date strings to datetime objects
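Since the generated script itself isn’t reproduced here, the fragment below is a purely illustrative Groovy sketch of the kind of conversion it performs.  The date format and field value are made up for the example; the actual generated code uses BDD’s own transform functions.

import java.text.SimpleDateFormat

// Illustrative only: conceptually, the generated transform parses each raw
// date string into a proper datetime object before the record is written back out.
def formatter = new SimpleDateFormat("MM/dd/yyyy")
def dateReceived = formatter.parse("07/29/2013")   // hypothetical value from a date_received field

println dateReceived   // e.g. Mon Jul 29 00:00:00 EDT 2013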

After Spark has done its handiwork, the data is then written out to HDFS as a new set of files, serialized and compressed in Avro. The original collection stays intact in another location in case we want to go back to it in the future.

From this point, the data is then loaded into the Dgraph.

Dgraph

The Dgraph is basically an in-memory index, and is what enables the real-time, dynamic exploration of data in BDD. This concept might be familiar to those who have used Oracle Endeca Information Discovery, where the Dgraph also played a key role, and this lineage means BDD inherits some very nice features: quick response, keyword search, impromptu querying, and the ability to unify metrics, structured and unstructured data in a single interface. The biggest difference now is that users have the ability to apply these real-time search and analytic capabilities to data sitting on Hadoop.

We think the marriage of this kind of discovery application with Hadoop makes a lot of sense. For starters, Hadoop has enabled organizations to store vast amounts of data cheaply without necessarily knowing everything about its structure and contents. BDD, meanwhile, offers a solution to indexing exactly this kind of data — data that is irregular, inconsistent and varied.

There’s also the issue of access. Currently, most data tools in the Hadoop ecosystem require a moderate level of technical knowledge, meaning wide swaths of an organization might have little to no view of all that data on HDFS. BDD offers a system to connect more people to that data, in a way that’s straightforward and intuitive.

If you would like to learn more about Oracle Big Data Discovery and how it might help your organization, please contact us at info [at] ranzal.com.

Bringing Data Discovery To Hadoop – Part 1

We have been anticipating the intersection of big data with data discovery for quite some time. What exactly that will look like in the coming years is still up for debate, but we think Oracle’s new Big Data Discovery application provides a window into what true discovery on Hadoop might entail.

We’re excited about BDD because it wraps data analysis, transformation, and discovery tools together into a single user interface, all while leveraging the distributed computing horsepower of Hadoop.

BDD’s roots clearly extend from Oracle Endeca Information Discovery, and some of the best aspects of that application — ad-hoc analysis, fast response times, and instructive visualizations — have made it into this new product. But while BDD has inherited a few of OEID’s underpinnings, it’s also a complete overhaul in many ways. OEID users would be hard-pressed to find more than a handful of similarities between Endeca and this new offering. Hence, the completely new name.

The biggest difference of course, is that BDD is designed to run on the hottest data platform in use today: Hadoop. It is also cutting edge in that it utilizes the blazingly fast Apache Spark engine to perform all of its data processing. The result is a very flexible tool that allows users to easily upload new data into their Hadoop cluster or, conversely, pull existing data from their cluster onto BDD for exploration and discovery. It also includes a robust set of functions that allows users to test and perform transformations on their data on the fly in order to get it into the best possible working state.

In this post, we’ll explore a scenario where we take a basic spreadsheet and upload it to BDD for discovery. In another post, we’ll take a look at how BDD takes advantage of Hadoop’s distributed architecture and parallel processing power. Later on, we’ll see how BDD works with an existing data set in Hive.

We installed our instance of BDD on Cloudera’s latest distribution of Hadoop, CDH 5.3. From our perspective, this is a stable platform for BDD to operate on. Cloudera customers also should have a pretty easy time setting up BDD on their existing clusters.

Explore

Getting started with BDD is relatively simple. After uploading a new spreadsheet, BDD automatically writes the data to HDFS, then indexes and profiles the data based on some clever intuition:

What you see above displays just a little bit of the magic that BDD has to offer. This data comes from the Consumer Financial Protection Bureau, and details four years’ worth of consumer complaints to financial services firms. We uploaded the CSV file to BDD in exactly the condition we received it from the bureau’s website. After specifying a few simple attributes like the quote character and whether the file contained headers, we pressed “Done” and the application got to work processing the file. BDD then built the charts and graphs displayed above automatically to give us a broad overview of what the spreadsheet contained.

As you can see, BDD does a good job presenting the data to us in broad strokes. Some of the findings we get right from the start are the names of the companies that have the most complaints and the kinds of products consumers are complaining about.

We can also explore any of these fields in more detail if we want to do so:

Detailed view of a date field

Now we get an even more detailed view of this date field, and can see how many unique values there are, or if there are any records that have data missing. It also gives us the range of dates in the data. This feature is incredibly helpful for data profiling, but we can go even deeper with refinements.

Refining the data to a specific company and response type

With just a few clicks on a couple charts, we have now refined our view of the data to a specific company, JPMorgan Chase, and a type of response, “Closed with monetary relief”. Remember, we have yet to clean or manipulate the data ourselves, but already we’ve been able to dissect it in a way that would be difficult to do with a spreadsheet alone. Users of OEID and other discovery applications will probably see a lot of familiar actions here in the way we are drilling down into the records to get a unique view of the data, but users who are unfamiliar with these kinds of tools should find the interface to be easy and intuitive as well.

Transform

Another way BDD differentiates itself from some other discovery applications is with the actions available under the “Transform” tab.

Within this section of the application, users have a wealth of common transformation options available to them with just a few clicks. Operations like converting data types, concatenating fields, and getting absolute values now can be done on the fly, with a preview of the results available in near real-time.

BDD also offers more complex transformation functions in its Transformation Editor, which includes features like date parsing, geocoding, HTML formatting and sentiment analysis. All of these are built into the application; no plug-ins required. Another nice feature BDD provides is an easy way to group (or bin) attributes by value. For example, we can find all the car-related financing companies and group them into a single category to refine by later on:

Grouping car-related financing companies into a single category

Also handy is the ability to preview the results of a transform before committing the changes to all the data. This allows a user to fine-tune their transforms with relative ease and minimal back and forth between data revisions.

Once we’re happy with our results, we can commit the transforms to the data, at which point BDD launches a Spark job behind the scenes to apply the changes. From this point, we can design a discovery interface that puts our enriched data set to work.

Discover

Included with BDD are a set of dynamic, advanced data visualizations that can turn any data set into something profoundly more intuitive and usable:

A sampling of BDD’s built-in data visualizations

The image above is just a sampling of the kind of visual tools BDD has to offer. These charts were built in a matter of minutes, and because much of the ETL process is baked into the application, it’s easy to go back and modify your data as needed while you design the graphical elements. This style of workflow is drastically different from workflows of the past, which required the back- and front-ends to be constructed in entirely separate stages, usually in totally different applications. This puts a lot of power into the hands of users across the business, whether they have technical chops or not.

And as we mentioned earlier, since BDD’s indexing framework is a close relative to Endeca, it inherits all the same real-time processing and unstructured search capabilities. In other words, digging into your data is simple and highly responsive:

Searching and refining the enriched data set in real time

As more and more companies and institutions begin to re-platform their data onto Hadoop, there will be a growing need to effectively explore all of that distributed data. We believe that Oracle’s Big Data Discovery offers a wide range of tools to meet that need, and could be a great discovery solution for organizations that are struggling to make sense of the vast stores of information they have sitting on Hadoop.

If you would like to learn more, please contact us at info [at] ranzal.com.

Also be sure to stay tuned for Part 2!

Data Discovery In Healthcare — 1st Installment

Interested in understanding how cutting-edge healthcare providers are turning to data discovery solutions to unlock the insights in their medical records?  Check out this real-world demonstration of what a recent Ranzal customer is doing to unlock a 360 degree view of their clinical outcomes leveraging all of their EMR data — both the structured and unstructured information.

Take a look for yourself…

Leveraging Your Organization’s OBI Investment for Data Discovery

Coupling disparate data sets into meaningful “mashups” is a powerful way to test new hypotheses and ask new questions of your organization’s data.  However, more often than not, the most valuable data in your organization has already been transformed and warehoused by IT in order to support the analytics needed to run the business.  Tools that neglect these IT-managed silos don’t allow your organization to tell the most accurate story possible when pursuing its discovery initiatives.  Data discovery should not focus only on the new varieties of data that exist outside your data warehouse.  The value from social media data and machine-generated data cannot be fully realized until it can be paired with the transactional data your organization already stockpiles.

Judging by the heavy investment in a new “self-service” theme in the recently released version 3.1 of Endeca Information Discovery, this truth has not been lost on Oracle.

Companies that are eager to get into the data discovery game, yet are afraid to walk away from the time and effort they’ve poured into their OBI solution, can breathe a little easier.  Oracle has made the proper strides in the Endeca product to incorporate OBI into the discovery experience.

And unlike other discovery products on the market today, the access to these IT-managed repositories (like OBI) is centrally managed.  By controlling access to the data and keeping all data “on the platform”, this centralized management allows IT to avoid the common “spreadmart” problem that plagues other discovery products.

Rather than explain how OBI has been introduced into the discovery experience, I figured I would show you.  Check out this short 4 minute demonstration which illustrates how your organization can build their own data “mashups” leveraging the valuable data tied up in OBI.

 

 

Chances are that a handful of these tested hypotheses will unlock new ways to measure your business.  These new data mashups will warrant permanent applications that are made available to larger audiences within your organization.  The need for more permanent applications will require IT to “operationalize” your discovery application — introducing data updates, security, and properly sized hardware to support the application.

For these IT-provisioned applications, Oracle has also provided some tooling in Endeca to make the job more straightforward.  Specifically, when it comes to OBI, the product now boasts a wizard that will produce an Integrator project with all of the plumbing necessary to pull data tied up in OBI into a discovery application in minutes.  Check out this video to see how:

 

 

It is product investments like these that will allow organizations to realize the transformative effects data discovery can have on their business without having to ignore the substantial BI investments already in place.

As always, please direct any questions or comments to info [at] ranzal.com.

Data Discovery In Healthcare

A few days ago, QlikTech and Epic announced a technology partnership that will strengthen the integration between their software products as well as provide a forum for their joint customers to share best practices and innovative ways to use both technologies.

For a firm like Ranzal, which is currently implementing several population health discovery applications, my first reaction was simply that this partnership made sense.  Both companies are leaders in their respective domains and are very well-regarded.  Beyond that, discovery technologies like Qlik, Tableau and Endeca are quickly establishing a foothold in the blossoming domain of healthcare analytics.  Unlike traditional BI technologies, data discovery tools are meant to quickly mash up disparate data sources and allow users to ask in-the-moment, unanticipated questions.  This alternative approach to analytics is allowing healthcare providers to build self-service discovery applications for broad audiences at speeds unimaginable in the world of the clinical data warehouse.  Since almost all healthcare analytics applications rely on data from the EMR, this partnership seemed natural, if not overdue.

My second reaction was that there was something missing.  In my experience, to get a holistic view of the health system, all of the relevant data must be tapped.  Data discovery on structured data, while powerful, can only tell part of the story.  With 60% of a health system’s data tied up in unstructured medical notes, reports and journals, Qlik is not fully equipped to allow healthcare practitioners to gain a 360 degree view of their health system.

Endeca shines when structured and unstructured data are both required to paint a complete picture.  In healthcare, properly analyzing clinical data can mean drastically better outcomes at lower costs.  Understanding the “why” behind the “what” means properly tapping the narratives in the medical notes, and tools like Endeca are best suited to unlock that value when unstructured data is prominent.

QlikView is a powerful tool and one cannot question its ease of use and numerous discovery features.  However, in industries rife with unstructured data, products like Endeca that treat unstructured content as a first-class citizen (in the way they acquire, enrich, model, search, and visualize it) are better suited to unlock the whole story.

So, I couldn’t help but think that a strong partnership could also be made between other EMR vendors and Oracle Endeca.  We spend a lot of time sizing up the relevant technologies in the data discovery space trying to understand differentiators.  For the types of discovery we’re seeing in healthcare, where unstructured data is necessary to tell the whole story, our money remains on Endeca.

OEID 3.0 First Look – Democratizing Data Discovery

Adjectives like “agile” and “self-service” have long been used to describe approaches to BI that enable organizations to ask their own questions and produce their own answers.  Applied to both processes and products, these labels fit any time an organization can relax the “IT bottleneck”.  Over the past decade, the core tenets of the Endeca vision (“no data left behind, ease of use, and agile delivery”) have shaped a product that has empowered organizations to unlock insights in their enterprise data in ways never before possible while simultaneously reducing their reliance on IT to do so.  Notice I said “reduce” their reliance, not “eliminate”.

Data discovery is a quest, not a destination.  It is a never-ending initiative.  As soon as new truths come to light from your discovery apps, inevitably, new questions arise as well.  Ideally, these new questions can be answered within the application at hand.  Sometimes, however, finding answers to these new questions requires experimentation and alternative data “mash-ups”.  Almost always in these cases, the time comes to pick up the phone, call IT… and wait.

All of the discovery tools on the market today that promise self-service and agility still require IT’s involvement when new data sources or new data models are required, OEID included.  However, through some new features in the latest v3.0 release, it appears as if Oracle is making strides to address this dependency.

Granted this is just one man’s opinion and largely speculative, but a few of the new features in the product have me convinced that Oracle is pushing to democratize data discovery.  Through subtle (and not so subtle) changes, it seems they’re shifting the product to a platform — one that empowers the business to broaden its own exploration and answer the next round of questions, further reducing your organization’s reliance on IT.

 

Here’s what got me thinking

 

A Collaboration Platform

The revamped “home page” experience surfaces new ways to provision and share your applications.  Casual users can now create their own applications, associate them to a data domain, and start composing their apps.  Initially, the applications are “private”, and only made accessible to a group of users hand-picked by you.  You can make your application “public” once you feel it is ready for prime time and mass consumption.

Self-Service Data Upload

Another nod in v3.0 to democratization comes with the introduction of self-service data upload.  Not only will the upload move data into your data domain, but it will profile your data and (usually) arrive at the proper attribute configuration (data types, etc.).  Currently, this only supports Excel file formats, but if you’re like me, you can see where this is heading…

Self-service Excel upload

Better Cluster Management

At first I was a bit miffed by Endeca Server’s move from Jetty to WebLogic 11g (and even a little frustrated by the involved installation process), but reading the v3.0 literature around improved cluster management, it became clear that more sophistication in the cluster support might mean there is a future in the cloud for the product.  Adding and subtracting nodes from your data domains will be required if end users are actively adding more data or opening up their data mashups to more users in their organization.  Elastic computing would have to underpin such a platform with such dynamic, unpredictable resource demands.

A Vision

Again, this is just one man’s hope for the product.  These changes indicate a shift in the way “self-service” is approached.  In future releases, “self-service” and “agile” BI may no longer mean simply asking your own unanticipated questions.  It may mean introducing new data, new applications and collaborating across the enterprise to further fulfill the promise of data discovery without IT.

I hope Oracle continues down this path.  I long for a future where data discovery happens in the cloud so organizations do not have to fumble with infrastructure, scale and upgrades.  I see a future with data uploads across a variety of formats which can then be added to a data marketplace within the product for the whole organization to leverage.  I hope for new capabilities in Studio so that the data configuration, joining, and cleansing that happens in Integrator today by ETL experts and data stewards can be accomplished intuitively by end users and analysts.

It is my hope that 3.0 is not the end game, but the first step of many towards democratizing data discovery and offering a broader definition to “self-service” BI.

 

 

OEID Incremental Updates

A fairly common approach…

More often than not, when pulling data from a database into OEID, we need to employ incremental updates.  To introduce incremental updates, we need a way to identify which records have been added, updated or deleted since our last load.  This change identification is commonly referred to as change data capture, or CDC.  There is no one way to accomplish CDC, and often the best approach is dictated by the mechanisms in place in the source system.  Usually, though, the database we’re pulling from isn’t leveraging any explicit CDC mechanism.

Note: If you’re pulling from text files and new records are being appended, you can look at the incremental reading feature of the UniversalDataReader component (pg. 268). http://docs.oracle.com/cd/E29805_01/integrator.230/DataIntegratorDesigner.pdf.

If you’re pulling from a database and don’t have explicit database CDC features enabled, best practices usually dictate that you create an “audit” or “execution_history” table to track previous full and incremental loads. This “audit” table simply records the date and time a load started and the date and time it ended, if it ended successfully. You would need to INSERT into this table before calling your incremental load graph in Integrator. Thus, when reading your table (or, better yet, a denormalized view), you could issue your SQL SELECT statement with a few additional WHERE conditions that leverage a “last_update_date” column in your view, like so:

SELECT * FROM <View>
WHERE view_last_update_date >= (SELECT MAX(run_start_date) FROM audit_table WHERE run_status = 'Complete')
AND view_last_update_date < (SELECT MAX(run_start_date) FROM audit_table)

 

Once this incremental load graph completes, you’d need to update your audit table row with the end datetime of the run and set the run_status flag to 'Complete'.
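As a hypothetical sketch, the bookkeeping statements around each incremental run might look like the following (run_start_date and run_status come from the SELECT above, run_end_date is an assumed column name, and exact syntax varies by database):

-- Before calling the incremental load graph: record the start of this run
INSERT INTO audit_table (run_start_date, run_status)
VALUES (CURRENT_TIMESTAMP, 'Running');

-- After the graph completes successfully: close out the run
UPDATE audit_table
SET run_end_date = CURRENT_TIMESTAMP,
    run_status = 'Complete'
WHERE run_start_date = (SELECT MAX(run_start_date) FROM audit_table);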