2015 in Big Analytics

Looking back at 2015, a few stories stand out:

  • Steady progress for Spark, punctuated by two big announcements.
  • Solid growth in cloud-based machine learning, led by Microsoft.
  • Expanding options for SQL and OLAP on Hadoop.

In 2015, the most widely read post on this blog was Spark is Too Big to Fail, published in April.  I wrote this post in response to a growing chorus of snark about Spark written by folks who seemed to know little about the project and its goals.

IBM Embraces Spark

IBM’s commitment to Spark, announced on June 15, lit up the crowds gathered in San Francisco for the Spark Summit.  IBM brings a number of things to Spark: deep pockets to build a community, extensive technical resources and a large customer base.  It also brings a clutter of aging and partially integrated products, an army of suits and no fewer than 164 Vice Presidents whose titles include the words “Big Data.”

When IBM announced its Spark initiative, I joked that somewhere in the bowels of IBM, someone would want to put Spark on a mainframe.  Color me prophetic.

It’s too early to tell what substantive contributions IBM will make to Spark.  Unlike Mesosphere, Typesafe, Tencent, Palantir, Cloudera, Hortonworks, Huawei, Shopify, Netflix, Intel, Yahoo, Kixer, UC Berkeley and Databricks, IBM did not help test Release 1.5 in September.  This is a clear miss, given the scope of IBM’s resources and the volume of hype it puts out about its commitment to the project.

All that said, IBM brings respectability, and the assurance that Spark is ready for prime time.  This is priceless.  Since IBM’s announcement, we haven’t heard a peep from the folks who were snarking at Spark earlier this year.

Cloudera Announces “One Platform” Initiative

In September, Cloudera announced its One Platform initiative to unify Spark and Hadoop, an announcement that surprised everyone who thought Spark and Hadoop were already pretty well integrated.  As with the IBM announcement, the symbolism matters.  Some analysts took this announcement to mean that Cloudera is replacing MapReduce with Spark, which isn’t exactly true.  It’s fairer to say that in Cloudera’s vision, Hadoop users will rely more on Spark in the future than they do today, but MapReduce is not dead.

The “One Platform” positioning has more to do with Cloudera’s attempt to stem the tide of folks who use Spark outside of Hadoop.  According to Databricks’ recent Spark user survey, only 40% use Spark under YARN, with the rest running in a freestanding cluster or on Mesos.  It’s an understandable concern for Cloudera; I’ve never heard a fish seller suggest that we should eat less fish.  But if Cloudera thinks “One Platform” will stem that tide, it is mistaken.  It all boils down to use cases, and there are many use cases for Spark that don’t need Hadoop’s baggage.

Microsoft Builds Credibility in Analytics

In 2015, Microsoft took some big steps to demonstrate that it offers serious solutions for analytics.  The acquisition of Revolution Analytics, announced in January, was the first step; in one move, Microsoft acquired a highly skilled team and valuable software assets.  Since the acquisition, Microsoft has rolled Revolution’s enhanced R distribution into SQL Server and Azure, opening both platforms to the large and growing R community.

Microsoft’s other big move, in February, was the official launch of Azure Machine Learning (AML).   First released in beta in June 2014, AML is both easy to use and powerful.  The UI is simple to understand, and documentation is excellent; built-in analytic functionality is very rich, and the tool is extensible with custom R or Python scripts.  Microsoft’s trial user program is generous, and clearly designed to encourage adoption and use.

Azure Machine Learning contrasts markedly with Amazon Machine Learning.  Amazon’s offering remains a skeleton, with minimal functionality and an API only a developer could love.  Microsoft is clearly making a play for the data science market as a way to leapfrog Amazon.  If analytic capabilities are driving your choice of cloud platform, Azure is by far your best option.

SQL Engines Proliferate

At the beginning of 2015, there were two main options for SQL on Hadoop: Hive for batch SQL and Impala for interactive SQL.  Spark SQL was still in Alpha; Drill was a curiosity; and Presto was something used at Facebook.

Several things happened during the year:

  • Hive on Tez established rough performance parity with the fast SQL engines.
  • Spark SQL went to general release, stabilized, and rolled out the DataFrames API.
  • MapR promoted Drill, and invested in improvements to the software.  Also, MapR’s Drill team spun off and started Dremio to provide commercial support.
  • Cloudera donated Impala to open source, and Pivotal donated Hawq.
  • Teradata placed its chips on Presto.

While it’s great to see so many options emerge, Hive continues to win actual evaluations.  Given Hive’s large user and contributor base and existing stock of programs, it’s unclear how much traction Hive alternatives have now that Hive on Tez offers competitive performance.  Obviously, Cloudera doesn’t think Impala offers a competitive advantage anymore, or they would not have donated the assets to Apache.

The other big news in SQL is TPC’s release of a benchmarking standard for decision support with Big Data.

OLAP on Hadoop Gets Real

For folks seeking to perform dimensional analysis in Hadoop, 2015 delivered not one but two options.  The open source option, Apache Kylin, originally an eBay project, just recently graduated to Apache top level status.  Adoption is limited at present, but any project used by eBay and Baidu is worth a look.

The commercial option is AtScale, a company that emerged from stealth in April.  Unlike BI-on-Hadoop vendors like Datameer and Pentaho, AtScale provides a dimensional layer designed to work with existing BI tools.  It’s a nice value proposition for companies that have already invested big time in BI tools, and don’t want to add another UI to the mix.

Funding for Machine Learning

H2O.ai’s recently announced B round is significant for a couple of reasons.  First, it validates H2O.ai’s true open source business model; second, it confirms the continued growth and expansion of the user base for H2O as well as H2O.ai’s paid subscription base.

Like the dog that did not bark in the Sherlock Holmes story, two companies are significant because they did not secure funding in 2015:

  • Skytree, whose last funding round closed in April 2013, churned its executive team and rebranded a couple of times.  It finally listed some new customers; interestingly, some are investors and others are affiliated with members of Skytree’s Board.
  • Alpine Data Labs, last funded in November 2013, struggled to distance itself from the Pivotal ecosystem.  Designed to run on Greenplum, Alpine offers limited functionality on Hadoop, which makes it unclear how this company survives.

Palantir continued to suck up capital like a whale feeding on krill.

Google TensorFlow

Google open sourced TensorFlow, so now we have sixteen open source Deep Learning frameworks instead of just fifteen.

Forrester “Wave” for Predictive Analytics

Last week, Forrester published its 2015 “Wave” report for Big Data Predictive Analytics Solutions.  You can pay $2,495 and buy it directly from Forrester (here), or you can get the same report for free from SAS (here).

The report is inaptly named, as it commingles software that scales to Big Data (such as Alpine Chorus) with software that does not scale (such as Dell Statistica).  Nor does Big Data capability appear to impact the ratings; otherwise Alpine and Oracle would have scored higher than they did, and SAP would have scored lower.  IBM SPSS does not scale without Netezza or BigInsights; SAS scales only if you add one of its distributed in-memory back ends.  None of these products is listed among the evaluated software components.

Also, Forrester seriously needs to hire an editor.  Alteryx does not currently offer software branded as “Alteryx Analytics”, nor does SAS currently offer a bundle called the “SAS Analytics Suite.”

Forrester previously published this wave in 2013; key changes since then:

  • Among the Leaders, IBM edged past SAS for the top rating.
  • SAP’s rating did not change but its brand presence improved considerably, which demonstrates the uselessness of brand presence as a measure of value.
  • Oracle showed up at the beauty show this time, and improved its position slightly.
  • Statistica’s rating did not change, but its brand presence improved due to the acquisition by Dell.  (See SAP, above).  Shockingly, the addition of “Toad Data Point” to the Dell/Statistica solution did not move the needle.
  • Angoss improved its ratings and brand strength slightly.
  • TIBCO and Salford switched their analyst relations budgets from Forrester to Gartner and are gone from this report.
  • KXEN and Revolution Analytics are also gone due to acquisitions.  Interestingly, the addition of KXEN to SAP had no impact on SAP’s ratings, thus demonstrating that two plus zero is still two.
  • RapidMiner, Alteryx, FICO, Alpine, KNIME and Predixion are all new to the report.

Gartner issued its “Magic Quadrant” back in February; the comparisons are interesting:

  • KNIME is a “leader” in Gartner’s view, while Forrester considers the product to be decidedly mediocre.  Seems to me that Forrester has it about right.
  • Oracle did not participate in the Gartner MQ.
  • RapidMiner, a “leader” in the Gartner MQ, scores very well on Forrester’s “Current Offering” axis, but less well on “Strategy.”   This strikes me as a good way for Forrester to sell strategy consulting.
  • Microsoft and Alpine landed in Gartner’s Visionary quadrant but scored relatively low in Forrester’s assessment.  Both vendors have appealing strategies, and need to roll up their sleeves to deliver.
  • Predixion trails the pack in both reports.  Reminds me of high school gym class.

Forrester’s methodology places more weight on the currently available software, while Gartner places more emphasis on the vendor’s “vision.”  Vision is certainly important to consider when selecting a software vendor, but leadership tends to be self-sustaining; today’s category leaders are likely to be tomorrow’s category leaders, except when markets are disrupted — in which case analysts are rarely able to pick winners.

Gartner Advanced Analytics Magic Quadrant 2015

Gartner’s latest Magic Quadrant for Advanced Analytics is out; for reference, the 2014 report is here; analysis from Doug Henschen here.  Key changes from last year:

  • Revolution Analytics moves from Visionary to Niche
  • Alpine and Microsoft move from Niche to Visionary
  • Oracle, Actuate and Megaputer drop out of the analysis
Gartner 2015 Magic Quadrant, Advanced Analytics

Gartner changed its evaluation criteria this year to reflect only “native” (i.e., proprietary) functionality; as a result, Revolution Analytics dropped from Visionary to Niche.   Other vendors, it seems, complained to Gartner that the old criteria were “unfair” to those who don’t leverage open source functionality.  If Gartner applies this same reasoning to other categories, it will have to drop coverage of Hortonworks and evaluate Cloudera solely on the basis of Impala.  🙂

Interestingly, Gartner’s decision to ignore open source functionality did not impact its evaluation of open source vendors RapidMiner and KNIME.

Based on modest product enhancements from Version 4.0 to Version 5.0, Alpine jumped from Niche to Visionary.   Gartner’s inclusion criteria for the category mandate that “a vendor must offer advanced analytics functionality as a stand-alone product…”; this appears to exclude Alpine, which runs in Pivotal Greenplum database (*).  Gartner’s criteria are flexible, however, and I’m sure it’s purely coincidental that Gartner analyst Gareth Herschel flacks for Alpine.

(*) Yes, I know — Alpine supports other databases and Hadoop as well.   The number of Alpine customers who use it in anything other than Pivotal can meet in Starbucks at one of the little tables in the back.

Gartner notes that Alpine “still lacks depth of functionality. Several model techniques are either absent or not fully developed within its tool.”  Well, yes, that does seem important.   Alpine’s promotion to Visionary appears to rest on its Chorus collaboration capability (originally developed by Greenplum).  It seems, however, that customers don’t actually use Chorus very much; as Gartner notes, “adoption is currently slow and the effort to boost it may divert Alpine’s resources away from the core product.”

Microsoft’s reclassification from Niche to Visionary rests entirely on Azure Machine Learning (AML), a product still in beta at the time of the evaluation.  Hardly anyone uses MSFT’s “other” offering for analytics (SQL Server Analysis Services, or SSAS), apparently for good reason:

  • “The 2014 edition of SSAS lacks breadth, depth and usability, in comparison with the Leaders’ offerings.”
  • “Microsoft received low scores from SSAS customers for its willingness to incorporate their feedback into future versions of the product.”
  • “SSAS is a low-performing product (with poor features, little data exploration and questionable usability).”

On paper, AML is an attractive product, though it maxes out at 10GB of data; however, it seems optimistic to rate Microsoft as “Visionary” purely on the basis of a beta product.  “Visionary” is a stretch in any case — analytic software that runs exclusively in the cloud is by definition a niche product, as it appeals only to a certain segment of the market.  AML’s most attractive capabilities are its ability to run Python and R — and, as we noted above, these no longer carry any weight with Gartner.

Dropping Actuate and Megaputer from the MQ simply recognizes the obvious.  It’s not clear why these vendors were included last year in the first place.

It appears that Oracle chose not to participate in the MQ this year.  Analytics that run in a single database platform are by definition niche products — you can’t use Oracle Advanced Analytics if you don’t have Oracle Database, and few customers will choose Oracle Database because it has Oracle Advanced Analytics.

 

Still More Comments on Microsoft and Revolution Analytics

Three full business days post-announcement, and stories continue to roll in.

Stephen Sowyer of TDWI writes an excellent summary of what Microsoft will likely do with Revolution Analytics.  He correctly notes, for example, that Microsoft is unlikely to develop a business user interface for R with code-generating capabilities (comparable to SAS Enterprise Guide).  This is difficult to do, and the demand is low; people who care about R tend to like working in a programming environment, and value the ability to write their own code.  Business users, on the other hand, tend to be indifferent to the underlying code generated by the application.

Since Revolution’s Windows-based IDE requires some investment to keep it competitive, the most likely scenario is that Microsoft will add R to the Visual Studio suite.

Mr. Sowyer also notes that popular data warehouses (such as Oracle, IBM Netezza and Teradata Aster) can run R scripts in-database.  While this is true, what these databases cannot do is run R scripts in distributed mode, which limits the capability to embarrassingly parallel tasks.  Enabling R scripts to run in distributed databases — necessary for Big Data — is a substantial development project, which is why Revolution Analytics completed only two such ports (one to Hadoop and one to Teradata).

While Microsoft’s deep pockets give Revolution Analytics the means to support more platforms, they still need the active collaboration of database vendors.  Oracle and Pivotal have their own strategies for R, so partnerships with those vendors are unlikely.

For some time now, commercial database vendors have attempted to differentiate their products by including machine learning engines.  Teradata was the first, in 1987, followed by IBM DB2 in 1992; SQL Server followed in the late 1990s, and Oracle acquired what was left of Thinking Machines in 1999 primarily so it could build the Darwin software for predictive analytics into Oracle Database.  None of these efforts has gained much traction with working analysts, for several reasons: (1) database vendors generally sell to the IT organization and not to an organization’s end users; (2) as a result, most organizations do not link the purchase decision for databases and analytics; (3) users of predictive analytics tend to be few in number compared to SQL and BI users, and their needs tend to get overlooked.

Bottom line: I think it is doubtful that Microsoft will pursue enabling R to run in relational databases other than SQL Server, and they will drop Revolution’s “Write Once Deploy Anywhere” tagline, as it is impossible to deliver.

Elsewhere, Mr. Dan Woods doubles down on his argument that Microsoft should emulate Tibco, which is like arguing that the Seattle Seahawks should emulate the Jacksonville Jaguars.  Sorry, JAX; it just wasn’t your year.

 

More Comments on Microsoft + Revolution Analytics

My inbox continues to fill with Google Alerts about Microsoft’s announced purchase of Revolution Analytics — too numerous to link.

Most of these stories simply repackage the Microsoft announcement.

Clint Boulton of the WSJ’s CIO Journal writes one of the best analyses:

Microsoft is betting on the timeliness of its acquisition as more businesses adopt analytics. Revolution’s software helps companies use R, an open source programming language that more than two million programmers use daily to build predictive models. R is popular among university computer science students, many of whom continue to use it in their careers as data scientists.

Data scientists extract data from a data warehouse or Hadoop processing system, use R to slice and dice it for insights, and visualize the results. But businesses analyzing financial, social media and other data often need to scale the analytics across clusters of computers.

Several analysts pass along the factoid that two million people use R.   The truth is that nobody has any idea how many people use R; we don’t even know how many have downloaded the software.  The New York Times pointed out the difficulty in its piece five years ago:

While it is difficult to calculate exactly how many people use R, those most familiar with the software estimate that close to 250,000 people work with it regularly.

It’s possible that R has gained 1,750,000 users in the intervening five years.  It’s also possible that R has gained 10,000,000 users.  “Those most familiar with the software” are simply guessing.

While most analysts are neutral to positive on Microsoft’s move, Mr. Dan Woods takes a contrary view.  In an article published in Forbes and cross-posted on multiple platforms, Mr. Woods argues that Microsoft was wrong to buy Revolution Analytics, and instead should buy Tibco.   (That is the implication of his argument that Microsoft should “emulate” Tibco, since the only way to “emulate” Tibco is to own the clump of software Tibco packages up as TERR.)

Mr. Woods is a “content specialist”, as freelance writers call themselves today, and his expertise in analytics is exemplified by his most recent book, Wikis for Dummies, published in 2007.  One suspects that the private equity firm that acquired Tibco in September is peddling the pieces, and has engaged “content specialists” to bang the drum.

Mr. Woods gets two things right.  It’s true that R is a mess, and it is also true that the GPL license makes R difficult to commercialize.  R’s messiness is a byproduct of crowdsourced development; it is a feature to its devotees and a bug for everyone else.  (For those who simply cannot tolerate R’s messiness there is a simple solution: use Python.)  Under the GPL license, any enhancements become part of the free distribution, so if you distribute a product built with R you must share the source code of your product as well.

At the crux of his argument, though, Mr. Woods gets it wrong:

Revolution Analytics has made a business, like many open source-based companies, of supporting Open Source R.

This is factually incorrect.  Revolution only recently started to offer a consulting service for open source R users; for most of its history, its business was built around Revolution R Enterprise, a commercially supported enhanced R distribution.  This is not a trivial distinction.  Cloudera Hadoop, for example, is based on Apache Hadoop, but it is not the same thing; while many enterprises use commercially supported Hadoop distributions (from vendors like Cloudera, Hortonworks or MapR), hardly anyone uses open source Apache Hadoop in production.

The same is true for R; while many enterprises balk at using open source R, they are willing to deploy commercially supported R distributions (such as Oracle R or Revolution R).  This is the business Microsoft enters by acquiring Revolution Analytics.

Regarding Mr. Woods’ point about the need to rebuild R from the ground up, that is neither possible nor necessary.  The GPL license prevents anyone from “rebuilding” R as a commercial venture; if anyone “rebuilds” the language it will be the open source development team itself.

In any case, one need not “make R scale” — one need only provide an R API to other platforms (such as Apache Spark or dbLytix) that can scale, so that R users can interface with them.   This is the approach taken by Revolution Analytics’ ScaleR software, which is actually written in C but includes an interface from the R programming language.  By building this component into Azure, Microsoft can offer those who use R locally a scalable back end.
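
To illustrate the pattern in code (using PySpark as an analogy, not ScaleR itself), here is a minimal sketch in which the analyst works through a thin local API while the heavy computation runs on a cluster.  The file path and column names are hypothetical.

```python
# Sketch: a local-feeling API that delegates model fitting to a distributed
# engine (Spark).  The analyst writes a few lines locally; the cluster does
# the work.  The HDFS path and the column names ("amount", "balance",
# "fraud") are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("thin-local-frontend").getOrCreate()

# Read a large file that lives on the cluster, not on the analyst's laptop.
df = spark.read.csv("hdfs:///data/transactions.csv", header=True, inferSchema=True)

# Assemble predictor columns and fit a logistic regression across the cluster.
assembler = VectorAssembler(inputCols=["amount", "balance"], outputCol="features")
train = assembler.transform(df).withColumnRenamed("fraud", "label")
model = LogisticRegression(maxIter=20).fit(train)

print(model.coefficients, model.intercept)
spark.stop()
```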

Update: Mr. Woods doubles down here.

Microsoft Buys Revolution Analytics

On Friday, January 23, Microsoft announced an agreement to acquire Revolution Analytics.  Coverage of the announcement in the media is extensive, with stories by TechCrunch, Wired, ZDNet, VentureBeat and many others (here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here and here.)

Microsoft did not disclose the negotiated purchase price; Revolution’s total capitalization is around $40 million.  Given Revolution’s scale of operations, the acquisition will have minimal impact on Microsoft’s near-term revenue and profit.

Many analysts follow Microsoft, but few have heard of Revolution Analytics, and most seem to be stumped by this move.  An example:

Question: What is the significance of Microsoft acquiring Revolution Analytics?

Answer: I am not sure.

Microsoft gets four things with this deal:

  • Instant credibility with the growing open source analytics community
  • Consulting and support skills to help enterprise customers adopt R
  • A capable engineering organization (conveniently located in Seattle)
  • Software bits that should integrate well with the Microsoft stack

In addition to its primary offering, Revolution R Enterprise, Revolution distributes Revolution R Open, an enhanced free distribution of open source R, and Revolution R Cloud, an elastic offering on the AWS Marketplace.  Revolution R Open is equivalent in many respects to Oracle R Distribution, which is also compiled with the Intel Math Kernel Libraries.  Revolution R Plus is commercially supported, and includes additional software bits for enterprise integration; this product is comparable to Oracle R Enterprise.

Revolution Analytics’ other key software assets include ScaleR, a distributed out-of-memory back end with a strong R interface; DeployR, a component that supports enterprise deployment of web-based applications; and DevelopR, a Windows-based IDE.

While the IDE has a number of useful features, it requires significant investment to compete effectively with RStudio, which has won the hearts and minds of R users.  Upgrading software simply to make it competitive with a “free” competitor strikes me as a dubious commercial move; it seems more likely that Microsoft will add an R capability to the Visual Studio suite.

Revolution’s ScaleR back end enables R users to leverage a platform for distributed analytics.  ScaleR already runs on Windows Server HPC clusters, which should make integration with Azure a straightforward matter.  This is important for Microsoft, since Azure Machine Learning currently maxes out at around 10GB.

ScaleR’s integration with Hadoop currently runs through MapReduce; competing best-in-class Hadoop analytics (such as Spark, H2O, Skytree and SAS) run in memory for better performance.  Microsoft’s deep pockets give Revolution the means to make this product competitive.

Distributed Analytics: A Primer

Can we leverage distributed computing for machine learning and predictive analytics? The question keeps surfacing in different contexts, so I thought I’d take a few minutes to write an overview of the topic.

The question is important for four reasons:

  • Source data for analytics frequently resides in distributed data platforms, such as MPP appliances or Hadoop;
  • In many cases, the volume of data needed for analysis is too large to fit into memory on a single machine;
  • Growing computational volume and complexity require more throughput than we can achieve with single-threaded processing;
  • Vendors make misleading claims about distributed analytics in the platforms they promote.

First, a quick definition of terms.  We use the term parallel computing to mean the general practice of dividing a task into smaller units and performing them in parallel; multi-threaded processing means the ability of a software program to run multiple threads (where resources are available); and distributed computing means the ability to spread processing across multiple physical or virtual machines.

The principal benefits of parallel computing are speed and scalability; if it takes a worker one hour to make one hundred widgets, one hundred workers can make ten thousand widgets in an hour (ceteris paribus, as economists like to say).  Multi-threaded processing is better than single-threaded processing, but shared memory and machine architecture impose a constraint on potential speedup and scalability.  In principle, distributed computing can scale out without limit.

The ability to parallelize a task is inherent in the definition of the task itself.  Some tasks are easy to parallelize, because computations performed by each worker are independent of all other workers, and the desired result set is a simple combination of the results from each worker; we call these tasks embarrassingly parallel.   A SQL Select query is embarrassingly parallel; so is model scoring; so are many of the tasks in a text mining process, such as word filtering and stemming.
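
To make this concrete, here is a minimal sketch in Python of an embarrassingly parallel task: scoring records against a fixed model, with each worker handling one chunk and the combined result being a simple concatenation.  The coefficients and data are made up for illustration.

```python
# Sketch of an embarrassingly parallel task: scoring records with a fixed
# model.  Each worker scores its chunk independently; the combined result is
# just the concatenation of the per-chunk outputs.
import math
from concurrent.futures import ProcessPoolExecutor

COEFS = [0.8, -1.2, 0.3]   # hypothetical fitted coefficients
INTERCEPT = -0.5

def score_chunk(rows):
    """Apply a fixed logistic scoring function to every row in the chunk."""
    out = []
    for row in rows:
        z = INTERCEPT + sum(b * x for b, x in zip(COEFS, row))
        out.append(1.0 / (1.0 + math.exp(-z)))
    return out

if __name__ == "__main__":
    data = [[0.1 * i, 0.2 * i, 0.3 * i] for i in range(1000)]
    chunks = [data[i:i + 250] for i in range(0, len(data), 250)]   # 4 chunks
    with ProcessPoolExecutor(max_workers=4) as pool:
        scored = [s for chunk in pool.map(score_chunk, chunks) for s in chunk]
    print(len(scored))   # 1000 scores, in the same order as the input
```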

A second class of tasks requires a little more effort to parallelize.  For these tasks, computations performed by each worker are independent of all other workers, and the desired result set is a linear combination of the results from each worker.  For example, we can parallelize computation of the mean of a variable stored in a distributed database by computing the mean and row count independently for each worker, then computing the grand mean as the weighted mean of the worker means.  We call these tasks linear parallel.
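
Here is a small sketch of the weighted-mean example, assuming each worker returns only its local mean and row count:

```python
# Sketch of a linear parallel task: each worker returns its local mean and
# row count, and the coordinator combines them into the grand mean as a
# weighted average.  No worker needs to see another worker's data.
def local_summary(values):
    return sum(values) / len(values), len(values)

def grand_mean(summaries):
    total_rows = sum(n for _, n in summaries)
    return sum(m * n for m, n in summaries) / total_rows

# Three "workers", each holding one partition of the data.
partitions = [[1.0, 2.0, 3.0], [10.0, 20.0], [4.0]]
summaries = [local_summary(p) for p in partitions]
print(grand_mean(summaries))   # same value as the mean over all rows pooled
```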

There is a third class of tasks, which is harder to parallelize because the data must be organized in a meaningful way.  We call a task data parallel if computations performed by each worker are independent of all other workers so long as each worker has a “meaningful” chunk of the data.  For example, suppose that we want to build independent time series forecasts for each of three hundred retail stores, and our model includes no cross-effects among stores; if we can organize the data so that each worker has all of the data for one and only one store, the problem will be embarrassingly parallel and we can distribute computing to as many as three hundred workers.
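
A sketch of the store-level forecasting example appears below; the store identifiers and the naive moving-average forecast are purely illustrative.  The key step is organizing the data so that each worker sees all of the rows for one store and nothing else; after that, the per-store work is embarrassingly parallel.

```python
# Sketch of a data parallel task: independent per-store forecasts.  The
# grouping step plays the role of the "shuffle" that organizes the data into
# meaningful chunks; the forecast itself is a naive moving average, used
# only to keep the example short.
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

def forecast_store(args):
    store_id, sales = args
    window = sales[-3:]                      # naive: average of the last 3 periods
    return store_id, sum(window) / len(window)

def group_by_store(rows):
    groups = defaultdict(list)
    for store_id, period, amount in sorted(rows):   # organize rows by store
        groups[store_id].append(amount)
    return groups

if __name__ == "__main__":
    rows = [("store_001", 1, 100.0), ("store_001", 2, 110.0), ("store_001", 3, 95.0),
            ("store_002", 1, 50.0), ("store_002", 2, 60.0), ("store_002", 3, 70.0)]
    groups = group_by_store(rows)
    with ProcessPoolExecutor() as pool:
        forecasts = dict(pool.map(forecast_store, groups.items()))
    print(forecasts)   # one independent forecast per store
```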

While data parallel problems may seem to be a natural application for processing inside an MPP database or Hadoop, there are two constraints to consider.  For a task to be data parallel, the data must be organized in chunks that align with the business problem.  Data stored in distributed databases rarely meets this requirement, so the data must be shuffled and reorganized prior to analytic processing, a process that adds latency.  The second constraint is that the optimal number of workers depends on the problem; in the retail forecasting problem cited above, the optimal number of workers is three hundred.  This rarely aligns with the number of nodes in a distributed database or Hadoop cluster.

There is no generally agreed label for tasks that are the opposite of embarrassingly parallel; for convenience, I use the term orthogonal to describe a task that cannot be parallelized at all.  In analytics, case-based reasoning is the best example of this, as the method works by examining individual cases in a sequence.  Most machine learning and predictive analytics algorithms fall into a middle ground of complex parallelism; it is possible to divide the data into “chunks” for processing by distributed workers, but workers must communicate with one another, multiple iterations may be required and the desired result is a complex combination of results from individual workers.
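
For that middle ground, here is a toy sketch of a complex parallel computation, distributed gradient descent for logistic regression, in which workers compute gradients on their own partitions and a coordinator aggregates them on every iteration.  The data and settings are illustrative only.

```python
# Sketch of a "complex parallel" computation: distributed gradient descent
# for logistic regression.  Each iteration, every worker computes the
# gradient on its own partition; the coordinator sums the gradients, updates
# the weights and broadcasts them back.  Workers must communicate every
# iteration, unlike the embarrassingly parallel cases above.
import math

def partition_gradient(weights, rows):
    """Gradient of the logistic loss over one partition; rows are (x_vec, y)."""
    grad = [0.0] * len(weights)
    for x, y in rows:
        p = 1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(weights, x))))
        for j, xi in enumerate(x):
            grad[j] += (p - y) * xi
    return grad

def distributed_fit(partitions, n_features, iterations=100, lr=0.1):
    weights = [0.0] * n_features
    for _ in range(iterations):                                        # one round trip per pass
        grads = [partition_gradient(weights, p) for p in partitions]   # done on workers
        total = [sum(g[j] for g in grads) for j in range(n_features)]  # done by coordinator
        weights = [w - lr * g for w, g in zip(weights, total)]
    return weights

partitions = [
    [([1.0, 0.2], 0), ([1.0, 0.9], 1)],   # node 1's rows
    [([1.0, 0.4], 0), ([1.0, 0.8], 1)],   # node 2's rows
]
print(distributed_fit(partitions, n_features=2))
```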

Software for complex machine learning tasks must be expressly designed and coded to support distributed processing.  While it is physically possible to install open source R or Python in a distributed environment (such as Hadoop), machine learning packages for these languages run locally on each node in the cluster.  For example, if you install open source R on each node in a twenty-four node Hadoop cluster and try to run logistic regression you will end up with twenty-four logistic regression models developed separately for each node.  You may be able to use those results in some way, but you will have to program the combination yourself.
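
The toy sketch below illustrates the point: fitting a regression slope separately on each node's partition yields one model per node, and naively averaging those models does not reproduce the model fit on the pooled data.

```python
# Toy illustration of the pitfall described above: fitting a model
# independently on each node's partition yields one model per node, and a
# naive combination (here, averaging the coefficients) generally does not
# equal the model fit on the pooled data.  The data and partitioning are
# made up purely for illustration.
def ols_slope(points):
    """Least-squares slope (no intercept) for a list of (x, y) pairs."""
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return sxy / sxx

partitions = [                       # what each "node" sees locally
    [(1.0, 1.0), (2.0, 2.1)],
    [(10.0, 12.0), (11.0, 13.5)],
]
local_slopes = [ols_slope(p) for p in partitions]
averaged = sum(local_slopes) / len(local_slopes)
pooled = ols_slope([pt for p in partitions for pt in p])
print(local_slopes, averaged, pooled)   # averaged != pooled
```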

Legacy commercial tools for advanced analytics provide only limited support for parallel and distributed processing.  SAS has more than 300 procedures in its legacy Base and STAT software packages; only a handful of these support multi-threaded (SMP) operations on a single machine;  nine PROCs can support distributed processing (but only if the customer licenses an additional product, SAS High-Performance Statistics).  IBM SPSS Modeler Server supports multi-threaded processing but not distributed processing; the same is true for Statistica.

The table below shows currently available distributed platforms for predictive analytics; the table is complete as of this writing (to the best of my knowledge).

Distributed Analytics Software, May 2014

Several observations about the contents of this table:

(1) There is currently no software for distributed analytics that runs on all distributed platforms.

(2) SAS can deploy its proprietary framework on a number of different platforms, but it is co-located and does not run inside MPP databases.  Although SAS claims to support HPA in Hadoop, it seems to have some difficulty executing on this claim, and is unable to describe even generic customer success stories.

(3) Some products, such as Netezza and Oracle, aren’t portable at all.

(4) In theory, MADlib should run in any SQL environment, but the Pivotal Greenplum database appears to be the primary platform.

To summarize key points:

— The ability to parallelize a task is inherent in the definition of the task itself.

— Most “learning” tasks in advanced analytics are not embarrassingly parallel.

— Running a piece of software on a distributed platform is not the same as running it in distributed mode.  Unless the software is expressly written to support distributed processing, it will run locally, and the user will have to figure out how to combine the results from distributed workers.

Vendors who claim that their distributed data platform can perform advanced analytics with open source R or Python packages without extra programming are confusing predictive model “learning” with simpler tasks, such as scoring or SQL queries.

Smart Money: Venture Capital for Analytics 2013

Thanks to Crunchbase’s downloadable database, we can report that in 2013 investors poured more than $2 billion into analytics startups, up 38% from 2012.  Crunchbase reports that 2013 funding for analytics ventures was more than five times greater than in 2009.

Source: Crunchbase

Palantir led the pack in new funding, going to the well twice, in October and December, to raise a total of $304m based on a valuation of $9b.  As a point of reference, at 4X revenue, industry leader SAS is worth about $12b.

Funding flowed to companies that build advanced analytics into focused vertical or horizontal solutions.  Examples include:

Investors paid special attention to vendors who specialize in social media analytic platforms:

Capital also flowed to companies offering general-purpose software, platforms and services for analytics, including:

Investors continue to fund startups offering easy-to-use interfaces for the business user, including:

Top investors in Analytics for 2013 include:

Clearly, investors are placing bets on a robust future for analytics.

Strata Report: Advanced Analytics in Hadoop

Here is a quick review of the capabilities for advanced analytics in Hadoop for five vendors at the recent Strata NYC conference:

0xData

Product(s)

  • H2O (open source project)
  • h2o (R package)

Description

Smart people from Stanford with VC backing and a social media program.   Services business model with open source software.  H2O is an open source library of algorithms designed for deployment in Hadoop or free-standing clusters; aggressive vision, but currently available functionality is limited to GLM, k-Means and Random Forests.   Update: 0xData just announced H2O 2.0, which includes distributed tree and regression algorithms such as Gradient Boosting Machine (GBM), Random Forest (RF) and Generalized Linear Modeling (GLM), as well as k-Means and Principal Component Analysis (PCA).  They also claim to run “100X faster than other predictive analytics providers”, although this claim is not supported by evidence.  R users can interface through the h2o package.  Limited customer base.  Partners with Cloudera and MapR.

Key Points

  • True open source model
  • Comprehensive roadmap
  • Limited functionality
  • Limited user base
  • Performance claims undocumented

Alpine Data Labs

Product(s)

  • Alpine 2.8

Description

Alpine targets a business user persona with a visual workflow-oriented interface (comparable to SAS Enterprise Miner or SPSS Modeler).   Supports a reasonably broad range of analytic features.  Claims to run “in” a number of databases and Hadoop distributions, but the company is opaque about how this works (it appears to be SQL/HiveQL push-down).   In practice, most customers seem to use Alpine with Greenplum.  A thin sales and customer base relative to the claimed feature mix suggests uncertainty about product performance and stability.  Partners with Pivotal, Cloudera and MapR.

Key Points

  • Reasonable option for users already committed to Greenplum Database
  • Limited partner and user ecosystem
  • Performance and stability should be vetted thoroughly in POC

Oracle

Product(s)

  • Oracle R Distribution (ORD)
  • Oracle R Enterprise
  • Oracle Advanced Analytics
  • Oracle R Connector for Hadoop (ORCH)

Description

Oracle R Distribution (ORD) is a free distribution of R with bug fixes and performance enhancements; Oracle R Enterprise is a supported version of ORD with additional enhancements (detailed below).

Oracle Advanced Analytics (an option of Oracle Database Enterprise Edition) bundles Oracle Data Mining, a distributed data mining engine that runs in Oracle Database, and Oracle R Enterprise.   Oracle Advanced Analytics provides an R to SQL transparency layer that maps R functions and algorithms to native in-database SQL equivalents.  When in-database equivalents are not available, Oracle Advanced Analytics can run R commands under embedded R mode.

Oracle R Connector for Hadoop (ORCH) is an R interface to Hadoop; it enables the user to write MapReduce tasks in R and interface with Hive.  As of ORCH 2.1.0, there is also a fairly rich collection of machine learning algorithms for supervised and unsupervised learning that can be pushed down into Hadoop.

Key Points

  • Good choice for Oracle-centric organizations
  • Oracle Data Mining is a mature product with an excellent user interface
  • Must move data from Hadoop to Oracle Database to leverage OAA
  • Hadoop push-down from R requires expertise in MapReduce

SAS

Products

  • SAS/ACCESS Interface to Hadoop
  • SAS Scoring Accelerator for Cloudera
  • SAS Visual Analytics/SAS LASR Server
  • SAS High Performance Analytics Server

Description

SAS/ACCESS Interface to Hadoop enables SAS users to pass Hive, Pig or MapReduce commands to Hadoop through a connection and move the results back to the SAS server.   With SAS/ACCESS you can haul your data out of Hadoop, plug it into SAS and use a bunch of other SAS products, but that architecture is pretty much a non-starter for most Strata attendees.   Update:  SAS has announced SAS/ACCESS for Impala.

Visual Analytics is a Tableau-like visualization tool with limited predictive analytic capabilities; LASR Server is the in-memory back end for Visual Analytics.  High Performance Analytics is a suite of distributed in-memory analytics.   LASR Server and HPA Server can be co-located in a Hadoop cluster, but require special hardware.  Partners with Cloudera and Hortonworks.

Key Points

  • Legacy SAS connects to Hadoop, does not run in Hadoop
  • SAS/ACCESS users must know exact Hive, Pig or MapReduce syntax
  • Visual Analytics cannot work with “raw” data in Hadoop
  • Minimum hardware requirements for LASR and HPA significantly exceed standard Hadoop worker node specs
  • High TCO, proprietary architecture for all SAS products

Skytree

Product(s)

  • Skytree Server

Description

An academic machine learning project (FastLab, at Georgia Tech) that launched as a commercial software vendor in January 2013 with VC backing.  Server-based technology that can connect to a range of data sources, including Hadoop.  Programming interface; claims the ability to run from R, Weka, C++ and Python.  Good library of algorithms.  Partners with Cloudera, Hortonworks and MapR.  Skytree is opaque about technology and performance claims.

Key Points

  • Limited customer base, no announced sales since company launch
  • Hadoop integration is a connection, not “inside” architecture
  • Performance claims should be carefully vetted

SAS Analyst Conference: Take Two

Analyst comments about SAS’ 24th annual analyst conference continue to dribble out.   Ordinarily, events like this produce a storm of Google alerts, but this year the quiet speaks volumes.   Yesterday, Tony Cosentino of Ventana Research published his perspective on the conference, writing at length about SAS Visual Analytics; link here.

Here are a few quotes from Mr. Cosentino’s post, with my embedded comments.

“For SAS, the competitive advantage in Big Data rests in predictive analytics…

…a capability that is completely absent from the current version of SAS Visual Analytics, the software featured in Mr. Cosentino’s article.  The big “news” of the analyst conference is that SAS says they plan to add some toylike predictive analytics to Visual Analytics this year, which will give the application functional parity with, say, MicroStrategy vintage 1999.  I don’t completely understand why this is news at all, since SAS said they would do this at the analyst conference last year, but spent 2012 attempting to sell their other in-memory architecture without visible success.

“…according to our benchmark research into predictive analytics, 55 percent of businesses say the challenge of architectural integration is a top obstacle to rolling out predictive analytics in the organization.”

No doubt this is true, and SAS’ proprietary server-based architecture is one reason why this is a problem.  SAS/STAT, for example, is still one of the most widely used SAS products, and it exports predictive models to nothing other than SAS.  SAS Visual Analytics simply adds to the clutter by introducing an entirely new architecture into the mix that is hard to integrate with legacy SAS products in the same category.  For more details about the data integration challenges posed by SAS Visual Analytics, see my previous post.

“Integration of analytics is particularly daunting in a big-data-driven world, since analytics processing has traditionally taken place on a platform separate from where the data is stored…”

A trend that continues with SAS Visual Analytics, which is deployed on a platform separate from where the data is stored.

“Jim Goodnight, the company’s founder and plainspoken CEO, says he saw the industry changing a few years ago. He speaks of a large bank doing a heavy analytical risk computation that took upwards of 18 hours, which meant that the results of the computation were not ready in time for the next trading day.”

Banks have suffered serious performance issues with analytics for more than “a few years”.    And 18 hours is pretty good compared to some; there are organizations with processes that take days and weeks to run in SAS.

“Goodnight also discussed the fact that building these parallelizing statistical models is no easy task. One of the biggest hurdles is getting the mathematicians and data scientists that are building these elaborate models to think in terms of the new parallelized architectural paradigm.”

Really?  Parallelized algorithms for statistics and data mining are hardly new, and commercial versions first appeared on the market in 1994.  There are companies with a fraction of SAS’ headcount that are able to roll out parallelized algorithms without complaining about how hard it is to do.  A few examples:  Alpine Data Labs,  Fuzzy Logix,   Revolution Analytics (my current employer) and Skytree.

“The biggest threat to SAS today is the open source movement, which offers big data analytic approaches such as Mahout and R.”

If this is true, SAS Visual Analytics is not an effective response because it caters to a completely different user persona.  The biggest threats to SAS today are IBM, SAP and Oracle, who have the analytic tooling, deep pockets and credibility to challenge SAS in the enterprise analytics market.  SAS Visual Analytics seems more like an attempt to compete with SAP HANA.

“At the same time, SAS possesses blueprints for major analytic processes across different industries as well as horizontal analytic deployments, and it is working to move these to a parallelized environment. This may prove to be a differentiator in the battle versus R, since it is unclear how quickly the open source R community, which is still primarily academic, will undertake the parallelization of R’s algorithms.”

Actually, it’s already done.