Notes from the Summit
The Gartner Data and Analytics Summit 2019 in Orlando, that is. Good show. About 4,000 attendees, and most of the top ML and AI vendors. Folks who attend this show buy software. It’s a step up from some conferences, where you spend all day giving away stuff to people who can’t afford pants.
Orlando is fun.
This year, for the first time, Gartner included a track for data science and machine learning. Here are a few notes.
Bake-Offs and Showdowns
At previous events, Gartner organized “bakeoff” competitions with live demos by BI vendors. This year, for the first time, Gartner held a bakeoff for data science and machine learning vendors.
About 70 vendors applied to participate. Gartner selected three vendors for the event: Databricks, DataRobot (my employer), and SAS. Moderators shared some college admissions data and posed business questions. After each part of the bake-off, moderators polled the audience to see which vendor answered the questions most effectively.
Gartner asked participants not to disclose the results of audience polling. So I can’t say which vendor won all four parts of the competition. I won’t drop any hints. My lips are sealed.
Outside of the main bake-off, Gartner organized two “Showdowns.” Under the rules of the Showdowns, competitors pre-recorded their demonstrations and replayed them with live voice-over. Some observations:
Big Squid demonstrated Kraken, their augmented analytics software. Kraken integrates nicely with Tableau. Unlike the other presenters’ demos, Big Squid’s included some nice visuals.
dotData covered up the opaqueness of their UI by adding informative banners to the video. That seems like cheating.
H2O.ai showed Driverless AI. The guy doing the demo was so busy bragging about the product that he failed to provide any insight into the business problem, even when prompted by the moderator.
An IBM guy showed how you can write Python in Jupyter in Watson Studio. As if anyone cares.
RapidMiner‘s founder Ingo Mierswa delivered his customary intelligent demo of the product.
There was no live audience polling in the Showdowns, so no clear winners. Based on the level of applause, I’d say that Big Squid won the crowd.
<Squints>. “Data-ickoo? Did I say that right?”
Crowds throng the DataRobot booth.
SAS: “Hey, we can do AI too!”
A week before the show, SAS revealed its 2018 revenue with a stealth edit to the website and no press release. Sure enough, in 2018 SAS barely matched its 2017 revenue.
SAS announced a “billion dollar investment” in AI a couple of days before the show. Wow! A billion dollars! Is this big news?
Not so fast, partner. SAS says it will spend that money over three years. The company spends 26% of its revenue on R&D, a figure consistent with the rest of the software industry. If current spending patterns hold, in the next three years, SAS will spend $2.5 billion on R&D.
Still, a billion dollars is 40% of R&D, right? No. Read the press release. The “billion dollars” includes spending on a building that SAS already planned to build, plus education, plus services. Also, note that SAS defines “AI Solutions” to include just about every product it offers.
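The back-of-the-envelope math is easy to check. Here’s a minimal sketch, assuming roughly $3.27 billion in 2018 revenue (the figure SAS quietly posted) and the ~26% R&D share the company reports:

```python
# Back-of-the-envelope check on SAS R&D spending vs. the "$1B AI investment".
# Both inputs are assumptions: ~$3.27B 2018 revenue and a ~26% R&D share.
annual_revenue = 3.27e9
rd_share = 0.26
years = 3

# Business-as-usual R&D over the same three-year window as the AI pledge.
baseline_rd = annual_revenue * rd_share * years
print(f"Baseline R&D over {years} years: ${baseline_rd / 1e9:.2f}B")
# ~$2.5B, so the $1B pledge is roughly 40% of R&D that SAS would
# spend anyway -- before you subtract the building, education, and services.
```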
At the show, SAS sponsored a lunch. Keith Collins delivered a presentation with nice pictures but no substance.
Appropriately enough, lunch was rubber chicken.
Oh, wait, that’s not a science project. That’s a demo of SAS AI. Bottles slowly chug past a camera on a loop. Some of the bottles are open, others are sealed. SAS is so smart it can detect the open bottles and eject them from the process.
At least that’s what the SAS rep says it’s supposed to do.
Mr. SAS Rep explains that the demo runs Python models.
“Really?” I ask. “You’ve got a Python runtime in there?”
“Well, uh, no, we uploaded a model through the Neural Network Exchange Format.”
Mr. SAS Rep explains that the exhibit illustrates SAS’ investment in AI with new software, SAS Event Stream Processing.
Oh, wait. SAS introduced ESP in 2014. It’s now in Release 5.2.
A bottle chugs past the camera.
“Event Stream Processing computes a score in a second!” brags SAS Rep.
Wow. One second latency. Take that claim to a factory automation show and they’ll laugh you right off the floor. It’s like going to a Ferrari dealer and hearing a salesman brag that the 488 Spider can go 35 miles an hour.
An opened bottle, which SAS is supposed to reject, chugs past. And keeps on going, unrejected. The SAS rep stares, aghast. “That’s not supposed to happen!”
I laugh and move on.
The prize for biggest bullshitter goes to Mr. Aaron Cheng of dotData for his performance in “Why Do 85% of Enterprise Data Science Projects Fail?”
The title of the presentation, as they say in courtroom dramas, “assumes facts not in evidence.” 85% of data science projects fail? Uh-huh. We all know that some data science projects fail. 85%? No.
Organizations spend about $6 billion each year on data science and machine learning software, and about 3X that on services. Spending grows at double digits.
If 85% of all data science projects failed, people would spend a lot less.
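For scale, the figures above imply an annual market like this (a sketch; the $6 billion and 3X numbers come from the paragraph, the rest is arithmetic):

```python
# Implied annual market for data science and ML, using the figures above
# (assumptions: ~$6B/year on software, services at roughly 3x software).
software_spend = 6e9
services_spend = 3 * software_spend
total_spend = software_spend + services_spend
print(f"Implied total annual spend: ${total_spend / 1e9:.0f}B")
# A market this size, growing at double digits, is hard to square
# with an 85% failure rate.
```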
Unless, of course, you believe that everyone in the industry — buyers, sellers, and analysts — is stupid.
Somebody’s confused, and it isn’t me.
Mr. Cheng says he got this little factoid from “Gartner.” His source is this press release, which includes this sentence:
Only 15 percent of businesses reported deploying their big data project to production.
You can see the obvious problems here. Big data projects are not the same as data science projects; you can have successful data science projects with or without a big data project. Moreover, if you read Gartner’s press release, it’s clear that big data projects not yet in production haven’t failed; they’re still in the planning or pilot stage.
Mr. Cheng’s argument seems to be that since everyone in the data science business is failing, you should do business with dotData. That doesn’t strike me as a winning message.
Anyway, let’s have a round of applause for Mr. Cheng’s achievement.
Bullshitting: it’s a dirty job, but somebody has to do it.