Is AI Failing?

Written by: Thomas W. Dinsmore

Nobody believes that every AI project succeeds. Just ask MD Anderson. Anderson blew $60 million on a Watson project before pulling the plug.

That project was a clown show. A report published by University of Texas auditors found that project leadership:

  • Did not use proper contracting and procurement procedures
  • Failed to follow IT Governance processes for project approval
  • Did not effectively monitor vendor contract delivery
  • Overspent pledged donor funds by $12 million

IT personnel working on the project hesitated to report exceptions because the project leader’s husband was MD Anderson’s President. Project scope grew like kudzu. MD Anderson executed 15 contracts and amendments in a series of incremental expansions. The budget for many of these was just below the threshold for Board approval, which suggests deliberate structuring to avoid scrutiny.

Interestingly, the massive expansion in project scope coincided with a $50 million pledge from “billionaire party boy” Low Taek Jho. (Jho recently cut a deal with the US government to avoid prosecution on charges related to the 1MDB scandal.)

So it’s not news that some AI projects fail. 

Last week, Fast Company published this piece with the clickbait title of Why AI is Failing Business. The authors, an economist and the two co-founders of a tiny startup, want you to believe that failure is the norm for AI projects. 

The article exemplifies a genre I call Everyone is Stupid Except Us. Practitioners of this approach paint a dire picture of current practices. The implicit message is that they have a magic bean that will set things straight. 

Citing an IDC report, the authors write that “most organizations reported failures among their AI projects, with a quarter of them reporting up to a 50% failure rate.”

Wow. Fifty fucking percent.

That number sounds fishy, so I pulled the report and checked with the author. Here’s the pertinent page:

The first part of the authors’ claim is correct. About 92% of the organizations surveyed by IDC reported one or more AI project failures.

The rest is misconstrued. About 2% of respondents reported failure rates as high as 50%, and 21% reported a failure rate of more than 30%.

Most respondents reported a failure rate below 30%.

In an ideal world, no AI project would fail. But put that failure rate in context. According to a report from the Project Management Institute, only about 70% of all projects completed in 2017 met original goals and business intent. That works out to roughly a 30% shortfall rate for projects of every kind, in the same range as the AI failure rates IDC reports.

In other words, AI projects are no more or less likely to fail than any other IT project.

The authors of the Fast Company piece bloviate for another 11 paragraphs about why AI projects fail. They could have just shifted their eyeballs to the right on the page they misquote, where IDC tabulates the reasons for AI project failure. The top five cited by respondents are, in descending order:

  1. AI technology didn’t perform as expected or as promised
  2. Lacked staff with the necessary expertise
  3. Unrealistic expectations
  4. The business case wasn’t well enough understood
  5. Lack of follow-up from the business units

That first reason needs unpacking. Projects rarely fail because technology does not do what it is supposed to do. Projects fail because the buyer wants something the technology isn’t designed to deliver, or the organization cuts corners on implementation. In most cases, the customer and vendor share responsibility for that failure. The vendor may make misleading or exaggerated claims, the customer may fail to define requirements, or the customer may not perform the necessary due diligence.

It’s easier to blame the technology, though.

AI projects are the same as ERP projects or any other IT project. They succeed or fail based on the organization’s project management processes.

Next time you’re at a trade show when some AI vendor starts braying about their magic bean, do yourself a favor. Move on to the next booth.

8 responses to “Is AI Failing?”

  1. James Ray

    If anyone wants to know how bad “journalism” has become, just fact-check any statistic in a headline like you did! My data is purely anecdotal, but 100% of the time I dig deeper into the statistical basis for clickbait headlines, I find spurious inferences and conclusions.

    1. Thomas W. Dinsmore

      Agreed. And in this case, since the article is obviously planted by the vendor, it’s fake journalism.

  2. Zathras

    The interesting wrinkle with AI projects is that it takes more effort to know whether they are really working in production or not. For most production IT projects that break, there is usually a blaring red siren screaming that there is a problem: either data does not load, or a visualization is not accessible, or something else of the sort. For many AI projects, a model in production keeps handing out results; the results are often plausible, but the model is not working the way it was designed to. Often the live data is fundamentally different from the data used to build the model, so the problem could be detected if someone actually checked error rates or other metrics. However, this model monitoring is done far less often than it should be, and even when it is, it can take some time to determine that the model is not working as designed.

    1. Thomas W. Dinsmore

      If a prediction API goes down, it sets off a red flag. Tools to monitor data drift for models in production are now widely available. They include SAS Model Manager, IBM Watson OpenScale, DataRobot MLOps and many others.
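
      For teams that don’t have one of those platforms, the underlying check is easy to sketch. The following is a minimal, hypothetical Python example (assuming numpy and scipy are installed; the variable names and threshold are made up): it compares the distribution of a model input or score in the training data against recent live data and flags a statistically significant shift.

      import numpy as np
      from scipy.stats import ks_2samp

      def check_drift(train_values, live_values, alpha=0.01):
          """Flag drift when the live distribution of a feature (or of the
          model's scores) no longer matches the training distribution,
          using a two-sample Kolmogorov-Smirnov test."""
          result = ks_2samp(train_values, live_values)
          return result.pvalue < alpha, result.statistic, result.pvalue

      # Synthetic illustration: live data shifted relative to training data.
      rng = np.random.default_rng(0)
      train_scores = rng.normal(loc=0.0, scale=1.0, size=5000)
      live_scores = rng.normal(loc=0.4, scale=1.0, size=5000)  # simulated drift

      flag, stat, p = check_drift(train_scores, live_scores)
      print(f"drift detected: {flag} (KS statistic={stat:.3f}, p-value={p:.1e})")

      In practice a check like this runs on a schedule for each monitored feature, and an alert fires when several features drift at once or when tracked accuracy metrics degrade.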

  3. 2 cloud and AI myths you shouldn’t believe – Vectors Code

    […] According to Thomas Dinsmore, “AI projects are no more or less likely to fail than any other IT project.” He goes on to explain in more detail: […]

  4. Jim

    90% of projects are a failure. AI (real name: statistics and probabilities) is hype and a joke for people who never designed anything in AI (aka journalists).
    This reminds me of IBM’s lie in 2013, when they announced in a press release that MD Anderson “is USING the IBM Watson cognitive computing system for its mission to eradicate cancer.” On February 17, 2017, MD Anderson Cancer Center pulled the plug after spending $62 million, for nothing!

  5. Why AI applications get stalled in the public sector – AI Policy Exchange

    […] [2] Common causes of AI project failure are further elaborated in a blog post by Thomas Dinsmore here. […]

  6. N3GZ-Nachwuchsnetzwerk Digitale Verwaltung

    […] [2] Common causes of AI project failure are further elaborated in a blog post by Thomas Dinsmore. […]
