Roundup 11/7/2016

Top machine learning (ML) and deep learning (DL) stories from last week, plus some new stuff from Friday and the weekend.

The Next Platform publishes my three-part series on the state of enterprise machine learning. Part one is here. Part two is here. Part three is here.

ICYMI: Top Stories of Last Week

— IDC predicts worldwide spending on cognitive systems and AI will grow from $8 billion in 2016 to more than $47 billion in 2020, an annual growth rate of 55%. Separately, IDC projects spending on advanced and predictive analytics software to expand to roughly $4 billion by 2020. Meanwhile, Markets and Markets forecasts revenue from Machine Learning as a Service (MLaaS) to reach $3.8 billion by 2021.
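As a quick arithmetic check on the IDC figure: growing from $8 billion in 2016 to $47 billion in 2020 does imply a compound annual growth rate of about 55% over the four years.

```python
# Sanity check on IDC's figures: $8B (2016) to $47B (2020) over four years.
start, end, years = 8.0, 47.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints: Implied CAGR: 55.7%
```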

— In MIT News, Larry Hardesty summarizes an approach to making ML output more transparent and interpretable. Additional reports here, here, and here.

— Graphcore, a UK-based startup that develops accelerator chips for ML, lands $30 million in an “A” round. Linkapalooza here.

— Gil Press offers twelve observations from the O’Reilly AI Conference.

— Alex Woodie profiles RISELab, the next thing at Berkeley after AMPLab.

— Adit Deshpande explains generative adversarial nets, a deep learning architecture that performs well for image generation.

— Redis Labs releases Redis-ML, an open source machine learning package that works with Spark. It’s pretty thin functionally, but Redis claims it’s fast.

— On the Databricks blog, Tim Hunter et al. explain GPU acceleration in Databricks.

— Boston-based DataRobot delivers a new release with TensorFlow support plus enhancements that improve interpretability and operational deployment.

— Jean-Francois Puget argues that ML is a form of mathematical optimization, which seems accurate. What’s interesting is that there are so few optimization algorithms in use: stochastic gradient descent, conjugate gradient, L-BFGS and a few others. Master optimization and you master ML.
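To make Puget's point concrete, here is a minimal sketch of stochastic gradient descent, the workhorse optimizer he mentions, fitting a one-parameter linear model. The toy data and learning rate are illustrative choices of mine, not from his article.

```python
import random

# Toy noise-free data on the line y = 3x, so SGD should recover a slope near 3.
data = [(x, 3.0 * x) for x in range(1, 11)]

w = 0.0      # model: y_hat = w * x
lr = 0.001   # learning rate (illustrative choice)
random.seed(0)

for epoch in range(100):
    random.shuffle(data)          # "stochastic": visit samples in random order
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)**2
        w -= lr * grad               # step against the gradient

print(round(w, 3))  # prints 3.0
```

The same loop, with the gradient swapped out, is the core of training for most ML models, which is exactly Puget's observation.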

— In a surprising development, Kyoto University Graduate School of Medicine determines that a dual-socket Xeon system outperforms an NVIDIA K40 GPU when training deep neural networks for computational drug discovery using the Theano framework.

Good Reads

— In the Wall Street Journal, Irving Wladawsky-Berger asks: has AI finally reached a tipping point? In the process of answering the question, he identifies six hot trends in AI, including large-scale machine learning, deep learning, and reinforcement learning.

— Tanushri Chakravorty explains how machine learning works.

— Adrian Colyer summarizes a paper by Armbrust et al. on scaling Spark. The paper covers the DataFrames API, which is relevant to machine learning because all new developments in Spark’s machine learning library use DataFrames.

— Andrew Hillis, a Ph.D. candidate at Harvard University, publishes a paper on the use of machine learning to predict worker productivity.

Methods and Techniques

— Jason Brownlee explains how to implement the learning vector quantization algorithm in Python.
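Brownlee's tutorial walks through a full implementation; as a rough sketch of the idea (my own toy code, not his), LVQ1 keeps a set of labeled codebook vectors and nudges the nearest one toward same-class training points and away from different-class ones.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_lvq(data, labels, lr=0.3, epochs=20):
    """LVQ1: move the nearest codebook vector toward same-class
    points and away from different-class points."""
    # One codebook vector per class, seeded from the first example of each.
    classes = sorted(set(labels))
    books = [list(data[labels.index(c)]) for c in classes]
    book_labels = list(classes)
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # linearly decaying learning rate
        for x, y in zip(data, labels):
            j = min(range(len(books)), key=lambda k: euclidean(books[k], x))
            sign = 1 if book_labels[j] == y else -1
            for d in range(len(x)):
                books[j][d] += sign * rate * (x[d] - books[j][d])
    return books, book_labels

def predict(books, book_labels, x):
    # Classify by the label of the nearest codebook vector.
    j = min(range(len(books)), key=lambda k: euclidean(books[k], x))
    return book_labels[j]

# Two toy clusters around (0, 0) and (5, 5).
data = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = [0, 0, 0, 1, 1, 1]
books, book_labels = train_lvq(data, labels)
print(predict(books, book_labels, (0.5, 0.5)))  # prints 0
```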

— George Sutphin et al. introduce WORMHOLE, a tool for genomics.

— In a slideshow, Ryan Francis presents six machine learning misunderstandings.

Applications

Health and Medical

— Xiaoyong Pan et al. use deep learning to predict RNA-protein binding sites.

— Researchers at the Fraunhofer Institute for Medical Image Computing in Germany use deep learning to detect tumors in CT and MRI scans.

— In MIT Technology Review, Tom Simonite describes an application that learns the molecular structure of drugs and suggests new structures.
