Roundup 11/28/2016

ICYMI: Top machine learning (ML) and deep learning (DL) stories from last week.


— Google announces a $4.5 million investment in the Montreal Institute for Learning Algorithms. PR avalanche ensues.

— In Investopedia, Richard Saintvilus chronicles the cloud machine learning wars.

— Six months after releasing DSSTNE, an internally developed deep learning framework, AWS announces plans to standardize on MXNet. In Fortune, Barb Darrow reports. It seems like a smart move: MXNet is much more mature than DSSTNE, with a contributor base roughly ten times larger. The move likely consigns DSSTNE to the dustbin of history.

Good Reads

— Adrian Colyer summarizes four papers:

  • A report from Stanford University covering key topics in artificial intelligence, including large scale machine learning, deep learning, reinforcement learning, robotics, computer vision, natural language processing, algorithmic game theory, IoT, and neuromorphic computing.
  • Training AI to play a first-person-shooter (FPS) game with deep reinforcement learning.
  • Sequence learning in Google’s Smart Reply feature.
  • Building machines that think and learn like people.

— On the Moor Insights & Strategy blog, highlights from SC16.

Top 20 Python ML Projects

In KDnuggets, Prasad Pore profiles the top twenty open source projects for machine learning with Python. His selection criteria are unclear; he includes TensorFlow, for example, because it has a Python API, but by that logic Spark ML, H2O, and Microsoft Cognitive Toolkit (CNTK) should make the list as well.

Also, Pore ranks projects by cumulative lifetime commits, a measure that correlates with a project's age; commits in the past year are a better measure of current activity. Had he used that measure, the top projects would be Spark, CNTK, H2O, TensorFlow, and Theano, in that order, with scikit-learn a distant sixth and Caffe barely registering.
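For readers who want to check this themselves, the trailing-year metric is easy to compute from commit timestamps (e.g., pulled from `git log` or the GitHub API). The sketch below uses hypothetical dates, not data from any of the projects above:

```python
from datetime import datetime, timedelta, timezone

def recent_commit_count(commit_dates, now, window_days=365):
    """Count commits whose timestamps fall within the trailing window."""
    cutoff = now - timedelta(days=window_days)
    return sum(1 for d in commit_dates if d >= cutoff)

# Hypothetical commit history: heavy activity long ago, little recently.
now = datetime(2016, 11, 28, tzinfo=timezone.utc)
dates = [now - timedelta(days=n) for n in (10, 40, 400, 800, 1200)]

print(len(dates))                       # lifetime commits: 5
print(recent_commit_count(dates, now))  # commits in trailing year: 2
```

A project like this one would look healthy by lifetime commits but stagnant by trailing-year commits, which is exactly the distortion at issue.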

Aside from using an inappropriate measure and excluding important projects, it’s a fine piece of analysis.



— In Fortune, Aaron Pressman argues that Intel’s strategy for machine learning and AI is smart, but lags NVIDIA. Nicole Hemsoth describes Intel’s approach as “war on GPUs.”

— Timothy Prickett Morgan dissects the “Summit” supercomputer on order from the U.S. Department of Energy for its Oak Ridge National Laboratory.

— In Wired, Cade Metz profiles Intel’s chip strategy for machine learning and deep learning. In EETimes, Rick Merritt explains how Intel’s Nervana attacks GPUs.

— Adrian Colyer summarizes Microsoft’s recent paper documenting how they built an application that recognizes conversational speech as well as humans do.

— Google Brain’s Mike Schuster (and two other Googlers) explain “zero-shot translation,” or the ability to translate between language pairs that the system has not seen before.

— Remember the glowing reports about the Clinton campaign’s competitive advantage in advanced analytics and machine learning? As it turns out, the Trump campaign had an advanced analytics operation, too. In the future, data science won’t bestow a competitive advantage on one side or the other in political campaigns; it will be table stakes.

— Sebastian Raschka, the author of Python Machine Learning, opines on programming languages for machine learning. Guess which language he favors.
