ICYMI: Top machine learning (ML) and deep learning (DL) stories from last week.
— In Investopedia, Richard Saintvilus chronicles the cloud machine learning wars.
— Six months after releasing DSSTNE, an internally developed deep learning framework, AWS announces plans to standardize on MXNet. In Fortune, Barb Darrow reports. It seems like a smart move: MXNet is much more mature than DSSTNE, with a contributor base roughly 10X larger. Presumably this move consigns DSSTNE to the dustbin of history.
— Adrian Colyer summarizes four papers:
- A report from Stanford University covering key topics in artificial intelligence, including large scale machine learning, deep learning, reinforcement learning, robotics, computer vision, natural language processing, algorithmic game theory, IoT, and neuromorphic computing.
- Training AI to play a first-person-shooter (FPS) game with deep reinforcement learning.
- Sequence learning in Google’s Smart Reply feature.
- Building machines that think and learn like people.
— On the Moor Insights & Strategy blog, highlights from SC16.
Top 20 Python ML Projects
In KDnuggets, Prasad Pore profiles the top twenty open source projects for machine learning with Python. His selection criteria are unclear; he includes TensorFlow, for example, because it has a Python API, but by that logic Spark ML, H2O, and Microsoft Cognitive Toolkit (CNTK) should make the list as well.
Also, Pore ranks projects by cumulative lifetime commits, a measure that correlates with the age of the project; the number of commits in the past year is a better measure of current activity. Had he used that measure, the top projects would be Spark, CNTK, H2O, TensorFlow, and Theano, in that order, with scikit-learn a distant sixth and Caffe barely registering.
Aside from the inappropriate measure and the excluded projects, it’s a fine piece of analysis.
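For readers who want to rank projects by recent activity themselves: GitHub’s repository statistics API (GET /repos/{owner}/{repo}/stats/commit_activity) returns the last 52 weekly commit totals, which can be summed and compared across projects. A minimal sketch of that ranking logic, with made-up weekly totals standing in for real API responses:

```python
# Sketch: rank projects by commits in the past year rather than
# cumulative lifetime commits. The weekly totals below are invented
# for illustration; in practice they would come from GitHub's
# commit_activity statistics endpoint.

def commits_last_year(weekly_totals):
    """Sum the most recent 52 weekly commit counts."""
    return sum(weekly_totals[-52:])

def rank_by_recent_activity(projects):
    """projects: dict mapping project name -> list of weekly commit totals.
    Returns names sorted by past-year commits, most active first."""
    return sorted(projects,
                  key=lambda name: commits_last_year(projects[name]),
                  reverse=True)

# Hypothetical data: a mature-but-quiet project has a huge lifetime
# commit count but little recent activity; a younger project does not.
sample = {
    "old-but-idle":   [50] * 200 + [1] * 52,  # lifetime ~10,052, past year 52
    "young-and-busy": [40] * 52,              # lifetime 2,080, past year 2,080
}

print(rank_by_recent_activity(sample))  # → ['young-and-busy', 'old-but-idle']
```

Ranking by lifetime commits would reverse this ordering, which is exactly the distortion at issue above.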
— Timothy Prickett Morgan dissects the “Summit” supercomputer on order from the U.S. Department of Energy for its Oak Ridge National Laboratory.
— Adrian Colyer summarizes Microsoft’s recent paper documenting how they built an application that recognizes conversational speech as well as humans do.
— Google Brain’s Mike Schuster (and two other Googlers) explain “zero-shot translation,” or the ability to translate between language pairs that the system has not seen before.
— Remember the glowing reports about the Clinton campaign’s competitive advantage in advanced analytics and machine learning? Like this and this and this and this and this? As it turns out, the Trump campaign had an advanced analytics operation, too. In the future, data science won’t bestow a competitive advantage on one side or the other in political campaigns. It will be table stakes.