Top machine learning (ML) and deep learning (DL) stories from last week, plus some new stuff from Friday and the weekend.
NVIDIA announces financial results for Q3:
- $2 billion in revenue, up 54% from last year
- Gross margin of 59%
- Earnings per share up 89%
The company’s shares are up 30% since the announcement at the close of business on Thursday.
Reporting in Forbes, Aaron Tilley notes the 300% growth in NVIDIA’s data center business, driven by strong sales of Pascal-based Tesla GPU accelerators. According to CEO Jen-Hsun Huang, NVIDIA’s deep learning platform “runs every AI framework, is available in cloud services from Amazon, IBM, Microsoft and Alibaba and in servers from every OEM.” In Fortune, Aaron Pressman explains the role of machine learning in NVIDIA’s growth. He quotes Huang trash-talking Intel’s attempts to accelerate machine learning with FPGA chips.
The tech and financial press go wild. Linkapalooza here.
ICYMI: Top Stories of Last Week
Methods and Techniques
— Jason Brownlee explains how to implement the backpropagation algorithm in Python. The article is part of a series on his Machine Learning Mastery site; other recent explainers cover learning vector quantization, perceptrons, logistic regression with stochastic gradient descent, and linear regression with stochastic gradient descent.
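Brownlee walks through backpropagation from scratch; a minimal sketch of the same idea, not his code, looks like the following. The network shape (2 inputs, 4 hidden units, 1 output), the XOR toy data, and the learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a network needs a hidden layer to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights for a 2 -> 4 -> 1 network, randomly initialized.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: apply the chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated to the hidden layer
    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
```

The backward pass is the whole trick: each layer's error is the next layer's error pushed back through the weights and scaled by the local activation derivative.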
— In a slideshow, Ryan Francis presents six machine learning misunderstandings.
— Researchers at Google DeepMind tweak a deep learning algorithm so it can learn to recognize images, and perform other tasks, from a single example (so-called one-shot learning).
— Sebastian Ruder surveys gradient descent optimization algorithms. If that sounds esoteric, read the article; optimization is the DNA of machine learning.
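Two of the optimizers Ruder surveys, vanilla gradient descent and momentum, can be sketched on a toy one-dimensional problem. The function, step size, and momentum coefficient below are illustrative choices, not taken from the survey.

```python
# Minimize f(x) = (x - 3)^2, whose minimum is at x = 3.
def grad(x):
    return 2.0 * (x - 3.0)  # derivative of (x - 3)^2

# Vanilla gradient descent: step directly against the gradient.
x = 0.0
for _ in range(100):
    x -= 0.1 * grad(x)

# Momentum: accumulate a velocity term that smooths and accelerates descent.
xm, v = 0.0, 0.0
for _ in range(100):
    v = 0.9 * v - 0.1 * grad(xm)
    xm += v
```

Fancier methods like Adagrad, RMSprop, and Adam, also covered in the survey, extend this pattern by adapting the step size per parameter.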
— Matthew Mayo explains GPU-based parallelism and the CUDA framework.
— In a webinar, Intel delivers an overview of its software accelerators for deep learning, including the Math Kernel Library, the Data Analytics Acceleration Library, and the Deep Learning SDK. Slides here.
— Shohei Hido describes Chainer, an open source framework for deep learning.
— Toshiba announces the development of a Time Domain Neural Network that uses a neuromorphic semiconductor circuit with extremely low power consumption for deep learning. The technology has the potential to bring deep learning to edge devices, such as sensors and smartphones.
— Libby Kinsey interviews Graphcore CTO Simon Knowles, who explains how to build a processor for machine learning.
— Nimbix uses NVIDIA Pascal GPUs in its HPC cloud service, while Microsoft Azure uses GPUs based on the Kepler and Pascal architectures.
— Researchers at the Fraunhofer Institute for Medical Image Computing in Germany use deep learning to detect tumors in CT and MRI scans.
— In MIT Technology Review, Tom Simonite describes an application that learns the molecular structure of drugs and suggests new structures.
— Researchers at ETH Zurich (the Swiss Federal Institute of Technology) use ML to measure gender bias in astronomy. First, they build a model that predicts a paper’s citations from its characteristics, without considering the gender of the lead author. Then, they use the model to determine the expected number of citations for a set of papers with female lead authors and compare that expectation with the citations those papers actually received.
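The logic of that methodology can be sketched in a few lines: fit a citation model on gender-blind paper features, then measure the gap between actual and predicted citations for one group. Everything below is synthetic and hypothetical; it illustrates the approach, not the researchers' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical gender-blind paper features (e.g. venue, reference count, age).
features = rng.normal(size=(n, 3))
citations = features @ np.array([5.0, 2.0, 1.0]) + 20 + rng.normal(scale=1.0, size=n)
female_lead = rng.random(n) < 0.3  # synthetic label; deliberately NOT a model input

# Fit a linear citation model without the gender variable.
X = np.column_stack([features, np.ones(n)])  # features plus an intercept column
w, *_ = np.linalg.lstsq(X, citations, rcond=None)

# Gap between actual and model-expected citations for female-led papers.
# A systematically negative gap would indicate a citation bias.
gap = citations[female_lead].mean() - (X[female_lead] @ w).mean()
```

Since this synthetic data contains no injected bias, the gap here should hover near zero; on real data, a persistent shortfall is the signal the researchers look for.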
— James Anderson summarizes a study suggesting that machine learning may be able to diagnose autism from genetic data.
— On the Google Research blog, Malay Haldar et al. explain how they built a deep neural network to better understand searches on Google Play.
— Fangping Wan et al. use deep learning to identify compound-protein interactions in computer simulations to understand how drugs work.
— Researchers at Oak Ridge National Laboratory use deep learning running on the Titan supercomputer to extract knowledge from cancer pathology reports.
— Scientists at Hanyang Institute in Korea develop a deep learning system for more accurate blood pressure readings.