Roundup 11/18/2016

Machine learning (ML) and deep learning (DL) content from the past 24 hours.

SC16 Roundup

The International Conference for High Performance Computing, Networking, Storage and Analysis (SC16) meets this week, which is why we’re seeing so many HPC announcements.

— The folks at The Next Platform capture some highlights:

  • Ben Cotton surveys the program, focusing on topics of interest to programmers: tooling and languages, GPUs and accelerators, and real-world applications.
  • Nicole Hemsoth examines the biannual ranking of the top 500 supercomputers and sees some interesting trends. Separately, she investigates six of the newest top-20 supercomputers, assesses Cray's Pascal-based XC50 supercomputer, and explains how NVIDIA supports cancer research with its deep learning supercomputers.
  • Timothy Prickett Morgan explains IBM’s AI and HPC push on POWER8 Tesla hybrid servers and surveys Intel’s Xeon processors. Separately, he describes NVIDIA’s Saturn V DGX-1 cluster.

— Separately, Karl Freund reports on NVIDIA’s announcements at the show.

— Paul Alcorn offers additional detail on Intel’s new Xeon CPU and Deep Learning Inference Accelerator.

— Inspur announces the D1000 deep learning appliance based on NVIDIA Tesla GPUs, with support for the Caffe-MPI framework.


— Stephen J. Smith asks what predictive analytics is, then answers the question. It’s a writing trick.

— Julian Ereth explains data virtualization in BI and analytics, but neglects to note that it’s BS.

Methods and Techniques

— Top Kaggler Darius Barušauskas explains how he won the Red Hat Business Value competition.

— Ujjwal Karn offers an intuitive explanation of convolutional neural networks.

— Jeefri A. Moka lists 16 questions you should be able to answer when you use linear regression. Unintentionally, he provides 16 reasons to stop using linear regression.

— On the BigML blog, “talvarez” offers the second of six posts on Topic Modeling.

— Vincent Granville introduces you to number theory, for some reason.

— Rubens Zimbres diagrams matrix multiplication in neural networks, reproduced below.

[Diagram: matrix multiplication in neural networks]

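The diagram itself did not survive extraction. As a stand-in, here is a minimal NumPy sketch of the matrix multiplication inside a single dense layer; the shapes and values are illustrative and are not taken from Zimbres's figure:

```python
import numpy as np

# One dense layer: inputs X (batch x features) times weights W
# (features x units), plus a bias, through a ReLU activation.
# All shapes here are made up for illustration.

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))   # batch of 4 samples, 3 features each
W = rng.standard_normal((3, 2))   # 3 inputs -> 2 hidden units
b = np.zeros(2)                   # one bias per unit

Z = X @ W + b                     # the matrix multiplication step
A = np.maximum(0, Z)              # ReLU activation

print(Z.shape)                    # each sample maps to a 2-unit row
```

Each row of `Z` is one sample's pre-activation values, which is the per-sample dot-product structure such diagrams usually illustrate.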
— Charles Babcock elaborates on yesterday’s machine learning announcements by Google.

— Tony Baer asks whether Google Cloud Machine Learning is enterprise ready. He doesn’t answer the question.

— Microsoft’s David Smith explains how to call the Microsoft Cognitive Services APIs with R.


— Google and Intel announce a strategic alliance to make Kubernetes and TensorFlow work better with Intel architecture. Alex Konrad reports.

— In the Wall Street Journal, Don Clark reports that next year Intel will start shipping deep learning chips based on technology it acquired through its purchase of Nervana Systems. Stephanie Condon elaborates.


— In TechCrunch, Jeff Kavanaugh describes some potential uses for machine learning in agriculture.

— Berlin-based SearchInk uses machine learning to read and interpret handwritten documents.

— Scott Carey lists eleven business use cases for machine learning.


— In Barron’s, Ian Ing argues that NVIDIA and Xilinx are ready for AI, but AMD is not. Arthur Cole, writing in IT Business Edge, begs to differ, noting AMD’s deal with Google to supply Radeon GPUs.

— Medical startup Enlitic announces that it plans to unveil its deep-learning-powered diagnostic products at the Radiological Society of North America meeting in two weeks.
