|July 20 · Issue #49 |
Hi and welcome to a new week in deep learning!
We hope you’ll enjoy reading as much as we did and would appreciate you sharing this newsletter with your friends and colleagues.
See you next week!
| Building a 50 Teraflops AMD Vega Deep Learning Box for Under $3K |
Just in time for AMD’s recent 1.0 release of their machine intelligence library, this article takes a look at the Ryzen platform and the new Vega GPUs and their potential for deep learning. Looks very promising, and it seems like there is finally some competition in sight for Nvidia.
| Producing flexible behaviors in simulated environments |
DeepMind has managed to achieve impressive results in training agents with simulated bodies to move across various terrains while jumping across gaps or walls. And while doing so, they created some great GIFs as well.
| Revisiting the Unreasonable Effectiveness of Data |
Google has taken training on large amounts of data a step further and increased the usual ImageNet size by a factor of 300. They evaluated networks trained on 300 million images from an internal dataset and came to the conclusion that huge amounts of data do lead to the expected performance increase.
| Elon Musk: Artificial Intelligence Is the Greatest Risk We Face as a Civilization |
A heated discussion about Elon Musk’s recent remarks on Artificial Intelligence at the National Governors Association meeting last Saturday. It contains a nice collection of viewpoints on the ‘dangerous AI’ scenario and the overall perception of machine learning.
| The Future of Deep Learning |
Francois Chollet shares his vision of the future of deep learning. He expects models to become more similar to computer programs resembling our mental models, and these will be assembled automatically from specialized subroutines. We’ll see if he is right, but his insights are definitely worth a look. And while you’re at it, why not peek at Francois’ previous article on the limitations of deep learning?
| Deep Learning Project |
This awesome tutorial, written by Harvard graduate Spandan Madan, covers a full machine learning pipeline in a single notebook and shows you how to classify movie genres using Keras. He starts with basics, covers data collection and preprocessing, gives a quick introduction to deep learning and finishes with deep models for visual and textual data. Take a look to get up to speed in no time at all!
| Improving the Realism of Synthetic Images |
Apple has started a new machine learning journal and kicks off with a very detailed article on data generation. The article explains how they managed to make synthetically generated images more realistic using an adversarial discriminator network, and covers training, hyperparameters, as well as the use of a history of generator samples.
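The history of generator samples can be pictured as a small replay buffer: the discriminator is shown a mix of the refiner’s current outputs and older ones, so it does not forget artifacts the refiner produced earlier in training. A minimal sketch of that idea in Python (class and method names are ours for illustration, not Apple’s code):

```python
import random

class RefinedImageHistory:
    """Buffer of past refiner outputs: the discriminator trains on a mix
    of current and historical refined images (illustrative sketch)."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.buffer = []

    def sample_half_batch(self, current_batch):
        """Return a batch where up to half comes from history and the rest
        from the current refiner output; then update the buffer."""
        half = len(current_batch) // 2
        n_hist = min(half, len(self.buffer))
        batch = random.sample(self.buffer, n_hist) + \
            list(current_batch[:len(current_batch) - n_hist])
        # Store current samples, evicting random old entries when full.
        for img in current_batch:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:
                self.buffer[random.randrange(self.capacity)] = img
        return batch
```

On the first call the buffer is empty, so the discriminator sees only fresh samples; afterwards every batch is half historical.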
| Robust Adversarial Examples |
OpenAI created images that reliably fool neural network classifiers when viewed from varied scales and perspectives, challenging a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.
| PAIR-code/facets: Visualizations for machine learning datasets |
The recently announced PAIR initiative has released an impressive tool that allows exploring large datasets using powerful visualizations. For more details, take a look at the corresponding blog post.
| Distral: Robust Multitask Reinforcement Learning |
This paper presents a new approach for joint training of multiple tasks, which is referred to as Distral (Distill & transfer learning). Instead of sharing parameters between the different workers, the authors propose to share a “distilled” policy that captures common behavior across tasks.
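In simplified form, the shared “distilled” policy acts as a regularizer: each task-specific policy is pulled toward it by a KL penalty. A rough sketch of that penalty term (names and the coefficient are ours; the paper’s full objective also includes reward and entropy terms):

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete action distributions."""
    return float(np.sum(p * np.log(p / q)))

def distral_penalty(task_policies, distilled_policy, c_kl=1.0):
    """Sum of KL(pi_i || pi_0) over tasks: each task-specific policy pi_i
    is regularized toward the shared distilled policy pi_0, instead of
    the tasks sharing network parameters directly."""
    return c_kl * sum(kl(pi, distilled_policy) for pi in task_policies)
```

The penalty is zero when every task policy already matches the distilled one, and grows as they diverge.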
| Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting |
The authors of this paper show that saturating output activation functions, such as the softmax, impede learning on a number of standard classification tasks, and they present techniques that lead to up to 33% faster convergence.
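The saturation effect is easy to see on a toy example: with a squared-error loss, a confidently wrong softmax output produces a near-zero gradient on the logits, while a linear output keeps a gradient proportional to the error (a minimal illustration of the phenomenon, not the paper’s code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mse_grad_through_softmax(z, target):
    """Gradient of 0.5*||softmax(z) - target||^2 w.r.t. the logits z."""
    p = softmax(z)
    jac = np.diag(p) - np.outer(p, p)   # softmax Jacobian
    return jac @ (p - target)

def mse_grad_linear(z, target):
    """Gradient of 0.5*||z - target||^2 w.r.t. a linear output z."""
    return z - target

z = np.array([10.0, -10.0])   # confidently wrong: true class is the second
t = np.array([0.0, 1.0])
# Through the softmax the Jacobian entries p_i*(1 - p_i) are ~0, so the
# gradient vanishes; the linear output keeps a large, useful gradient.
```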