| September 13 · Issue #6 |
There are a lot of gold nuggets in this week’s issue: we have had our minds blown by WaveNet, been regaled with a fantastic tutorial by members of the Google Brain team, and been made to ponder fundamental deep learning theory by an intriguing advance from the realm of physics…
I hope you enjoy these reads as much as we did. If you like this newsletter, please consider sharing it with a friend or colleague.
| Intel buying chipmaker Movidius to boost artificial-intelligence efforts |
Following last week’s announcement of a tiny Deep Learning supercomputer, Movidius is now becoming part of Intel.
| The Next Wave of Deep Learning Architectures |
This article sheds some light on the current architectures used in the field and presents a new approach by Wave Computing.
| Deep Learning in 2016: Tech Giants Move to Share Data - Dataconomy |
An informative overview of tech giants’ recent moves to share data and tools, and the rationale behind them.
| Apple is trying to turn the iPhone into a DSLR using artificial intelligence | The Verge |
Looks like Apple’s iBrain is already generating results. We will be able to check them out for ourselves in a few days.
| WaveNet: A Generative Model for Raw Audio | DeepMind |
DeepMind applies Deep Learning directly to raw sound waves, without first converting them into spectral features. It is amazing that it works at all and mind-blowing that it works as well as it does. Listen to some of the generated speech and music samples.
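For readers curious what modelling raw audio looks like in practice, here is a minimal sketch of WaveNet’s central building block: a stack of dilated causal 1-D convolutions. All names and sizes are illustrative, not taken from the paper; the real model adds gated activations, residual and skip connections, and a softmax over quantized amplitudes.

```python
# Illustrative sketch of a dilated causal convolution stack (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedStack(nn.Module):
    def __init__(self, channels=32, layers=6, kernel_size=2):
        super().__init__()
        self.kernel_size = kernel_size
        # Dilation doubles each layer (1, 2, 4, ...), so the receptive
        # field grows exponentially with depth.
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size, dilation=2 ** i)
            for i in range(layers)
        )

    def forward(self, x):  # x: (batch, channels, time)
        for conv in self.convs:
            # Left-pad so each output sample only sees past inputs
            # (the "causal" part: no peeking at future audio).
            pad = (self.kernel_size - 1) * conv.dilation[0]
            x = torch.relu(conv(F.pad(x, (pad, 0))))
        return x

# Six layers with kernel size 2 give a receptive field of
# 1 + (2 - 1) * (1 + 2 + 4 + 8 + 16 + 32) = 64 past samples.
out = CausalDilatedStack()(torch.randn(1, 32, 64))
```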
| Attention and Augmented Recurrent Neural Networks — Distill |
A great introduction to the use of attention in RNNs by members of the Google Brain team. They introduce the four key concepts of augmented RNNs, which seem poised to play a major role in the field’s future.
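As a taste of the mechanism, here is a minimal numpy sketch of content-based attention, the common core the article builds on: the controller’s query is compared against every memory entry and a softmax-weighted average is read back. The names and shapes are our own illustration, not the article’s code.

```python
# Illustrative sketch of content-based (soft) attention.
import numpy as np

def attend(query, memory):
    """query: (d,), memory: (T, d) -> (d,) weighted summary of memory."""
    scores = memory @ query                  # similarity per time step
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()                 # attention distribution over T
    return weights @ memory                  # blended context vector

context = attend(np.random.randn(8), np.random.randn(10, 8))
```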
| The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe |
Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics.
| Deep Learning Reading Group: Deep Networks with Stochastic Depth |
Ingenious idea: randomly bypass network layers during training to counteract vanishing gradients, diminishing feature reuse, and long training times. Related paper: https://arxiv.org/abs/1603.09382
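A hedged sketch of the idea, assuming a standard residual network in PyTorch: each residual block survives a given training pass only with some probability (otherwise the input passes through unchanged), and is scaled by that probability at test time. Names and sizes are illustrative.

```python
# Illustrative sketch of stochastic depth for residual blocks.
import torch
import torch.nn as nn

class StochasticBlock(nn.Module):
    def __init__(self, block, survival_prob=0.8):
        super().__init__()
        self.block = block
        self.survival_prob = survival_prob

    def forward(self, x):
        if self.training:
            if torch.rand(1).item() < self.survival_prob:
                return x + self.block(x)  # block survives this pass
            return x                      # block bypassed: identity only
        # At test time every block is kept, scaled by its survival
        # probability so expected activations match training.
        return x + self.survival_prob * self.block(x)

layer = StochasticBlock(nn.Sequential(nn.Linear(16, 16), nn.ReLU()))
y = layer(torch.randn(4, 16))
```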
| Andrej Karpathy (Research Scientist at OpenAI) - Session on Sep 8, 2016 - Quora |
Includes an interesting answer about what he thinks is underexplored in machine learning: “I haven’t found a way to properly articulate this yet but somehow everything we do in deep learning is memorization (interpolation, pattern recognition, etc) instead of thinking (extrapolation, induction, etc).”
| GitHub - danijar/mindpark: Testbed for deep reinforcement learning algorithms |
A GitHub testbed for developing and comparing deep reinforcement learning algorithms.
| Generating Videos with Scene Dynamics - MIT |
The authors capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction)…
| Discrete Variational Autoencoders |
The authors introduce a novel class of probabilistic models, comprising an undirected discrete component and a directed hierarchical continuous component, that can be trained efficiently using the variational autoencoder framework.
| Stacked Approximation Regression Machine manuscript withdrawn |
Author’s note on the withdrawal of the SARM paper, which was previously featured here.