| April 18 · Issue #37 |
We hope you’ll enjoy the articles as much as we did and would appreciate it if you shared this newsletter with your friends and colleagues. If you want to keep track of upcoming news, take a look at our Twitter account, where we share the latest announcements.
Thanks and see you next week!
| Augmented reality powered by deep learning and computer vision |
Facebook has shown off their augmented reality capabilities at their annual developer conference. They’ve put their Mask R-CNN to good use and can do impressive things while keeping computational requirements low enough to run everything on mobile devices, presumably thanks to their new Caffe2 framework. We’re very excited for the upcoming announcements and will keep you up to date!
| Google’s Dueling Neural Networks Spar to Get Smarter, No Humans Required |
This interview with Ian Goodfellow sheds some light on the rather amusing history of GANs, which involves an argument in a bar and a few beers. Goodfellow talks about use cases and the challenges he faced when trying to train this new invention.
| Teaching Machines to Draw |
Once again Google shares some great insights into their research results. This time they explored whether machines can learn to draw and generalize abstract concepts. In other words: they created a pig-drawing model that draws pig-like trucks when fed a truck, which can obviously be proven using cat-pig math.
| EmojiIntelligence: Neural Network built in Apple Playground using Swift |
Want to bring some variety into your Python coding habits? Why not teach a neural net to infer emojis from your drawings and dive into Swift while doing so? A very well done little project by Bilal Reffas.
| One-shot Learning with Memory-Augmented Neural Networks |
Rylan Schaeffer takes a detailed look at one of last year’s papers on one-shot learning. He shared his initial impression in a previous post and has since discussed his thoughts with the authors. This new post tries to explain the paper and his concerns in tandem.
| Caffe2 |
Facebook has released Caffe2, their new deep learning framework. They promise a lightweight, modular, and scalable library, include an impressive model zoo (thanks to existing Caffe models), and want you to use it for every possible purpose.
| All Code Implementations for NIPS 2016 papers |
A collection of repositories that contain implementations of papers that were published at the NIPS conference in 2016. There are some real gems in there, so you should definitely take a look!
| Implementation of BEGAN in TensorFlow |
Google Brain’s ‘Boundary Equilibrium Generative Adversarial Networks’ represent the state of the art in realistic face generation, and this repository contains a TensorFlow-based implementation.
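At the heart of BEGAN is a simple proportional control loop that balances generator and discriminator. A rough numpy sketch of one update step (the function name is our own; γ = 0.5 and λ_k = 0.001 are typical values from the paper):

```python
import numpy as np

def began_losses(loss_real, loss_fake, k, gamma=0.5, lambda_k=0.001):
    """One step of the BEGAN objective, given the autoencoder-
    discriminator's reconstruction losses on a real and a generated
    batch. Returns (d_loss, g_loss, new_k, convergence_measure)."""
    d_loss = loss_real - k * loss_fake  # discriminator objective
    g_loss = loss_fake                  # generator objective
    # proportional control: keep E[L(G(z))] near gamma * E[L(x)]
    new_k = float(np.clip(k + lambda_k * (gamma * loss_real - loss_fake), 0.0, 1.0))
    # global convergence measure from the paper
    m = loss_real + abs(gamma * loss_real - loss_fake)
    return d_loss, g_loss, new_k, m
```

The control variable k is what the "equilibrium" in the name refers to: it nudges the fake-reconstruction loss towards a fixed fraction γ of the real one, which trades off image diversity against quality.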
| Federated Learning: Collaborative Machine Learning without Centralized Training Data |
When you have apps deployed to millions of devices, it seems like a great idea to put them to good use. But Google has taken it a step further and uses its keyboard app to perform continuous training of deep learning models. This is training with gradient updates coming from distributed mobile phones. Amazing!
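The server-side combination step behind this setup, known as Federated Averaging, is conceptually simple: each phone computes an update on its local data, and the server averages the resulting weights, weighting each phone by how many training examples it holds. A minimal numpy sketch with hypothetical names and a two-phone toy example:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine per-device model weights (each a list of layer
    arrays) into a new global model, weighting every client by
    its number of local training examples."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# two simulated phones: one trained on 100 examples, one on 300
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([3.0, 5.0])]
global_w = federated_average([w_a, w_b], [100, 300])
# weighted mean: 0.25 * [1, 1] + 0.75 * [3, 5] = [2.5, 4.0]
```

Only the weight updates ever leave the device; the raw typing data stays on the phone, which is the privacy argument for the whole scheme.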
| MAD-GANs |
An introduction to a multi-agent GAN architecture that’s able to capture diverse modes of the true data distribution. The page accompanies the recent paper and gives a high-level overview of the idea.
| MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications |
This paper from Google presents a new kind of architecture, based on depth-wise separable convolutions, that focuses on lightweight deep neural networks. The authors explore different variations and try to balance accuracy against computational requirements. The results sound great, and TensorFlow models will follow.
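The parameter savings of depth-wise separable convolutions are easy to see with a quick back-of-the-envelope calculation (a Python sketch with function names of our choosing, not code from the paper):

```python
def conv_params(c_in, c_out, k=3):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_params(c_in, c_out, k=3):
    """Depth-wise k x k filter per input channel, followed by a
    1x1 point-wise convolution that mixes the channels."""
    return k * k * c_in + c_in * c_out

standard = conv_params(128, 128)      # 147456 weights
separable = separable_params(128, 128)  # 17536 weights, roughly 8x fewer
```

The reduction factor works out to about 1/c_out + 1/k², so for 3x3 kernels a separable layer needs roughly an eighth to a ninth of the parameters (and multiply-adds) of a standard convolution.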
| A Neural Parametric Singing Synthesizer |
The authors present a new model for singing synthesis based on a modified version of the WaveNet architecture. Don’t forget to listen to the corresponding sound samples!
| The Reactor: A Sample-Efficient Actor-Critic Architecture |
A new reinforcement learning agent, called Reactor (for Retrace-actor), based on an off-policy multi-step return actor-critic architecture. The agent uses a deep recurrent neural network for function approximation.
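The "Retrace" in the name refers to the Retrace(λ) off-policy return the agent bootstraps from, which corrects multi-step returns with truncated importance weights c_s = λ · min(1, π(a_s|x_s) / μ(a_s|x_s)). A rough numpy sketch for a single finite trajectory (the function name and array layout are our assumptions, not the paper's code):

```python
import numpy as np

def retrace_target(q, rewards, pi, mu, actions, gamma=0.99, lam=1.0):
    """Retrace(lambda) target for Q(x_0, a_0) on one off-policy
    trajectory that is assumed to terminate after the last step.
    q, pi, mu: [T, num_actions] arrays of state-action values and of
    target-policy / behaviour-policy probabilities; rewards and
    actions are length-T sequences."""
    T = len(rewards)
    target = q[0, actions[0]]
    trace = 1.0     # running product of truncated importance weights c_s
    discount = 1.0
    for t in range(T):
        # expected next-state value under the target policy
        # (zero at t = T - 1 because the episode terminates there)
        v_next = float(np.dot(pi[t + 1], q[t + 1])) if t + 1 < T else 0.0
        delta = rewards[t] + gamma * v_next - q[t, actions[t]]
        target += discount * trace * delta
        discount *= gamma
        if t + 1 < T:
            trace *= lam * min(1.0, pi[t + 1, actions[t + 1]] / mu[t + 1, actions[t + 1]])
    return float(target)
```

The truncation at 1 is the key trick: it keeps the variance of the importance weights bounded while still making the estimate safe to use with arbitrary behaviour policies, which is what lets the agent reuse old replay data sample-efficiently.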