| June 22 · Issue #45 |
It’s been an exciting week again, so let’s dive right in:
As always, we hope you enjoy reading as much as we did, and we’d appreciate you sharing this newsletter with friends and colleagues.
See you next week!
| The challenge of verification and testing of machine learning |
Ian Goodfellow and Nicolas Papernot explain why and how one can verify machine learning models to ensure robust behavior in production. The post accompanies their CleverHans library, which implements common attacks on machine learning models for easy verification.
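One of the best-known attacks of this kind, co-introduced by Goodfellow, is the fast gradient sign method (FGSM): nudge each input feature by a small step in the direction that increases the model's loss. A minimal numpy sketch of the update rule, assuming you already have the loss gradient with respect to the input (the model and gradient computation here are placeholders, not CleverHans API calls):

```python
import numpy as np

def fgsm(x, grad_loss_wrt_x, eps=0.1):
    """Fast gradient sign method: perturb the input in the direction
    that increases the loss, bounded by eps in the L-infinity norm."""
    return x + eps * np.sign(grad_loss_wrt_x)

# Toy example: a made-up input and loss gradient
x = np.array([0.5, -0.2, 0.8])
grad = np.array([0.3, -0.7, 0.0])
x_adv = fgsm(x, grad, eps=0.1)
print(x_adv)  # [ 0.6 -0.3  0.8]
```

Each feature moves by at most eps, so the adversarial example stays visually close to the original while the loss increases.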
| Tesla hires deep learning expert Andrej Karpathy to lead Autopilot vision |
Andrej Karpathy is joining Tesla in a key Autopilot and computer vision role. He appears to be replacing Chris Lattner, who joined Tesla from Apple in January.
| MultiModel: Multi-Task Machine Learning Across Domains |
Google has managed to create a neural net that’s capable of solving tasks across multiple domains simultaneously. This includes image recognition, translation and speech recognition.
| ML notes: Why the log-likelihood? |
Machine learning is about modeling: you experience something and you wonder afterwards if you could have predicted it, or even better, if you can build something that could have predicted it for you…
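A large part of the answer is numerical: a likelihood is a product of many small probabilities, which quickly underflows to zero in floating point, while the log turns that product into a well-behaved sum (and preserves the maximizer, since log is monotonic). A small numpy sketch with made-up numbers:

```python
import numpy as np

# 1000 i.i.d. observations, each with probability 0.01 under some model
probs = np.full(1000, 0.01)

likelihood = np.prod(probs)             # 0.01**1000 underflows to 0.0
log_likelihood = np.sum(np.log(probs))  # 1000 * log(0.01), stays finite

print(likelihood)       # 0.0
print(log_likelihood)   # ≈ -4605.17
```

Maximizing the log-likelihood is therefore equivalent to maximizing the likelihood, but numerically tractable.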
| General Game Playing with Schema Networks |
Vicarious introduces the Schema Network, a generative graphical model that can simulate the future and reason about cause and effect. They demonstrate the benefits of this kind of reasoning for game playing and show an adaptability not seen in other agents.
| iOS 11: Machine Learning for everyone |
Matthijs Hollemans presents iOS 11’s new machine learning APIs, covering supported ops, available models, limitations, performance, and much more in his article.
| Supercharge your Computer Vision models with the TensorFlow Object Detection API |
TensorFlow has gained a new extension that offers an easy-to-use API for training, testing, and deploying object detection models.
| Building a scalable foundation for deep learning |
A nice collection of papers and literature to read as a foundation for further deep learning work, covering mathematical and conceptual foundations, neuroscience, and information theory.
| Accelerating Deep Learning Research with the Tensor2Tensor Library |
Google has released an open-source system for training deep learning models in TensorFlow. It facilitates the creation of state-of-the-art models for a wide variety of ML applications, such as translation, parsing, image captioning and more, enabling the exploration of various ideas much faster than previously possible.
| Atcold/pytorch-CortexNet |
PyTorch implementation of CortexNet.
| Prototypical Networks for Few-shot Learning |
This paper proposes prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
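The paper's classification rule is simple: embed the few labeled "support" examples of each new class, average them into a per-class prototype, and assign each query to the nearest prototype. A numpy sketch of just that rule (the learned embedding network is omitted; the points below stand in for embeddings, and all names and data are illustrative):

```python
import numpy as np

def classify_by_prototype(support, support_labels, queries):
    """Prototypical-network decision rule: each class prototype is the
    mean of its support embeddings; queries go to the nearest prototype
    by squared Euclidean distance."""
    classes = np.unique(support_labels)
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in classes])
    # Distance from every query to every prototype
    d = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# 2-way, 2-shot toy problem in a 2-D "embedding space"
support = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.2, 0.5], [4.9, 5.4]])
print(classify_by_prototype(support, labels, queries))  # [0 1]
```

Because only the prototypes need to be recomputed, entirely new classes can be handled at test time from a handful of examples.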
| Programmable Agents |
A new paper from DeepMind about agents that develop disentangled interpretable representations that allow them to generalize to a wide variety of zero-shot semantic tasks.
| Attention Is All You Need |
This paper presents a new architecture that dispenses with recurrence entirely, relying instead on an attention mechanism. A good explanation can be found here.
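The paper's core building block is scaled dot-product attention: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, where queries are matched against keys and the resulting weights mix the values. A minimal numpy sketch of that single formula (multi-head attention and the rest of the Transformer are omitted):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax over the key positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# 3 query positions attending over 3 key/value positions, d_k = 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

The 1/√d_k scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.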