| June 9 · Issue #43 |
As always, we hope you’ll enjoy reading this issue as much as we did, and we’d appreciate you sharing this newsletter with friends and colleagues.
See you next week!
| Apple CoreML |
Apple is joining Google and will start offering new machine learning APIs in iOS 11. These allow easy integration of existing models by converting them into Apple’s open source Core ML model format, and they add new layer types and graph construction APIs. The new APIs give developers access to lots of functionality Apple has already implemented (e.g. face landmark detection and image alignment). Apple also brings the Core ML framework to the Mac and even announced official support for external GPUs, although that support is limited to AMD GPUs.
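If you want to try the conversion path, a minimal sketch with the coremltools Python package looks like this; the file names, input name, and metadata below are placeholders for illustration, not anything Apple shipped:

```python
# Hypothetical example: converting a trained Keras image classifier
# to Apple's Core ML format with the coremltools package.
# 'model.h5' and 'labels.txt' are placeholder file names.
import coremltools

coreml_model = coremltools.converters.keras.convert(
    'model.h5',                   # trained Keras model on disk
    input_names='image',          # name the model's input
    image_input_names='image',    # treat the input as an image
    class_labels='labels.txt',    # one class label per line
)

coreml_model.author = 'Your Name'
coreml_model.short_description = 'Example image classifier'
coreml_model.save('Classifier.mlmodel')  # drag into an Xcode project
```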
| Two Big Reasons Why Google's AI Chips Will Have A Tough Time Competing With Nvidia |
Some interesting thoughts on why Google may not be real competition for Nvidia and its new deep learning business. The article makes two main arguments, vendor lock-in and the cloud-only availability of Google’s recently announced TPUs, while explaining why Nvidia seems to be well ahead of the competition.
| The Machine Intelligence Behind Gboard |
Google shares some insights into what powers its smart keyboard. They cover their use of LSTMs and explain how they managed to create training data and how their model is able to fluently switch between languages.
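Google’s production decoder is far more involved, but a toy next-character LSTM in Keras conveys the basic idea; everything below (vocabulary size, context length, random training data) is an assumption for illustration, not Gboard’s architecture:

```python
# Toy next-character model in Keras, loosely in the spirit of an
# LSTM keyboard decoder. This is NOT Gboard's actual architecture.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, Embedding

vocab_size = 30   # assumed alphabet: letters plus a few symbols
seq_len = 10      # characters of context used for prediction

model = Sequential([
    Embedding(vocab_size, 16, input_length=seq_len),
    LSTM(64),
    Dense(vocab_size, activation='softmax'),  # distribution over next key
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Random placeholder data standing in for real typing logs.
x = np.random.randint(0, vocab_size, size=(1000, seq_len))
y = np.random.randint(0, vocab_size, size=(1000, 1))
model.fit(x, y, epochs=1, batch_size=32)
```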
| Our Road to Self Driving Victory |
comma.ai shows off its open sourced self driving car software and describes its roadmap. The post includes some interesting details on how data is collected and what the software is currently capable of.
| Kaggle Past Competitions |
An extensive and searchable index of all past Kaggle competitions, with links to solutions, blog posts and related articles. If you ever face a new problem, take a look here to find out if it has already been solved. And with a bit of luck, you’ll even find out how.
| Applying deep learning to real-world problems |
Merantix shares three insights on applying deep learning to real-world problems. They cover pre-training (how to do it, what to use), label distributions in the real world vs. academia, and model understanding. If you want to use machine learning in production, this is definitely a great starting point.
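Their pre-training advice maps directly onto code; here’s a minimal Keras fine-tuning sketch, where the five target classes and the frozen-base setup are illustrative assumptions, not Merantix’s pipeline:

```python
# Fine-tuning a pre-trained network: freeze ImageNet features and
# train a small new head. Class count and data are placeholders.
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

base = VGG16(weights='imagenet', include_top=False,
             input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False        # freeze the pre-trained features

x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation='relu')(x)
out = Dense(5, activation='softmax')(x)  # assume 5 target classes

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(train_images, train_labels, ...)  # your real-world data here
```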
| The $1700 great Deep Learning box: Assembly, setup and benchmarks |
Slav Ivanov got so frustrated with cloud GPU costs that he decided to build his own deep learning rig. He explains his purchase decisions and gives detailed instructions on how to put it all together, including the software setup. In the end, he wraps up with some benchmarks of his new machine.
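If you build something similar, a rough sanity benchmark takes only a few lines; this TensorFlow matrix-multiply timing sketch uses arbitrary sizes and is not one of the article’s benchmarks:

```python
# Rough GPU sanity benchmark: time a large matrix multiply in
# TensorFlow. Matrix size and loop count are arbitrary choices.
import time
import tensorflow as tf

n = 4096
a = tf.random_normal((n, n))
b = tf.random_normal((n, n))
c = tf.matmul(a, b)

with tf.Session() as sess:
    sess.run(c)                    # warm-up run (kernel setup etc.)
    start = time.time()
    for _ in range(10):
        sess.run(c)
    elapsed = (time.time() - start) / 10
    # A matrix multiply costs roughly 2*n^3 floating point operations.
    print('%.1f GFLOPS' % (2 * n ** 3 / elapsed / 1e9))
```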
| Safe Crime Prediction |
A deep dive into machine learning based crime prediction that uses deep learning and homomorphic encryption to maintain privacy. A very interesting read with some valuable insights on how to apply crime prediction in a privacy-friendly way.
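The core trick, computing on data you cannot read, can be demonstrated with additively homomorphic Paillier encryption; this toy sketch uses the python-paillier (phe) package and a made-up linear model, not the article’s actual stack:

```python
# Toy demonstration of additively homomorphic encryption with the
# python-paillier package (pip install phe). A linear model's score
# is computed on encrypted features without ever decrypting them.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Data-owner side: encrypt private features before sharing them.
features = [3.5, 1.2, 0.7]
encrypted = [public_key.encrypt(x) for x in features]

# Server side: evaluate a linear model on ciphertexts only.
weights = [0.4, -0.2, 1.1]
encrypted_score = sum(w * e for w, e in zip(weights, encrypted))

# Data-owner side: only the key holder can decrypt the prediction.
print(private_key.decrypt(encrypted_score))
```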
| A neural approach to relational reasoning |
Reasoning - drawing logical conclusions about how physical objects, sentences, or even abstract ideas relate to one another - is still a key challenge in artificial intelligence. DeepMind explains two of its latest papers that tackle this task using neural networks.
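The “simple neural network module” of the first paper (the Relation Network) boils down to applying a shared function g to every pair of objects and feeding the summed result to a readout f; here is a numpy sketch with random stand-in weights and assumed sizes:

```python
# Numpy sketch of a Relation Network: RN(O) = f( sum_{i,j} g(o_i, o_j) ).
# The "networks" here are random toy weights; all sizes are assumptions.
import numpy as np

def mlp(in_dim, out_dim):
    """A random single-layer network standing in for a trained MLP."""
    w = np.random.randn(in_dim, out_dim) * 0.1
    return lambda x: np.maximum(x @ w, 0)   # ReLU

obj_dim, n_objects = 8, 5
g = mlp(2 * obj_dim, 32)   # relation function over object pairs
f = mlp(32, 10)            # readout over the aggregated relations

objects = np.random.randn(n_objects, obj_dim)

# Consider every ordered pair of objects and sum their relation codes.
pair_codes = sum(g(np.concatenate([objects[i], objects[j]]))
                 for i in range(n_objects) for j in range(n_objects))

output = f(pair_codes)      # e.g. logits over possible answers
print(output.shape)         # (10,)
```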
| reiinakano/xcessiv |
A web-based application for quick, scalable, and automated hyperparameter tuning and stacked ensembling in Python.
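For context, the simplest form of stacked ensembling uses out-of-fold predictions from base models as features for a meta-learner; this scikit-learn sketch illustrates the idea Xcessiv automates and is not Xcessiv’s own API:

```python
# Minimal stacked ensemble with scikit-learn: out-of-fold predictions
# from base models become features for a second-level meta-learner.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
base_models = [RandomForestClassifier(n_estimators=50),
               SVC(probability=True)]

# Out-of-fold probabilities avoid leaking labels into the meta-features.
meta_features = np.hstack([
    cross_val_predict(m, X, y, cv=5, method='predict_proba')
    for m in base_models
])

meta_learner = LogisticRegression().fit(meta_features, y)
print(meta_learner.score(meta_features, y))
```

In a real pipeline you would also refit the base models on the full training set before predicting on new data; Xcessiv handles that kind of bookkeeping for you.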
| The Cramer Distance as a Solution to Biased Wasserstein Gradients |
In this paper, the authors describe three natural properties of probability divergences that reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third. They provide empirical evidence suggesting that this is a serious issue in practice.
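For reference, both distances compare the cumulative distribution functions F_P and F_Q of the two distributions; in one dimension the standard definitions are:

```latex
% 1-D definitions: Wasserstein-1 integrates the absolute CDF
% difference, the Cramer distance its squared counterpart.
W_1(P, Q) = \int_{-\infty}^{\infty} \left| F_P(x) - F_Q(x) \right| \, dx
\qquad
\ell_2(P, Q) = \left( \int_{-\infty}^{\infty} \left( F_P(x) - F_Q(x) \right)^2 dx \right)^{1/2}
```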
| ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models |
A little older, but this paper shows off Facebook’s ActiVis tool, which allows inspection of large-scale neural network models.
| A simple neural network module for relational reasoning |
One of the papers mentioned in the DeepMind blog article about relational reasoning.