| September 5 · Issue #94 |
Welcome to a new week in deep learning!
As always, we hope you’ll enjoy reading as much as we did and would appreciate you sharing this newsletter with friends and colleagues.
See you next week!
| An Insider's Look Into The Summer School Training The World's Top AI Researchers |
A very interesting look behind the scenes of the annual “CIFAR deep learning summer school” in Toronto. Besides covering the most important lectures and social events, the article also offers insights into the history, internals, and selection process of the event.
| Easy-to-make videos can show you dancing like the stars |
Want to dance like a professional ballerina or strut like a rapper? A new machine-learning technique can transfer one person’s motion to another in a simple process.
| ANZ bank unpicking neural networks in effort to avoid dangers of deep learning |
Now that the “neural networks solve everything” hype seems to have faded, more pragmatic approaches are coming to light. Here, one of the largest banks in Australia explains how it applies deep learning and what the issues and risks are in a banking environment.
| Amazon Rekognition Mistook Congressmen for Criminals? A Closer Look |
This article examines the ACLU’s claims of racial bias in face recognition technology and takes a look at Amazon’s reaction, as well as the overall media reaction to the claims.
| Reinforcement Learning: A Comprehensive Introduction |
This is the start of a series on reinforcement learning that aims to explain everything from the ground up. Three parts are published so far and offer a very detailed and extensive introduction. Definitely recommended!
| How to explain gradient boosting |
The goal of this article is to explain the intuition behind gradient boosting, provide visualizations for model construction, explain the mathematics as simply as possible, and answer thorny questions such as why GBM performs “gradient descent in function space.”
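The “gradient descent in function space” idea can be seen in a few lines of code: each boosting stage fits a weak learner to the negative gradient of the loss, which for squared error is simply the residuals. The following is a minimal self-contained sketch using decision stumps, not the article’s own code:

```python
# Minimal gradient boosting for squared loss, illustrating
# "gradient descent in function space": each stage fits a weak
# learner (a one-split stump) to the current residuals.

def fit_stump(x, r):
    """Best single-threshold regression stump on 1-D data."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - lm) ** 2 for ri in left)
               + sum((ri - rm) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def gbm(x, y, n_stages=50, lr=0.1):
    f0 = sum(y) / len(y)                  # stage 0: constant model
    stumps, pred = [], [f0] * len(x)
    for _ in range(n_stages):
        # negative gradient of squared loss = residuals
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(x, resid)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, x)]
    return lambda xi: f0 + lr * sum(s(xi) for s in stumps)

x = [0, 1, 2, 3, 4, 5]
y = [0, 0, 0, 1, 1, 1]   # a step function
model = gbm(x, y)
print(round(model(0), 2), round(model(5), 2))  # close to 0 and 1
```

With each stage shrinking the residuals by a factor of `(1 - lr)`, fifty stages bring the predictions very close to the targets.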
| Uncertainty for CTR Prediction: One Model to Clarify Them All |
The folks at Taboola combine the lessons from their last three blog posts into a single model that is able to handle all three types of uncertainty in a principled way.
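To give a flavor of one ingredient (this is an illustrative sketch, not Taboola’s actual model): data (aleatoric) uncertainty is commonly captured by having the network predict both a mean and a variance and training with the Gaussian negative log-likelihood, which is minimized when the predicted variance matches the squared error:

```python
import math

def gaussian_nll(y, mu, log_var):
    """Gaussian negative log-likelihood for one observation.
    log σ² is predicted instead of σ² so the output is unconstrained."""
    return 0.5 * (log_var
                  + (y - mu) ** 2 / math.exp(log_var)
                  + math.log(2 * math.pi))

# For a fixed mean with squared error (y - mu)² = 1, the NLL is
# minimized when the predicted variance equals that squared error:
y, mu = 1.0, 0.0
nlls = {lv: gaussian_nll(y, mu, lv) for lv in [-2.0, 0.0, 2.0]}
print(min(nlls, key=nlls.get))  # log σ² = 0, i.e. σ² = 1 = (y - mu)²
```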
| Learning where you are looking at (in the browser) |
| Gibson Environment |
A team at Stanford has open sourced Gibson, an environment for real-world perception learning. In contrast to other projects, Gibson’s data is based on the real world, which should allow better transfer learning.
| Lazydata: Scalable data dependencies for Python projects |
A little outside of the main field, but definitely quite interesting for machine learning practitioners. This package allows tracking and caching remote resources (e.g. your huge dataset on S3) locally in your repository, all in a reproducible and git-compatible fashion.
| Forecasting earthquake aftershock locations with AI-assisted science |
Harvard and Google teamed up to see whether deep learning could be applied to explain where aftershocks might occur, and made good progress on the topic. Although the final model is still imprecise, they were able to identify physical quantities that may be important in earthquake generation.
| Deep Exemplar-based Colorization |
This paper achieves impressive colorization results thanks to a custom architecture, separate processing of the luminance channel, and a loss computed on the chrominance channels with a custom error metric.
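Why split luminance from chrominance? A grayscale input already *is* the luminance, so a colorization model only needs to predict the two chrominance channels and can leave the luminance untouched. The sketch below illustrates the decomposition with the standard BT.601 YCbCr transform; this is a generic stand-in for intuition, not the paper’s exact color space or pipeline:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 RGB -> YCbCr (luminance Y, chrominance Cb/Cr)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b        # given by the grayscale input
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b    # what the model must predict
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    """Inverse BT.601 transform."""
    y, cb, cr = ycc[..., 0], ycc[..., 1], ycc[..., 2]
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)

img = np.random.rand(4, 4, 3)      # dummy RGB image in [0, 1]
ycc = rgb_to_ycbcr(img)
# A chrominance-only loss would compare ycc[..., 1:] against the
# prediction, leaving the luminance channel ycc[..., 0] untouched.
recon = ycbcr_to_rgb(ycc)
print(np.allclose(recon, img, atol=1e-6))  # round-trip check: True
```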
| Wasserstein is all you need |
The authors propose a unified framework for building unsupervised representations of individual objects or entities (and their compositions), by associating with each object both a distributional as well as a point estimate (vector embedding). This is made possible by the use of optimal transport, which allows them to build these associated estimates while harnessing the underlying geometry of the ground space.
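For context (a standard definition, not taken from the paper): the $p$-Wasserstein distance between two discrete distributions $\mu = \sum_i a_i \delta_{x_i}$ and $\nu = \sum_j b_j \delta_{y_j}$ is defined via an optimal transport plan,

```latex
W_p(\mu, \nu) = \left( \min_{T \in \Pi(a, b)} \sum_{i,j} T_{ij}\, d(x_i, y_j)^p \right)^{1/p},
\qquad
\Pi(a, b) = \{\, T \ge 0 : T\mathbf{1} = a,\; T^\top \mathbf{1} = b \,\},
```

where the ground metric $d$ is what lets the framework harness the underlying geometry of the ground space.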