| October 13 · Issue #61 |
Hi and welcome to another week in deep learning.
As always if you like receiving this newsletter, you can help us by sharing it with your friends and colleagues.
| The Seven Deadly Sins of AI Predictions |
This great article takes a look at common assumptions and predictions about the future of AI and explains where they come from. It boils them down to seven recurring patterns that should be easy to identify and may let you take the next AI horror story with a grain of salt.
| Nvidia’s new Pegasus AI computer is designed to drive autonomous taxis |
Nvidia has extended its Drive PX line with a new system built specifically for fully autonomous driving. It packs four Volta GPUs on a board the size of a license plate and is supposed to vastly reduce the energy consumption of such systems.
| Forget Killer Robots—Bias Is the Real AI Danger |
Back on the ‘evil AI’ topic, John Giannandrea, AI lead at Google, is more worried about machine learning systems that pick up human prejudices. He calls for transparency about the training data used to build a model and emphasizes the importance of inspecting potentially biased models before putting them into production.
| China’s AI Awakening |
An interesting article on China’s goal of becoming the world leader in AI by 2030. Given the efforts and enormous investments made by the government, the article goes as far as recommending that others copy these actions and go ‘all in’ on artificial intelligence.
| Deep RL Bootcamp |
This page offers all slides and lectures from the Deep Reinforcement Learning Bootcamp held earlier this year. The bootcamp covered the basics, common algorithms and strategies, as well as hands-on labs in which everything was applied using OpenAI’s Gym environment. If you want to get started, this looks like a great course to dive into.
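One of the basics such a bootcamp typically covers, tabular Q-learning, fits in a few lines of plain Python. Below is a minimal sketch on a toy chain environment; the environment and hyperparameters are my own illustration, not taken from the course labs.

```python
import random

# Toy chain environment: states 0..4, start at state 0.
# Action 0 moves left, action 1 moves right.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

# Tabular Q-learning: Q[s][a] estimates the expected discounted return.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# After training, moving right should dominate in every non-goal state.
print(all(Q[s][1] > Q[s][0] for s in range(GOAL)))
```

The only moving parts are the temporal-difference update and the epsilon-greedy exploration; everything else in deep RL replaces the table `Q` with a neural network.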
| Behind the Magic: How we built the ARKit Sudoku Solver |
In this part of their article series, the authors of the `Magic Sudoku` app explain how they trained a neural net to solve Sudokus on mobile phones. The series covers many aspects of creating a mobile app, but this part explains their deep learning journey in detail. Great read!
| Visualising Activation Functions in Neural Networks |
A helpful little tool to quickly inspect and plot an activation function. It covers all the common ones and even displays useful properties of each function.
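For reference, the common activation functions such a tool visualises can be written down in a few lines. This plain-Python sketch (function names are mine) is an illustration, not the tool itself:

```python
import math

# Common activation functions, as plotted by visualisation tools like this one.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # squashes input to (0, 1)

def tanh(x):
    return math.tanh(x)                  # squashes input to (-1, 1)

def relu(x):
    return max(0.0, x)                   # zero for negatives, identity otherwise

def leaky_relu(x, slope=0.01):
    return x if x > 0 else slope * x     # keeps a small gradient for negatives

# One useful property to compare: sigmoid and tanh saturate, ReLU does not.
print(sigmoid(0.0), tanh(0.0), relu(-3.0), relu(3.0))
```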
| Introducing Gluon: a new library for machine learning from AWS and Microsoft |
Microsoft and Amazon have teamed up to release Gluon, a new machine learning framework that’s tightly coupled with both companies’ backends and supposedly scales up to 500 GPUs. Worth a look if you plan to use either MXNet or Microsoft’s Cognitive Toolkit.
| Introducing NNVM Compiler: A New Open End-to-End Compiler for AI Frameworks |
To tame the ever-growing landscape of deep learning frontends and backends, Amazon has announced a specialized compiler for AI frameworks. It bridges different frameworks and platforms and should allow a more general approach to AI deployment.
| PyTorch implementation of the Quasi-Recurrent Neural Network |
Salesforce has open-sourced its PyTorch implementation of the Quasi-Recurrent Neural Network. QRNNs work similarly to LSTMs, but this implementation is up to 17x faster than an equivalent network implemented using cuDNN.
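The speedup comes from the QRNN's element-wise pooling step: the gate values are computed for all timesteps in parallel (by convolutions over the input), and only a cheap recurrence runs sequentially. A minimal plain-Python sketch of that recurrence ("f-pooling" from the QRNN paper), with gate values given rather than learned, not Salesforce's implementation:

```python
# f-pooling: c_t = f_t * c_{t-1} + (1 - f_t) * z_t.
# z (candidate values) and f (forget gates) would normally come from
# parallel convolutions; here they are plain per-timestep lists.
def f_pooling(z, f, c0=0.0):
    c = c0
    states = []
    for z_t, f_t in zip(z, f):
        c = f_t * c + (1.0 - f_t) * z_t  # the only sequential operation
        states.append(c)
    return states

# Example: three timesteps with forget gates fixed at 0.5.
states = f_pooling(z=[1.0, 0.0, 1.0], f=[0.5, 0.5, 0.5])
print(states)  # → [0.5, 0.25, 0.625]
```

Because there are no matrix multiplications inside the loop, this recurrence is trivially cheap compared to an LSTM cell, which is where the claimed speedup over cuDNN comes from.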
| TensorFlow Lattice: Flexibility Empowered by Prior Knowledge |
Google has published a new tool for estimating multivariable functions. It builds lattices, look-up tables that interpolate your desired function, and lets you shape a function that matches your expectations without actually having to generate all the necessary data points.
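The idea behind a lattice is an interpolated look-up table: function values are stored at grid vertices, and intermediate inputs are linearly interpolated. A minimal 1-D sketch in plain Python (my own illustration, not the TensorFlow Lattice API):

```python
# A 1-D lattice: values at evenly spaced keypoints on [0, 1], linearly
# interpolated in between. In TensorFlow Lattice the vertex values are
# learned and prior knowledge (e.g. monotonicity) can be imposed on them.
def lattice_1d(vertices, x):
    n = len(vertices) - 1
    x = min(max(x, 0.0), 1.0)          # clip to the lattice's domain
    pos = x * n                        # position in keypoint units
    i = min(int(pos), n - 1)           # left keypoint index
    frac = pos - i                     # interpolation weight
    return (1.0 - frac) * vertices[i] + frac * vertices[i + 1]

# Vertices sampling f(x) = x^2 at x = 0, 0.5, 1.
verts = [0.0, 0.25, 1.0]
print(lattice_1d(verts, 0.25), lattice_1d(verts, 0.75))  # → 0.125 0.625
```

Higher-dimensional lattices generalize this to multilinear interpolation over a grid, which is what makes them suitable for multivariable functions.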
| A ten-minute introduction to sequence-to-sequence learning in Keras |
François Chollet has published a brief new tutorial on how to implement sequence-to-sequence models using his Keras framework.
| Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments |
This paper casts the problem of continuous adaptation into the learning-to-learn framework.
| Standard detectors aren't (currently) fooled by physical adversarial stop signs |
The authors of this paper set out to disprove the claim that detectors can be fooled into overlooking physically manipulated stop signs. They go as far as concluding that, at least currently, such physical adversarial examples do not fool standard detectors.
| Detect to Track and Track to Detect |
An interesting approach that handles object tracking and detection within a single model, beats last year’s ImageNet results, and at the same time stays relatively simple.