Deep Learning Weekly | Issue #61: Gluon, Seven AI sins, Biased Models, China's roadmap, AR Sudoku

Bringing you everything new and exciting in the world of deep learning, from academia to the grubby depths of industry, every week, right to your inbox. Free.

October 13 · Issue #61
Hi and welcome to another week in deep learning.
We start off this issue with a nice article on common assumptions and predictions about AI, and why you should take most such articles with a grain of salt. Next, we learn about Nvidia's new autonomous driving hardware, bias in models, and China's ambitious AI roadmap.
To get back to learning and hacking, you can take a look at the Deep RL Bootcamp slides and lectures, peek behind the curtain of an AR Sudoku solver, and inspect different activation functions. Amazon has revealed a new end-to-end compiler, as well as Gluon, a new machine learning library built in cooperation with Microsoft. TensorFlow gained a nice new tool, and Keras now includes a tutorial on sequence-to-sequence learning.
As always, if you like receiving this newsletter, you can help us by sharing it with your friends and colleagues.

The Seven Deadly Sins of AI Predictions
Nvidia’s new Pegasus AI computer is designed to drive autonomous taxis
Forget Killer Robots—Bias Is the Real AI Danger
China’s AI Awakening
Deep RL Bootcamp
Behind the Magic: How we built the ARKit Sudoku Solver
Visualising Activation Functions in Neural Networks
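The linked article visualises activation functions with plots; as a taste of what it covers, here is a minimal pure-Python sketch that evaluates a few common activations at sample points. The function choices and sample inputs are illustrative, not taken from the article.

```python
import math

def sigmoid(x):
    # Squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Zero for negative inputs, identity for positive ones.
    return max(0.0, x)

def tanh(x):
    # Squashes input into (-1, 1); zero-centred, unlike sigmoid.
    return math.tanh(x)

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  tanh={tanh(x):+.3f}  relu={relu(x):.1f}")
```

Swapping `math` for NumPy arrays and feeding the outputs to matplotlib gives the kind of curves the article explores.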
Libraries & Code
Introducing Gluon: a new library for machine learning from AWS and Microsoft
Introducing NNVM Compiler: A New Open End-to-End Compiler for AI Frameworks
PyTorch implementation of the Quasi-Recurrent Neural Network
TensorFlow Lattice: Flexibility Empowered by Prior Knowledge
A ten-minute introduction to sequence-to-sequence learning in Keras
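The Keras tutorial above works at the character level. As a hedged sketch of the kind of data preparation such a tutorial starts with, the snippet below maps characters to integer indices and one-hot encodes a sequence; the toy text pairs and variable names are illustrative assumptions, not taken from the tutorial itself.

```python
# Toy (source, target) pairs; real tutorials use thousands of sentence pairs.
pairs = [("hi", "oi"), ("run", "cours")]

# Build a sorted character vocabulary over the source texts.
source_chars = sorted({ch for src, _ in pairs for ch in src})
char_to_index = {ch: i for i, ch in enumerate(source_chars)}

def one_hot_encode(text, char_to_index):
    """Return one one-hot vector per character in `text`."""
    size = len(char_to_index)
    vectors = []
    for ch in text:
        vec = [0.0] * size
        vec[char_to_index[ch]] = 1.0
        vectors.append(vec)
    return vectors

encoded = one_hot_encode("hi", char_to_index)
print(len(encoded), len(encoded[0]))  # sequence length, vocabulary size
```

From here, a sequence-to-sequence model feeds such encoded source sequences to an encoder and trains a decoder to emit the target sequence one character at a time.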
Papers & Publications
Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments
Standard detectors aren't (currently) fooled by physical adversarial stop signs
Detect to Track and Track to Detect