| January 18 · Issue #71 |
As always, happy reading and hacking!
| Alibaba's AI Outguns Humans in Reading Test |
Alibaba has developed an artificial intelligence model that scored better than humans in a Stanford University reading and comprehension test.
| Autonomous Transportation Launching to 125,000 Residents in Florida |
Voyage is bringing self-driving cars to a retirement community: The Villages, Florida. With 125,000 residents, 750 miles of road, and three distinct downtowns, The Villages is a…
| Do Algorithms Reveal Sexual Orientation or Just Expose our Stereotypes? |
An interesting critical examination of a recent study claiming that artificial intelligence can infer sexual orientation from facial images, a claim that caused a media uproar in the fall of 2017.
| Cloud AutoML: Making AI Accessible to Every Business |
Google introduces Cloud AutoML, aiming to make AI accessible to every business by helping businesses with limited ML expertise build their own high-quality custom models with advanced techniques like learning2learn and transfer learning from Google.
| The Google Brain Team — Looking Back on 2017 (Part 1 of 2) |
The Google Brain Team looks back at a year of research ranging from their efforts to automate machine learning, to understanding and generating speech, to applying deep learning in computer systems and in privacy and security.
| Zero to Hero: Guide to Object Detection using Deep Learning |
An in-depth post delving into object detection and algorithms such as Faster R-CNN, YOLO, and SSD.
| How To Create Data Products That Are Magical Using Sequence-to-Sequence Models |
A great tutorial outlining how to summarize text and generate features from GitHub Issues using deep learning with Keras and TensorFlow. The author makes sure that all steps are replicable, starting with gathering and cleaning data and going all the way to an MVP model with reasonable performance.
| The 3 Tricks That Made AlphaGo Zero Work |
There were many advances in Deep Learning and AI in 2017, but few generated as much publicity and interest as DeepMind’s AlphaGo Zero. This excellent post explores three ‘tricks’ behind AlphaGo Zero’s stunning success.
| Fitting Larger Networks Into Memory |
OpenAI released the Python/TensorFlow package openai/gradient-checkpointing, which lets you fit 10x larger neural nets into memory at the cost of an additional 20% computation time.
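The idea behind gradient checkpointing is easy to sketch: on the forward pass, store only every k-th activation (the "checkpoints") and recompute the rest, segment by segment, during backprop. Here is a minimal pure-Python illustration on a toy chain of scalar layers y = w * x — an illustration of the technique only, not the openai/gradient-checkpointing API:

```python
def forward(x, weights):
    # plain setup: keep every intermediate activation for the backward pass
    acts = [x]
    for w in weights:
        x = w * x
        acts.append(x)
    return acts

def backward_full(acts, weights, grad_out):
    # standard backprop over a chain of layers y = w * x
    grads = [0.0] * len(weights)
    g = grad_out
    for i in reversed(range(len(weights))):
        grads[i] = g * acts[i]   # dL/dw_i needs layer i's input -> stored activation
        g = g * weights[i]       # dL/dx_i propagated to the previous layer
    return grads

def backward_checkpointed(x0, weights, grad_out, k):
    # forward: remember only every k-th activation
    n = len(weights)
    ckpts = {0: x0}
    x = x0
    for i, w in enumerate(weights):
        x = w * x
        if (i + 1) % k == 0 and (i + 1) < n:
            ckpts[i + 1] = x
    # backward: recompute each segment's activations from its checkpoint,
    # processing segments from last to first
    grads = [0.0] * n
    g = grad_out
    for seg_start in reversed(range(0, n, k)):
        seg_len = min(k, n - seg_start)
        acts = [ckpts[seg_start]]
        for i in range(seg_start, seg_start + seg_len):
            acts.append(weights[i] * acts[-1])
        for j in reversed(range(seg_len)):
            i = seg_start + j
            grads[i] = g * acts[j]
            g = g * weights[i]
    return grads
```

Storing roughly n/k checkpoints plus one segment of k recomputed activations is minimized around k ≈ √n, which is where the sub-linear memory footprint comes from, at the cost of one extra forward pass worth of compute.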
| A Faster Pytorch Implementation of Faster R-CNN |
This project is a faster PyTorch implementation of Faster R-CNN, aimed at accelerating the training of Faster R-CNN object detection models.
| Deep Reinforcement Fuzzing |
Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, the authors formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows the application of state-of-the-art deep Q-learning algorithms that optimize rewards, which are defined from runtime properties of the program under test.
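As a concrete toy of that MDP framing (not the paper's setup — it uses deep Q-learning over raw input bytes), treat the current input as the state, byte-level mutations as actions, and newly covered branches of a hypothetical program under test as the reward, then run plain tabular Q-learning. All names and the target program below are illustrative:

```python
import random

def target(data):
    # hypothetical program under test; returns the ids of branches it executed
    branches = set()
    if len(data) > 0:
        branches.add("nonempty")
    if len(data) > 4:
        branches.add("long")
    if data[:1] == b"F":
        branches.add("magic1")
    if data[:2] == b"FU":
        branches.add("magic2")
    return branches

ACTIONS = ["flip", "insert", "delete"]

def mutate(data, action, rng):
    b = bytearray(data)
    if action == "insert" or not b:
        b.insert(rng.randrange(len(b) + 1), rng.randrange(256))
    elif action == "flip":
        i = rng.randrange(len(b))
        b[i] ^= 1 << rng.randrange(8)   # flip one random bit
    else:  # delete
        del b[rng.randrange(len(b))]
    return bytes(b)

def fuzz(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}          # tabular stand-in for the paper's deep Q-network
    seen = set()    # branches covered so far across all inputs
    data = b"AAAA"
    for _ in range(episodes):
        state = len(target(data))          # crude state: current input's coverage
        if rng.random() < eps:             # epsilon-greedy action selection
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))
        new = mutate(data, ACTIONS[a], rng)
        cov = target(new)
        reward = len(cov - seen)           # reward: branches never covered before
        s2 = len(cov)
        best_next = max(Q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
        q = Q.get((state, a), 0.0)
        Q[(state, a)] = q + alpha * (reward + gamma * best_next - q)
        seen |= cov
        if reward > 0 or rng.random() < 0.5:
            data = new                     # keep inputs that found something new
    return seen
```

A real fuzzer would drive the reward from code-coverage instrumentation of the actual program under test, and replace the Q-table with a neural network — but the reward-from-runtime-properties loop is the same shape.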
| Learning to Attack: Adversarial Transformation Networks |
Generating adversarial examples to attack a deep neural network, by either directly computing gradients with respect to the image pixels or directly solving an optimization over the image pixels, is a well-studied problem. This fascinating Google paper explores whether a separate network can be trained to efficiently attack another fully trained network, demonstrates that it is indeed possible, and shows that the generated attacks yield startling insights into the weaknesses of the target network.
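For flavor, the gradient-based baseline mentioned above — direct gradient steps on the input — fits in a few lines. Here is a hypothetical fast-gradient-sign-style attack on a toy logistic regression, with made-up weights; the paper's actual contribution, the trained attacker network, is not shown:

```python
import math

def sign(v):
    return (v > 0) - (v < 0)

def predict(w, b, x):
    # toy "network": logistic regression, p(class 1 | x)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(w, b, x, eps):
    # gradient of the cross-entropy loss w.r.t. the *input* is (p - label) * w,
    # so step each input dimension by eps in the sign of that gradient
    p = predict(w, b, x)
    label = 1 if p >= 0.5 else 0
    return [xi + eps * sign((p - label) * wi) for xi, wi in zip(x, w)]
```

With w = [3, -2], b = 0, the clean input [1, 1] is classified as class 1; an eps = 0.6 perturbation flips the prediction. The paper's question is whether a second network can learn to produce such perturbations in a single forward pass instead of per-example gradient computation.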