|July 28 · Issue #50 · View online |
Hello and welcome to the 50th issue of Deep Learning Weekly.
This week, Google launched an AI-focused venture fund, we got deep learning on a stick, and a startup set out to make the self-driving car race more suspenseful.
If you’d like to join us in celebrating this milestone and help us publish 50 more issues of Deep Learning Weekly, tell a friend about us.
| Qualcomm Opens its Mobile Chip Deep Learning Framework to All |
Mobile chip maker Qualcomm has released its Neural Processing Engine SDK to developers via its developer network. This opens up a lot of potential for AI computing on a range of devices, including mobile phones, in-car platforms, and more.
| Beijing Wants A.I. to Be Made in China by 2030 |
A new plan from the top of the Chinese government calls for the country to become a powerhouse in artificial intelligence in just over a decade. The country laid out a development plan on Thursday to become the world leader in A.I. by 2030, aiming to surpass its rivals technologically and build a domestic industry worth almost $150 billion.
| Deep Learning on a USB Stick |
Another interesting development making deep learning on devices more accessible. This ultra-low-power VPU stick makes it possible to add visual intelligence and machine learning capabilities to battery-powered products such as autonomous drones or intelligent security cameras, with no need for a network connection or a cloud backend.
| Former Tesla Engineers Launch Startup to Bring Mapping and Self-Driving Data to Rest of the Industry |
Former Tesla and iRobot engineers launch lvl5 to bring crowdsourced high-precision maps to other automakers:
“Yes, our goal is to bring HD mapping to the entire industry. Most of the cars out there are not Teslas, so if we want to truly make our roads safer, all OEMs need to have access to these maps.”
| Google Creates an AI Venture Fund to Invest in AI Startups |
Google’s new Gradient Ventures fund focuses on AI companies and aims to provide not only capital but also access to AI experts and AI bootcamps.
| Technical Debt in Machine Learning |
Great post exploring the often neglected problem of technical debt in machine learning systems. The author presents three types of technical debt:
- Feedback Loops: your ML system is fed data that it generated itself, inflating performance metrics without actually improving the system. The fix lies in proper exploration/exploitation calibration.
- Correction Cascades: if you apply too many fixes and heuristic corrections to your ML system, you are no longer able to properly train the system as a whole.
- Hobo-Features: useless features that are hard to get rid of, e.g. a feature that gave a minor performance boost but became neutral once more data was collected.
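The feedback-loop point above is often addressed with a simple exploration policy. As a minimal sketch (not from the post — the scoring setup here is hypothetical), an epsilon-greedy rule occasionally serves a random item so the system keeps seeing data it did not generate itself:

```python
import random

def choose_item(scores, epsilon=0.1, rng=random):
    """Epsilon-greedy selection: mostly exploit the model's top-scored
    item, but with probability epsilon serve a random one, so the
    system keeps collecting data it did not generate itself."""
    if rng.random() < epsilon:
        return rng.randrange(len(scores))  # explore: uniform random pick
    return max(range(len(scores)), key=scores.__getitem__)  # exploit

# With epsilon=0 the model only ever observes outcomes for its own top
# pick, which is exactly the feedback loop described above.
```

Calibrating `epsilon` is the exploration/exploitation trade-off the author refers to.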
| Text Classifier Algorithms in Machine Learning |
A good overview of the main text classifier algorithms and their use cases.
| Challenges in Deep Learning |
Deep-learning-powered AI systems come with complex difficulties and hurdles. This post discusses the most prominent challenges in deep learning.
| New fast.ai Course: Computational Linear Algebra |
fast.ai has a great track record of putting out high quality ML classes and this seems to be another golden nugget in that series. There is an online textbook
as well as a lecture video playlist.
| Research Blog: An Update to Open Images - Now with Bounding-Boxes |
Last year, Google introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning over 6,000 object categories. Now they are releasing an update that adds a total of ~2M bounding-boxes to the existing dataset, along with several million additional image-level labels, making it easier to train models for object classification and detection.
Example of bounding boxes from Open Images corpus
| Decoding the Enigma with Recurrent Neural Networks |
The author uses a recurrent neural network (RNN) to approximate a function for decoding the Nazi Enigma machine. Fun stuff!
| DeepMind: Agents that Imagine and Plan |
Very cool research by DeepMind on imagination-augmented agents, which feature an ‘imagination encoder’: a neural network that learns to extract any information useful for the agent’s future decisions while ignoring what is not relevant.
| Ian Goodfellow's answer to What is Next for Deep Learning? - Quora |
Very informative answer by Ian Goodfellow; every point outlines a direction in which deep learning will likely expand.
| DeepMind: Going Beyond Average for Reinforcement Learning |
Another cool piece of DeepMind research on how modeling the full distribution of rewards, instead of simply the average, makes reinforcement learning algorithms much more robust to random perturbations.
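The core idea can be illustrated with a toy example (mine, not DeepMind's): two actions with identical expected return but very different risk look the same to an average-only agent, while the full return distribution tells them apart.

```python
import statistics

# Hypothetical per-episode returns for two actions.
safe_returns  = [1.0, 1.0, 1.0, 1.0]   # always pays 1
risky_returns = [4.0, 0.0, 0.0, 0.0]   # occasionally pays 4

# An agent that tracks only the mean cannot distinguish them...
assert statistics.mean(safe_returns) == statistics.mean(risky_returns)

# ...but the full distribution can, e.g. via its spread.
var_safe  = statistics.pvariance(safe_returns)   # 0.0
var_risky = statistics.pvariance(risky_returns)  # 3.0
```

Distributional RL methods learn this whole distribution rather than collapsing it to its mean up front.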
| 37 Reasons Why Your Neural Network is not Working |
Some solid advice for the frustrating situation every DL practitioner has found themselves in: everything looks fine during training, but at test time your net outputs garbage.
| SimGAN-Captcha: Solve Captcha Without Manual Labeling a Training Set |
Implementation of SimGAN showing that, using a captcha synthesizer and a refiner trained with a GAN, it’s feasible to generate synthesized training pairs for classifying captchas.
| A Memory-Efficient Implementation of DenseNets |
A memory-efficient implementation of the DenseNets featured below.
| CVPR2017 Best Paper Awards |
| Machine Teaching: A New Paradigm for Building Machine Learning Systems |
An interesting paper on teaching machine learning systems. The authors articulate fundamental machine teaching principles. Specifically, they describe how, by decoupling knowledge about machine learning algorithms from the process of teaching, innovation could be accelerated and millions of new uses for machine learning models engendered.
| Densely Connected Convolutional Networks |
It has been shown that convolutional networks can be substantially deeper, more accurate, and more efficient to train if they contain shorter connections between layers close to the input and those close to the output. The authors of this paper take this insight to its logical conclusion by introducing the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. This brings advantages such as alleviating the vanishing-gradient problem, strengthening feature propagation, encouraging feature reuse, and substantially reducing the number of parameters.
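The connectivity pattern can be sketched in a few lines of NumPy (a toy stand-in, not the paper's implementation — real DenseNet layers are BN-ReLU-Conv, replaced here by a random linear map for illustration):

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """DenseNet-style connectivity: each 'layer' receives the
    concatenation of all preceding feature maps along the channel
    axis and contributes growth_rate new channels."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)      # all earlier outputs
        w = rng.standard_normal((inp.shape[-1], growth_rate))
        features.append(inp @ w)                     # new feature map
    return np.concatenate(features, axis=-1)

# Channels grow linearly: C_out = C_in + num_layers * growth_rate,
# which is why DenseNets stay parameter-efficient despite dense wiring.
```

Because every layer sees every earlier feature map directly, gradients also flow directly to early layers, which is the intuition behind the vanishing-gradient claim above.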
| Learning from Simulated and Unsupervised Images |
This paper aims to reduce the gap between real and synthetic training images by introducing a new method called Simulated+Unsupervised (S+U) learning, where the task is to learn a model that improves the realism of a simulator’s output using unlabeled real data, while preserving the annotation information from the simulator. The authors develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. An example of this training method can be found in the SimGAN repo linked to above.
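The "improve realism while preserving annotations" trade-off shows up directly in the refiner's objective. As a simplified sketch (my own paraphrase, not the paper's exact loss), it combines an adversarial term with an L1 self-regularization term:

```python
import numpy as np

def refiner_loss(refined, synthetic, disc_real_prob, lam=0.5):
    """Sketch of an S+U-style refiner objective: an adversarial term
    that rewards fooling the discriminator into calling the refined
    image real, plus an L1 self-regularization term that keeps the
    refined image close to its synthetic input so the simulator's
    annotations remain valid. lam is a hypothetical weighting."""
    adversarial = -np.log(disc_real_prob + 1e-8)           # fool D
    self_reg = lam * np.abs(refined - synthetic).mean()    # preserve labels
    return adversarial + self_reg
```

Without the self-regularization term, the refiner could hallucinate arbitrary realistic images and the synthetic labels would no longer match.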