|August 3 · Issue #91 |
Hey and welcome to another week in deep learning!
As always, we hope you’ll enjoy reading as much as we did and would appreciate you sharing this newsletter with friends and colleagues.
Happy reading and hacking!
| Has AI Surpassed Humans at Translation? |
A cogent exposé of some of the shortcomings of neural machine translation. Chiefly, the author points out that translations lack reliability (they often miss negation, whole words, or entire phrases), memory (they forget information gained from prior sentences), and common sense (they have very little external context or knowledge about the world). Douglas Hofstadter recently published an interesting article making similar points from a linguistic perspective.
| Deep Learning Cracks the Code of Messenger RNAs and Protein-Coding Potential |
Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. The researchers fed a gated neural network training data on both noncoding and messenger RNA sequences, then turned it loose on the data to “learn the defining characteristics of protein-coding transcripts on its own.”
| Empowering Businesses and Developers to do more with AI |
AI is empowerment, and we want to democratize that power for everyone and every business—from retail to agriculture, education to healthcare.
| Google to let you pop its AI chips into your own computer as of October |
Google is finally making TPU hardware directly available. Although access is restricted for now, Edge TPUs should become available soon and are tailored for inference in IoT devices.
| Differentiable Image Parameterizations |
Great new work from distill.pub explaining the use of differentiable image parameterizations to explore the inner workings of neural networks and create fascinating art along the way. Definitely worth a read!
| Autopsy of a deep learning paper |
An interesting take on the content and quality of deep learning papers in general, and of a recent paper from Uber AI in particular. Although quite harsh, the author argues his points well and gives a clear explanation of his criticism.
| What do machine learning practitioners actually do? |
This extensive article examines the day-to-day life of those scarce machine learning practitioners everyone is trying to hire. Take a look if you want to join!
| Applications of Reinforcement Learning in Real World |
An article aiming to do three things: investigate the breadth and depth of real-world RL applications, view RL from different perspectives, and persuade decision makers and researchers to invest more effort in RL research.
| Reinforcement Learning with Model-Agnostic Meta-Learning in Pytorch |
Implementation of Model-Agnostic Meta-Learning (MAML) applied to reinforcement learning problems, written in PyTorch.
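The linked repo does this with PyTorch's autograd; as a rough, dependency-free sketch of MAML's core inner/outer-loop idea, here is a toy example on invented 1-D quadratic tasks (the task losses, step sizes, and `task_optima` values are all illustrative assumptions, not from the repo):

```python
# Minimal MAML sketch on toy 1-D quadratic tasks (pure Python, no PyTorch).
# Each task t has loss L_t(w) = 0.5 * (w - c_t)^2 with its own optimum c_t.
# MAML looks for an initialization w that performs well after one gradient
# step of task-specific adaptation.

def inner_adapt(w, c, alpha=0.4):
    """One inner-loop SGD step on the task loss L(w) = 0.5*(w - c)^2."""
    grad = w - c                   # dL/dw
    return w - alpha * grad

def meta_gradient(w, c, alpha=0.4):
    """Gradient of the post-adaptation loss w.r.t. the initialization w.
    With w' = w - alpha*(w - c), we get dL(w')/dw = (1-alpha)^2 * (w - c);
    the extra (1 - alpha) factor is the second-order term MAML differentiates
    through (what autograd's create_graph handles in a real implementation)."""
    return (1 - alpha) ** 2 * (w - c)

def maml_train(task_optima, steps=200, alpha=0.4, beta=0.5):
    w = 0.0
    for _ in range(steps):
        grads = [meta_gradient(w, c, alpha) for c in task_optima]
        w -= beta * sum(grads) / len(grads)   # outer (meta) update
    return w

tasks = [1.0, 3.0, 5.0]
w0 = maml_train(tasks)
# For these symmetric quadratics, the best initialization is the task mean.
print(round(w0, 3))  # prints 3.0
```

In a real RL-MAML setup the quadratic losses become policy-gradient objectives and the analytic meta-gradient is replaced by backpropagating through the inner update, but the two-level loop is the same.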
| AutoGraph converts Python into TensorFlow graphs |
AutoGraph converts Python code, including control flow, print() and other Python-native features, into pure TensorFlow graph code, which seems like an awesome new feature given the boilerplate required to rewrite ‘simple’ functions using TF code.
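To make the kind of rewrite AutoGraph performs concrete, here is a hedged, plain-Python illustration: the `while_loop` helper below is an invented stand-in for functional graph primitives like TensorFlow's `tf.while_loop` (no TensorFlow is used here), showing how imperative control flow gets turned into functional form:

```python
# Rough illustration of an AutoGraph-style rewrite: imperative Python control
# flow becomes calls to a functional loop primitive. `while_loop` here is a
# plain-Python stand-in, analogous in shape to tf.while_loop in graph code.

def while_loop(cond, body, state):
    """Functional while: threads `state` through `body` while `cond` holds."""
    while cond(*state):
        state = body(*state)
    return state

# What you would write by hand (eager, imperative):
def collatz_steps(n):
    steps = 0
    while n > 1:
        if n % 2 == 0:
            n = n // 2
        else:
            n = 3 * n + 1
        steps += 1
    return steps

# Roughly the functional form an AutoGraph-style conversion produces:
def collatz_steps_graph(n):
    def cond(n, steps):
        return n > 1
    def body(n, steps):
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        return n, steps + 1
    _, steps = while_loop(cond, body, (n, 0))
    return steps

print(collatz_steps(6), collatz_steps_graph(6))  # prints: 8 8
```

Writing the second form by hand for every loop and conditional is exactly the boilerplate the article refers to; AutoGraph automates that translation.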
| A tutorial on using Google Cloud TPUs |
A well-written tutorial on how and why to use TPUs, including the required setup and the changes you need to introduce to your codebase.
| Motivating the Rules of the Game for Adversarial Example Research |
In this paper, the authors argue that adversarial example defense papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, defense papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security. Towards this end, they establish a taxonomy of motivations, constraints, and abilities for more plausible adversaries.
| Progressive Neural Architecture Search |
The authors propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.