| July 14 · Issue #48 |
Howdy and welcome to another issue of deep learning weekly!
Happy hacking and reading. As always, if you enjoy receiving this newsletter, you can help us by sharing it with friends and colleagues.
See you next week!
| PAIR: the People + AI Research Initiative |
An interesting Google initiative whose goal is to "focus on the 'human side' of AI: the relationship between users and technology, the new applications it enables, and how to make it broadly inclusive." It also offers new tools and educational material on human-centered machine learning.
| Microsoft creates an AI research lab to challenge Google and DeepMind |
Microsoft has created a new research lab with a focus on developing general-purpose artificial intelligence technology.
| AI is Changing How We Do Science |
A tour of how deep learning is being employed to make progress across the sciences. Whether it is physicists making sense of data from particle accelerators, astronomers using GANs to smooth out noisy pictures of galaxies, biologists finding gene variants linked to autism, or chemists synthesizing target molecules, deep neural networks have become an indispensable tool in these noble pursuits.
| When Not to Use Deep Learning |
This illuminating post first does away with a number of misconceptions outsiders often have about deep learning, from which typical straw-man arguments against it are born (e.g. "not enough data"). Real reasons not to use it include lacking the commitment needed to bear the substantial computational cost and time investment, and the need for interpretability (especially for causal models).
| TensorFlow Neural Machine Translation Tutorial |
A comprehensive tutorial by Google Research which aims to give readers a full understanding of seq2seq models and to show how to build a competitive translation model from scratch in TensorFlow.
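For readers who want a feel for the moving parts before diving in, here is a minimal sketch of the core seq2seq idea the tutorial builds on: an encoder compresses the source sentence into a state that initializes the decoder. This is written in Keras with made-up vocabulary sizes, not the tutorial's own TensorFlow code.

```python
from tensorflow import keras
from tensorflow.keras import layers

SRC_VOCAB, TGT_VOCAB, UNITS = 8000, 8000, 256  # hypothetical sizes

# Encoder: read the source sequence, keep only the final LSTM state.
enc_inputs = keras.Input(shape=(None,))
enc_emb = layers.Embedding(SRC_VOCAB, UNITS)(enc_inputs)
_, state_h, state_c = layers.LSTM(UNITS, return_state=True)(enc_emb)

# Decoder: generate the target sequence, conditioned on the encoder state.
dec_inputs = keras.Input(shape=(None,))
dec_emb = layers.Embedding(TGT_VOCAB, UNITS)(dec_inputs)
dec_out = layers.LSTM(UNITS, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
probs = layers.Dense(TGT_VOCAB, activation="softmax")(dec_out)

model = keras.Model([enc_inputs, dec_inputs], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

A competitive translation model adds attention and beam search on top of this skeleton, which is exactly what the tutorial walks through.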
| Introduction to Pointer Networks - FastML |
Pointer networks are a variation of the sequence-to-sequence model with attention. Instead of translating one sequence into another, they yield a succession of pointers to the elements of the input sequence. The most basic use of this is ordering the elements of a variable-length sequence or set.
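As a rough illustration, here is a NumPy sketch of one decoding step, following the attention of Vinyals et al.'s pointer network paper; the weight matrices are stand-ins for learned parameters. The attention scores over the input elements serve directly as the output distribution, i.e. the "pointer":

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_step(d, encoder_states, W1, W2, v):
    """One decoding step of a pointer network: score every input element
    against the decoder state d, and return the softmax over positions."""
    # u_i = v . tanh(W1 @ e_i + W2 @ d) for each encoder state e_i
    scores = np.array([v @ np.tanh(W1 @ e + W2 @ d) for e in encoder_states])
    return softmax(scores)  # probability of pointing at each input position

# Toy usage with random "learned" parameters over 5 encoded input elements
rng = np.random.default_rng(0)
enc = [rng.standard_normal(8) for _ in range(5)]
W1, W2 = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
v = rng.standard_normal(8)
print(pointer_step(rng.standard_normal(8), enc, W1, W2, v))
```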
| Unintuitive Properties of Deep Neural Networks - Slides to talk by Hugo Larochelle |
Slides from a talk given by Hugo Larochelle at the 2017 Deep Learning School in Montreal; the video is unfortunately not yet available. The talk lists a number of unintuitive properties of deep neural networks that we do not yet fully understand, and contains links to the papers exploring these points:
- They can make dumb errors
- They are strangely non-convex
- They work best when badly trained
- They can easily memorize (see the sketch after this list)
- They can be compressed
- They are influenced by initialization and first examples, yet they forget what they learned.
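To make one of these points concrete, here is a minimal sketch of the "easy memorization" property, a made-up Keras experiment in the spirit of Zhang et al.'s random-label study, not code from the slides: a modest MLP trained long enough will fit labels that carry no signal at all.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 1,000 random inputs paired with completely random labels: nothing to "learn".
X = np.random.randn(1000, 32)
y = np.random.randint(0, 10, size=1000)

model = keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(32,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Given enough epochs the network memorizes the noise: training accuracy -> 1.0.
model.fit(X, y, epochs=300, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))
```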
| NeuroNER: Named Entity Recognition Using Neural Networks |
NeuroNER is a comprehensive library for achieving state-of-the-art results in named entity recognition. Extensive documentation and demo videos can be found on the project's homepage.
| On-Device Machine Learning with New Mobile SDK by Clarifai |
Computer vision startup Clarifai launches a mobile SDK which promises developers access to the entire suite of Clarifai's image recognition solutions, both online and offline. Early access can be requested on Clarifai's website.
| Checkerboard Artifact Free Sub-Pixel Convolution |
The authors propose an initialization method for sub-pixel convolution, known as convolution NN resize, to combat the common problem of checkerboard artifacts in output images and dense labels.
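A rough NumPy sketch of the idea (my paraphrase of the paper's initialization scheme, not the authors' code): initialize only one base filter per output channel and repeat it across the r² sub-pixel slots, so that at initialization the sub-pixel convolution behaves like nearest-neighbour resize followed by convolution, which produces no checkerboard artifacts.

```python
import numpy as np

def checkerboard_free_init(kh, kw, c_in, c_out, r=2, scale=0.02):
    """Initialize a sub-pixel (pixel-shuffle) conv kernel so that each group
    of r**2 output channels shares one base filter. The layer then starts out
    equivalent to nearest-neighbour upsampling + convolution."""
    assert c_out % (r * r) == 0, "c_out must be divisible by r**2"
    base = np.random.randn(kh, kw, c_in, c_out // (r * r)) * scale
    return np.repeat(base, r * r, axis=3)  # duplicate across sub-pixel slots

# e.g. a 3x3 kernel, 64 input channels, 2x upscaling (256 = 64 * 2**2 outputs)
kernel = checkerboard_free_init(3, 3, 64, 256, r=2)
print(kernel.shape)  # (3, 3, 64, 256)
```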
| Opportunities and Obstacles for Deep Learning in Biology and Medicine |
Harkening back to the Nature article above, this extensive paper examines applications of deep learning to a variety of biomedical problems, such as patient classification, fundamental biological processes, and treatment of patients, asking whether deep learning will transform these tasks or whether the biomedical sphere poses unique challenges. The authors conclude that, even though significant improvements on the prior state of the art have been made, deep learning has yet to revolutionize or definitively resolve any of these problems.