|December 31 · Issue #69 |
Hey ho everyone,
The deep learning newsletter wishes you a happy new year and looks forward to another wild year in deep learning!
| Deep Learning Achievements Over the Past Year |
A thorough look at deep learning’s achievements over the past year, covering advances in NLP such as the Deep Structured Semantic Model and WaveNet, prominent successes in computer vision such as Pix2Code and visual reasoning, and the realm of possibilities opened up by GANs.
| New Project on the Artificial Intelligence Open Network |
Few-shot distribution learning for music generation is the problem of learning a generative model in the few-shot regime to generate MIDI sequences or lyrics. Anyone can join the project as a research contributor; it is led by Hugo Larochelle, Chelsea Finn, and Sachin Ravi.
| Google Maps’s Moat |
In this in-depth comparison of Google’s and Apple’s map products, the author describes how Google intelligently combines the data it collects to derive novel and useful byproducts. In the case of maps, Google derives building outlines from its satellite imagery, uses deep learning to extract business names and locations from its Street View imagery, and then overlays the two on its map in order to highlight ‘areas of interest’.
| Nigerian AI Health Startup Ubenwa Hopes to Save Thousands of Babies Lives Every Year |
A Nigerian startup has developed a machine learning system to detect birth asphyxia earlier and hopes to save thousands of babies’ lives every year once its technology is deployed. The founders say the AI solution has achieved over 95% prediction accuracy in trials with nearly 1,400 pre-recorded baby cries.
| Earth to Exoplanet: Hunting for Planets With Machine Learning |
A fascinating application of deep learning to the search for planets in NASA Kepler data, leading to the discovery of two new planets and the first eight-planet solar system other than our own.
| Doing Strange Things with Attention |
This talk gives a clear yet concise overview of attention models in natural language processing.
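For readers new to the topic, here is a minimal sketch of scaled dot-product attention, the basic building block these models share. The shapes and values below are illustrative, not taken from the talk:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V, weights

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each output row is a weighted average of the value vectors, with weights determined by how well the query matches each key.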
| Deep Learning and Google Street View Can Predict Neighborhood Politics from Parked Cars |
Researchers from Stanford University have applied deep learning-based computer vision techniques to 50 million images across 200 regions to identify 22 million cars, which is roughly 8 percent of all automobiles in the United States. Based on the types of cars and their locations, the researchers estimated the income, race, education, and voting patterns of the people living in those areas.
| Generative Adversarial Networks (GANs): Engine and Applications |
A survey of generative adversarial networks (GANs) and their applications.
| An Addendum to Alchemy |
Ali Rahimi and Ben Recht give an addendum to their test-of-time talk at NIPS 2017, clarifying its central terms, ‘rigor’ vs. ‘alchemy’: rigor denotes chiefly better empiricism rather than more mathematical theory, and ‘alchemy’, rather than being a pejorative, refers to a methodology that produces practical results.
| Juggernaut: An Experimental Neural Network written in Rust |
This is a new project showcasing a neural network written in Rust, a great reference implementation for anyone interested in Rust or in creating a neural network from scratch.
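For a sense of what “from scratch” entails, here is a minimal illustrative version in Python/NumPy rather than Rust: a two-layer network trained on XOR with handwritten backpropagation. The architecture and hyperparameters are arbitrary choices for the sketch, not taken from Juggernaut:

```python
import numpy as np

# XOR: the classic test for a network with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: cross-entropy through sigmoid gives dL/dz = p - y.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)   # derivative of tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad            # plain gradient descent

preds = (p > 0.5).astype(int)
```

The entire training loop is four matrix products and their transposed counterparts, which is essentially what a from-scratch implementation in any language has to provide.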
| HistWords: Word Embeddings for Historical Text |
HistWords is a collection of tools and datasets for analyzing language change using word vector embeddings. The goal of the project is to facilitate quantitative research in diachronic linguistics, history, and the digital humanities. The authors used the historical word vectors in HistWords to study the semantic evolution of more than 30,000 words across four languages.
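As a rough illustration of the kind of analysis HistWords enables, here is a toy diachronic comparison using cosine similarity. The vectors below are made up for the sketch, not drawn from the HistWords data, where decade-by-decade embedding matrices would be loaded instead:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of one word at two points in time,
# plus a fixed reference word ("cheerful") to measure drift against.
vec_1900 = np.array([0.9, 0.1, 0.0])
vec_1990 = np.array([0.1, 0.9, 0.2])
cheerful = np.array([1.0, 0.0, 0.0])

# Positive drift: the word moved away from "cheerful" over the century.
drift = cosine(vec_1900, cheerful) - cosine(vec_1990, cheerful)
```

Tracking such similarity changes across decades, for every word in the vocabulary, is the core quantitative operation behind diachronic studies of this kind.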
| facebookresearch/MUSE: A library for Multilingual Unsupervised or Supervised word Embeddings |
MUSE is a Python library for multilingual word embeddings, whose goal is to provide the community with:
- state-of-the-art multilingual word embeddings based on fastText
- large-scale high-quality bilingual dictionaries for training and evaluation
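A sketch of what such aligned multilingual embeddings make possible: word translation by nearest-neighbor search in the shared space. The vocabulary and vectors below are toy values for illustration, not MUSE’s API or data:

```python
import numpy as np

# Toy aligned embedding spaces; in MUSE these would be fastText vectors
# mapped into a common space via a learned linear transformation.
en = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}
fr = {"chat": np.array([0.9, 0.1]), "chien": np.array([0.1, 0.9])}

def translate(word):
    """Translate an English word by cosine nearest neighbor in French space."""
    v = en[word] / np.linalg.norm(en[word])
    return max(fr, key=lambda w: (fr[w] / np.linalg.norm(fr[w])) @ v)
```

The bilingual dictionaries MUSE ships are used exactly this way: as ground truth to score how often the nearest neighbor in the shared space is the correct translation.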
| Deep NeuroEvolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning |
This paper offers an astounding result: it is possible to evolve the weights of a DNN with over four million parameters using a simple, gradient-free, population-based genetic algorithm (GA) and still perform well on hard deep RL problems, including Atari and humanoid locomotion.
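The core idea can be sketched with a toy GA: truncation selection plus Gaussian mutation of a flat parameter vector. A simple stand-in objective replaces the RL reward here, and the hyperparameters are arbitrary choices for the sketch, not the paper’s:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=20)            # stand-in for "good" network weights

def fitness(w):
    """Higher is better; in the paper this would be an episode's RL reward."""
    return -float(np.sum((w - target) ** 2))

# Population of candidate weight vectors, evolved without any gradients.
pop = [rng.normal(size=20) for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                     # truncation selection: keep the top 10
    # Refill the population with Gaussian-mutated copies of the elites.
    pop = elite + [elite[i % 10] + 0.1 * rng.normal(size=20) for i in range(40)]

best = max(pop, key=fitness)
```

Because mutation plus selection needs only fitness evaluations, the whole loop parallelizes trivially across workers, which is a large part of why the approach scales to millions of parameters.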
| Visual Feature Attribution using Wasserstein GANs |
Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localization to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. In this paper, the authors discuss a limitation of these approaches which may lead to only a subset of the category specific features being detected. To address this problem they develop a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGAN), which does not suffer from this limitation.
| Research Blog: Introducing NIMA: Neural Image Assessment |
In “NIMA: Neural Image Assessment” the Google team introduces a deep CNN that is trained to predict which images a typical user would rate as looking good (technically) or attractive (aesthetically). NIMA relies on the success of state-of-the-art deep object recognition networks, building on their ability to understand general categories of objects despite many variations.
| Thinking Out Loud: Hierarchical and Interpretable Multi-task Reinforcement Learning |
Interesting research from Salesforce where they propose a hierarchical policy network which can reuse previously learned skills alongside and as subcomponents of new skills. It achieves this by discovering the underlying relations between skills.