| May 12 · Issue #40 |
Happy reading and hacking.
As always, if you’d like to support our newsletter, recommend us to your friends and colleagues.
| Facebook Engineering: A Novel Approach to Neural Machine Translation |
RNNs are the incumbent technology for text applications such as machine translation and NLP, while CNNs offer the advantages of computational efficiency (they are parallelizable) and the ability to capture complex relationships through hierarchical information processing. This is why FAIR started researching CNNs for machine translation; the fruit of that research is a state-of-the-art CNN-based model that is nine times faster than strong traditional RNN models.
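For a sense of why convolutions parallelize so well over a sentence, here is a minimal sketch (in PyTorch, not FAIR’s actual code) of one convolutional encoder block with a gated linear unit and a residual connection, the kind of building block such models stack; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoderBlock(nn.Module):
    """One convolutional encoder block: a 1D convolution over the time axis,
    a gated linear unit (GLU), and a residual connection.
    Every position is computed in parallel, unlike an RNN time step."""

    def __init__(self, channels=256, kernel_size=3):
        super().__init__()
        # Produce 2*channels so the GLU can split them into value and gate halves.
        self.conv = nn.Conv1d(channels, 2 * channels,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # x: (batch, channels, sequence_length)
        y = self.conv(x)
        y = F.glu(y, dim=1)   # gated linear unit over the channel dimension
        return x + y          # residual connection

# Illustrative usage: a batch of 8 sentences, 20 tokens, 256-dim representations.
block = ConvEncoderBlock()
hidden = block(torch.randn(8, 256, 20))
print(hidden.shape)  # torch.Size([8, 256, 20])
```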
| Why UX Design For Machine Learning Matters |
This article explores the role of user experience in the ongoing machine learning trend and explains why systems need to be transparent in order to be accepted by users.
| Using Deep Learning at Scale in Twitter’s Timelines |
This post explores how Twitter’s timeline ranking algorithm is powered by deep neural networks, leveraging the modeling capabilities and the AI platform built by Cortex.
| Microsoft Launches a New Service for Training Deep Neural Networks on Azure |
Microsoft today announced the launch of Azure Batch AI Training, a new service for batch training of deep neural networks on the company’s Azure cloud computing platform. The service is currently in private beta.
| AIY Projects: Do-it-yourself AI for Makers |
Google announced a new program that offers ready-to-go kits allowing makers and developers to augment their creations with artificial intelligence and deep learning. The first product is a voice kit, which provides easy access to voice recognition.
| Sorting 2 Metric Tons of Lego |
If you’re still wondering whether deep learning has any real use cases, you may have found the answer: Jacques Mattheij has managed to build a pretty decent Lego sorter using deep learning and was surprised to find how the results of one afternoon of experiments with VGG-16 beat all his previous classifiers by a large margin.
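For readers who want to try the same general recipe, the sketch below (our own illustration, not Mattheij’s code) reuses a pre-trained VGG-16 as a frozen feature extractor with a new classification layer in PyTorch; the class count and batch are made up.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load VGG-16 pre-trained on ImageNet and freeze its convolutional features.
vgg = models.vgg16(pretrained=True)
for p in vgg.features.parameters():
    p.requires_grad = False

# Replace the final classification layer with one sized for the new task
# (the number of Lego part classes here is purely illustrative).
num_classes = 20
vgg.classifier[6] = nn.Linear(4096, num_classes)

# Only the new layer is passed to the optimizer.
optimizer = torch.optim.Adam(vgg.classifier[6].parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 224x224 images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_classes, (4,))
loss = nn.CrossEntropyLoss()(vgg(images), labels)
loss.backward()
optimizer.step()
```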
| Physiognomy’s New Clothes |
An extensive look at ‘scientific racism’ that is introduced by biases embedded in deep learning models. These biases are present in the human behavior used for model development and may be amplified through the ‘laundering’ of such data. Quite lengthy, but definitely worth a read.
| Deep, Deep Trouble |
This article showcases some recent achievements in deep learning and focuses especially on their impact on existing image processing methods, such as denoising. It observes that carefully crafted algorithms have been replaced with large but simple neural nets, and gives an image processing researcher’s view of this trend.
| aaron-xichen/pytorch-playground |
This repository contains base pre-trained models (e.g. AlexNet, VGG16, ResNet, Inception) and datasets (SVHN, CIFAR10, etc.) in PyTorch. It is intended as an easy entry point for beginners, as well as a demonstration of different quantization techniques for further reducing model size.
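The repository’s own API is not shown here, but as a rough illustration of the kind of post-training quantization it demonstrates, the following sketch applies uniform min-max quantization to a weight tensor and measures the resulting error; all sizes are arbitrary.

```python
import torch

def quantize_weights(weight, num_bits=8):
    """Uniform min-max quantization of a weight tensor: a generic sketch of
    the kind of technique the repo demonstrates, not its actual API."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = weight.min(), weight.max()
    scale = (w_max - w_min) / (qmax - qmin)
    # Map floats to integer levels, then back, to expose the quantization error.
    q = torch.clamp(((weight - w_min) / scale).round(), qmin, qmax)
    return q * scale + w_min

w = torch.randn(256, 128)
w_q = quantize_weights(w, num_bits=8)
print((w - w_q).abs().max())  # worst-case per-weight error at 8 bits
```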
| TensorFlow Benchmarks |
The TensorFlow team has put together an impressive suite of benchmarks for TensorFlow on different architectures and for different models. It includes the scripts used, an extensive performance guide, and details on the methodology.
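The benchmark scripts themselves are linked from the guide; as a self-contained illustration of the basic methodology (a warm-up run, then timing synthetic batches to report images/sec), here is a toy measurement in TensorFlow, with a deliberately simple model standing in for the real networks.

```python
import time
import tensorflow as tf

# Toy throughput measurement on synthetic data; the model and sizes are
# illustrative, not the benchmark suite's code.
batch_size = 32
images = tf.random_normal([batch_size, 224, 224, 3])
net = tf.layers.conv2d(images, 64, 7, strides=2, activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, 3, 2)
logits = tf.layers.dense(tf.layers.flatten(net), 1000)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(logits)  # warm-up run, excluded from timing
    start = time.time()
    steps = 10
    for _ in range(steps):
        sess.run(logits)
    elapsed = time.time() - start
    print('%.1f images/sec' % (steps * batch_size / elapsed))
```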
| Facebook AI Research Sequence-to-Sequence Toolkit |
The complete code of the FAIR Sequence-to-Sequence Toolkit linked to above.
| Visual Attribute Transfer through Deep Image Analogy |
Once again, style transfer has been taken to the next level. This paper presents an approach that allows transfer of arbitrary attributes (e.g. Pandora ears) between images.