| June 27 · Issue #88 |
Hey and welcome to another week in deep learning!
As always, we hope you’ll enjoy reading as much as we did and would appreciate you sharing this newsletter with friends and colleagues.
You may have noticed that we’re currently running on a bi-weekly schedule for various reasons, but we hope to bring the newsletter back to its normal pace soon. To make up for the delays, we’ve added a few extra links to this week’s issue.
Happy reading and hacking!
| OpenAI Five |
After DeepMind’s AlphaGo, OpenAI has moved on to tackle Dota 2 and trained a team of five neural networks to play against human teams. They’ll join a professional match in August, and we’re excited to see the results. As always, the community is already discussing the required resources.
| Productionizing ML with Workflows at Twitter |
After almost a year of silence, Twitter’s Machine Learning teams returned with two blog posts: one describing their transition from Torch to TensorFlow,
and this one describing how they make ML available internally using their Workflows tool.
| Introducing improved pricing for Preemptible GPUs |
Joining the overall trend to lower GPU pricing, Google just announced they’re reducing the price for preemptible GPUs on the Google Cloud Platform.
| Speech Synthesis as a Service |
This helpful post provides a list of use cases, an introduction to the Speech Synthesis Markup Language (SSML) and a comparison of four services that includes sample output, sample code and the results of a small study.
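If you haven’t seen SSML before, a document is just a small piece of XML with a handful of standard W3C tags. Here is a minimal, generic illustration (not sample code from the linked post):

```python
# A minimal SSML document using standard W3C tags; most cloud
# text-to-speech services accept markup of this general shape.
ssml = """<speak>
  Your order number is <say-as interpret-as="digits">4721</say-as>.
  <break time="500ms"/>
  <prosody rate="slow">Thank you, and goodbye.</prosody>
</speak>"""
print(ssml)
```

The `<say-as>`, `<break>`, and `<prosody>` elements shown here are part of the SSML standard, though each service supports a slightly different subset.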
| From 2D to 3D Photo Editing |
In this article by one of your trusty curators, an image editing startup shares its progress on depth inference for portrait imagery and where it sees depth fitting into the future of image editing.
| Learning from humans: what is inverse reinforcement learning? |
Have you ever heard of inverse reinforcement learning? Instead of learning a policy from a given reward, one tries to infer the reward function by observing an agent’s behavior, which in principle allows learning goals directly from human demonstrations. The linked article gives a very nice introduction and covers the key ideas.
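To make the idea concrete, here is a toy sketch of inverse RL via feature-expectation matching (in the spirit of Abbeel &amp; Ng’s apprenticeship learning) on a made-up five-state chain. All states, demonstrations, and hyperparameters below are illustrative and not taken from the linked article:

```python
import numpy as np

# Toy inverse RL: recover a reward from "expert" demonstrations by
# matching discounted feature expectations on a 5-state chain.
n_states, gamma = 5, 0.9
phi = np.eye(n_states)  # one-hot state features

# Hypothetical expert trajectories that always walk right toward state 4
expert_trajs = [[0, 1, 2, 3, 4], [1, 2, 3, 4, 4], [2, 3, 4, 4, 4]]

def feature_expectations(trajs):
    """Discounted feature counts, averaged over trajectories."""
    mu = np.zeros(n_states)
    for traj in trajs:
        for t, s in enumerate(traj):
            mu += (gamma ** t) * phi[s]
    return mu / len(trajs)

def rollouts(w, n=200, horizon=5):
    """Soft value iteration under reward w, then sample trajectories
    from the resulting Boltzmann policy."""
    rng = np.random.default_rng(0)
    # deterministic chain dynamics: action 0 = left, action 1 = right
    nxt = np.array([[max(s - 1, 0), min(s + 1, n_states - 1)]
                    for s in range(n_states)])
    V = np.zeros(n_states)
    for _ in range(100):
        Q = w[nxt] + gamma * V[nxt]        # shape (n_states, 2)
        V = np.log(np.exp(Q).sum(axis=1))  # soft maximum over actions
    pi = np.exp(Q - V[:, None])            # rows sum to 1
    trajs = []
    for _ in range(n):
        s, traj = int(rng.integers(n_states)), []
        for _ in range(horizon):
            traj.append(s)
            s = nxt[s, rng.choice(2, p=pi[s])]
        trajs.append(traj)
    return trajs

mu_expert = feature_expectations(expert_trajs)
w = np.zeros(n_states)
for _ in range(30):
    # push the reward toward states the expert visits more than we do
    w += 0.1 * (mu_expert - feature_expectations(rollouts(w)))

print(w)  # the inferred reward should be highest near the goal state
```

Because the expert keeps ending up in state 4, the learned reward vector assigns it the largest weight; that inferred reward, rather than the imitated actions, is the output of inverse RL.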
| Understanding Deep Learning for Object Detection |
This blog post explores important work in deep learning for object detection. It explains how these methods evolved over time and compares their differences and similarities.
| How to Develop a Deep Learning Photo Caption Generator from Scratch |
If you’re interested in photo captioning, this tutorial shows you how to build a caption generator with Keras, including preparing the training data, designing and training a model, and evaluating it afterward.
| Tensorflow: The Confusing Parts |
A Google AI resident shares the guide to TensorFlow he wishes he had been given when he started out with the framework. Although it targets users with some existing TensorFlow experience, it explains very fundamental concepts like graphs and TensorFlow’s behavior within Python in an easily understandable way. Definitely recommended if you do anything with the framework.
| Introducing Apex: PyTorch Extension with Tools to Realize the Power of Tensor Cores |
Nvidia keeps accelerating the most common frameworks and just announced an extension that speeds up PyTorch on Volta GPUs.
| Facebook open sources DensePose |
Facebook has made DensePose, its dense human pose estimation system, available to the public. It’s implemented in the Detectron library on Caffe2, and all related information can be found at densepose.org.
| NLP-progress: Repository to track the progress in Natural Language Processing (NLP) |
A new repository that aims to track progress in Natural Language Processing and give an overview of the state of the art across the most common NLP tasks and their corresponding datasets.
| Papers with Code: The latest in machine learning |
A great new portal that links new papers to their implementations, which should make verifying and replicating results much easier.
| Taskonomy: Disentangling Task Transfer Learning |
This paper won the best paper award at CVPR 2018. The authors studied twenty-five different visual tasks to understand how and when transfer learning works from one task to another, reducing the demand for labeled data. The project website can be found here.
| On Calibration of Modern Neural Networks |
Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. The authors discover that modern neural networks, unlike those from a decade ago, are poorly calibrated, and find that a simple post-hoc fix, temperature scaling, is surprisingly effective at restoring calibration.
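The paper’s headline metric, expected calibration error (ECE), is easy to state: bin predictions by confidence and take the weighted average gap between each bin’s accuracy and its mean confidence. A small NumPy sketch on toy data (not the authors’ code):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted
    average of |accuracy - mean confidence| over the bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy data: a model that says "90% sure" and is right 90% of the time
# is perfectly calibrated, so its ECE is (numerically) zero.
conf = np.full(100, 0.9)
correct = np.array([1.0] * 90 + [0.0] * 10)
print(expected_calibration_error(conf, correct))
```

An overconfident model, say 90% stated confidence but only 70% accuracy, would instead score an ECE around 0.2, which is the kind of gap the paper reports for modern networks.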