| August 30 · Issue #93 |
Hey and welcome to a new week in deep learning!
Happy reading and hacking!
If you like receiving this newsletter and would like to support our work, you can do so by sharing this issue with friends and colleagues who might find it interesting. Thanks!
| Fake America great again |
The author experimented with the well-known GAN-based ‘OpenFaceSwap’ utility and quickly realized its potential. He explores the dangers of faked footage and warns about its risks, especially today, when these possibilities may not be known to everyone.
| Nvidia RTX Announcement Highlights AI Influence On Computer Graphics |
Looks like it’s already time for a new generation of GPUs, and this time Nvidia focused on one of the more extreme challenges: ray tracing. The new architecture might bring nice performance boosts for AI-related tasks as well.
| Humans grab victory in first of three Dota 2 matches against OpenAI |
The first match of OpenAI’s Dota 2 bots ended in defeat, although that apparently didn’t come as much of a surprise to the team developing and tuning the system.
| Deep Learning and 'Hyper-Personalization' are the Future of Marketing Automation |
While not too surprising, the use cases of deep learning in marketing automation are nonetheless interesting. This article covers three examples: pattern recognition for more sophisticated personalization, increased retention through better automation, and more advanced prescriptive analytics.
| Research Assistant in Speech Recognition with Machine Learning |
The Human-Information Interaction research group at the Zurich University of Applied Sciences (Winterthur, Switzerland) has an open position for a researcher in speech recognition. You will work on challenging problems such as audio classification and spoken term detection in various languages and dialects, all with the help of ML and deep neural networks.
| NLP’s Generalization Problem, and How Researchers are Tackling it |
A survey shining a light on the pervasive generalization problem plaguing NLP models and the easy, sometimes downright silly ways they can break. It shows that what models learn are often superficial correlations that do not allow for compositionality or generalization.
| Visualizing Gradient Descent with Momentum in Python |
Using nice visualizations, this post explains why gradient descent with momentum beats the vanilla algorithm when the loss surface is ravine-like.
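The intuition fits in a few lines of code. Here’s a minimal NumPy sketch (not the post’s code) of the classical momentum update on a hypothetical ravine-like quadratic, steep in one direction and shallow in the other:

```python
import numpy as np

# Hypothetical ravine-like quadratic: steep along x, shallow along y.
def grad(w):
    return np.array([20.0 * w[0], 0.2 * w[1]])

def momentum_descent(w0, lr=0.04, beta=0.9, steps=100):
    w, v = np.array(w0, dtype=float), np.zeros(2)
    for _ in range(steps):
        v = beta * v - lr * grad(w)  # velocity damps oscillations along the steep axis
        w = w + v                    # ...while accumulating speed along the shallow one
    return w

print(momentum_descent([1.0, 1.0]))  # ends up near the minimum at (0, 0)
```

The velocity term averages out the back-and-forth oscillations across the ravine while building up speed along its floor, which is exactly what the post’s visualizations show.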
| Use Kaggle to start (and guide) your ML/Data Science journey |
A well-written post arguing that Kaggle might be a very nice way to get into the data science / machine learning field. Thanks to its focus on essential skills, real-world problems, and a large community, Kaggle seems like a nice place to start dipping your toes into this new world.
| Face detection - An overview and comparison of different solutions |
If you’re on the hunt for a proven face detection API, this article has you covered. It tests the top five vendors and compares speed, pricing, and success rate, with some interesting findings (looking at you, Microsoft) included.
| lucid: A collection of infrastructure and tools for research in neural network interpretability |
TensorFlow’s Lucid, one of the largest collections of tools, notebooks, and docs around neural network interpretability, got a large update. If you want to get to know your model better, this is a great place to start.
| Code for "Aggregated Momentum: Stability Through Passive Damping", Lucas et al. 2018 |
Code for the recent paper “Aggregated Momentum: Stability Through Passive Damping” (Lucas et al., 2018), for both PyTorch and TensorFlow.
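For the curious, the paper’s core idea is simple: maintain several velocity buffers with different damping coefficients and average them into a single step, so the aggressive buffers provide speed while the conservative ones passively damp oscillations. Below is a rough NumPy sketch of that update rule, not code from the linked repo:

```python
import numpy as np

def aggmo_step(w, velocities, grad_fn, lr=0.1, betas=(0.0, 0.9, 0.99)):
    # One velocity buffer per damping coefficient; their average moves the weights.
    g = grad_fn(w)
    for i, beta in enumerate(betas):
        velocities[i] = beta * velocities[i] - g
    return w + (lr / len(betas)) * sum(velocities), velocities

# Toy usage on f(w) = ||w||^2 / 2, whose gradient is simply w.
w = np.array([1.0, -2.0])
velocities = [np.zeros_like(w) for _ in range(3)]
for _ in range(200):
    w, velocities = aggmo_step(w, velocities, grad_fn=lambda w: w)
print(w)  # w has moved toward the minimum at the origin
```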
| Video-to-Video Synthesis |
This project studies the problem of video-to-video synthesis: learning a mapping from an input source video (e.g., a sequence of semantic segmentation masks) to a photorealistic output video that precisely depicts the content of the source. The results are impressive.
| Contextual Parameter Generation for Universal Neural Machine Translation |
The authors propose a simple modification to existing neural machine translation (NMT) models that enables a single universal model to translate between multiple languages while allowing for language-specific parameterization; the same mechanism can also be used for domain adaptation.
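To make the idea concrete, here is a hypothetical PyTorch sketch of contextual parameter generation: a small generator network maps a language embedding to the weights of a layer in the translation model. All names and shapes are illustrative, not taken from the paper’s implementation:

```python
import torch
import torch.nn as nn

class ContextualLinear(nn.Module):
    """A linear layer whose weights are generated from a language embedding
    (hypothetical shapes and names, not the paper's actual code)."""
    def __init__(self, lang_dim, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # The parameter generator maps a language embedding to flat layer weights.
        self.generator = nn.Linear(lang_dim, in_dim * out_dim + out_dim)

    def forward(self, x, lang_emb):
        params = self.generator(lang_emb)
        W = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim :]
        return x @ W.t() + b

layer = ContextualLinear(lang_dim=8, in_dim=16, out_dim=32)
x, lang_emb = torch.randn(4, 16), torch.randn(8)  # one embedding per language
print(layer(x, lang_emb).shape)  # torch.Size([4, 32])
```

Sharing one generator across languages is what lets a single universal model specialize per language without duplicating the full parameter set.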
| Neural Arithmetic Logic Units |
Neural networks enhanced with Neural Arithmetic Logic Units (NALU) can learn to track time, perform arithmetic over images of numbers, translate numerical language into real-valued scalars, execute computer code, and count objects in images.
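As a reference point, here is a compact PyTorch sketch of a NALU cell following the formulation in the paper: a learned gate blends an additive path with a multiplicative path computed in log space. Initialization and details are simplified:

```python
import torch
import torch.nn as nn

class NALU(nn.Module):
    """Simplified NALU cell: a learned gate blends an additive path with a
    multiplicative path computed in log space."""
    def __init__(self, in_dim, out_dim, eps=1e-7):
        super().__init__()
        self.W_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.G = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.eps = eps

    def forward(self, x):
        # Weights softly constrained toward {-1, 0, 1}, biasing toward exact arithmetic.
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        add = x @ W.t()                                         # addition / subtraction
        mul = torch.exp(torch.log(x.abs() + self.eps) @ W.t())  # multiplication / division
        g = torch.sigmoid(x @ self.G.t())                       # gate between the two paths
        return g * add + (1 - g) * mul
```

Because the arithmetic is carried out by (near-)discrete weights rather than memorized, NALU-equipped networks extrapolate to numerical ranges far outside those seen during training.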