| October 23 · Issue #62 |
Hey and welcome to a new issue,
Happy hacking and reading. As always, if you enjoy receiving this newsletter, you can help us by sharing it with friends and colleagues.
See you later this week!
| AlphaGo Zero: Learning from scratch |
DeepMind can’t let go of Go and has achieved a remarkable optimization. The latest version doesn’t require initial training on real games but learns exclusively by playing against itself, starting from random play.
| Andrew Ng Has a Chatbot That Can Help with Depression |
Stanford University has developed a chatbot that tries to help with depression and anxiety. Although limited in its abilities, the system seems to work rather well and may lead to some great results.
| Portrait mode on the Pixel 2 and Pixel 2 XL smartphones |
In this detailed article, Google shares how it used a combination of deep learning, special hardware, and lots of training data to achieve the fake bokeh effect in its recently launched flagship phones. Some interesting tidbits included.
| Pornhub is using machine learning to automatically tag its 5 million videos |
Deep learning is being deployed almost everywhere nowadays, and now the porn industry has decided to join in. Pornhub is starting relatively simply with facial recognition to identify performers, but has an ambitious roadmap ahead.
| New Optimizations Improve Deep Learning Frameworks For CPUs |
Intel is trying to raise awareness of using CPUs with deep learning frameworks. This article explores its recent achievements in accelerating frameworks such as TensorFlow and Caffe on Intel CPUs and presents the latest processors.
| Word embeddings in 2017: Trends and future directions |
Sebastian Ruder explains the deficiencies of word embeddings, presents a series of methods that try to address them, and gives insights into current trends as well as upcoming directions in the field.
| Small Deep Neural Networks - Their Advantages, and Their Design |
A great talk from one of the authors of the famous SqueezeNet paper on how to design neural networks with a small model size. He presents seven different techniques, each tackling a different part of a deep learning model, to reduce your model’s size and computational requirements.
| How Adversarial Attacks Work |
This in-depth look at adversarial attacks explains how they work, covers non-targeted and targeted variants, and even shows how to generate the examples using Torch.
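The article’s examples use Torch; as a framework-free illustration, here is a minimal sketch of the non-targeted fast gradient sign method (FGSM) applied to a toy logistic-regression “network”. The weights, bias, and epsilon below are made up for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method, non-targeted: nudge the input by
    eps in the direction of the sign of the loss gradient w.r.t. x.
    For logistic regression with cross-entropy loss,
    d(loss)/dx = (sigmoid(w.x + b) - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Toy model and input: the clean point is confidently class 1;
# the perturbed point is pushed toward the decision boundary.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.0], 1.0
x_adv = fgsm(x, y, w, b, eps=0.5)
```

With these toy numbers, the attack lowers the model’s confidence in the true class from `sigmoid(2.0)` to `sigmoid(0.5)` in a single step.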
| Attention in Neural Networks and How to Use It |
A very thorough post on the increasingly popular and important research area of neural attention. It surveys the differences and commonalities of several techniques, such as hard attention, soft attention, and Gaussian attention.
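At its core, the soft attention the post surveys is just a softmax over alignment scores followed by a weighted average of the values. A minimal sketch in plain Python, using hypothetical dot-product scoring and toy vectors:

```python
import math

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def soft_attention(query, keys, values):
    """Soft attention: score each key against the query (dot product),
    normalize the scores with softmax, and return the weights plus the
    weighted average of the values (the 'context vector')."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    context = [sum(w * v[d] for w, v in zip(weights, values))
               for d in range(dim)]
    return weights, context

# Toy example: the query aligns with the first key, so the first
# value dominates the context vector.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
weights, context = soft_attention(query, keys, values)
```

Because every value contributes a little, the whole operation is differentiable, which is what makes soft attention trainable end to end, in contrast to hard attention’s discrete selection.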
| Introducing Variational Autoencoders (in Prose and Code) |
This little article introduces you to the world of variational autoencoders and explains how they work, why they work, and when you should use one.
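Two ingredients from the article can be sketched in a few lines, assuming the usual diagonal-Gaussian setup: the reparameterization trick, which keeps sampling differentiable, and the KL regularizer toward a standard normal. This is an illustrative sketch, not the article’s code.

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Reparameterization trick: instead of sampling
    z ~ N(mu, sigma^2) directly (which blocks gradients), sample
    eps ~ N(0, 1) and compute z = mu + sigma * eps, so mu and
    log_var stay differentiable."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_to_standard_normal(mu, log_var):
    """KL divergence between N(mu, sigma^2) and N(0, 1), the VAE's
    latent regularizer: -0.5 * sum(1 + log_var - mu^2 - sigma^2)."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))
```

In a full VAE these sit between the encoder (which predicts `mu` and `log_var`) and the decoder, with the KL term added to the reconstruction loss.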
| PyTorch Implementation of several Reinforcement Learning Algorithms |
A collection of PyTorch implementations for Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO) and Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR).
| nashory/gans-awesome-applications |
This curated list contains various applications and demos made using Generative Adversarial Networks.
| Swish: a Self-Gated Activation Function |
This paper created some buzz last week and presents a new activation function that’s supposed to outperform the good old ReLU. Although the function is not as novel as initially assumed, Google Brain conducted an extensive search and found interesting results.
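The function itself is a one-liner: the paper defines Swish as f(x) = x · sigmoid(βx), with β = 1 in the simplest case (that variant is also known as SiLU). A minimal sketch next to ReLU for comparison:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x)."""
    return x * sigmoid(beta * x)

def relu(x):
    return max(0.0, x)
```

Unlike ReLU, Swish is smooth everywhere and non-monotonic: for small negative inputs it dips below zero instead of clamping to it, then approaches zero as x goes further negative.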
| Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions |
Trying to solve the usual black-box issue that arises when training neural networks, the authors of this paper augmented their architecture with specific layers that allow inspection during training.
| SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes |
A fascinating system that manages to segment and understand indoor scenes by decomposing a scene into meaningful regions and incorporating an additional oriented-boundary network.