| October 4 · Issue #9 |
After last week's packed issue, our hopes for another exciting week were fulfilled: Google released a wealth of fascinating papers and data, Amazon finally offers GPU-heavy instances, and many great articles on interesting topics were published.
Have a great read on all these treats and see you next week!
As always, if you enjoy receiving this newsletter, please consider sharing it with friends and colleagues; your support is very much appreciated.
| New P2 Instance Type for Amazon EC2 – Up to 16 GPUs |
Amazon has recognized the demand for GPU resources and introduces a new instance type with 1, 8, or 16 Nvidia K80 GPUs.
| Image Compression with Neural Networks |
Using a new type of RNN unit, the Residual Gated Recurrent Unit, Google has made interesting advances in image compression.
| CNTK, Microsoft’s open source deep learning toolkit, now available on GitHub |
Microsoft released their Computational Neural Network Toolkit on GitHub.
| Announcing YouTube-8M: A Large and Diverse Labeled Video Dataset for Video Understanding Research |
Google released the largest video dataset to date, containing 8 million YouTube videos and 1.9 billion frame-level features. To obtain a dataset of this size, they ran their Inception-v3 model on one frame per second of each video.
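To make the sampling scheme concrete, here is a tiny helper (my own illustration, not Google's code) that picks one frame per second of video to feed through the feature-extraction CNN:

```python
def sample_frame_indices(n_frames, fps):
    """Return the indices of one frame per second of video,
    as in the YouTube-8M frame-level feature extraction."""
    return list(range(0, n_frames, fps))

# a 10-second clip at 30 fps yields 10 frames to featurize
indices = sample_frame_indices(300, 30)
```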
| Introducing the Open Images Dataset |
Google seems to be on a roll and adds 9 million annotated images to its open datasets.
| Deep Learning Research Review Week 1: Generative Adversarial Nets |
A great introduction to Generative Adversarial Networks, which seem to be the hot topic these days.
| A Primer in Adversarial Machine Learning – The Next Advance in AI |
William Vorhies explains the vulnerability of CNNs to noise and unusual samples and presents Generative Adversarial Networks as a way to overcome these issues.
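For context, the adversarial game boils down to two opposing objectives. A minimal numpy sketch (my own illustration, not taken from the article) of the standard discriminator loss and the commonly used non-saturating generator loss:

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """GAN losses given discriminator outputs D(x) on real samples
    and D(G(z)) on generated ones (probabilities in (0, 1)).
    The discriminator wants d_real -> 1 and d_fake -> 0;
    the generator wants d_fake -> 1."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))  # non-saturating variant
    return d_loss, g_loss
```

Training alternates gradient steps on these two losses, which is exactly the minimax game that makes GANs both powerful and notoriously tricky to train.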
| Hyper Networks |
Having a network generate the weights for a larger network is a great idea, and the results seem quite promising.
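The core trick fits in a few lines of numpy: a small network maps a layer embedding to the flattened weight matrix of a layer in the main network. This is a toy sketch with my own shapes and names, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def hypernetwork(z, W_h):
    """Tiny hypernetwork: maps a layer embedding z to the
    flattened weights of a target layer in the main network."""
    return np.tanh(z @ W_h)

# target layer maps 4 -> 3 features, so we need 12 weights
z = rng.normal(size=8)           # learned layer embedding
W_h = rng.normal(size=(8, 12))   # the hypernetwork's own parameters
W_target = hypernetwork(z, W_h).reshape(4, 3)

x = rng.normal(size=4)
y = x @ W_target                 # forward pass through the generated layer
```

Because the hypernetwork has far fewer parameters than the weights it emits, this acts as a learned, compressed parameterization of the larger model.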
| Graph Convolutional Networks |
A nice overview of recent developments in the field and a close look at two recent papers.
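For reference, the propagation rule at the heart of Kipf and Welling's GCN, one of the papers the post examines, is compact enough to sketch directly (numpy, my own variable names):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    where A is the adjacency matrix, H the node features,
    and W the layer's trainable weights."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Each layer mixes every node's features with those of its neighbors, so stacking k layers lets information propagate k hops across the graph.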
| subpixel: A subpixel convnet for super resolution with Tensorflow |
An implementation of the recently released paper on super resolution, based on the TensorFlow framework.
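The key operation behind that approach is the subpixel rearrangement (also known as depth-to-space or pixel shuffle), which trades channels for spatial resolution. A minimal numpy version, assuming a channels-last layout:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange an (H, W, C*r*r) tensor into (H*r, W*r, C):
    the 'subpixel' upscaling step used for super resolution."""
    H, W, Crr = x.shape
    C = Crr // (r * r)
    x = x.reshape(H, W, r, r, C)
    x = x.transpose(0, 2, 1, 3, 4)   # interleave: (H, r, W, r, C)
    return x.reshape(H * r, W * r, C)
```

In TensorFlow the same rearrangement is available as `tf.nn.depth_to_space`, so the network only has to learn convolutions in the low-resolution space.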
| Implementation of Reinforcement Learning Algorithms |
A repository containing code, exercises and solutions for popular Reinforcement Learning algorithms.
| GitHub - Zeta36/tensorflow-tex-wavenet |
Samuel Graván extended the existing WaveNet implementation and turned it into a text generator.
| Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation |
Google presented a new system for machine translation that uses a deep LSTM network and an attention model to translate text. The system beats existing approaches and is already deployed for Chinese-to-English translation in Google Translate.
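At the heart of such systems sits an attention step that lets the decoder weight the encoder's hidden states at every output step. A bare-bones dot-product version in numpy (a generic sketch; GNMT itself uses a more elaborate attention formulation):

```python
import numpy as np

def attend(query, keys, values):
    """Dot-product attention: score each encoder state against the
    decoder query, softmax the scores, return the weighted context."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())   # stable softmax
    weights = weights / weights.sum()
    context = weights @ values
    return context, weights
```

The returned context vector summarizes the source sentence from the current decoding position's point of view, which is what lets the model handle long sentences far better than a fixed-length encoding.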
| Semantic Parsing with Semi-Supervised Sequential Autoencoders |
A new semi-supervised approach for sequence transduction applied to semantic parsing.