| August 16 · Issue #2 |
Welcome to the second issue of Deep Learning Weekly.
A lot has happened this week to pad my reading list. Above all, I was delighted by the awesome tutorials about convolutional and recurrent neural networks.
I also dug a little deeper after Yann LeCun mentioned adversarial networks during his Quora session last week, and I have included a paper on the topic in this issue.
I hope you find some nuggets to enjoy.
I’d appreciate any feedback, input, or questions you might have. Of course, sharing this newsletter on Twitter or Facebook goes a long way to support it and would oblige me greatly.
| Why Intel Bought Artificial Intelligence Startup Nervana Systems |
Hardly news at this point. One can imagine multiple uses, from deployment in Intel’s data centers to a general machine learning platform offering.
| The healing power of AI | TechCrunch |
The most interesting point in this TechCrunch piece is that the very thing that makes deep learning work, the absence of an explicitly specified model, might also impede its success in highly regulated fields such as medicine, where its black-box nature clashes with requirements for explainability.
| Image Completion with Deep Learning in TensorFlow |
| An Intuitive Explanation of Convolutional Neural Networks – the data science blog |
The best explanation of convolutional neural networks I have come across. For anyone unfamiliar with the topic, start here.
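To give a taste of the core operation the tutorial explains, here is a minimal, dependency-free sketch of a single-channel 2D convolution with valid padding. The image and kernel values are made up for illustration.

```python
def conv2d(image, kernel):
    """Valid 2D convolution over nested lists.

    (Strictly speaking this is cross-correlation, which is what most
    deep learning libraries compute under the name "convolution".)
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1        # output height
    ow = len(image[0]) - kw + 1     # output width
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel over the image and sum elementwise products.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A tiny 2x2 diagonal-difference filter sliding over a 4x4 "image".
image = [[1, 1, 1, 0],
         [1, 1, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
kernel = [[1, 0],
          [0, -1]]
print(conv2d(image, kernel))  # -> [[0, 1, 1], [1, 1, 0], [1, 0, 0]]
```

A real CNN learns the kernel values via backpropagation and stacks many such filters per layer; the sliding-window arithmetic is the same.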
| Recurrent Neural Networks for Beginners — Medium |
In a similar vein, this is a solid beginner tutorial on recurrent neural networks.
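The recurrence at the heart of that tutorial can be sketched in a few lines. This is a toy version with a scalar hidden state and hand-picked weights (w_x, w_h, and b are arbitrary illustrative values, not learned parameters).

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8, b=0.0):
    """One vanilla RNN step with scalar state: h' = tanh(w_x*x + w_h*h + b)."""
    return math.tanh(w_x * x + w_h * h + b)

def run_rnn(inputs, h0=0.0):
    """Feed a sequence through the recurrence, collecting hidden states.

    The hidden state carries information forward, so each output depends
    on the whole prefix of the sequence, not just the current input.
    """
    h, states = h0, []
    for x in inputs:
        h = rnn_step(x, h)
        states.append(h)
    return states

states = run_rnn([1.0, 0.0, -1.0])
```

In practice the state is a vector, the weights are matrices learned by backpropagation through time, and gated variants (LSTM, GRU) replace the plain tanh update, but the sequential pattern is exactly this loop.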
| Trends in Neural Machine Translation |
An informative overview of developments in the field of neural machine translation.
| François Chollet - Session on Aug 15, 2016 - Quora |
| RE•WORK Interview with Yoshua Bengio - Deep Learning Summit, Boston, 2016 #reworkDL - YouTube |
Interesting interview with Yoshua Bengio, head of the Montreal Institute for Learning Algorithms, at the Boston 2016 Deep Learning Summit.
| AMA: We are the Google Brain team. We'd love to answer your questions about machine learning. : MachineLearning |
Reddit AMA with the Google Brain team: insight-generating questions on the most underrated developments in deep learning, plus interesting details about the team members’ backgrounds, which range from carpentry (seriously) and creative writing to neuroscience and psychology.
| GitHub - aymericdamien/TopDeepLearning: A list of popular github projects related to deep learning |
| Residual Networks of Residual Networks: Multilevel Residual Networks |
This paper proposes a novel architecture, Residual Networks of Residual Networks (RoR), which adds extra levels of shortcut connections to better exploit the optimization ability of residual networks.
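The underlying idea is easy to sketch. A standard residual block computes y = x + F(x); the RoR-style twist is a shortcut that also skips a whole group of such blocks. The scalar functions below are toy stand-ins for real convolutional layers, purely for illustration.

```python
def residual_block(x, f):
    # Standard residual block: the input skips over the transformation F.
    return x + f(x)

def ror_group(x, fs):
    # RoR-style level-wise shortcut: run a chain of residual blocks,
    # then add the group's input on top of the chain's output as well.
    y = x
    for f in fs:
        y = residual_block(y, f)
    return x + y

# Toy example: two blocks whose "F" just scales its input by 0.1.
f = lambda v: 0.1 * v
out = ror_group(1.0, [f, f])  # chain gives 1.21, plus the shortcut's 1.0
```

The shortcuts give gradients short paths back to early layers, which is what makes very deep networks trainable in the first place; RoR simply applies that trick at more than one level of the hierarchy.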
| Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks |
An older paper that deserves renewed attention. It introduces deep convolutional generative adversarial networks (DCGANs), which Yann LeCun mentioned as the most promising research area in deep learning. Most excitingly, this technique opens up opportunities for unsupervised learning, hopefully reducing the need for laboriously annotated data sets.
| Convolutional Neural Fabrics |
Despite the success of convolutional neural networks, selecting the optimal architecture for a given task remains an open problem. Instead of aiming to select a single optimal architecture, the authors propose a “fabric” that embeds an exponentially large number of CNN architectures.
| The SIGMORPHON 2016 Shared Task - Morphological Inflection |
| A Convolutional Neural Network Neutrino Event Classifier |
For physics nerds. There is also a good summary of core CNN concepts at the beginning of the paper.