|November 30 · Issue #17 |
This week’s issue is again chock-full of awesome tutorials, papers, and open-source projects, covering human activity recognition with LSTM networks, visualizing embeddings with TensorBoard, image super-resolution using GANs, and a nice example of transfer learning that tunes a Theano neural network from a Keras model.
Happy reading and hacking.
As always we appreciate you sharing this newsletter with your friends and colleagues.
| Graphcore secures $30m in funding to accelerate AI |
Graphcore comes out of stealth mode and announces $30m in funding to accelerate AI and machine learning.
| Google, Facebook, and Microsoft Are Remaking Themselves Around AI |
Artificial intelligence is not only reshaping the technology these tech giants use but how they organize and operate their businesses.
| Keras model tuning with Theano Neural Network (Transfer Learning) |
This article compares Keras and Theano, and also covers advanced techniques such as transfer learning and fine-tuning.
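The core idea of transfer learning is framework-agnostic: freeze the pretrained feature extractor and train only a new task-specific head. Here is a minimal sketch of that idea in plain NumPy; the shapes, synthetic data, and frozen weights are hypothetical stand-ins, not the article's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a frozen weight matrix standing in
# for layers ported from another framework (hypothetical shapes).
W_base = rng.normal(size=(8, 4))            # maps 8 raw inputs -> 4 features

def features(x):
    # The frozen base: no gradient updates ever touch W_base.
    return np.maximum(x @ W_base, 0.0)      # ReLU features

# Tiny synthetic regression task built on top of the frozen features.
X = rng.normal(size=(64, 8))
y = features(X) @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.01 * rng.normal(size=64)

F = features(X)                             # computed once; the base stays fixed
w_head = np.zeros(4)                        # the new head: the ONLY trainable weights

def mse(w):
    return float(np.mean((F @ w - y) ** 2))

loss_before = mse(w_head)
lr = len(y) / (2.0 * np.linalg.norm(F) ** 2)  # step size safe for this quadratic loss
for _ in range(300):
    grad = 2.0 * F.T @ (F @ w_head - y) / len(y)  # MSE gradient w.r.t. the head only
    w_head -= lr * grad
loss_after = mse(w_head)
```

In the article's Keras/Theano setting, the frozen matrix plays the role of the ported pretrained layers, and the head is the newly added layer that gets fine-tuned.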
| Long paper review of Attend, Infer, Repeat: Fast Scene Understanding with Generative Models | The Information Age |
A review of a long paper from NIPS 2016. The paper presents an efficient inference algorithm for structured image models that explicitly reason about objects. Remarkably, the model learns by itself to choose the appropriate number of inference steps.
| Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System |
Google switched to a new neural machine translation system, an end-to-end learning framework that learns from millions of examples and has delivered significant improvements in translation quality. This post details how they tackled the challenge of scaling it up to all 103 supported languages.
| MIT's deep-learning software produces videos of the future |
When you see a photo of a dog bounding across a lawn, it’s pretty easy for us humans to imagine how the following moments play out. Scientists at MIT have now trained machines to do the same thing. See the research paper below.
| The Future Of Artificial Intelligence | Demis Hassabis - DeepMind Founder - YouTube |
| TensorBoard: Embedding Visualization |
TensorBoard comes with a built-in visualizer, called the Embedding Projector, for interactive visualization and analysis of high-dimensional data like embeddings.
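Besides logging embeddings from code, the projector can be pointed at an embedding variable saved in an existing checkpoint via a small `projector_config.pbtxt` file placed in the TensorBoard log directory. A minimal example, where the tensor and file names are hypothetical:

```
embeddings {
  tensor_name: "word_embedding/Variable"
  metadata_path: "metadata.tsv"
}
```

`metadata_path` points to an optional TSV file of per-row labels, which the projector uses to color and annotate the points.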
| GitHub - guillaume-chevalier/LSTM-Human-Activity-Recognition: Human activity recognition using TensorFlow on smartphone sensors dataset and an LSTM RNN |
Human activity recognition on a smartphone sensor dataset using an LSTM RNN in TensorFlow, classifying the type of movement into six categories (WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING).
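The heart of such a classifier is an LSTM cell stepped over fixed-length windows of sensor readings, with a softmax head over the final hidden state. A minimal NumPy sketch of the forward pass, with random stand-in weights rather than a trained model (the 9-channel, 128-step window shape matches the UCI HAR smartphone dataset the repo uses):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden = 9, 32   # 9 sensor channels (acc/gyro axes), 32 hidden units
# One stacked weight matrix covering the input, forget, cell, and output gates.
W = rng.normal(scale=0.1, size=(n_in + n_hidden, 4 * n_hidden))
b = np.zeros(4 * n_hidden)

def lstm_step(x_t, h, c):
    """One LSTM time step: gates computed from [x_t, h_prev]."""
    z = np.concatenate([x_t, h]) @ W + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)     # update the cell state
    h_new = o * np.tanh(c_new)         # emit the hidden state
    return h_new, c_new

# Run over one dummy window of 128 sensor readings.
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x_t in rng.normal(size=(128, n_in)):
    h, c = lstm_step(x_t, h, c)

# A linear softmax head over the final hidden state scores the 6 activities.
W_out = rng.normal(scale=0.1, size=(n_hidden, 6))
logits = h @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

Training the weights (here random) against the labeled windows is what the repo's TensorFlow code does.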
| GitHub - buriburisuri/speech-to-text-wavenet: Speech-to-Text-WaveNet : End-to-end sentence level English speech recognition based on DeepMind's WaveNet and tensorflow |
End-to-end sentence-level English speech recognition based on DeepMind’s WaveNet, implemented in TensorFlow.
| GitHub - tkipf/gcn: Implementation of Graph Convolutional Networks in TensorFlow |
An implementation of Graph Convolutional Networks for semi-supervised classification of nodes in a graph, in TensorFlow.
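A single GCN layer propagates node features through the normalized adjacency matrix: H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W), where Â = A + I adds self-loops. A small NumPy sketch of one such layer on a toy graph (weights and features are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)

def gcn_layer(A, H, W):
    """One graph convolution: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# Toy graph: 4 nodes in a path, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))                   # node feature matrix
W = rng.normal(size=(3, 2))                   # layer weights (random stand-ins)
H_next = gcn_layer(A, H, W)
```

Stacking two such layers and training W on the labeled nodes gives the semi-supervised classifier the repo implements.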
| Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network |
How do we recover finer texture details when we super-resolve at large upscaling factors? In this paper, the authors present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), which they claim is the first framework capable of inferring photo-realistic natural images at 4x upscaling factors.
| Deep Learning Book by Ian Goodfellow, Yoshua Bengio, Aaron Courville now available on Amazon |
This comprehensive book is now available for pre-order on Amazon. It is the ideal text if you want a solid grounding in deep learning fundamentals.
| Generating Videos with Scene Dynamics |
The authors introduce a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background.