| March 14 · Issue #32 |
We hope you’ll enjoy reading this issue as much as we did, and we’d appreciate you sharing the newsletter with your friends and colleagues.
See you next week!
| Kaggle Joins Google Cloud |
Kaggle was acquired by Google, which seems like a fairly obvious purchase in hindsight. Kaggle will remain a separate brand, though, and we will hopefully see even more competitions on the platform.
| Intel buys driverless car technology firm Mobileye |
Intel keeps acquiring talent and expertise in the field, and the deep learning hardware war continues. With Nvidia and Intel going head to head, we’ll probably see some more fighting, but maybe we can benefit through faster and cheaper hardware?
| Inside Facebook’s AI Machine |
An extensive interview with Facebook’s director of engineering for applied machine learning, Joaquin Candela, on his career, the role of AI at Facebook, and how it has become an essential tool for the majority of engineers and systems there.
| Big Basin: Facebook’s next-generation AI hardware |
Facebook announces a new server architecture for deep learning applications. It offers more GPU capability, and the specifications are publicly available.
| Netflix uses AI in its new codec to compress video scene by scene |
Netflix is applying machine learning to the well-known buffering problem by choosing different compression techniques depending on the current scene. Not much technical detail, but an interesting application.
| IBM attempts to win back speech recognition crown: your move, Microsoft! |
IBM’s Watson and Microsoft keep going in their head-to-head race in speech recognition. And it looks like Watson has taken the lead.
| Deep Learning on Title + Content Features to Tackle Clickbaits |
A detailed article explaining how to use deep neural nets to detect clickbait. Comes with Keras code and is a very interesting read. Give it a try!
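The article builds its classifier on features from both the title and the content. As a toy stand-in for that idea (the article itself uses deep neural nets in Keras; the features and weights below are purely illustrative assumptions, not the author’s model), a hand-set logistic scorer over combined title and body features might look like this:

```python
import numpy as np

# Illustrative sketch only: score a headline by combining signals from
# both the title and the body text. The feature choices and weights
# here are made-up assumptions, not the article's learned model.

def features(title, content):
    return np.array([
        title.count("!"),                           # exclamation marks in title
        int("you won't believe" in title.lower()),  # classic clickbait phrase
        len(content.split()) / 100.0,               # body length (longer = less baity)
    ])

weights = np.array([1.5, 3.0, -0.8])  # illustrative, not learned
bias = -1.0

def clickbait_score(title, content):
    # Logistic probability that the (title, content) pair is clickbait
    z = weights @ features(title, content) + bias
    return 1.0 / (1.0 + np.exp(-z))

print(clickbait_score("You won't believe this trick!", "short teaser text"))
```

In the article, these hand-crafted features are replaced by learned embeddings of the title and content, but the structure — combine evidence from both inputs, then squash to a probability — is the same.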
| Introducing Keras 2 |
After two years, Keras is taking a big step forward with version 2.0.0. The new release focuses on API stability and promises long-term support, while also preparing for the TensorFlow integration coming in TensorFlow 1.2.
| google/sentencepiece |
Google open-sourced a text tokenizer that can tokenize and detokenize text in order to feed it into neural networks.
| google/seq2seq: A general-purpose encoder-decoder framework for TensorFlow |
This framework can be used for Machine Translation, Text Summarization, Conversational Modeling, Image Captioning, and more.
| VQA: Visual Question Answering |
VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer.
| Multi-step Reinforcement Learning: A Unifying Algorithm |
This paper studies a new multi-step action-value algorithm called Q(σ) which unifies and generalizes existing algorithms while subsuming them as special cases.
| Controllable Text Generation |
This paper aims at generating plausible natural language sentences, whose attributes are dynamically controlled by learning disentangled latent representations with designated semantics.
| How these researchers tried something unconventional to come out with a smaller yet better Image… |
A well-written introduction to “Fully Convolutional Networks” which were introduced in 2014. Covers the main aspects of the paper and gives a detailed explanation of the accompanying codebase.