|January 11 · Issue #70 |
As always, we hope you’ll enjoy reading as much as we did and would appreciate you sharing this newsletter with friends and colleagues.
See you next week!
| Deep Learning Hardware Limbo |
Tim Dettmers has shared his view on the current state of deep learning hardware and why you may want to wait a few months before making your next purchase. He takes a look at the three major players, Nvidia, AMD, and Intel, and tries to outline their next steps. His arguments around the need for community and software are especially strong and back up his conclusions. Learn what lies ahead, who is winning, and why.
| AI in 2018: Google seeks to turn early focus on AI into cash |
This finance-focused article revisits Google’s deep learning history and asks whether all the upfront investments made over the last few years are actually paying off. While the various software features are worth a look, the part on hardware and the role of machine learning in Google’s data centers is especially interesting.
| This $16,000 robot uses artificial intelligence to sort and fold laundry |
Moving away from serious topics like hardware and data centers, a company has revealed its sort-and-fold laundry robot. It supposedly uses deep learning, 3D, and image analysis to recognize the piece of clothing you feed it, then folds and sorts it according to your rules. Although it has some flaws, we personally hope for more, and cheaper, robots in this field.
| Gradient descent vs. neuroevolution |
A great look at an optimization technique called ‘neuroevolution’, which was recently brought back into the spotlight by both OpenAI and Uber AI Research. The article starts off with a clear explanation of the optimization problem being solved when training neural networks and follows up with a comparison between neuroevolution and the conventional gradient descent approach. Very interesting and extremely well written.
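To get a feel for the difference, here is a minimal toy sketch (our own, not from the article) of the evolution-strategies flavor of neuroevolution next to a plain gradient step, both minimizing a quadratic stand-in for a training loss:

```python
import numpy as np

def loss(w):
    # Toy quadratic objective standing in for a network's training loss;
    # the minimum sits at w = 3 in every dimension.
    return np.sum((w - 3.0) ** 2)

w_es = np.zeros(5)   # parameters optimized by neuroevolution (ES flavor)
w_gd = np.zeros(5)   # parameters optimized by gradient descent
sigma, lr, pop = 0.1, 0.02, 50

for step in range(300):
    # Neuroevolution: score random perturbations of the parameters and
    # move toward the ones that scored better (no gradients needed).
    eps = np.random.randn(pop, w_es.size)
    rewards = np.array([-loss(w_es + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    w_es += (lr / (pop * sigma)) * (rewards @ eps)
    # Gradient descent: follow the analytic gradient of the loss.
    w_gd -= lr * 2.0 * (w_gd - 3.0)
```

Note how the evolutionary update only ever evaluates the loss, which is exactly why it also works when gradients are unavailable or uninformative.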
| 30 Amazing Machine Learning Projects for the Past Year |
This overview collects 30 different projects across all the common fields, selected by their GitHub ratings. A great starting point if you are looking for a head start on something specific or just need some quick inspiration on what to tackle next.
| From deep learning down |
Starting from the hype around deep learning, Gene Kogan dives into some math history and revisits some of the most fundamental inventions that we take for granted today.
| facebookresearch/House3D: a Realistic and Rich 3D Environment |
This new dataset, which Facebook recently made available, consists of thousands of indoor scenes covering a diverse set of scene types, layouts, and objects. It is great for training agents to navigate through houses, but may also be useful for general segmentation or depth inference, since all labels can be extracted easily.
| Release of TensorFlow 1.5.0 |
TensorFlow 1.5.0 is on the horizon, and the first release candidate was made public a few days ago. As always, it includes lots of bug fixes and improvements, but also first previews of TensorFlow Lite and eager execution.
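If you want to try the eager preview, enabling it looks roughly like this (assuming the contrib API from the preview announcement; details may still change before the final release):

```python
import tensorflow as tf
import tensorflow.contrib.eager as tfe

# In the 1.5 preview, eager execution lives under contrib and must be
# enabled once at program start, before any graph operations are created.
tfe.enable_eager_execution()

x = tf.constant([[2.0, 1.0]])
m = tf.matmul(x, x, transpose_b=True)
print(m)  # runs immediately and prints the value, no session needed
```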
| dlib C++ Library: A Global Optimization Algorithm Worth Using |
This article takes a look at some sophisticated global optimization algorithms and shows how to use a new function in dlib for hyperparameter optimization.
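The new function is dlib.find_min_global. A minimal use on the Hölder table test function (a standard global-optimization benchmark with many local optima) looks roughly like this:

```python
from math import sin, cos, exp, sqrt, pi
import dlib

def holder_table(x0, x1):
    # Hölder table function: many local optima, global minimum near -19.21.
    return -abs(sin(x0) * cos(x1) * exp(abs(1 - sqrt(x0 * x0 + x1 * x1) / pi)))

# Search the box [-10, 10]^2 using at most 80 calls to the objective.
x, y = dlib.find_min_global(holder_table, [-10, -10], [10, 10], 80)
print(x, y)  # best inputs found and the corresponding function value
```

For hyperparameter optimization you would simply swap the test function for one that trains a model and returns its validation loss.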
| Adversarial Spheres |
Ian Goodfellow et al. further investigate adversarial examples by studying a simple synthetic task: classifying points drawn from two concentric high-dimensional spheres. They conclude that the vulnerability of neural networks to small adversarial perturbations is a logical consequence of the amount of test error observed.
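The setup is easy to reproduce; here is a minimal sketch of such a dataset (the dimension and radii are our assumptions, not necessarily the paper’s exact values):

```python
import numpy as np

def sample_sphere(n, d, radius):
    # Sample n points uniformly on a d-dimensional sphere of the given
    # radius by normalizing Gaussian draws and rescaling.
    x = np.random.randn(n, d)
    return radius * x / np.linalg.norm(x, axis=1, keepdims=True)

d = 500                                # input dimensionality (assumed)
inner = sample_sphere(1000, d, 1.0)    # class 0: inner sphere
outer = sample_sphere(1000, d, 1.3)    # class 1: outer sphere (radius assumed)
```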
| Panoptic Segmentation |
This paper introduces a new task titled panoptic segmentation. It’s a combination of conventional semantic segmentation and instance segmentation, which leads to a multitude of new challenges. To support research on the task, the authors conducted a human study, collected datasets, and developed a corresponding metric.
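As we read it, the metric scores matched predicted/ground-truth segment pairs by their IoU while penalizing unmatched segments on either side; a rough sketch of that idea (our reading, not the authors’ reference implementation):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    # matched_ious: IoU values of predicted/ground-truth segment pairs that
    # overlap strongly enough to count as matches (true positives).
    # num_fp / num_fn: unmatched predicted / ground-truth segments.
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denom if denom else 0.0
```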
| Neural Speed Reading via Skim-RNN |
An interesting paper that applies the principle of skimming to recurrent neural networks. The RNN decides on its own which part of the hidden state needs to be updated, allowing it to trade off speed against accuracy.
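A rough sketch of the idea (our own illustration; the shared-prefix hidden state matches our reading of the paper, but the hard argmax is a simplification, since the discrete decision is trained with a relaxation):

```python
import torch
import torch.nn as nn

class SkimRNNCell(nn.Module):
    # A big cell updates the full hidden state; a small cell updates only
    # the first `small_size` dimensions. A learned gate picks one per token.
    def __init__(self, input_size, hidden_size, small_size):
        super().__init__()
        self.big = nn.GRUCell(input_size, hidden_size)
        self.small = nn.GRUCell(input_size, small_size)
        self.decide = nn.Linear(input_size + hidden_size, 2)
        self.small_size = small_size

    def forward(self, x, h):
        # 0 -> full (slow) update, 1 -> cheap "skim" update.
        skim = self.decide(torch.cat([x, h], dim=-1)).argmax(dim=-1)
        h_full = self.big(x, h)
        h_skim = torch.cat(
            [self.small(x, h[:, :self.small_size]), h[:, self.small_size:]],
            dim=-1,
        )
        return torch.where(skim.unsqueeze(-1).bool(), h_skim, h_full)
```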