| March 28 · Issue #78 |
Hey and welcome to another week in deep learning!
Happy reading and hacking!
If you like receiving this newsletter and would like to support our work, you can do so by sharing this issue with friends and colleagues who might find it interesting. Thanks!
| NVIDIA Transforms the Workstation for the Age of Deep Learning |
Kicking off GTC 2018, Nvidia has once again released a new GPU for us. With 32 GB of VRAM and 118.5 TFLOPS of ‘deep learning performance’, we definitely have something to dream of.
| The Linux Foundation launches a deep learning foundation |
Founded by Baidu, Huawei, Nokia, Tencent and others, the new foundation aims to support open source innovation in deep learning and to make these new technologies available to data scientists everywhere.
| IBM wants to open up the deep learning expertise bottleneck |
Instead of providing more power, IBM has decided to reduce the amount of computation and cost required to train models using their cloud services.
| Nvidia debuts new Drive Constellation simulated self-driving test system |
The system uses two servers: one simulates a car driving in a realistic environment, while the other performs the actual driving by reading the simulated sensor data and sending back commands. This allows automated driving systems to be trained at huge scale — a very interesting approach that will hopefully reduce the dangers of real-world testing.
| Deep Learning Studio 2.0 at NVIDIA's GPU Conference (sponsored) |
The open, free, no-coding deep learning platform is getting an upgrade. Check out what is coming next (hint: if you are a developer, you will love us even more).
| Predicting physical activity based on smartphone sensor data using CNN + LSTM |
A nice introduction to interpreting smartphone sensor data using a CNN and an LSTM. Implemented in Keras, it should easily get you started on the topic.
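The article's model is built in Keras; as a rough, framework-free sketch of the general CNN + LSTM idea (all shapes, class names and weights below are made up for illustration, not taken from the article), a 1D convolution first extracts local motion features from a window of sensor readings, and an LSTM then summarizes that feature sequence into a single vector for classification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one window of smartphone sensor data:
# 128 time steps x 3 channels (accelerometer x/y/z).
window = rng.standard_normal((128, 3))

# --- 1D convolution: extract local motion features per time step ---
kernel_size, n_filters = 5, 8
conv_w = rng.standard_normal((kernel_size, 3, n_filters)) * 0.1

def conv1d(x, w):
    t, _ = x.shape
    k, _, f = w.shape
    out = np.empty((t - k + 1, f))
    for i in range(t - k + 1):
        # Correlate each length-k slice with every filter.
        out[i] = np.einsum("kc,kcf->f", x[i:i + k], w)
    return np.maximum(out, 0.0)  # ReLU

features = conv1d(window, conv_w)  # shape (124, 8)

# --- LSTM: summarize the feature sequence over time ---
hidden = 16
wx = rng.standard_normal((n_filters, 4 * hidden)) * 0.1
wh = rng.standard_normal((hidden, 4 * hidden)) * 0.1
b = np.zeros(4 * hidden)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = np.zeros(hidden)
c = np.zeros(hidden)
for x_t in features:
    gates = x_t @ wx + h @ wh + b
    i_g, f_g, o_g, g = np.split(gates, 4)
    i_g, f_g, o_g = sigmoid(i_g), sigmoid(f_g), sigmoid(o_g)
    c = f_g * c + i_g * np.tanh(g)  # update cell state
    h = o_g * np.tanh(c)           # update hidden state

# --- Classifier head: one score per (hypothetical) activity class ---
classes = ["walking", "sitting", "standing", "stairs"]
w_out = rng.standard_normal((hidden, len(classes))) * 0.1
scores = h @ w_out
prediction = classes[int(np.argmax(scores))]
```

In the real Keras model the weights are of course learned rather than random; the point here is only the data flow from raw sensor window to conv features to recurrent summary to class scores.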
| Intuitively Understanding Variational Autoencoders |
A great and very detailed explanation of Variational Autoencoders. This one is for you if you have always wanted to know where all those creepy generated faces come from.
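The core trick the article builds up to can be condensed into a few lines. In this minimal sketch (the latent size and encoder outputs are invented for illustration), the encoder's predicted mean and log-variance are turned into a differentiable latent sample via the reparameterization trick, and the KL term that regularizes the latent space is computed in closed form:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend the encoder mapped an input image to the parameters of a
# diagonal Gaussian over a 2-dimensional latent space.
mu = np.array([0.5, -1.0])       # encoder's predicted means
log_var = np.array([0.1, -0.3])  # encoder's predicted log-variances

# Reparameterization trick: sample z = mu + sigma * eps with
# eps ~ N(0, I), so gradients can flow through mu and log_var.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence of N(mu, sigma^2) from the N(0, I) prior --
# the regularizer that keeps the latent space smooth and sample-able.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

The decoder (not shown) would then map `z` back to an image, and the training loss is the reconstruction error plus this KL term.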
| Learning to write programs that generate images |
DeepMind gave their agents a brush and taught them to recreate paintings. Initially done in a simulated environment, the results look especially impressive when reproduced by a robotic arm.
| Reviewing Criteria |
Something a little different, but nonetheless interesting: a look behind the scenes of academic paper reviews. Colin Raffel shares the criteria he applies when reviewing.
| Choosing a Deep Learning library for developing and deploying your App/Service |
Well written review of the available deep/machine learning frameworks. Covering the most important ones, the article gives recommendations on which framework to use for certain use cases.
Similar to the previous article, but focused on running neural networks in your browser using WebGL. The four largest frameworks are covered and key aspects are explained.
| Model Grader by Fritz |
An interesting tool that grades your model for complexity, Core ML compatibility and theoretical runtime on an iPhone X. Sadly it is limited to Keras models right now, but it might come in handy.
| YOLOv3: An Incremental Improvement |
A not entirely serious paper, but one that contains some interesting thoughts on recent developments, publications and applications. And the detector looks great as well!
| HALP: High-Accuracy Low-Precision Training · Stanford DAWN |
Stanford researchers have found a way to achieve high accuracy while training at low precision. They introduce a new technique called ‘bit centering’ and achieve pretty nice results.
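The intuition behind bit centering is easy to demonstrate in isolation (the numbers and quantizer below are invented for illustration — the real HALP algorithm applies this idea to gradients inside an SVRG loop, not to raw weight storage): if values vary only slightly around a full-precision center, quantizing the small offsets instead of the values themselves lets the same number of bits cover a much tighter range, shrinking the quantization error:

```python
import numpy as np

def quantize(x, bits, scale):
    """Round x to a fixed-point grid with `bits` bits covering [-scale, scale]."""
    step = 2 * scale / (2**bits - 1)
    return np.clip(np.round(x / step) * step, -scale, scale)

rng = np.random.default_rng(1)

# A "weight" vector that, late in training, varies only slightly
# around some full-precision center.
center = rng.standard_normal(1000)
w = center + 0.01 * rng.standard_normal(1000)

# Naive low-precision storage: quantize w directly over its full range.
naive = quantize(w, bits=8, scale=np.abs(w).max())
naive_err = np.abs(naive - w).max()

# Bit centering: keep `center` in full precision and quantize only the
# small offset w - center, so the same 8 bits cover a much tighter range.
offset = w - center
centered = center + quantize(offset, bits=8, scale=np.abs(offset).max())
centered_err = np.abs(centered - w).max()
```

Periodically recentering as training converges keeps the offsets small, which is what lets HALP retain high accuracy despite the low-precision arithmetic.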
| Neural Network Quine |
This paper describes how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights.
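A stripped-down sketch of the self-replication loop (the linear model, feature matrix and step count below are all invented for illustration, not the paper's actual architecture): each weight is predicted from a fixed feature vector for its own index, and "regeneration" repeatedly replaces the weight vector with the network's prediction of it, driving it toward a fixed point where the network outputs its own weights:

```python
import numpy as np

rng = np.random.default_rng(7)

# A tiny "self-predicting" linear network: for parameter index c it
# outputs phi(c) . w as its prediction of w[c]. Stacking all phi(c) as
# rows of Phi, the self-prediction of the whole weight vector is Phi @ w.
n = 32
Phi = rng.standard_normal((n, n))
Phi *= 0.8 / np.linalg.norm(Phi, 2)  # contract so regeneration converges

w = rng.standard_normal(n)

def self_replication_error(w):
    # How far the network's output is from its own weights.
    return float(np.linalg.norm(Phi @ w - w))

# "Regeneration": repeatedly replace each weight with the network's own
# prediction of it, moving toward a fixed point where output == weights.
err_initial = self_replication_error(w)
for _ in range(50):
    w = Phi @ w
err_final = self_replication_error(w)
```

Note that this toy version collapses to the trivial zero quine (all weights zero, trivially self-predicting) — a degenerate solution the paper explicitly has to steer away from with its nonlinear architecture and training scheme.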