|May 9 · Issue #83 · View online |
Welcome back to another tempestuous week in Deep Learning!
As always, we hope you’ll enjoy reading as much as we did and would appreciate you sharing this newsletter with friends and colleagues.
See you later this week!
| Statement on Nature Machine Intelligence |
The announcement of a closed-access Nature Machine Intelligence journal has created quite an uproar in the community. To emphasize the role of machine learning in the movement for free and open access to research, more than 2500 researchers have already committed to not submitting to said journal.
| Comparing Google’s TPUv2 against Nvidia’s V100 on ResNet-50 |
RiseML decided to look into Google’s TPUs and attempted an independent comparison against Nvidia’s current flagship, the V100. Both seem to be almost equally fast, but Google appears to win on pricing, which currently allows training ResNet-50 to 76.4% accuracy on ImageNet for about $73.
| Google Cloud Platform announces new credits program for researchers |
Google is giving away $5000 in GCP credits to researchers from certain regions. If you’re eligible, why not give it a spin? You can apply here, and an FAQ page is available as well.
| MLPerf – Will New Machine Learning Benchmark Help Propel AI Forward? |
2018 seems to be the year of benchmarks, and there is a new addition to the lineup. A group from academia and industry – Google, Baidu, Intel, AMD, Harvard, and Stanford among them – released MLPerf, a nascent benchmarking tool “for measuring the speed of machine learning software and hardware.”
| Join the fastest growing Deep Learning Developer Community |
Deep Learning Studio is a free, open, no-coding platform that developers, researchers, and students love. Try it out (it’s free)!
| How I Fail - Ian Goodfellow |
A very interesting interview with Ian Goodfellow covering the major setbacks and rejections of his career and how he managed to overcome them. He gives some general advice and shares thoughts on the role of failure in machine learning.
| Weight decay vs L2 regularization |
Some interesting details on the differences (or lack thereof) between weight decay and L2 regularization, and what they mean for your implementation.
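The core distinction can be sketched in a few lines. As a rough illustration (plain-Python updates, function names are mine, not from the article): L2 regularization folds a penalty term into the gradient, while decoupled weight decay shrinks the weight directly. For vanilla SGD the two coincide; for adaptive optimizers like Adam they do not, because the penalty gradient gets rescaled by the adaptive terms.

```python
def sgd_l2_step(w, grad, lr=0.1, lam=0.01):
    """L2 regularization: add lam * w to the gradient, then take an SGD step."""
    return w - lr * (grad + lam * w)

def sgd_weight_decay_step(w, grad, lr=0.1, lam=0.01):
    """Decoupled weight decay: shrink the weight directly, outside the gradient."""
    return w * (1 - lr * lam) - lr * grad

# For plain SGD the two updates produce the same result.
w_l2 = sgd_l2_step(1.0, 0.5)
w_wd = sgd_weight_decay_step(1.0, 0.5)
print(w_l2, w_wd)  # identical values for plain SGD
```

With Adam, the L2 penalty would be divided by the running second-moment estimate along with the rest of the gradient, which is exactly where the two formulations diverge.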
| Stochastic Weight Averaging — a New Way to Get State of the Art Results in Deep Learning |
This article discusses two interesting recent papers that provide an easy way to improve the performance of any given neural network by ensembling in a smart way.
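The central idea is simple enough to sketch: instead of averaging predictions of several models, average the weights visited along the SGD trajectory and use that single averaged model at test time. A minimal sketch (plain Python, names and snapshot values are mine for illustration):

```python
def swa_update(avg_w, new_w, n_averaged):
    """Running elementwise mean of weights: avg <- (avg * n + new) / (n + 1)."""
    return [(a * n_averaged + w) / (n_averaged + 1) for a, w in zip(avg_w, new_w)]

# Pretend these are flattened weight snapshots taken at the end of three epochs.
snapshots = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
avg = snapshots[0]
for n, snap in enumerate(snapshots[1:], start=1):
    avg = swa_update(avg, snap, n)
print(avg)  # [3.0, 4.0] -- the elementwise mean of the snapshots
```

In practice the snapshots are taken with a cyclical or constant learning rate schedule, and batch-norm statistics are recomputed for the averaged weights before evaluation.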
| Announcing PyTorch 1.0 for both research and production |
Facebook decided to merge Caffe2, their production-focused deep learning framework, into PyTorch, which is especially popular among researchers as it allows quick iteration using dynamic graphs. By combining the two, Facebook hopes to make the transition from research to actual production easier.
| Introducing Swift For TensorFlow |
Following the announcement at the TensorFlow Dev Summit, Google has open sourced Swift for TensorFlow. They chose an interesting approach of extending the language itself to suit TensorFlow-specific needs. For more details, check the corresponding technical documentation.
| Tacotron 2 - PyTorch implementation with faster-than-realtime inference |
| AI Safety via Debate |
OpenAI is working on improving AI safety by training agents to debate topics with one another; in the end, a human judges the winner. They outline this method together with preliminary proof-of-concept experiments and are also releasing a web interface so people can experiment with the technique.
| DeepMind papers at ICLR 2018 |
Between 30 April and 03 May, hundreds of researchers gathered in Vancouver, Canada, for the Sixth International Conference on Learning Representations. Here you will find details of all DeepMind’s accepted papers.