Deep Learning Weekly Issue #139
OpenAI adopts PyTorch, Colab goes Pro, TFJS comes to React Native, PyTorch3D, and more...
Feb 12, 2020
This week in deep learning we bring you Google Colab Pro, AMD GPU acceleration for ONNX, OpenAI switching to PyTorch, autonomous drones using sonar to map underground lakes, and new research from Apple’s self-driving team.
You may also enjoy loads of new data augmentors in the latest release of imgaug, a deep dive into upsampling in Core ML, PyTorch3D, an opinionated guide to ML research, quantifying factors important for reproducibility, machine UN-learning, TensorFlow.js for React Native, and more.
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Google launches Colab Pro: for $9.99 per month, users get access to faster GPUs, a longer 24-hour runtime limit, and more RAM. The free tier is still available.
Researchers use an autonomous robot to explore and map the Dragon’s Breath cave in Namibia.
AMD has contributed a new backend to ONNX, bringing GPU acceleration to those using AMD graphics cards for deep learning.
OpenAI announces they will be transitioning to PyTorch.
Apple’s self-driving car team has a new paper out on lane merging in simulated environments.
Mobile + Edge
An incredibly thorough look at how different implementations of a simple resize operator can make porting models to mobile formats a frustrating process.
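To make the resize pitfall concrete: frameworks disagree on how an output pixel maps back to a source coordinate, and the two most common conventions ("align corners" vs. "half-pixel centers") interpolate different values from identical input. The 1-D linear sketch below is illustrative plain Python under those two assumed conventions, not any framework's actual kernel.

```python
def resize_linear_1d(xs, out_len, align_corners):
    """Toy 1-D linear resize under two common coordinate conventions.

    Illustrative sketch only -- real frameworks implement these
    conventions in optimized kernels with edge cases this ignores.
    """
    n = len(xs)
    out = []
    for i in range(out_len):
        if align_corners:
            # Corner samples of input and output are aligned exactly.
            src = i * (n - 1) / (out_len - 1)
        else:
            # "Half-pixel centers": sample at pixel centers, then clamp.
            src = (i + 0.5) * n / out_len - 0.5
            src = min(max(src, 0.0), n - 1)
        lo = int(src)
        hi = min(lo + 1, n - 1)
        t = src - lo
        out.append(xs[lo] * (1 - t) + xs[hi] * t)
    return out

a = resize_linear_1d([0, 10, 20, 30], 8, align_corners=True)
b = resize_linear_1d([0, 10, 20, 30], 8, align_corners=False)
# a and b differ in their interior values -- exactly the kind of
# mismatch that silently changes outputs when porting a model.
```

If an exporter and a mobile runtime pick different conventions for the same "resize" op, every upsampling layer in the network drifts, which is why the article's deep dive matters.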
TFJS for React Native has officially been released. GPU acceleration is provided via a WebGL backend.
New chips for IoT devices provide a large boost in model performance while being more energy efficient.
Facebook open-sources PyTorch tools for working with 3D data (meshes, vertices, etc.).
Results for 164 PyTorch-trained ImageNet models. Useful for comparing the various architectures.
A neat writeup of using BERT to generate candidate headlines for a newspaper.
A worthwhile read on how ML researchers should choose topics to work on.
A discussion of factors important for reproducibility based on replicating 255 ML papers.
Libraries & Code
Version 0.4.0 of imgaug is out with a TON of new augmentors.
Google open sources code for FixMatch: A simple method to perform semi-supervised learning with limited data.
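The core FixMatch idea fits in a few lines: predict on a weakly augmented view of an unlabeled example, keep the prediction as a pseudo-label only if its confidence clears a threshold (0.95 in the paper), and penalize cross-entropy of the strongly augmented view against that label. The sketch below operates on precomputed probability vectors for illustration; it is a simplification, not Google's released code.

```python
import math

CONF_THRESHOLD = 0.95  # confidence cutoff used in the FixMatch paper

def fixmatch_unlabeled_loss(weak_probs, strong_probs):
    """Illustrative sketch of FixMatch's unlabeled loss.

    weak_probs / strong_probs: per-class probability vectors for the
    weakly and strongly augmented views of the same unlabeled batch.
    Only confident weak predictions yield a pseudo-label; the loss is
    cross-entropy of the strong-view prediction against that label.
    """
    losses = []
    for weak, strong in zip(weak_probs, strong_probs):
        confidence = max(weak)
        if confidence < CONF_THRESHOLD:
            continue  # low-confidence examples contribute no loss
        pseudo_label = weak.index(confidence)
        losses.append(-math.log(strong[pseudo_label] + 1e-12))
    # Average over the whole unlabeled batch, as in the paper.
    return sum(losses) / len(weak_probs) if weak_probs else 0.0

weak = [[0.98, 0.01, 0.01], [0.40, 0.35, 0.25]]
strong = [[0.90, 0.05, 0.05], [0.30, 0.50, 0.20]]
loss = fixmatch_unlabeled_loss(weak, strong)
# Only the first example passes the threshold; the second is ignored.
```

The thresholding is what keeps the method "simple": no confidence calibration or temperature scheduling, just a hard cutoff plus standard strong augmentation.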
An implementation of Mesh R-CNN, which builds 3D meshes from 2D images, based on Detectron2 and PyTorch3D.
A new plotting library from Facebook makes it easier to create interactive plots of things like the impact of hyperparameters on model performance.
Virtual KITTI 2 is a more photo-realistic and better-featured version of the original virtual KITTI dataset.
Papers & Publications
Abstract: ….We introduce SISA training, a framework that decreases the number of model parameters affected by an unlearning request and caches intermediate outputs of the training algorithm to limit the number of model updates that need to be computed to have these parameters unlearn. This framework reduces the computational overhead associated with unlearning, even in the worst-case setting where unlearning requests are made uniformly across the training set. In some cases, we may have a prior on the distribution of unlearning requests that will be issued by users....We also validate how knowledge of the unlearning distribution provides further improvements in retraining time by simulating a scenario where we model unlearning requests that come from users of a commercial product that is available in countries with varying sensitivity to privacy....
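The sharding idea behind SISA can be sketched in a toy form: partition the training set into isolated shards, train one constituent model per shard, aggregate predictions, and service an unlearning request by retraining only the shard that held the point. The class below is an assumed illustration (with a trivial mean-label "model" standing in for a real learner), not the authors' framework, which additionally slices each shard and caches checkpoints.

```python
class SisaEnsemble:
    """Toy sketch of SISA-style sharded training and unlearning."""

    def __init__(self, num_shards):
        self.shards = [dict() for _ in range(num_shards)]
        self.models = [None] * num_shards

    def _shard(self, point_id):
        # Deterministic assignment of a training point to a shard.
        return point_id % len(self.shards)

    def fit(self, data):
        # data: {point_id: label}
        for pid, y in data.items():
            self.shards[self._shard(pid)][pid] = y
        for s in range(len(self.shards)):
            self._train_shard(s)

    def _train_shard(self, s):
        shard = self.shards[s]
        # Stand-in "training": mean label over the shard's points.
        self.models[s] = sum(shard.values()) / len(shard) if shard else None

    def unlearn(self, point_id):
        s = self._shard(point_id)
        self.shards[s].pop(point_id, None)
        self._train_shard(s)  # only this one shard is retrained
        return s

    def predict(self):
        # Aggregate the constituent models (here: average the averages).
        trained = [m for m in self.models if m is not None]
        return sum(trained) / len(trained)

ens = SisaEnsemble(4)
ens.fit({i: float(i) for i in range(8)})
before = list(ens.models)
retrained = ens.unlearn(5)  # point 5 lives in shard 1 (5 % 4)
```

Retraining cost scales with one shard rather than the full dataset, which is the paper's worst-case saving; the caching of intermediate outputs described in the abstract shrinks it further.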
Abstract: …. In this paper, we propose a new capsule routing algorithm derived from Variational Bayes for fitting a mixture of transforming Gaussians, and show it is possible to transform our capsule network into a Capsule-VAE. Our Bayesian approach addresses some of the inherent weaknesses of MLE-based models, such as variance collapse, by modelling uncertainty over capsule pose parameters. We outperform the state-of-the-art on smallNORB using 50% fewer capsules than previously reported, achieve competitive performance on CIFAR-10, Fashion-MNIST, and SVHN, and demonstrate significant improvement in MNIST-to-affNIST generalisation over previous works.