| November 7 · Issue #64 |
Hey and welcome to our latest issue on all things deep learning!
As always if you like receiving this newsletter, you can help us by sharing it with your friends and colleagues.
| 'We can't compete': why universities are losing their best AI scientists |
This rather worrying article covers the hiring spree of the big five and its impact on academia worldwide. Offered up to five times their current salary, PhD students drop everything they’re doing and move to the private sector. We’ll see where this goes, but the conclusions drawn in the article seem daunting.
| AutoML for large scale image classification and object detection |
Google has adapted its AutoML approach to tackle the common COCO and ImageNet challenges with an automatically generated neural network. The result, NASNet, surpasses all previous Inception models and even matches the best unpublished result currently reported. To top it off, a NASNet implementation is available in the tensorflow/models repository, fully equipped with pretrained weights.
| iris |
An interesting combination of Jupyter notebooks and a machine learning SaaS that lets you deploy your notebooks instantly. Looks great for fast prototyping of your latest models.
| A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up |
Pieter Abbeel, a Berkeley professor, is part of the team that has started Embodied Intelligence with the intent of bringing a new level of robotic automation to the world’s factories, warehouses and perhaps even homes.
| Andrew Ng Says Enough Papers, Let’s Build AI Now! |
At the recent AI Frontiers Conference, Andrew Ng tried to convince attendees that now is the time to actually apply deep learning and build systems, instead of focusing solely on research.
| How-To: Multi-GPU training with Keras, Python, and deep learning |
This well-written tutorial shows you how to master the art of multi-GPU training on your machine. Although quite a lot of setup work is needed upfront, the reward of a nearly linear speedup in training time seems worth it.
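The tutorial itself builds on Keras’ multi-GPU utilities; as a rough, hedged sketch of the idea, here is the same pattern using TensorFlow’s distribution API (`tf.distribute.MirroredStrategy`, the modern replacement for `keras.utils.multi_gpu_model`; the model and shapes are purely illustrative):

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and splits
# each batch across the replicas (it falls back to a single device on CPU).
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

# Each batch of 256 samples is divided among the available GPUs.
x = np.random.rand(256, 784).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model.fit(x, y, batch_size=256, epochs=1, verbose=0)
```

The key point is that only model construction and compilation need to happen inside the strategy scope; the training loop itself is unchanged.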
| Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow |
This extensive repository contains a Mask R-CNN implementation, pretrained weights and very detailed exploration notebooks. Great if you want to inspect one of your own models.
| TensorFlow meets PyTorch with Eager execution |
With the rise of PyTorch and its define-by-run execution model, more and more people have wanted that kind of execution in their already familiar TensorFlow environment. This has now been added in the form of so-called eager execution.
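As a minimal sketch of what eager execution means in practice (written against a recent TensorFlow, where eager mode is on by default; the 1.x preview described here required an explicit opt-in):

```python
import tensorflow as tf

# In the 1.x preview this required: tf.enable_eager_execution()
# In TensorFlow 2.x eager execution is the default.
assert tf.executing_eagerly()

x = tf.constant([[2.0, 1.0],
                 [1.0, 2.0]])
y = tf.matmul(x, x)   # runs immediately; no graph building, no Session
print(y.numpy())      # [[5. 4.]
                      #  [4. 5.]]
```

Operations return concrete values right away, which is exactly the PyTorch-style workflow the blurb refers to.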
| Uber Open Sources Pyro, a Deep Probabilistic Programming Language |
Pyro is a tool for deep probabilistic modeling, intended to unify the best of modern deep learning and Bayesian modeling. The goal of Pyro is to accelerate research and applications of these techniques and to make them more accessible to the broader AI community.
| Pretrained ConvNets for pytorch: ResNeXt101, ResNet152, InceptionV4, InceptionResnetV2, etc. |
A nice collection of pretrained networks, all implemented in PyTorch. Although the main goal of this repository is to make reproducing research results easier, it also comes with an API that gives easy access to all the models.
| Juggernaut: Neural Networks in a web browser |
This interesting demo showcases the use of Rust and WebAssembly to run a neural network right in your browser, without the need for a heavy backend.
| The (Un)reliability of saliency methods |
This paper inspects saliency methods and uses multiple examples to demonstrate why they don’t work as advertised.
| WESPE Project |
Great project from ETH Zurich that enhances photos by learning a transformation from images taken with the source camera to images taken with a professional DSLR. It is aimed especially at mobile phones, even though the cameras used there are becoming more and more advanced.
| Efficient Processing of Deep Neural Networks: A Tutorial and Survey |
An overview paper that summarizes different hardware architectures for DNN processing and highlights the trade-offs between them.