| December 14 · Issue #68 |
After a full week of NIPS madness, we worked through all the news, gossip and papers and present our selection to you.
As always, we hope you’ll enjoy reading it as much as we did, and we would appreciate you sharing this newsletter with friends and colleagues.
See you next week!
| All the buzz at AI’s big shindig |
Experiencing NIPS without being involved in the world of deep learning can be rather daunting. The article describes the history and current state of the conference, as well as some of the publications, projects and hardware shown by attendees.
| Artwork Personalization at Netflix |
Engineers shared insights on their next goal in personalization at Netflix. While optimizing the recommended titles for each user seems straightforward, generating and delivering personalized artwork to increase interaction is pretty impressive. This article describes the accompanying challenges and the solutions they found in great detail.
| Learning with Privacy at Scale |
After repeatedly stressing the importance of privacy, Apple has published a paper describing how differential privacy works and how it’s implemented across their systems. This shortened version offers a quick introduction to the concept and may finally make you feel safe when typing away on your iPhone.
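To get a feel for the core idea, here is a minimal sketch of classic randomized response, one of the simplest local differential privacy mechanisms. This illustrates the principle only, not Apple’s actual count-sketch-based algorithms; the emoji scenario and all numbers below are made up.

```python
import random

def randomized_response(truth: bool) -> bool:
    # Flip a coin: on heads report the truth, on tails report a random bit.
    # The server only ever sees the noisy answer.
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_fraction(reports):
    # P(report=1) = 0.5 * p + 0.25, so invert the bias: p = 2 * observed - 0.5
    observed = sum(reports) / len(reports)
    return 2 * observed - 0.5

# Example: 100,000 users, 30% of whom actually typed a given emoji.
reports = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
print(estimate_true_fraction(reports))  # close to 0.3
```

No individual answer can be trusted, yet the aggregate statistic is still recoverable, which is exactly the trade-off Apple’s paper formalizes.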
| Deep Learning for NLP, advancements and trends in 2017 |
This article goes through some of 2017’s advances in NLP that rely on deep learning techniques. Javier Couto shares the work he liked most this year. The use of deep learning in NLP keeps widening, yielding amazing results in some cases, and all signs point to this trend continuing.
| RMNIST with annealing and ensembling |
Michael Nielsen has run further experiments with his reduced MNIST dataset, exploring simulated annealing, an ensemble of networks and L2 loss. He describes his results and some interesting findings in great detail, and you may find some valuable tips in there.
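As a rough illustration of the ensembling part (not Nielsen’s actual code; predict_proba is an assumed, scikit-learn-style method), averaging the class probabilities of several independently trained networks usually beats any single member:

```python
import numpy as np

def ensemble_predict(models, x):
    # Average predicted class probabilities across models, then take the argmax.
    # `models` is any list of objects exposing predict_proba(x) -> (n, n_classes).
    probs = np.mean([m.predict_proba(x) for m in models], axis=0)
    return np.argmax(probs, axis=1)
```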
| Turi Create simplifies the development of custom machine learning models |
Another announcement from NIPS was Apple’s machine learning tool ‘Turi Create’. This Python-based framework allows training a multitude of models on text, images, audio, video, and sensor data using a variety of algorithms. Your final model can then easily be exported to CoreML for use in an iOS app. Interestingly, the underlying neural network library seems to be MXNet. Take a look and train something fun!
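An end-to-end example could look roughly like this (the folder layout, label-from-path trick and file names are made up; the API calls are from Turi Create’s documented interface):

```python
import turicreate as tc

# Load images from a folder; derive each label from its parent directory name.
data = tc.image_analysis.load_images('my_images/', with_path=True)
data['label'] = data['path'].apply(lambda p: p.split('/')[-2])

# Train an image classifier and export it for use in an iOS app.
model = tc.image_classifier.create(data, target='label')
model.export_coreml('MyClassifier.mlmodel')
```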
| TFGAN: A Lightweight Library for Generative Adversarial Networks |
To ease experimentation with GANs, TensorFlow now includes a library that provides simple function calls to cover the majority of GAN use-cases. This should get your model running on your data with just a few lines of code. Don’t forget to look at the accompanying tutorial if you’re interested.
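The basic flow looks roughly like this (a sketch from memory of the announced tf.contrib.gan API under TensorFlow 1.x; the toy generator, discriminator and random stand-in data are ours):

```python
import tensorflow as tf  # TF 1.x; TFGAN lives under tf.contrib.gan
tfgan = tf.contrib.gan

def generator_fn(noise):
    # Toy generator: map noise vectors to flattened 28x28 "images".
    return tf.layers.dense(noise, 784, activation=tf.tanh)

def discriminator_fn(data, generator_inputs):
    # Toy discriminator: one logit per example.
    return tf.layers.dense(data, 1)

noise = tf.random_normal([64, 100])
real_images = tf.random_normal([64, 784])  # stand-in for a real data pipeline

gan_model = tfgan.gan_model(generator_fn, discriminator_fn,
                            real_data=real_images, generator_inputs=noise)
gan_loss = tfgan.gan_loss(gan_model)
train_ops = tfgan.gan_train_ops(
    gan_model, gan_loss,
    generator_optimizer=tf.train.AdamOptimizer(1e-4),
    discriminator_optimizer=tf.train.AdamOptimizer(1e-4))
```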
| NVIDIA/pix2pixHD: Generating and manipulating 2048x1024 images with conditional GANs |
Nvidia has released the code for pix2pixHD, a new method for synthesizing high-resolution photo-realistic images from semantic label maps.
| NIPS 2017 Proceedings |
If you could not make it to NIPS 2017 and really want to know what was presented, here you go: the list of all accepted papers. Definitely some gems in there, but quite overwhelming.
| Machine Learning for Systems and Systems for Machine Learning |
Google Brain lead Jeff Dean shared details about the progress being made in machine learning hardware, including Google hardware like the TPUs and how they have been applied, as well as a look into the future of hardware. He then moves on to other possible applications, e.g. the learned index structures from the link below, datacenters, compilers and other tools.
| The Case for Learned Index Structures |
Google released a paper showing how machine-learned indexes can replace B-Trees, Hash Indexes, and Bloom Filters. The learned indexes execute up to 3x faster than B-Trees, use 10-100x less space, and can even run on the GPU.
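The core trick is easy to sketch: learn a model mapping a key to its approximate position in a sorted array, remember the model’s worst-case error, and binary-search only within that window. Here is a toy version with a plain linear fit (purely illustrative; the paper uses a recursive hierarchy of models):

```python
import bisect
import numpy as np

keys = np.sort(np.random.uniform(0, 1e6, size=100_000))

# Fit key -> position with a 1-D linear model and record its worst-case error.
positions = np.arange(len(keys))
slope, intercept = np.polyfit(keys, positions, 1)
max_err = int(np.ceil(np.max(np.abs(slope * keys + intercept - positions))))

def lookup(key):
    # Predict the position, then binary-search only inside the error window.
    guess = int(slope * key + intercept)
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    i = lo + bisect.bisect_left(keys[lo:hi].tolist(), key)
    return i if i < len(keys) and keys[i] == key else None

print(lookup(keys[12345]))  # -> 12345
```

A model that predicts positions well needs neither the space of a B-Tree nor a full binary search, which is where the speed and memory wins come from.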
| Grouping-By-ID: Guarding Against Adversarial Domain Shifts |
When training a deep network for image classification, one can broadly distinguish between two types of latent features that drive the classification: “core” features, whose distribution does not change substantially across domains, and “style” or “orthogonal” features, whose distribution can change substantially across domains. These orthogonal features would generally include features such as position or brightness, but also more complex ones like hair color or posture in images of persons. The authors develop a novel method based on a causal framework to guard against future adversarial domain shifts by constraining the network to use only the “core” features for classification.
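Our rough reading of how such a constraint can be enforced (a hypothetical PyTorch sketch, not the authors’ code): examples that share an ID, say the same person under different styles, should get near-identical outputs, so one adds a penalty on the within-group variance of the predictions.

```python
import torch

def grouping_penalty(logits, group_ids):
    # Penalize within-group variance of the outputs: examples sharing an ID
    # (same object, different "style") should receive the same prediction.
    penalty = logits.new_zeros(())
    for g in group_ids.unique():
        members = logits[group_ids == g]
        if len(members) > 1:
            penalty = penalty + members.var(dim=0).sum()
    return penalty

# Total loss (lam is a hypothetical regularization weight):
# loss = F.cross_entropy(logits, labels) + lam * grouping_penalty(logits, ids)
```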
| Born Again Neural Networks |
Nice work from one of the workshops at NIPS. When a teacher model is distilled into a student model with an identical architecture, the student outperforms the teacher. The authors managed to get CIFAR-100 test error down to ~15% using a DenseNet.
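The recipe is essentially knowledge distillation with a same-sized student; here is a minimal sketch of a standard distillation loss in PyTorch (the temperature, weighting and exact formulation are our assumptions, not necessarily the paper’s):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Cross-entropy on the hard labels plus KL divergence towards the
    # (frozen) teacher's temperature-softened outputs.
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction='batchmean') * (T * T)
    return alpha * hard + (1 - alpha) * soft
```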