Deep Learning Weekly Issue #125
Facebook's deepfake challenge, a new chip from Apple, TensorFlow.js for React Native, NeurIPS papers and more
This week in deep learning, we bring you a deepfake detection challenge from Facebook, TensorFlow.js for React Native, an AI IoT development kit from Microsoft, and the list of papers accepted at NeurIPS 2019.
You may also enjoy a course on deep Bayesian methods, a new preference elicitation dataset, an implementation of rotated Mask R-CNN, a repository with NLP models implemented in TensorFlow, a tutorial on attention-based OCR models, and more.
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Facebook launches a $10 million prize pool for research on detecting deepfakes.
There is now another way to run TensorFlow on mobile apps.
Train models using Azure Custom Vision service and run inference directly on an IoT camera with a Qualcomm Snapdragon processor.
When tech companies poach professors, grant funding to universities goes down and student-founded startups drop.
Machine learning featured heavily in the iPhone Pro announcement, mostly related to computer vision.
Judging by titles alone, GANs, optimizers, and Bayesian approaches feature heavily among the accepted papers.
Videos, slides, and assignments for a course on deep Bayesian networks.
A call for collaborators from an ambitious new project to create a speech toolkit in PyTorch.
A great tutorial on attention-based OCR models.
Google announces a new dataset of conversations where an assistant elicits movie preferences from a user.
Libraries & Code
Play games without touching the keyboard using a TensorFlow-based gesture recognizer.
An implementation of Mask R-CNN with improved performance on rotated objects.
An extensive list of NLP models implemented in TensorFlow.
Papers & Publications
Abstract: …In this work, we introduce Once for All (OFA), a new methodology for efficient neural network design that handles many deployment scenarios by decoupling model training from architecture search. Instead of training a specialized model for each case, we propose to train a once-for-all network that supports diverse architectural settings (depth, width, kernel size, and resolution). Given a deployment scenario, we can later select a specialized sub-network from the once-for-all network without additional training. As such, the training cost of specialized models is reduced from O(N) to O(1). However, it is challenging to prevent interference between the many sub-networks. We therefore propose the progressive shrinking algorithm, which can train a once-for-all network supporting more than 10^19 sub-networks while maintaining the same accuracy as independently trained networks, saving the non-recurring engineering (NRE) cost…
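The weight-sharing idea behind a once-for-all network can be illustrated with a toy sketch (hypothetical code, not from the paper): sub-networks of different widths and kernel sizes are carved out of one shared weight tensor by slicing, so no per-deployment retraining is needed.

```python
import numpy as np

# Toy "once-for-all" convolution weight: max width 64 channels, max kernel 7x7.
# (Illustrative only; the real OFA method also applies kernel transformations
# and trains with progressive shrinking.)
rng = np.random.default_rng(0)
ofa_weight = rng.standard_normal((64, 64, 7, 7))  # (out_ch, in_ch, kH, kW)

def sub_network_weight(weight, out_ch, in_ch, kernel):
    """Slice a specialized sub-network's weight out of the shared tensor."""
    k_max = weight.shape[-1]
    start = (k_max - kernel) // 2  # take the centered kernel patch
    return weight[:out_ch, :in_ch, start:start + kernel, start:start + kernel]

# Two deployment scenarios share the same trained parameters:
small = sub_network_weight(ofa_weight, out_ch=16, in_ch=16, kernel=3)
large = sub_network_weight(ofa_weight, out_ch=64, in_ch=64, kernel=7)
print(small.shape)  # (16, 16, 3, 3)
print(large.shape)  # (64, 64, 7, 7)
```

Because every sub-network is a view into the same parameters, specializing for a new device is a selection problem rather than a training problem, which is what drives the O(N) to O(1) cost reduction claimed in the abstract.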
Abstract: We propose a novel architecture that automatically anonymizes faces in images while retaining the original data distribution. We ensure total anonymization of all faces in an image by generating images based exclusively on privacy-safe information. Our model is based on a conditional generative adversarial network, which generates images conditioned on the original pose and image background. The conditional information enables us to generate highly realistic faces with a seamless transition between the generated face and the existing background. Furthermore, we introduce a diverse dataset of human faces, including unconventional poses, occluded faces, and a vast variability in backgrounds. Finally, we present experimental results reflecting the capability of our model to anonymize images while preserving the data distribution, making the data suitable for further training of deep learning models. As far as we know, no other solution has been proposed that guarantees the anonymization of faces while generating realistic images.
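The core conditioning idea can be sketched in a few lines (a hypothetical toy, not the paper's architecture): the generator never receives the original face pixels, only privacy-safe conditions such as pose keypoints and background features, combined with random noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_face(noise, pose_keypoints, background_features):
    """Toy 'conditional generator': one linear layer over [noise | conditions].

    A real model would be a deep conditional GAN; this only illustrates that
    the output depends solely on privacy-safe inputs plus noise.
    """
    cond_input = np.concatenate([noise, pose_keypoints, background_features])
    W = rng.standard_normal((64, cond_input.size)) * 0.01  # stand-in weights
    return np.tanh(W @ cond_input)  # fake 'face' feature vector in [-1, 1]

face = generate_face(
    noise=rng.standard_normal(32),
    pose_keypoints=rng.standard_normal(14),        # e.g. 7 (x, y) landmarks
    background_features=rng.standard_normal(128),  # encoded background
)
print(face.shape)  # (64,)
```

Since the original face never enters the generator's input, anonymization holds by construction, which is the guarantee the abstract emphasizes.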