Deep Learning Weekly Issue #168

The state of mobile machine learning, Photoshop's new AI filters, Snapchat and the iPhone 12's LiDAR sensor, & more

Hey folks,

This week in deep learning we bring you the state of mobile machine learning in 2020, Photoshop’s AI neural filters that can tweak age and expression with a few clicks, Microsoft’s new image-captioning AI, and Snapchat's use of iPhone 12 Pro's LiDAR Scanner for AR.

You may also enjoy this paper on the AdaBelief Optimizer, Adobe's tool to help media creators prove their images are not deepfakes, this tutorial on adversarial images and attacks with Keras and TensorFlow, and more!

As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.

Until next week!

Industry

MIT Researcher Neil Thompson on Deep Learning’s Insatiable Compute Demands and Possible Solutions

Neil Thompson of MIT, first author of “The Computational Limits of Deep Learning,” says DL’s economic and environmental footprints are growing worryingly fast.

Photoshop’s AI neural filters can tweak age and expression with a few clicks

Adobe wants to make a big splash with its new machine learning tools.

Microsoft’s new image-captioning AI will help accessibility in Word, Outlook, and beyond

The algorithm even beats humans in some limited tasks.

Adobe Unveils Authentication Tool in Battle Against Deepfakes

Adobe Inc. debuted a software tool to help media creators prove their images are real, the latest move by the maker of Photoshop to combat the spread of deepfake technology.

A global collaboration to move artificial intelligence principles to practice

Convened by the MIT Schwarzman College of Computing, the AI Policy Forum will develop frameworks and tools for governments and companies to implement concrete policies.

Mobile + Edge

State of Mobile Machine Learning in 2020

Referencing survey data from 500 technical leaders across industries, this report dives headfirst into a burgeoning sector of the larger AI and machine learning (ML) industry—mobile machine learning.

Snapchat among first to leverage iPhone 12 Pro's LiDAR Scanner for AR

Snapchat confirms it will be among the first to put the new technology to use in its iOS app for LiDAR-powered Lenses.

Visual ways to search and understand our world

As part of the SearchOn event, Google announced new ways you can use Google Lens and augmented reality (AR) while learning and shopping.

Apple ‘Hi, Speed’ Event: 5G, A14 Bionic Chip, and LiDAR for New iPhones

Apple announced that its new iPhone series will also use the company’s newest A14 Bionic chipset.

Learning

Adversarial images and attacks with Keras and TensorFlow

In this tutorial, you will learn how to break deep learning models with image-based adversarial attacks using Keras and TensorFlow.
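The core technique in tutorials like this one is usually the Fast Gradient Sign Method (FGSM): nudge the input by a small step in the direction of the sign of the loss gradient. A minimal sketch on a toy linear classifier (the model, names, and numbers here are illustrative assumptions, not the tutorial's Keras code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Fast Gradient Sign Method: move x by epsilon in the direction
    that increases the loss, i.e. the sign of the input gradient."""
    p = sigmoid(np.dot(w, x) + b)          # model's predicted probability
    grad_x = (p - y) * w                   # d(cross-entropy)/dx for this model
    return x + epsilon * np.sign(grad_x)   # adversarial example

# Toy model: classifies x as positive when its coordinates are large
w = np.array([1.0, 1.0, 1.0])
b = -1.5
x = np.array([0.6, 0.6, 0.6])              # clean input, classified positive

x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, epsilon=0.3)
print(sigmoid(w @ x + b) > 0.5)            # True  (clean input)
print(sigmoid(w @ x_adv + b) > 0.5)        # False (attack flips the prediction)
```

In a Keras setting the only real difference is that the input gradient comes from automatic differentiation rather than a hand-derived formula.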

A radical new technique lets AI learn with practically no data

“Less than one”-shot learning can teach a model to identify more objects than the number of examples it is trained on.

Recreating Historical Streetscapes Using Deep Learning and Crowdsourcing

Go back in time with rǝ, an open source suite of tools that leverages deep learning to enable developers, map enthusiasts, and creatives to generate historical reconstructions of cities in 3D using crowdsourced historical maps and photos.

Measuring Gendered Correlations in Pre-trained NLP Models

This case study from Google AI examines how gender correlations in pre-trained NLP models can affect downstream tasks, and presents a series of best practices for addressing unintended correlations in such models.

Libraries & Code

[GitHub] juntang-zhuang/Adabelief-Optimizer

Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients."

[GitHub] AdamCobb/hamiltorch

PyTorch-based library for Riemannian Manifold Hamiltonian Monte Carlo (RMHMC) and inference in Bayesian neural networks.

Papers & Publications

AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients

Abstract: Most popular optimizers for deep learning can be broadly categorized as adaptive methods (e.g. Adam) and accelerated schemes (e.g. stochastic gradient descent (SGD) with momentum). For many models such as convolutional neural networks (CNNs), adaptive methods typically converge faster but generalize worse compared to SGD; for complex settings such as generative adversarial networks (GANs), adaptive methods are typically the default because of their stability. We propose AdaBelief to simultaneously achieve three goals: fast convergence as in adaptive methods, good generalization as in SGD, and training stability. The intuition for AdaBelief is to adapt the stepsize according to the "belief" in the current gradient direction. Viewing the exponential moving average (EMA) of the noisy gradient as the prediction of the gradient at the next time step, if the observed gradient greatly deviates from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step. We validate AdaBelief in extensive experiments, showing that it outperforms other methods with fast convergence and high accuracy on image classification and language modeling. Specifically, on ImageNet, AdaBelief achieves comparable accuracy to SGD. Furthermore, in the training of a GAN on CIFAR-10, AdaBelief demonstrates high stability and improves the quality of generated samples compared to a well-tuned Adam optimizer. Code is available at this https URL.
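The update rule described in the abstract can be sketched in a few lines: it is Adam's update, except the second moment tracks the squared deviation of the gradient from its EMA rather than the squared gradient. This is a minimal NumPy sketch based on the abstract (the per-step epsilon and bias corrections follow the paper's algorithm; treat details as an approximation, not the reference implementation):

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief update. Like Adam, but s tracks (grad - m)^2 -- the
    deviation of the observed gradient from its EMA "prediction" -- so a
    surprising gradient yields a small step, an expected one a large step."""
    m = beta1 * m + (1 - beta1) * grad                 # EMA of gradients (prediction)
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps  # EMA of squared deviation (belief)
    m_hat = m / (1 - beta1 ** t)                       # bias corrections, as in Adam
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s

# Single step on a 1-D quadratic f(theta) = theta^2, so grad = 2 * theta
theta, m, s = adabelief_step(theta=1.0, grad=2.0, m=0.0, s=0.0, t=1)
print(theta)  # slightly below 1.0: the parameter moves against the gradient
```

The only structural change from Adam is the `(grad - m) ** 2` term; swapping it for `grad ** 2` recovers Adam's second moment.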

Fairness in Streaming Submodular Maximization: Algorithms and Hardness

Abstract: Submodular maximization has become established as the method of choice for the task of selecting representative and diverse summaries of data. However, if datapoints have sensitive attributes such as gender or age, such machine learning algorithms, left unchecked, are known to exhibit bias: under- or over-representation of particular groups. This has made the design of fair machine learning algorithms increasingly important. In this work we address the question: Is it possible to create fair summaries for massive datasets? To this end, we develop the first streaming approximation algorithms for submodular maximization under fairness constraints, for both monotone and non-monotone functions. We validate our findings empirically on exemplar-based clustering, movie recommendation, DPP-based summarization, and maximum coverage in social networks, showing that fairness constraints do not significantly impact utility.
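For intuition, fairness constraints of this kind can be modeled as per-group caps on how many items a summary may take from each group. A simple offline analogue is greedy max coverage under those caps (a hypothetical toy sketch; the paper's streaming algorithms are considerably more involved):

```python
def fair_greedy(candidates, groups, coverage, caps, k):
    """Greedy selection for monotone submodular maximization (max coverage)
    under fairness caps: at most caps[g] selected items from each group g.
    The caps form a partition matroid, so greedy retains an approximation
    guarantee for monotone submodular objectives."""
    selected, covered = [], set()
    counts = {g: 0 for g in caps}
    for _ in range(k):
        best, best_gain = None, 0
        for c in candidates:
            if c in selected or counts[groups[c]] >= caps[groups[c]]:
                continue  # already chosen, or its group's cap is exhausted
            gain = len(coverage[c] - covered)  # marginal coverage gain
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            break
        selected.append(best)
        covered |= coverage[best]
        counts[groups[best]] += 1
    return selected

# Toy instance: without the cap on group "F", greedy would pick a and b
coverage = {"a": {1, 2, 3}, "b": {4, 5}, "c": {6}}
groups = {"a": "F", "b": "F", "c": "M"}
caps = {"F": 1, "M": 1}
print(fair_greedy(["a", "b", "c"], groups, coverage, caps, k=2))  # ['a', 'c']
```

The streaming setting in the paper replaces the repeated passes over `candidates` with a single pass and thresholding, while preserving the same kind of group constraint.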