Deep Learning Weekly Issue #170
AI that detects asymptomatic Covid-19 infections, efficient compression on NNs, & 2021 edge computing predictions
Matthew Moellman · Nov 4, 2020
This week in deep learning we bring you this AI model that detects asymptomatic Covid-19 infections through cellphone-recorded coughs, this tutorial on simple keyword audio recognition, how to create a style transfer Snapchat lens with Fritz AI and SnapML in Lens Studio, and Robust Intelligence, a startup aimed at detecting adversarial attacks.
You may also enjoy these papers titled Training Generative Adversarial Networks by Solving Ordinary Differential Equations and Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
This Harvard Professor And His Students Have Raised $14 Million To Make AI Too Smart To Be Fooled By Hackers
Robust Intelligence is a new startup led by CEO Yaron Singer, with a platform that the company says is trained to detect more than 100 types of adversarial attacks.
Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs
Results might provide a convenient screening tool for people who may not suspect they are infected.
Light’s ‘Clarity’ Depth Camera Could Be A Game Changer
Light, a former camera company, has announced a new depth sensor that could be a game changer, upending LIDAR- and computer vision-based depth measurement by producing a combined RGB image and depth map with ranges out to an astonishing 1,000 meters.
How the U.S. patent office is keeping up with AI
The rapid rise of AI has forced the legal field to ask difficult questions about whether an AI can hold a patent at all, how existing IP and patent laws can address the unique challenges that AI presents, and what challenges remain.
NVIDIA A100 Launches on AWS
New A100-powered Amazon EC2 P4d instance available as NVIDIA GPUs reach 10 years on AWS.
Mobile + Edge
Creating a Style Transfer Snapchat Lens with Fritz AI and SnapML in Lens Studio
Leveraging Fritz AI’s no-code model building Studio to quickly prototype a style transfer Snapchat Lens.
5 edge computing predictions for 2021
Forrester says 2021 will be the year this emerging technology graduates from experiment to practically applicable technology, driven largely by AI and 5G.
Ambarella launches computer vision chips for edge AI
Chip designer Ambarella has announced a new computer vision chip for processing artificial intelligence at the edge of computer networks, like in smart cars and security cameras.
James Hurlbut – The Man Who Harnessed the Power of SnapML to Rank Surfboards
An interview between Snap and creative dev James Hurlbut, who combined his passion for surfing with his technical skills to develop an immersive AR project during the Snapchat Machine Learning Creative Residency Program this summer.
Why Skin Lesions are Peanuts and Brain Tumors Harder Nuts
Why are some problems in medical image analysis harder than others for AI, and what can we do about them?
Experimenting with Automatic Video Creation from a Web Page
This post is about URL2Video, an experimental heuristics-based creativity tool that automatically converts a web page into a short video, leveraging existing assets to provide a jump-start on the video creation process.
Background Features in Google Meet, Powered by Web ML
This post explains how the in-browser background blur and replacement functionality works in Google Meet.
Simple audio recognition: Recognizing keywords
This tutorial will show you how to build a basic speech recognition network that recognizes ten different words.
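The key preprocessing step in keyword recognition is converting raw waveforms into spectrograms before feeding them to a classifier. As a rough numpy sketch of that step (the 255/128 frame length and step values mirror common tutorial defaults; the synthetic tone stands in for real recorded audio):

```python
import numpy as np

def spectrogram(waveform, frame_len=255, frame_step=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(waveform) - frame_len) // frame_step
    frames = np.stack([
        waveform[i * frame_step : i * frame_step + frame_len] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=-1))

# One second of a synthetic 440 Hz tone at 16 kHz, standing in for a recorded keyword.
sr = 16000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(wave)
print(spec.shape)  # (time frames, frequency bins)
```

The resulting 2-D time-frequency array can then be treated like an image and passed to a small convolutional classifier, which is the approach the tutorial takes.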
Libraries & Code
A library of Graph Neural Network models, currently including GraphSAGE and GAT. Other models will be added soon. Stay tuned!
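For readers new to these models: the core of a GraphSAGE layer is aggregating each node's neighbour features (here by mean) and combining them with the node's own features. A minimal numpy sketch, with made-up weights and a toy 3-node graph (not code from the library itself):

```python
import numpy as np

def graphsage_mean_layer(H, adj, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation:
    h_v' = ReLU(W_self @ h_v + W_neigh @ mean(h_u for u in N(v)))."""
    deg = adj.sum(1, keepdims=True)
    neigh_mean = (adj @ H) / np.maximum(deg, 1)   # average neighbour features
    out = H @ W_self.T + neigh_mean @ W_neigh.T
    return np.maximum(out, 0)                     # ReLU

# Tiny 3-node path graph 0-1-2 with 4-dim features, projected to 2 dims.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
W_self, W_neigh = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
print(graphsage_mean_layer(H, adj, W_self, W_neigh).shape)  # (3, 2)
```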
[CVPR2020] GhostNet: More Features from Cheap Operations.
Papers & Publications
Training Generative Adversarial Networks by Solving Ordinary Differential Equations
Abstract: The instability of Generative Adversarial Network (GAN) training has frequently been attributed to gradient descent. Consequently, recent methods have aimed to tailor the models and training procedures to stabilise the discrete updates. In contrast, we study the continuous-time dynamics induced by GAN training. Both theory and toy experiments suggest that these dynamics are in fact surprisingly stable. From this perspective, we hypothesise that instabilities in training GANs arise from the integration error in discretising the continuous dynamics. We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training - when combined with a regulariser that controls the integration error. Our approach represents a radical departure from previous methods which typically use adaptive optimisation and stabilisation techniques that constrain the functional space (e.g. Spectral Normalisation). Evaluation on CIFAR-10 and ImageNet shows that our method outperforms several strong baselines, demonstrating its efficacy.
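The intuition behind the paper can be seen on a toy bilinear game, a standard stand-in for GAN dynamics (this sketch is our illustration, not the paper's method, and it omits the integration-error regulariser the authors use): plain gradient steps (explicit Euler) spiral away from the equilibrium, while a second-order ODE solver like Heun's method tracks the stable continuous orbit.

```python
import numpy as np

def grad(state):
    """Continuous-time dynamics for the bilinear toy game f(x, y) = x * y:
    the generator descends df/dx = y, the discriminator ascends df/dy = x."""
    x, y = state
    return np.array([-y, x])

def euler_step(state, h):
    return state + h * grad(state)      # plain simultaneous gradient descent

def rk2_step(state, h):                 # Heun's method, a 2nd-order Runge-Kutta solver
    k1 = grad(state)
    k2 = grad(state + h * k1)
    return state + h * 0.5 * (k1 + k2)

state_euler = state_rk2 = np.array([1.0, 1.0])
h = 0.1
for _ in range(200):
    state_euler = euler_step(state_euler, h)
    state_rk2 = rk2_step(state_rk2, h)

# Euler's distance from equilibrium grows every step; RK2 stays close to the
# circular orbit of the continuous dynamics.
print(np.linalg.norm(state_euler), np.linalg.norm(state_rk2))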
Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks
Abstract: Compressing large neural networks is an important step for their deployment in resource-constrained computational platforms. In this context, vector quantization is an appealing framework that expresses multiple parameters using a single code, and has recently achieved state-of-the-art network compression on a range of core vision and natural language processing tasks. Key to the success of vector quantization is deciding which parameter groups should be compressed together. Previous work has relied on heuristics that group the spatial dimension of individual convolutional filters, but a general solution remains unaddressed. This is desirable for pointwise convolutions (which dominate modern architectures), linear layers (which have no notion of spatial dimension), and convolutions (when more than one filter is compressed to the same codeword). In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function. We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress. Finally, we rely on an annealed quantization algorithm to better compress the network and achieve higher final accuracy. We show results on image classification, object detection, and segmentation, reducing the gap with the uncompressed model by 40 to 70% with respect to the current state of the art.
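The vector-quantization step the abstract builds on can be sketched in a few lines: split the weights into small groups, learn a shared codebook with Lloyd's algorithm (k-means), and store one code per group. This is a toy illustration only; the paper's contributions, the permutation search and annealed quantization, are omitted, and the group size and codebook size below are arbitrary choices.

```python
import numpy as np

def quantize_weights(W, d=4, k=16, iters=20, seed=0):
    """Toy vector quantization: split W into d-dim subvectors and learn a
    k-entry codebook with Lloyd's algorithm (k-means)."""
    rng = np.random.default_rng(seed)
    vecs = W.reshape(-1, d)                        # parameter groups compressed together
    codebook = vecs[rng.choice(len(vecs), k, replace=False)]
    for _ in range(iters):
        # assign each subvector to its nearest codeword
        dists = ((vecs[:, None, :] - codebook[None]) ** 2).sum(-1)
        codes = dists.argmin(1)
        # move each codeword to the mean of its assigned subvectors
        for j in range(k):
            if (codes == j).any():
                codebook[j] = vecs[codes == j].mean(0)
    return codebook, codes

W = np.random.default_rng(1).normal(size=(64, 64))
codebook, codes = quantize_weights(W)
W_hat = codebook[codes].reshape(W.shape)
# Each group of 4 weights is now stored as a single 4-bit code plus a shared codebook.
print(W_hat.shape, codebook.shape)
```

Which weights end up grouped into the same subvector is exactly the degree of freedom the paper exploits: permuting adjacent layers' weights leaves the network's function unchanged but can make the groups far easier to quantize.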