Deep Learning Weekly Issue #175
Google fires AI ethics team member, Massachusetts primed to ban facial recognition in policing, and more
Matthew Moellman | Dec 9, 2020
This week in deep learning we bring you Google's firing of Timnit Gebru, Massachusetts' pending ban on police use of facial recognition, and Qualcomm’s new Snapdragon 888: an AI and computer vision powerhouse.
You may also enjoy this tutorial on how to train and use an Autoencoder for feature extraction for regression, this tutorial on computing image similarity scores using siamese networks, Keras, and TensorFlow, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
In firing Timnit Gebru, Google puts commercial interests ahead of ethics
This week, leading AI researcher Timnit Gebru was fired from her position on an AI ethics team at Google in what she claims was retaliation for sending colleagues an email critical of the company’s managerial practices.
Neuroscientists find a way to make object-recognition models perform better
Adding a module that mimics part of the brain can prevent common errors made by computer vision models.
This Company Uses AI to Outwit Malicious AI
Robust Intelligence is among a crop of companies that offer to protect clients from efforts at deception.
Massachusetts on the verge of becoming first US state to ban police use of facial recognition
The bill now awaits the governor's signature.
Tecton.ai nabs $35M Series B as it releases machine learning feature store
Tecton.ai, the startup founded by three former Uber engineers to bring the machine learning feature store to the masses, announced a $35 million Series B just seven months after its $20 million Series A.
Mobile + Edge
Qualcomm’s Snapdragon 888 is an AI and computer vision powerhouse
Qualcomm is making clear that the next generation of Android devices will rely heavily on advanced AI and computer vision processors to retake the performance lead.
On-Device Face Detection on Android using Google’s ML Kit
Detecting faces in an image with the power of mobile machine learning.
Amazon SageMaker Edge Manager Simplifies Operating Machine Learning Models on Edge Devices
Amazon SageMaker Edge Manager is a new capability of Amazon SageMaker that makes it easier to optimize, secure, monitor, and maintain machine learning models on a fleet of edge devices.
AI Programming for IoT, TinyML for IoT
Upgrading microcontrollers with small, essentially self-contained neural networks enables organizations to deploy efficient AI capabilities for IoT without waiting for specialized AI chips.
Autoencoder Feature Extraction for Regression
This tutorial covers autoencoders: what they are, how to train them, and how to use them for feature extraction.
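The core idea in the tutorial can be sketched in a few lines: train a network to reconstruct its input through a narrow bottleneck, then reuse the bottleneck activations as compressed features for a downstream regressor. The minimal NumPy version below is an illustrative sketch, not the tutorial's Keras code; the data, layer sizes, and learning rate are all assumptions.

```python
import numpy as np

# Minimal autoencoder sketch: one tanh bottleneck, linear decoder,
# trained by plain gradient descent on mean-squared reconstruction error.
# Shapes and hyperparameters here are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # 200 samples, 8 input features

n_in, n_code = X.shape[1], 3           # compress 8 features down to 3
W_enc = rng.normal(scale=0.1, size=(n_in, n_code))
W_dec = rng.normal(scale=0.1, size=(n_code, n_in))

lr = 0.05
for _ in range(500):
    code = np.tanh(X @ W_enc)          # encoder: bottleneck activations
    X_hat = code @ W_dec               # linear decoder: reconstruction
    err = X_hat - X                    # reconstruction residual
    grad_dec = code.T @ err / len(X)                  # dLoss/dW_dec
    grad_code = (err @ W_dec.T) * (1 - code**2)       # backprop through tanh
    grad_enc = X.T @ grad_code / len(X)               # dLoss/dW_enc
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# After training, discard the decoder and keep the encoder output
# as the feature representation fed to a regression model.
features = np.tanh(X @ W_enc)
print(features.shape)
```

In the tutorial's Keras setting the same pattern applies: fit the full autoencoder, then call the encoder sub-model alone to transform inputs before fitting the regressor.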
Wav2vec 2.0: Learning the structure of speech from raw audio
Facebook AI released code and models for wav2vec 2.0, a self-supervised algorithm that enables automatic speech recognition models with just 10 minutes of transcribed speech data.
Comparing images for similarity using siamese networks, Keras, and TensorFlow
In this tutorial, you will learn how to compare two images for similarity using siamese networks and the Keras/TensorFlow deep learning libraries.
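The defining trait of a siamese network is that both inputs pass through one encoder with shared weights, and the distance between the two embeddings scores similarity. The sketch below illustrates just that structure in NumPy; the random projection stands in for the tutorial's trained CNN, and all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W_shared = rng.normal(size=(784, 32))   # ONE weight matrix, used by both branches

def embed(x):
    """Shared encoder: identical weights for every input (the 'siamese' part)."""
    return np.tanh(x @ W_shared)

def distance(a, b):
    """Euclidean distance between embeddings; smaller means more similar."""
    return np.linalg.norm(embed(a) - embed(b))

img = rng.normal(size=784)                    # stand-in for a flattened 28x28 image
near = img + 0.01 * rng.normal(size=784)      # slightly perturbed copy
far = rng.normal(size=784)                    # unrelated image

d_near = distance(img, near)
d_far = distance(img, far)
print(d_near, d_far)                          # perturbed copy embeds closer
```

Training (with contrastive or triplet loss, as in the tutorial) shapes the encoder so that this distance reflects semantic similarity rather than raw pixel proximity.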
Upgrade Your DNN Training with Amazon SageMaker Debugger
How to increase your efficiency and reduce cost when training in the cloud.
Libraries & Code
Network-to-Network Translation with Conditional Invertible Neural Networks
Caer simplifies your approach towards Computer Vision by abstracting away unnecessary boilerplate code enabling maximum flexibility.
A face database with a large number of high-quality attribute annotations.
Papers & Publications
Multi-Scale 2D Temporal Adjacent Networks for Moment Localization with Natural Language
Abstract: We address the problem of retrieving a specific moment from an untrimmed video by natural language. It is a challenging problem because a target moment may take place in the context of other temporal moments in the untrimmed video. Existing methods cannot tackle this challenge well since they do not fully consider the temporal contexts between temporal moments. In this paper, we model the temporal context between video moments by a set of predefined two-dimensional maps under different temporal scales. For each map, one dimension indicates the starting time of a moment and the other indicates the duration. These 2D temporal maps can cover diverse video moments with different lengths, while representing their adjacent contexts at different temporal scales. Based on the 2D temporal maps, we propose a Multi-Scale 2D Temporal Adjacent Network (MS-2D-TAN), a single-shot framework for moment localization. It is capable of encoding the adjacent temporal contexts at each scale, while learning discriminative features for matching video moments with referring expressions. We evaluate the proposed MS-2D-TAN on three challenging benchmarks, i.e., Charades-STA, ActivityNet Captions, and TACoS, where our MS-2D-TAN outperforms the state of the art.
A Note on Data Biases in Generative Models
Abstract: It is tempting to think that machines are less prone to unfairness and prejudice. However, machine learning approaches compute their outputs based on data. While biases can enter at any stage of the development pipeline, models are particularly prone to mirroring the biases of the datasets they are trained on and therefore do not necessarily reflect truths about the world but, primarily, truths about the data. To raise awareness about the relationship between modern algorithms and the data that shape them, we use a conditional invertible neural network to disentangle the dataset-specific information from the information which is shared across different datasets. In this way, we can project the same image onto different datasets, thereby revealing their inherent biases. We use this methodology to (i) investigate the impact of dataset quality on the performance of generative models, (ii) show how societal biases of datasets are replicated by generative models, and (iii) present creative applications through unpaired transfer between diverse datasets such as photographs, oil portraits, and animes. Our code and an interactive demonstration are available at this https URL.