Deep Learning Weekly: Issue #200

Fairness of AI systems, Code generation, Google I/O 2021, PyTorch Lightning Flash, Transformers for image classification, and more

Hey folks,

This week in deep learning, we bring you the Fairlearn project, Google I/O 2021, Arm’s edge computing solutions, NLP inference on an AWS chip, and a paper on the future of AI.

You may also enjoy TensorFlow for microcontrollers, GPT-3-powered code generation, a new PyTorch Lightning Flash release, a survey on pretrained language models for text generation, and more!

This is Issue 200, what a milestone! Thank you for being a part of the DLW community and motivating us to curate this newsletter every week. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.

Until next week!


Industry

Improve fairness of AI systems

Fairlearn is an open-source, community-driven project that helps data scientists improve the fairness of AI systems. It contains many guides, tutorials, and use cases.

Google Cloud launches Vertex AI, a new managed machine learning platform

Vertex AI is meant to make it easier for developers to deploy and maintain their AI models.

Clearview AI hit with sweeping legal complaints over controversial face scraping in Europe

Several European privacy and digital rights organizations announced that they’ve filed legal complaints against the controversial facial recognition company Clearview AI.

Microsoft uses GPT-3 to let you code in natural language

Microsoft is now using OpenAI’s massive GPT-3 natural language model in its low-code Power Apps service to translate spoken text into code.

Top 10 AI and ML developer updates from Google I/O 2021

In this video, AI Lead Laurence Moroney gives the top 10 AI and ML developer updates from this year’s Google I/O.

British AI startup Faculty raises $42.5M growth round led by Apax Digital Fund

This startup helps organisations adopt AI into their business, and has already signed with at least seven UK government entities as well as private companies like Red Bull and Virgin Media.

Mobile & Edge

How Arm’s Total Compute solutions will power the next decade of compute

Arm’s 2021 Total Compute launch provides solutions for all consumer device markets and addresses the explosion of AI and ML use cases across consumer devices.

Building with TensorFlow Lite for microcontrollers

TensorFlow can now run on microcontrollers, and this Google I/O workshop explains how, with demos and interesting use cases.

Developers Turn To Analog For Neural Nets

Replacing digital circuits with analog circuits and photonics can improve performance and power efficiency when running neural network inference, but it’s not that simple.

AndroidEnv: The Android Learning Environment

DeepMind is releasing AndroidEnv, an open-source platform for Reinforcement Learning research built on top of Android OS, allowing agents to interact with a wide variety of apps and services.

Learning

The Fourier transform is a neural network

This post presents the Discrete Fourier Transform, a heavily used method in signal processing, as a neural network.
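The post’s core observation is that the DFT is just a dense (fully connected) linear layer with fixed complex weights and no bias. A minimal plain-Python sketch of that idea (illustrative only, not the post’s own code):

```python
import cmath
import math

def dft_matrix(n):
    """Weight matrix of the 'DFT layer': W[k][t] = exp(-2*pi*i*k*t/n)."""
    return [[cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)]
            for k in range(n)]

def dft_layer(signal):
    """Apply the DFT as a single linear layer: output = W @ signal."""
    n = len(signal)
    w = dft_matrix(n)
    return [sum(w[k][t] * signal[t] for t in range(n)) for k in range(n)]

# A pure cosine at frequency 2 concentrates its energy in bins 2 and n-2.
n = 8
signal = [math.cos(2 * math.pi * 2 * t / n) for t in range(n)]
spectrum = dft_layer(signal)
```

Since the weights are a fixed, known matrix, the same layer can be dropped into any deep learning framework as a non-trainable dense layer (or used as an initialization and fine-tuned).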

Do Wide and Deep Networks Learn the Same Things?

This paper analyzes how the architecture of a neural network impacts its internal representations, by comparing models with the same architecture but different widths and depths.

How We Used AWS Inferentia to Boost PyTorch NLP Model Performance by 4.9x for the Autodesk Ava Chatbot

Autodesk shows a 4.9x increase in performance when running inference for the NLP models used in its customer support chatbot on Inferentia, AWS’s ML chip, compared to a GPU instance.

ML Papers Search Engine

This search engine for ML papers, built by labml.ai, provides an interesting ranking of results.

Libraries & Code

Lightning Flash 0.3 — New Tasks, Visualization Tools, Data Pipeline, and Flash Registry API

Lightning Flash is a library from the creators of PyTorch Lightning to enable quick baselining and experimentation with state-of-the-art models for popular Deep Learning tasks.

YFCC100M Core Dataset

The YFCC100M is the largest publicly and freely usable multimedia collection, containing the metadata of around 99.2 million photos and 0.8 million videos from Flickr.

Transformers-Tutorials

This repository contains very well-documented demos made with the Transformers library by HuggingFace.

Papers & Publications

Why AI is Harder Than We Think

Abstract:

Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.

Understanding Robustness of Transformers for Image Classification

Abstract:

Deep Convolutional Neural Networks (CNNs) have long been the architecture of choice for computer vision tasks. Recently, Transformer-based architectures like Vision Transformer (ViT) have matched or even surpassed ResNets for image classification. However, details of the Transformer architecture -- such as the use of non-overlapping patches -- lead one to wonder whether these networks are as robust. In this paper, we perform an extensive study of a variety of different measures of robustness of ViT models and compare the findings to ResNet baselines. We investigate robustness to input perturbations as well as robustness to model perturbations. We find that when pre-trained with a sufficient amount of data, ViT models are at least as robust as the ResNet counterparts on a broad range of perturbations. We also find that Transformers are robust to the removal of almost any single layer, and that while activations from later layers are highly correlated with each other, they nevertheless play an important role in classification.

Pretrained Language Models for Text Generation: A Survey

Abstract:

Text generation has become one of the most important yet challenging tasks in natural language processing (NLP). The resurgence of deep learning has greatly advanced this field by neural generation models, especially the paradigm of pretrained language models (PLMs). In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation. As the preliminaries, we present the general task definition and briefly describe the mainstream architectures of PLMs for text generation. As the core content, we discuss how to adapt existing PLMs to model different input data and satisfy special properties in the generated text. We further summarize several important fine-tuning strategies for text generation. Finally, we present several future directions and conclude this paper. Our survey aims to provide text generation researchers a synthesis and pointer to related research.