Deep Learning Weekly: Issue #200
Fairness of AI systems, Code generation, Google I/O 2021, PyTorch Lightning Flash, Transformers for image classification, and more
This is Issue #200: what a milestone! Thank you for being a part of the DLW community and motivating us to curate this newsletter every week. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Fairlearn is an open-source, community-driven project that helps data scientists improve the fairness of AI systems. It includes many guides, tutorials, and use cases.
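To give a flavor of the group-fairness metrics Fairlearn provides, here is a minimal hand-rolled sketch of the demographic parity difference (the gap in selection rates between sensitive groups). This is for illustration only and is not Fairlearn's actual API:

```python
# Sketch of the demographic parity difference, one of the group-fairness
# metrics Fairlearn offers. Hand-rolled for illustration; Fairlearn's
# real API differs (it works with MetricFrame and sensitive features).

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate across sensitive groups."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Toy example: group "a" is selected 75% of the time, group "b" only 25%,
# so the demographic parity difference is 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A value of 0 would mean every group is selected at the same rate; larger values indicate a bigger disparity.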
Vertex AI is meant to make it easier for developers to deploy and maintain their AI models.
Several European privacy and digital rights organizations announced that they’ve filed legal complaints against the controversial facial recognition company Clearview AI.
Microsoft is now using OpenAI’s massive GPT-3 natural language model in its low-code Power Apps service to translate spoken text into code.
In this video, AI Lead Laurence Moroney gives the top 10 AI and ML developer updates from this year’s Google I/O.
This startup helps organisations adopt AI in their business, and has already signed with at least seven UK government entities as well as private companies like Red Bull and Virgin Media.
Mobile & Edge
Arm's 2021 Total Compute launch provides solutions for all consumer device markets, addressing the explosion of AI and ML use cases across consumer devices.
TensorFlow can now run on microcontrollers, and this Google I/O workshop shows how, with demos and interesting use cases.
Replacing digital circuits with analog circuits and photonics can improve performance and power efficiency when running neural network inference, but it's not that simple.
DeepMind is releasing AndroidEnv, an open-source platform for Reinforcement Learning research built on top of Android OS, allowing agents to interact with a wide variety of apps and services.
This post presents the Discrete Fourier Transform, a method heavily used in signal processing, as a neural network.
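The core idea is that the DFT is just a single fully connected layer with no bias and fixed complex weights. A minimal sketch in plain Python (our own illustration, not code from the post):

```python
import cmath

# The DFT of a length-N signal is a matrix-vector product: a single
# dense "layer" with no bias, whose fixed complex weights are
# W[k][t] = exp(-2j * pi * k * t / N).

def dft_layer_weights(n):
    """The N x N complex weight matrix of the DFT 'layer'."""
    return [[cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)]
            for k in range(n)]

def dft(signal):
    n = len(signal)
    weights = dft_layer_weights(n)
    # "Forward pass": each output frequency bin is a weighted sum of inputs.
    return [sum(w * x for w, x in zip(row, signal)) for row in weights]

# A constant signal has all its energy in the 0-frequency bin.
spectrum = dft([1.0, 1.0, 1.0, 1.0])
print([round(abs(c), 6) for c in spectrum])  # [4.0, 0.0, 0.0, 0.0]
```

Because the transform is linear, the weights could also be left trainable, which is the angle the post explores.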
This paper analyzes how the architecture of a neural network impacts its internal representations, by comparing models with the same architecture but different widths and depths.
Autodesk reports a five-fold increase in performance when running inference for the NLP models used in its customer support chatbot on Inferentia, AWS's ML chip, compared to a GPU instance.
This search engine for ML papers is built by labml.ai and gives an interesting ranking of the papers.
Libraries & Code
Lightning Flash is a library from the creators of PyTorch Lightning to enable quick baselining and experimentation with state-of-the-art models for popular Deep Learning tasks.
The YFCC100M is the largest publicly and freely usable multimedia collection, containing metadata for around 99.2 million photos and 0.8 million videos from Flickr.
This repository contains very well-documented demos made with the Transformers library by HuggingFace.
Papers & Publications
Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.
Deep Convolutional Neural Networks (CNNs) have long been the architecture of choice for computer vision tasks. Recently, Transformer-based architectures like Vision Transformer (ViT) have matched or even surpassed ResNets for image classification. However, details of the Transformer architecture -- such as the use of non-overlapping patches -- lead one to wonder whether these networks are as robust. In this paper, we perform an extensive study of a variety of different measures of robustness of ViT models and compare the findings to ResNet baselines. We investigate robustness to input perturbations as well as robustness to model perturbations. We find that when pre-trained with a sufficient amount of data, ViT models are at least as robust as the ResNet counterparts on a broad range of perturbations. We also find that Transformers are robust to the removal of almost any single layer, and that while activations from later layers are highly correlated with each other, they nevertheless play an important role in classification.
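To make the "non-overlapping patches" detail concrete, here is an illustrative sketch of ViT-style patchification, where an image is cut into non-overlapping patches that become the Transformer's input tokens (our own toy example, not the paper's code):

```python
# Illustrative sketch of ViT-style patchification: the image is split
# into non-overlapping p x p patches, each flattened into a vector that
# the Transformer treats as one input token. Toy example, not the
# paper's implementation (real ViTs also apply a learned projection).

def patchify(image, p):
    h, w = len(image), len(image[0])
    assert h % p == 0 and w % p == 0, "image must divide evenly into patches"
    patches = []
    for i in range(0, h, p):
        for j in range(0, w, p):
            # Flatten one p x p patch, row by row, into a single token vector.
            patches.append([image[i + di][j + dj]
                            for di in range(p) for dj in range(p)])
    return patches

# A 4x4 "image" becomes four 2x2 patches -> four tokens of length 4.
img = [[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]]
print(patchify(img, 2))
# [[0, 1, 4, 5], [2, 3, 6, 7], [8, 9, 12, 13], [10, 11, 14, 15]]
```

Because each pixel lands in exactly one patch, perturbations to a patch boundary affect only the tokens on either side of it, which is part of why the paper probes robustness to input perturbations.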
Text generation has become one of the most important yet challenging tasks in natural language processing (NLP). The resurgence of deep learning has greatly advanced this field by neural generation models, especially the paradigm of pretrained language models (PLMs). In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation. As the preliminaries, we present the general task definition and briefly describe the mainstream architectures of PLMs for text generation. As the core content, we discuss how to adapt existing PLMs to model different input data and satisfy special properties in the generated text. We further summarize several important fine-tuning strategies for text generation. Finally, we present several future directions and conclude this paper. Our survey aims to provide text generation researchers a synthesis and pointer to related research.