Deep Learning Weekly: Issue #278
OpenAI's Minecraft-playing bot, Binance's real-time machine learning pipeline, fundamental concepts of diffusion and classifier guidance, and more
Hey Folks,
This week in deep learning, we bring you OpenAI's Minecraft-playing bot using imitation learning and VPT, Binance's real-time end-to-end machine learning pipeline, fundamental concepts of diffusion and classifier guidance, and a paper on discovering faster matrix multiplication algorithms with reinforcement learning.
You may also enjoy Holistic Evaluation of Language Models (HELM), a scikit-learn compatible neural network library called skorch, efficient multi-objective neural architecture search with Ax, a paper on exemplar-based image editing with diffusion models, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
A bot that watched 70,000 hours of Minecraft videos could unlock AI’s next big thing
OpenAI has built the best Minecraft-playing bot yet by having it watch 70,000 hours of video of people playing the popular computer game and applying Video Pre-Training (VPT).
Language Models are Changing AI: The Need for Holistic Evaluation
A new benchmarking approach that considers the full range of societal considerations, Holistic Evaluation of Language Models (HELM), was recently developed at the Center for Research on Foundation Models.
MAP Once, Run Anywhere: MONAI Introduces Framework for Deploying Medical Imaging AI Apps
Medical-imaging leaders, including UCSF, Cincinnati Children’s Hospital and startup Qure.ai, are adopting MONAI Deploy to turn research breakthroughs into clinical impact.
Sleep Can Keep AI From Catastrophic Forgetting
In a new study, researchers analyzed the mechanisms behind catastrophic forgetting and the role of sleep in preventing it. Instead of using CNNs, they used a “spiking neural network” that more closely mimics the human brain.
Transcription-as-a-service startup Deepgram lands $47M in funding
Deepgram, a Transcription-as-a-Service startup with a multi-language voice recognition engine, announced that it has raised an additional $47 million to complete its Series B round.
MLOps
A Colab tutorial that covers how to use Great Expectations, a tool that helps you monitor your data quality.
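For a flavor of the workflow, here is a minimal sketch using Great Expectations' pandas-style API; the file name, column names, and expectations are hypothetical stand-ins for whatever dataset the tutorial uses:

```python
import great_expectations as ge

# Load a CSV as a Great Expectations dataset (a wrapped pandas DataFrame).
df = ge.read_csv("transactions.csv")  # hypothetical file

# Declare expectations about the data.
df.expect_column_values_to_not_be_null("user_id")
df.expect_column_values_to_be_between("amount", min_value=0, max_value=1_000_000)
df.expect_column_values_to_be_in_set("currency", ["USD", "EUR", "BTC"])

# Validate the dataset against every expectation declared above.
results = df.validate()
print(results.success)
```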
Using MLOps to Build a Real-time End-to-End Machine Learning Pipeline
Binance describes the design and implementation of the real-time end-to-end machine learning pipeline that they built.
An article that considers how Flyte enables orchestrating ML pipelines while abstracting away the underlying infrastructure.
Learning
How Spotify Uses Semantic Search for Podcasts
A deep dive into, and small-scale reimplementation of, Spotify’s semantic search for podcasts.
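As a rough illustration of the bi-encoder retrieval pattern such a system rests on (the model choice, episode texts, and query below are stand-ins, not Spotify's actual setup):

```python
from sentence_transformers import SentenceTransformer, util

# An off-the-shelf bi-encoder; illustrative, not Spotify's production model.
model = SentenceTransformer("all-MiniLM-L6-v2")

episodes = [
    "Interview with a climber who summited Everest without oxygen",
    "A weekly roundup of machine learning research papers",
    "True-crime deep dive into a 1970s cold case",
]

# Embed episode descriptions offline; embed each query at search time.
episode_emb = model.encode(episodes, convert_to_tensor=True)
query_emb = model.encode("podcasts about mountaineering", convert_to_tensor=True)

# Rank episodes by cosine similarity between query and episode embeddings.
scores = util.cos_sim(query_emb, episode_emb)[0]
best = int(scores.argmax())
print(episodes[best], float(scores[best]))
```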
How to Build a Text Classification Model Using HuggingFace Transformers and Comet
This article shows you how to build a text classification model with Hugging Face Transformers (including a state-of-the-art pre-trained model) and how to use Comet to track your model's experiments.
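A minimal sketch of the pairing, assuming an off-the-shelf sentiment model rather than the article's fine-tuned one; the Comet project name is hypothetical and COMET_API_KEY is expected in the environment:

```python
import comet_ml
from transformers import pipeline

# Hypothetical project name; the API key is read from the environment.
experiment = comet_ml.Experiment(project_name="text-classification-demo")

# A pre-trained text-classification pipeline (model choice is illustrative).
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

for text in ["I loved this movie!", "The service was terrible."]:
    pred = classifier(text)[0]  # {"label": ..., "score": ...}
    experiment.log_metric("confidence", pred["score"])
    experiment.log_text(text, metadata={"label": pred["label"]})

experiment.end()
```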
Text-to-Image: Diffusion, Text Conditioning, Guidance, Latent Space
Eugene Yan breaks down the fundamental concepts and papers on diffusion, text conditioning, guidance, and latent spaces.
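The guidance part of that breakdown centers on classifier-free guidance, which blends the model's unconditional and text-conditional noise predictions at sampling time. A minimal sketch of the update:

```python
import torch

def classifier_free_guidance(eps_uncond, eps_cond, guidance_scale):
    # eps_hat = eps_uncond + s * (eps_cond - eps_uncond):
    # s = 1 recovers the plain conditional prediction;
    # s > 1 pushes samples toward the text prompt.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy usage: random tensors stand in for the denoiser's two predictions.
eps_u = torch.randn(1, 4, 64, 64)
eps_c = torch.randn(1, 4, 64, 64)
eps_hat = classifier_free_guidance(eps_u, eps_c, guidance_scale=7.5)
```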
Efficient Multi-Objective Neural Architecture Search with Ax
A tutorial that shows how to use Ax to run multi-objective NAS for a simple neural network model on the popular MNIST dataset.
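A rough sketch of the AxClient loop for two competing objectives; the search space, metric names, and stubbed training routine below are illustrative, not the tutorial's exact code:

```python
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

def train_and_eval(params):
    # Stand-in for real training; returns (validation accuracy, parameter count).
    return 0.9, params["hidden_size"] * 784

ax_client = AxClient()
ax_client.create_experiment(
    name="mnist_nas_demo",
    parameters=[
        {"name": "hidden_size", "type": "range", "bounds": [16, 256]},
        {"name": "lr", "type": "range", "bounds": [1e-4, 1e-1], "log_scale": True},
    ],
    # Maximize accuracy while minimizing model size.
    objectives={
        "accuracy": ObjectiveProperties(minimize=False),
        "num_params": ObjectiveProperties(minimize=True),
    },
)

for _ in range(20):
    params, trial_index = ax_client.get_next_trial()
    acc, n_params = train_and_eval(params)
    ax_client.complete_trial(trial_index, raw_data={"accuracy": acc, "num_params": n_params})

print(ax_client.get_pareto_optimal_parameters())
```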
Building a TensorFlow Lite based computer vision emoji input device with OpenMV
A guide that covers using TinyML on an Arm Cortex-M based device to create a dedicated input device for converting gestures into emojis.
Libraries & Code
jameslyons/python_speech_features
A library that provides common speech features for ASR including MFCCs and filterbank energies.
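Typical usage, following the library's README (the WAV path is a placeholder):

```python
import scipy.io.wavfile as wav
from python_speech_features import mfcc, logfbank

# Read a mono WAV file (placeholder path).
rate, sig = wav.read("english.wav")

# 13 MFCCs per 25 ms frame with a 10 ms hop (the library's defaults).
mfcc_feat = mfcc(sig, rate)
fbank_feat = logfbank(sig, rate)

print(mfcc_feat.shape)   # (num_frames, 13)
print(fbank_feat.shape)  # (num_frames, 26)
```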
skorch-dev/skorch
A scikit-learn compatible neural network library that wraps PyTorch.
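A minimal sketch in the spirit of the skorch README, wrapping a toy PyTorch module in a scikit-learn-style estimator:

```python
import numpy as np
import torch.nn as nn
from skorch import NeuralNetClassifier
from sklearn.datasets import make_classification

X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X, y = X.astype(np.float32), y.astype(np.int64)

class MyModule(nn.Module):
    def __init__(self, num_units=10):
        super().__init__()
        self.dense = nn.Linear(20, num_units)
        self.nonlin = nn.ReLU()
        self.output = nn.Linear(num_units, 2)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, X):
        return self.softmax(self.output(self.nonlin(self.dense(X))))

# The wrapper exposes fit / predict / predict_proba, so it composes with
# scikit-learn tooling such as Pipeline and GridSearchCV.
net = NeuralNetClassifier(MyModule, max_epochs=10, lr=0.1)
net.fit(X, y)
y_proba = net.predict_proba(X)
```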
Build, train, and fine-tune production-ready deep learning SOTA vision models.
Papers & Publications
Discovering faster matrix multiplication algorithms with reinforcement learning
Abstract:
Improving the efficiency of algorithms for fundamental computations can have a widespread impact, as it can affect the overall speed of a large amount of computations. Matrix multiplication is one such primitive task, occurring in many systems—from neural networks to scientific computing routines. The automatic discovery of algorithms using machine learning offers the prospect of reaching beyond human intuition and outperforming the current best human-designed algorithms. However, automating the algorithm discovery procedure is intricate, as the space of possible algorithms is enormous. Here we report a deep reinforcement learning approach based on AlphaZero for discovering efficient and provably correct algorithms for the multiplication of arbitrary matrices. Our agent, AlphaTensor, is trained to play a single-player game where the objective is finding tensor decompositions within a finite factor space. AlphaTensor discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes. Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor’s algorithm improves on Strassen’s two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago. We further showcase the flexibility of AlphaTensor through different use-cases: algorithms with state-of-the-art complexity for structured matrix multiplication and improved practical efficiency by optimizing matrix multiplication for runtime on specific hardware. Our results highlight AlphaTensor’s ability to accelerate the process of algorithmic discovery on a range of problems, and to optimize for different criteria.
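For context, Strassen's construction multiplies 2 × 2 matrices with 7 scalar multiplications instead of the naive 8; each product m_i below corresponds to one rank-1 term in a rank-7 decomposition of the matrix-multiplication tensor, the kind of decomposition AlphaTensor's game searches for at larger sizes. An illustrative implementation:

```python
import numpy as np

def strassen_2x2(A, B):
    # Strassen (1969): 7 multiplications instead of 8 for 2x2 matrices.
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A, B = np.random.randn(2, 2), np.random.randn(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)
```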
Paint by Example: Exemplar-based Image Editing with Diffusion Models
Abstract:
Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach causes obvious fusing artifacts. We carefully analyze this and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary-shape mask for the exemplar image and leverage classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward pass of the diffusion model without any iterative optimization. We demonstrate that our method achieves impressive performance and enables controllable editing on in-the-wild images with high fidelity.
TorchScale: Transformers at Scale
Abstract:
Large Transformers have achieved state-of-the-art performance across many tasks. Most open-source libraries on scaling Transformers focus on improving training or inference with better parallelization. In this work, we present TorchScale, an open-source toolkit that allows researchers and developers to scale up Transformers efficiently and effectively. TorchScale has the implementation of several modeling techniques, which can improve modeling generality and capability, as well as training stability and efficiency. Experimental results on language modeling and neural machine translation demonstrate that TorchScale can successfully scale Transformers to different sizes without tears.
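Instantiating a model is meant to take only a few lines; a sketch following the repository's README example (config fields beyond vocab_size are left at their defaults):

```python
from torchscale.architecture.config import EncoderConfig
from torchscale.architecture.encoder import Encoder

# A BERT-like encoder backbone; see the TorchScale README for decoder and
# encoder-decoder variants.
config = EncoderConfig(vocab_size=64000)
model = Encoder(config)
print(model)
```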