

Deep Learning Weekly Issue #140
Microsoft's 17B-parameter model, inside OpenAI, binarized neural networks, a new Kaggle competition, and more...
Hey folks,
This week in deep learning we bring you a very (very) large language model from Microsoft, a profile of OpenAI, a new video editing tool from Google, a roundup of AI’s impact on cybersecurity, and a new Kaggle competition on abstraction and reasoning.
You may also enjoy computer vision recipes from Microsoft, an inference engine for binarized neural networks, a guide to training transformers from scratch, an implementation of a promising motion transfer model, a new framework for learning visual representations, and more.
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
Turing-NLG: A 17-billion-parameter language model by Microsoft
Not wanting to be left out of the biggest-model competition, Microsoft announces a 17B-parameter language model, roughly twice as large as Nvidia’s MegatronLM.
The messy, secretive reality behind OpenAI’s bid to save the world
An in-depth profile of OpenAI and the tension between mission and business model.
AutoFlip: An Open Source Framework for Intelligent Video Reframing
A new video editing tool from Google analyzes frame content to determine the best cropping for different formats.
VentureBeat Special Issue on AI and Security
An insightful group of articles covering everything from DeepFakes to adversarial attacks.
Kaggle: Abstraction and Reasoning Challenge
A new Kaggle challenge on abstract reasoning with $20,000 in prizes.
The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
Gary Marcus on the next decade of AI.
Mobile + Edge
How analog in-memory computing can solve power challenges of edge AI inference
Performing matrix operations in memory to speed computation and reduce power consumption in embedded systems.
Highly optimized inference engine for Binarized Neural Networks
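For a sense of why inference engines for binarized networks can be so heavily optimized: with weights and activations restricted to ±1, each dot product collapses to an XNOR plus a popcount over packed bits. Below is a minimal NumPy illustration of that identity (my own sketch, not the linked engine's API; the bit-packing scheme and vector size are arbitrary):

```python
import numpy as np

def binary_dot(a_bits, w_bits, n):
    """Dot product of two {-1, +1} vectors stored as bit-packed uint8 arrays
    (bit == 1 encodes +1). n is the original, unpacked vector length."""
    xnor = np.invert(a_bits ^ w_bits)             # bit is 1 wherever the signs match
    matches = int(np.unpackbits(xnor)[:n].sum())  # popcount over the first n bits
    return 2 * matches - n                        # +1 per match, -1 per mismatch

rng = np.random.default_rng(0)
n = 64
a = rng.choice([-1, 1], size=n)
w = rng.choice([-1, 1], size=n)
assert binary_dot(np.packbits(a > 0), np.packbits(w > 0), n) == int(a @ w)
```

Real engines apply the same trick across whole weight matrices with vectorized popcount instructions, which is where most of the speed and energy savings come from.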
Build a Touchless Swipe iOS App Using ML Kit’s Face Detection API
A tutorial on gesture control via face landmark recognition.
Processing Tweets Using Natural Language and Create ML on iOS
Leveraging native iOS libraries to perform tasks like tokenization, named entity recognition, and sentiment analysis.
Learning
Google Brain and DeepMind researchers attack reinforcement learning efficiency
Two papers out of Google Brain and DeepMind detail RL algorithms for decreasing variance among agents and training agents in parallel.
How to train a new language model from scratch using Transformers and Tokenizers
A great tutorial from HuggingFace on training popular transformer architectures from scratch.
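For a quick sense of the workflow the tutorial walks through (train a byte-level BPE tokenizer, instantiate a RoBERTa-style model from a config, then run masked-language-model training with the Trainer), here is a condensed sketch. The corpus path, output directory, and hyperparameters are placeholders, not the tutorial's exact values:

```python
import os
from tokenizers import ByteLevelBPETokenizer
from transformers import (DataCollatorForLanguageModeling, LineByLineTextDataset,
                          RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast,
                          Trainer, TrainingArguments)

# 1. Train a byte-level BPE tokenizer on a plain-text corpus ("corpus.txt" is a placeholder).
os.makedirs("my-model", exist_ok=True)
bpe = ByteLevelBPETokenizer()
bpe.train(files=["corpus.txt"], vocab_size=52_000, min_frequency=2,
          special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
bpe.save_model("my-model")  # writes vocab.json and merges.txt

# 2. Build a small RoBERTa-style model from scratch (randomly initialized, no pretrained weights).
config = RobertaConfig(vocab_size=52_000, max_position_embeddings=514,
                       num_hidden_layers=6, num_attention_heads=12, type_vocab_size=1)
tokenizer = RobertaTokenizerFast.from_pretrained("my-model", model_max_length=512)
model = RobertaForMaskedLM(config=config)

# 3. Train with the masked language modeling objective.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="./my-model", num_train_epochs=1,
                         per_device_train_batch_size=16, save_steps=10_000)
Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()
```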
A web application making it easier to explore and save the output from Google’s musical composition transformer model.
How to build a brain from scratch
Lecture notes from Christopher Summerfield’s class at Oxford.
A nice writeup of getting started with JAX.
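If you want a taste before reading, the core of JAX is composing transformations like grad and jit over pure NumPy-style functions. A tiny self-contained example (my own, not from the writeup):

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # mean squared error of a linear model
    return jnp.mean((jnp.dot(x, w) - y) ** 2)

grad_loss = jax.jit(jax.grad(loss))  # compile the gradient function with XLA

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 3))
true_w = jnp.array([1.0, -2.0, 0.5])
y = x @ true_w

w = jnp.zeros(3)
for _ in range(100):
    w = w - 0.1 * grad_loss(w, x, y)  # plain gradient descent on the MSE loss

print(w)  # should approach [1.0, -2.0, 0.5]
```

Because grad_loss is jitted, the gradient computation is compiled once and reused on every iteration.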
Libraries & Code
[Github] microsoft/computervision-recipes
Best practices, code samples, and documentation for computer vision.
[Github] AliaksandrSiarohin/first-order-model
This repository contains the source code for the paper "First Order Motion Model for Image Animation."
Papers & Publications
A Simple Framework for Contrastive Learning of Visual Representations
Abstract: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
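To make the contrastive objective concrete, here is a minimal PyTorch sketch of a normalized temperature-scaled cross-entropy (NT-Xent) loss over two batches of projected views; the shapes and temperature value are illustrative, and this is a simplification rather than the authors' reference implementation:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: [N, d] projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2N, d], unit-norm rows
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)  # i's positive is its other view
    return F.cross_entropy(sim, targets)                # 2N-way classification per view

loss = nt_xent_loss(torch.randn(256, 128), torch.randn(256, 128))
```

In the paper's setup, z1 and z2 would come from a projection head (the learnable nonlinear transformation mentioned above) applied to encoder features of two random augmentations of each image.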
CRVOS: Clue Refining Network for Video Object Segmentation
Abstract: Encoder-decoder based methods for semi-supervised video object segmentation (Semi-VOS) have received extensive attention due to their superior performance. However, most of them rely on complex intermediate networks that generate strong specifiers in order to be robust against challenging scenarios, which is quite inefficient when dealing with relatively simple ones. To solve this problem, we propose a real-time Clue Refining Network for Video Object Segmentation (CRVOS) which does not have a complex intermediate network. In this work, we propose a simple specifier, referred to as the Clue, which consists of the previous frame's coarse mask and coordinate information. We also propose a novel refine module which achieves higher performance than general ones by using a deconvolution layer instead of bilinear upsampling. Our proposed network, CRVOS, is the fastest method with competitive performance. On the DAVIS16 validation set, CRVOS achieves 61 FPS and a J&F score of 81.6%.
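As an illustration of the design choice highlighted at the end of the abstract, a learned deconvolution (transposed convolution) for upsampling instead of fixed bilinear interpolation, here is a minimal PyTorch sketch of the two variants; the channel counts and layer layout are placeholders, not the authors' architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeconvRefine(nn.Module):
    """2x upsampling with a learned transposed convolution, then a 3x3 conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)
        self.conv = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(F.relu(self.up(x)))

class BilinearRefine(nn.Module):
    """2x upsampling with fixed bilinear interpolation, then a 3x3 conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return self.conv(F.relu(x))

x = torch.randn(1, 64, 30, 54)            # e.g. a coarse decoder feature map
print(DeconvRefine(64, 32)(x).shape)      # torch.Size([1, 32, 60, 108])
print(BilinearRefine(64, 32)(x).shape)    # torch.Size([1, 32, 60, 108])
```

Both produce the same output shape; the difference is that the deconvolution's upsampling weights are learned during training rather than fixed.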