Deep Learning Weekly Issue #140
Microsoft's 17B-parameter model, inside OpenAI, binarized neural networks, a new Kaggle competition, and more...
Jameson Toole | Feb 19, 2020
This week in deep learning we bring you a very (very) large language model from Microsoft, a profile of OpenAI, a new video editing tool from Google, a roundup of AI's impact on cybersecurity, and a new Kaggle competition on abstraction and reasoning.
You may also enjoy computer vision recipes from Microsoft, an inference engine for binarized neural networks, a guide to training transformers from scratch, an implementation of a promising motion transfer model, a new framework for learning visual representations, and more.
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Not wanting to be left out of the biggest-model competition, Microsoft announces Turing-NLG, a 17B-parameter model roughly twice the size of Nvidia's Megatron-LM.
An in-depth profile of OpenAI and the tension between mission and business model.
A new video editing tool from Google analyzes frame content to determine the best cropping for different formats.
An insightful group of articles covering everything from DeepFakes to adversarial attacks.
A new Kaggle challenge on abstract reasoning with $20,000 in prizes.
Gary Marcus on the next decade of AI.
Mobile + Edge
Performing matrix operations in memory to speed computation and reduce power consumption in embedded systems.
A highly optimized inference engine for binarized neural networks.
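The reason binarized networks admit such fast inference engines: with weights and activations constrained to ±1, a dot product collapses to bitwise XOR plus a popcount. A minimal sketch of that identity (the bit-packing convention here is illustrative, not any particular engine's layout):

```python
def pack_signs(v):
    """Pack a ±1 vector into an int bitmask (bit i set means v[i] == +1)."""
    return sum(1 << i for i, x in enumerate(v) if x == 1)

def binary_dot(a_bits, b_bits, n):
    """Dot product of two ±1 vectors of length n given as bitmasks.
    Agreements minus disagreements: n - 2 * popcount(a XOR b)."""
    return n - 2 * bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")

# Example: [+1, -1, +1] . [+1, +1, -1] = 1 - 1 - 1 = -1
a = pack_signs([+1, -1, +1])
b = pack_signs([+1, +1, -1])
```

Optimized engines apply this same identity per machine word, replacing floating-point multiply-accumulates with XNOR/popcount instructions.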
A tutorial on gesture control via face landmark recognition.
Leveraging native iOS libraries to perform tasks like tokenization, named entity recognition, and sentiment analysis.
Two papers out of Google Brain and DeepMind detail RL algorithms that decrease variance among agents and train agents in parallel.
A great tutorial from HuggingFace on training popular transformer architectures from scratch.
A web application making it easier to explore and save the output from Google’s musical composition transformer model.
Lecture notes from Christopher Summerfield’s class at Oxford.
A nice writeup of getting started with JAX.
Libraries & Code
Best Practices, code samples, and documentation for Computer Vision.
This repository contains the source code for the paper "First Order Motion Model for Image Animation."
Papers & Publications
Abstract: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
Abstract: Encoder-decoder methods for semi-supervised video object segmentation (Semi-VOS) have received extensive attention due to their superior performance. However, most of them rely on complex intermediate networks that generate strong specifiers in order to be robust to challenging scenarios, which is quite inefficient when dealing with relatively simple ones. To solve this problem, we propose a real-time Clue Refining Network for Video Object Segmentation (CRVOS) that has no complex intermediate network. In this work, we propose a simple specifier, referred to as the Clue, which consists of the previous frame's coarse mask and coordinate information. We also propose a novel refine module that achieves higher performance than standard ones by using a deconvolution layer instead of bilinear upsampling. Our proposed network, CRVOS, is the fastest method with competitive performance. On the DAVIS16 validation set, CRVOS achieves 61 FPS and a J&F score of 81.6%.
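The swap the abstract describes (learnable deconvolution in place of fixed bilinear upsampling) is easy to see in 1-D: bilinear interpolation is a transposed convolution with a fixed triangular kernel, so learning the kernel strictly generalizes it. A hypothetical sketch, not the CRVOS code:

```python
import numpy as np

def transposed_conv1d(x, kernel, stride=2):
    """1-D transposed convolution: each input element scatters a scaled
    copy of the kernel into the (upsampled) output."""
    out = np.zeros(stride * (len(x) - 1) + len(kernel))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(kernel)] += v * kernel
    return out

# With the fixed triangular kernel [0.5, 1, 0.5], stride-2 transposed
# convolution reproduces linear interpolation between input samples;
# a refine module like CRVOS's learns these weights instead.
up = transposed_conv1d(np.array([1.0, 3.0]), np.array([0.5, 1.0, 0.5]))
```

Here `up` places the inputs 1 and 3 at alternating positions with their linear midpoint 2 in between, matching what bilinear upsampling would produce for the interior samples.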