Deep Learning Weekly Issue #136

Acquisitions from Snap and Intel, data augmentation from Google, HuggingFace tokenizers, TF 2.1, and more...

Jameson Toole
Jan 15, 2020

Hey folks,

Happy new year! This decade in deep learning, we bring you deepfakes from TikTok, acquisitions by Snapchat and Intel, an AI board from Arduino, and a new AI workflow tool from Lyft.

You'll also find a summary of trends from NeurIPS 2019, transformers learning to play chess, AI applied to economic research at Amazon, a new augmentation technique from Google, a tokenizer library from HuggingFace, and more.

As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly

Until next week!

Industry

Introducing Flyte: A Cloud Native Machine Learning and Data Processing Platform

Lyft open-sources Flyte, its machine learning and data processing workflow platform.

Snapchat quietly acquired AI Factory, the company behind its new Cameos feature, for $166M

AI-based creativity tools continue to be a massive value add for social networks increasingly relying on computer vision.

TikTok-owner ByteDance reportedly built a deepfake maker

The deepfake wars continue to heat up.

Samsung has made an invisible AI-powered keyboard for your phone

At CES, Samsung debuted a computer vision-based keyboard that lets users type on any flat surface.

Arduino goes PRO at CES 2020

Arduino announced a new $99 board designed for AI workloads.

Intel buys AI chipmaker Habana for $2 billion

With so many AI chip makers, consolidation is likely.

Datasets

Action Genome: Actions as Composition of Spatio-temporal Scene Graphs

A new dataset, representation, and model for decomposing actions in a video into structured graph data.

[Github] nicolas-gervais/predicting-car-price-from-scraped-data

64,000 pictures of cars, labeled by make, model, year, price, horsepower, body style, etc.

Learning

TensorFlow 2.1 released

This release consolidates CPU and GPU flavors, brings TPU support to Keras, and will be the last major release supporting Python 2.
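For the Keras TPU support in particular, a minimal sketch of wiring a Keras model to a TPU with TF 2.1's distribution strategy API might look like the following (the TPU address and the model are placeholders for illustration):

```python
import tensorflow as tf

# Placeholder TPU address; on Cloud TPU / Colab the resolver can find it for you.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="grpc://10.0.0.2:8470")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# In TF 2.1 the TPU strategy still lives under the experimental namespace.
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # Any Keras model built inside the scope is replicated across the TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```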

A very unlikely chess game

Transformers learn how to play chess better than amateurs.

Key trends from NeurIPS 2019

Chip Huyen summarizes some of the biggest trends at NeurIPS 2019.

Google AI chief Jeff Dean interview: Machine learning trends in 2020

Jeff Dean talks transformers, specialized hardware, and robots.

Amazon at AEA: The crossroads of economics and AI

Amazon applies deep learning to create additional data for economic models.

Libraries & Code

[Github] google-research/augmix

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
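The core idea is to mix several short augmentation chains back into the clean image with random convex weights. Below is a minimal sketch of that mixing step, not the reference implementation; `augment_ops` is a hypothetical list of image-to-image functions, and the per-chain depth is fixed rather than sampled.

```python
import numpy as np

def augmix(image, augment_ops, width=3, depth=2, alpha=1.0, rng=np.random):
    """Mix `width` short augmentation chains back into the original image."""
    ws = rng.dirichlet([alpha] * width)   # convex weights over the chains
    m = rng.beta(alpha, alpha)            # weight between clean and mixed image
    mixed = np.zeros_like(image)
    for w in ws:
        chain = rng.choice(augment_ops, size=depth, replace=True)
        aug = image.copy()
        for op in chain:                  # apply a random chain of operations
            aug = op(aug)
        mixed += w * aug
    return m * image + (1.0 - m) * mixed  # interpolate with the clean image
```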

BlazeFace available for TensorFlow.js

The model detects faces and facial features in real-time.

[Github] huggingface/tokenizers

HuggingFace introduces fast tokenizers to go with their language models.
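A small usage sketch of the new library, assuming a BERT WordPiece vocab file is already on disk (the file path is a placeholder):

```python
from tokenizers import BertWordPieceTokenizer

# Load a WordPiece tokenizer from a local vocab file (placeholder path).
tokenizer = BertWordPieceTokenizer("bert-base-uncased-vocab.txt", lowercase=True)

output = tokenizer.encode("Deep Learning Weekly, issue #136.")
print(output.tokens)   # wordpiece tokens
print(output.ids)      # vocabulary ids
print(output.offsets)  # character offsets back into the original string
```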

Papers & Publications

Plug and Play Language Models: A Simple Approach to Controlled Text Generation

Abstract: …. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.
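To make the sampling step concrete, here is a heavily simplified sketch of the hidden-state perturbation the abstract describes; this is not the authors' code, and `attr_clf` is a hypothetical attribute classifier applied to a detached LM hidden state.

```python
import torch
import torch.nn.functional as F

def perturb_hidden(hidden, attr_clf, target, step_size=0.02, n_steps=3):
    """Nudge a detached LM hidden state toward a target attribute class."""
    delta = torch.zeros_like(hidden, requires_grad=True)
    for _ in range(n_steps):
        logits = attr_clf(hidden + delta)        # attribute prediction
        loss = F.cross_entropy(logits, target)   # distance from the desired attribute
        loss.backward()
        with torch.no_grad():
            grad = delta.grad
            delta -= step_size * grad / (grad.norm() + 1e-8)  # descend the attribute loss
            delta.grad.zero_()
    # The perturbed hidden state is then fed back to the LM head for sampling.
    return (hidden + delta).detach()
```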

Rendering Synthetic Objects into Legacy Photographs

Abstract: We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference. Further, our study shows that our method is competitive with other insertion methods while requiring less scene information. We also collected new illumination and reflectance datasets; renderings produced by our system compare well to ground truth. Our system has applications in the movie and gaming industry, as well as home decorating and user content creation, among others.
