Deep Learning Weekly - Issue #124: Structured learning and GANs in TF, another viral face-swapper, optimizer benchmarks, and more...

Bringing you everything new and exciting in the world of deep learning, from academia to the grubby depths of industry, every week right to your inbox. Free.

Issue 124

Hey folks,

This week in deep learning we bring you a GAN library for TensorFlow 2.0, another viral face-swapping app, an AI Mahjong player from Microsoft, and surprising results showing random architecture search beating neural architecture search.

You may also enjoy an interview with Yann LeCun on the AI Podcast, a primer on MLIR from Google, a few-shot face-swapping GAN, benchmarks for recent optimizers, a structured learning framework for TensorFlow, and more!

As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.

Until next week!

Industry

Introducing TF-GAN: A lightweight GAN library for TensorFlow 2.0

The TensorFlow team has released some handy code for training GANs with TF 2.0.


Introducing Neural Structured Learning in TensorFlow

A new TensorFlow framework for incorporating structured data into training.


Another convincing deepfake app goes viral prompting immediate privacy backlash [The Verge]

Zao, a new deepfake face swapping app, hit Chinese app stores last week and found itself on a similar viral trajectory to FaceApp.


After 5,000 games, Microsoft’s Suphx AI can defeat top Mahjong players [VentureBeat]

Microsoft takes on Mahjong players and wins. Another game “defeated” by AI.

Learning

Yann LeCun: Deep Learning, Convolutional Neural Networks, and Self-Supervised Learning

Lex Fridman interviews Facebook’s Yann LeCun on the AI Podcast.


Exploring Weight Agnostic Neural Networks

A deeper dive into neural networks without trained weights (sorta).


Tutorial on Graph Neural Networks for Computer Vision and Beyond

A three-part tutorial covering graph neural network approaches to computer vision.


MLIR Primer: A Compiler Infrastructure for the End of Moore’s Law

A great read from Google on intermediate representations and why they'll be increasingly important for deep learning.


A 2019 Guide to Speech Synthesis with Deep Learning

A summary of popular speech synthesis methods.

Libraries & Code

[GitHub] shaoanlu/fewshot-face-translation-GAN

Generative adversarial networks integrating modules from FUNIT and SPADE for few-shot face-swapping.


[GitHub] mgrankin/over9000

A helpful collection of benchmarks for new optimizers (spoiler: RangerLars wins).


Open-sourcing hyperparameter autotuning for fastText

Facebook open-sources a tool to automatically tune its fastText classifier.


TensorFlow 2.0 Release Candidate

Change notes for the latest TF 2.0 release candidate.

Papers & Publications

Evaluating the Search Phase of Neural Architecture Search

Abstract: Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently evaluated solely by comparing their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we present a NAS evaluation framework that includes the search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the random policy outperforms state-of-the-art NAS algorithms; (ii) The results and candidate rankings of NAS algorithms do not reflect the true performance of the candidate architectures; and (iii) The widely used weight sharing strategy negatively impacts the training of good architectures, thus reducing the effectiveness of the search process. We believe that following our evaluation framework will be key to designing NAS strategies that truly discover superior architectures.

For more deep learning news, tutorials, code, and discussion, join us on Slack, Twitter, and GitHub.