Deep Learning Weekly - Issue #120: Scale becomes a unicorn, an AI learns to knit, an object counting API, listening to models train and more...

Bringing you everything new and exciting in the world of deep learning, from academia to the grubby depths of industry, every week, right to your inbox. Free.

Issue 120

Hey folks,

This week in deep learning we bring you news of a new AI unicorn, a model to detect kidney injury 48 hours in advance, an AI that can knit, and an update to ERNIE, an NLP model from Baidu.

You may also enjoy an overview of StyleGANs, a call for comments on the TensorFlow model garden redesign, a project that turns neural network training into sound, a pose estimation model and app for Android, and an open source framework for counting objects in images and video.

As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.

Until next week!

Industry

Scale raises $100M Series C, is now a unicorn.

The data labeling and curation startup has seen impressive growth since its founding in 2016.


Using AI to give doctors a 48-hour head start on life-threatening illness

DeepMind trains a model on electronic medical records to predict acute kidney injury 48 hours before it occurs.


Google inches towards charging for Colab Notebooks

At least one user has received a pop-up suggesting Google is going to test monetization for Colab notebooks in the near future.


RFC: TensorFlow Official Model Garden Redesign

The TensorFlow team has released a number of requests for comments on everything from on-device training to the model garden redesign.

Learning

MIT CSAIL tackles knitting with twin AI tools

Researchers at MIT train a deep learning model to transform 2D knitting instructions into machine-readable actions.


Track human poses in real-time on Android with TensorFlow Lite

Google’s PoseNet model gets a corresponding app along with a parameter update to improve accuracy.


Listening to the neural network gradient norms during training

Using loss values and gradients to generate sound waves so that you can listen to models as they train.
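The core idea can be sketched in a few lines (a minimal illustration of the technique, not the project's actual code): record a scalar such as the gradient norm at each training step, scale it into an audible frequency range, and synthesize a short tone per step, so rising or oscillating gradients become rising or wobbling pitch.

```python
import numpy as np

def sonify_norms(grad_norms, sr=44100, tone_dur=0.05,
                 f_min=200.0, f_max=2000.0):
    """Map a sequence of gradient norms to a sequence of sine tones.

    Each norm is scaled into [f_min, f_max] Hz, so larger gradients
    sound higher-pitched. Returns raw mono audio samples in [-1, 1].
    """
    norms = np.asarray(grad_norms, dtype=float)
    lo, hi = norms.min(), norms.max()
    scaled = (norms - lo) / (hi - lo + 1e-12)      # normalize to [0, 1]
    freqs = f_min + scaled * (f_max - f_min)       # map to a pitch range
    t = np.arange(int(sr * tone_dur)) / sr         # time axis for one tone
    tones = [np.sin(2 * np.pi * f * t) for f in freqs]
    return np.concatenate(tones)

# Example: an exponentially decaying "gradient norm" curve, as you might
# see in healthy training, produces a smoothly falling pitch sweep.
audio = sonify_norms(np.exp(-np.linspace(0, 5, 40)))
```

Writing `audio` to a WAV file or streaming it to the sound card is all that remains; the project linked above does this live during training.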


Open Questions about Generative Adversarial Networks

A look at what we still don't know about GANs, and what researchers would most like to find out.


StyleGAN: Use machine learning to generate and customize realistic images

A nice overview of StyleGANs.

Libraries & Code

[Github] ahmetozlu/tensorflow_object_counting_api

The TensorFlow Object Counting API is an open source framework built on top of TensorFlow and Keras that makes it easy to develop object counting systems.


Weight-agnostic Neural Networks

The code behind the recent weight-agnostic neural networks paper has been released.


[Github] AlexEMG/DeepLabCut

Markerless pose estimation of user-defined features with deep learning for all animals, including humans.

Papers & Publications

U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation

Abstract: We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based methods which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets….
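As described in the abstract, AdaLIN blends instance normalization and layer normalization with a learned mixing weight, letting the model choose per channel how much shape versus texture to change. A rough NumPy sketch of that idea (shapes and parameter handling are illustrative, not the authors' implementation):

```python
import numpy as np

def adalin(x, rho, gamma, beta, eps=1e-5):
    """Adaptive Layer-Instance Normalization for one feature map.

    x:     activations, shape (C, H, W)
    rho:   learned per-channel mixing weight in [0, 1]; rho=1 gives pure
           instance norm, rho=0 gives pure layer norm
    gamma, beta: affine parameters (in U-GAT-IT these come from the
           attention/style pathway)
    """
    # Instance norm: normalize each channel over its own H x W plane.
    mu_in = x.mean(axis=(1, 2), keepdims=True)
    var_in = x.var(axis=(1, 2), keepdims=True)
    x_in = (x - mu_in) / np.sqrt(var_in + eps)

    # Layer norm: normalize over all channels and positions jointly.
    mu_ln = x.mean()
    var_ln = x.var()
    x_ln = (x - mu_ln) / np.sqrt(var_ln + eps)

    rho = np.clip(rho, 0.0, 1.0).reshape(-1, 1, 1)  # keep mixing in [0, 1]
    return gamma * (rho * x_in + (1.0 - rho) * x_ln) + beta

x = np.random.randn(4, 8, 8)
y = adalin(x, rho=np.full(4, 0.5), gamma=1.0, beta=0.0)
```

Because rho is learned per channel (and clipped back into [0, 1] during training), the network can adapt the normalization statistics to each dataset rather than committing to one scheme.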


ERNIE 2.0: A Continual Pre-training Framework for Language Understanding

Abstract: ...In order to extract, to the fullest extent, the lexical, syntactic and semantic information from training corpora, we propose a continual pre-training framework named ERNIE 2.0 which builds and learns incrementally pre-training tasks through constant multi-task learning. Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks including English tasks on GLUE benchmarks and several common tasks in Chinese.

For more deep learning news, tutorials, code, and discussion, join us on Slack, Twitter, and GitHub.