Deep Learning Weekly Issue #119

AI-generated perfume, a new question answering challenge, Lyft's Level 5 dataset, model compression, and more!

Jameson Toole
Jul 31, 2019

Hey folks,

This week in deep learning we bring you an AI that creates perfumes, TensorFlow Addons, a new long-form question answering challenge from Facebook, and a GAN that can generate Twitch.tv emotes.

You may also enjoy a new model compression technique from FAIR, a review of semantic segmentation models, Lyft’s Level 5 self-driving car dataset, a Keras implementation of BiGAN, and a method for improving shift invariance in CNNs.

As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.

Until next week!

Industry

This AI detects 11 types of emotions from a selfie

Researchers at the University of Colorado and Duke University train a convolutional neural network to predict emotions from selfies.

Artificial Intelligence Can Now Create Perfumes, Even Without A Sense Of Smell

A machine learning model uses chemical properties and demographic sales data to generate new perfume formulas.

Introducing TensorFlow Addons

TensorFlow Addons will replace tf.contrib, making it easier for the open-source community to extend TensorFlow functionality beyond core APIs.
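
As a minimal sketch of what using the new package looks like (assuming TensorFlow 2.x with tensorflow-addons installed; the model and hyperparameters below are arbitrary placeholders):

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Functionality that used to live in tf.contrib (e.g., the LazyAdam optimizer)
# now ships as a separate pip package: tensorflow-addons.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation=tfa.activations.gelu),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer=tfa.optimizers.LazyAdam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```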

Introducing long-form question answering

Facebook announces a new challenge for long-form question answering, including code, data, and baseline models.

Learning

ThisEmoteDoesNotExist: Training a GAN for Twitch Emotes

Downloading over 2 million emotes from Twitch and training a GAN to produce new ones.

Compressing neural networks for image classification and detection

A new compression technique from Facebook shrinks Mask R-CNN by 26x while maintaining reasonable accuracy.

Convnet Playground by Fast Forward Labs

A tool for the interactive exploration of Convolutional Neural Networks.

Neural Point-Based Graphics

Using RGB-D images and neural networks to render new views of complex scenes.

A 2019 guide to semantic segmentation

A review of the popular and state-of-the-art semantic segmentation models.

Datasets

TACO: Trash Annotations in Context

An open image dataset of waste in the wild.

Lyft: Level 5 dataset

Lyft releases a massive dataset from its self-driving car fleet, including LiDAR and camera data.

Libraries & Code

[Github] nianticlabs/monodepth2

A reference implementation of a monocular depth model by Niantic Labs.

[Github] manicman1999/Keras-BiGAN

A Keras implementation of BiGAN to find similar images.

[Github] Google MixNet models

A new set of efficient models produced by Google’s mobile neural architecture search.

Papers & Publications

Making Convolutional Networks Shift-Invariant Again

Abstract: Modern convolutional networks are not shift-invariant, as small input shifts or translations can cause drastic changes in the output. Commonly used downsampling methods, such as max-pooling, strided-convolution, and average-pooling, ignore the sampling theorem. The well-known signal processing fix is anti-aliasing by low-pass filtering before downsampling…. We show that when integrated correctly, it is compatible with existing architectural components, such as max-pooling and strided-convolution. We observe increased accuracy in ImageNet classification, across several commonly-used architectures, such as ResNet, DenseNet, and MobileNet, indicating effective regularization. Furthermore, we observe better generalization, in terms of stability and robustness to input corruptions.
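
To make the core idea concrete, here is a rough NumPy/SciPy sketch of anti-aliased downsampling (blur with a small binomial filter, then subsample). This illustrates the principle only and is not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def blur_pool(x, stride=2):
    """Low-pass filter a 2D feature map with a binomial kernel, then subsample.

    Naive strided downsampling aliases high frequencies; blurring first keeps
    the output far more stable under small input shifts.
    """
    k = np.array([1.0, 2.0, 1.0])           # 1D binomial filter
    kernel = np.outer(k, k)
    kernel /= kernel.sum()                   # normalize so intensities are preserved
    blurred = convolve(x, kernel, mode="reflect")
    return blurred[::stride, ::stride]

# In the paper, strided max-pooling is similarly split into a dense (stride-1)
# max followed by this kind of blurred subsampling.
```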

Green AI

Abstract: The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018. These computations have a surprisingly large carbon footprint…. This position paper advocates a practical solution by making efficiency an evaluation criterion for research alongside accuracy and related measures. In addition, we propose reporting the financial cost or "price tag" of developing, training, and running models to provide baselines for the investigation of increasingly efficient methods. Our goal is to make AI both greener and more inclusive, enabling any inspired undergraduate with a laptop to write high-quality research papers…
