Deep Learning Weekly Issue #161
Google AI's flood warnings, an interview on the future of TinyML, style transfer on iOS, and more
Hey folks,
This week in deep learning we bring you effective testing for machine learning systems, the types of datasets Fortune 50 companies are buying, and Google’s AI flood warnings that cover all of India and have expanded to Bangladesh.
You may also enjoy reading about how to make your own tiny autonomous vehicle, what exactly is being transferred in transfer learning, Daniel Situnayake’s perspective on all things TinyML and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
Google Offers to Help Others With the Tricky Ethics of AI
After learning its own ethics lessons the hard way, the tech giant will offer services like spotting racial bias or developing guidelines around AI projects.
AI Weekly: Facebook’s discriminatory ad targeting illustrates the dangers of biased algorithms
A recent study found evidence that Facebook’s ad platform may discriminate against certain demographic groups. The team of coauthors from Carnegie Mellon University says the biases exacerbate socioeconomic inequalities, an insight applicable to a broad swath of algorithmic decision-making.
AR, VR, Autonomy, Automation, Healthcare: What's Hot In AI Right Now
This article discusses the most requested types of datasets according to Samasource, a company that creates training data for a quarter of the Fortune 50.
Deepfake reality check: AI avatars set to transform business and education outreach
AI avatars and synthetic video production could provide organizations with entirely new capabilities for training and multilingual global communication in the years ahead.
Google’s AI flood warnings now cover all of India and have expanded to Bangladesh
Flood warnings are now available to 240 million people.
Mobile + Edge
Autonomous embedded driving using computer vision
In this post, Edge Impulse shows how you can build your own autonomously driving vehicle using their platform together with an OpenMV camera board, running a model that uses only 121 KB of RAM.
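For a rough feel of what the on-device side looks like, here is a minimal sketch assuming OpenMV's MicroPython tf module (whose API varies across firmware versions); the model filename, labels, and motor hook are hypothetical placeholders, not Edge Impulse's actual project code.

```python
# Hypothetical sketch: classifying frames to steer a small vehicle on OpenMV.
# Assumes the MicroPython `tf` module (API varies by firmware version) and a
# deployed model named "trained.tflite"; labels and the motor hook are made up.
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)      # small frames keep RAM usage low
sensor.skip_frames(time=2000)

labels = ["left", "right", "straight"]  # hypothetical class names
net = tf.load("trained.tflite", load_to_fb=True)

while True:
    img = sensor.snapshot()
    scores = net.classify(img)[0].output()      # one score per label
    action = labels[scores.index(max(scores))]
    # drive_motors(action)                      # hypothetical motor-control hook
```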
The Future of Machine Learning: An Interview with Daniel Situnayake
Check out this interview for Daniel Situnayake’s perspective on all things TinyML.
Announcing TensorFlow Lite Micro support on the ESP32
The ESP32 is a Wi-Fi/BT/BLE-enabled microcontroller (MCU) that is widely used by hobbyists and commonly deployed in smart home appliances.
Train and Run a Create ML Style Transfer Model in an iOS Camera Application
At WWDC 2020, Apple introduced a number of new features in Create ML, its model-building framework, including the ability to train artistic style transfer models ready to deploy to iOS. Here, Anupam Chugh shows us how to start building with style transfer in Create ML.
Learning
Effective testing for machine learning systems
In this blog post, Jeremy Jordan covers what testing looks like for traditional software development, explains why testing machine learning systems can be different, and discusses some strategies for writing effective tests for machine learning systems.
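To make the distinction concrete, here is a minimal sketch (my own toy example, not from the post) of the kind of pre-train sanity check and post-train behavioral test the post advocates, written with pytest and a throwaway scikit-learn model; the data, model, and tolerance are all illustrative.

```python
import numpy as np
import pytest
from sklearn.linear_model import LogisticRegression

def make_data(n=1000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels ignore features 2 and 3
    return X, y

@pytest.fixture(scope="module")
def model():
    X, y = make_data()
    return LogisticRegression(max_iter=1000).fit(X, y)

def test_output_shape_and_range(model):
    # Pre-train-style sanity check: valid probabilities of the right shape.
    X, _ = make_data(seed=1)
    proba = model.predict_proba(X)
    assert proba.shape == (len(X), 2)
    assert np.all((proba >= 0) & (proba <= 1))

def test_invariance_to_irrelevant_feature(model):
    # Post-train behavioral check: shifting a feature the labels never used
    # should barely move the predicted probabilities.
    X, _ = make_data(seed=2)
    X_shifted = X.copy()
    X_shifted[:, 3] += 1.0
    drift = np.abs(model.predict_proba(X)[:, 1]
                   - model.predict_proba(X_shifted)[:, 1])
    assert drift.max() < 0.1  # illustrative tolerance
```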
Axial-DeepLab: Long-Range Modeling in All Layers for Panoptic Segmentation
In the ECCV 2020 paper, “Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation”, researchers at Google propose axial attention, which factorizes 2D self-attention into two 1D attention passes along the height and width axes, recovering large receptive fields in fully attentional models.
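As a simplified illustration of the idea (not the paper's exact position-sensitive formulation, which adds learned positional terms), the sketch below factorizes 2D self-attention into two cheaper 1D attention passes using stock PyTorch layers.

```python
# Simplified axial attention: attend along the height axis, then the width axis.
import torch
import torch.nn as nn

class AxialAttention2d(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Attend along H: every column becomes a length-H sequence.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        x = cols.reshape(b, w, h, c).permute(0, 3, 2, 1)
        # Attend along W: every row becomes a length-W sequence.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        return rows.reshape(b, h, w, c).permute(0, 3, 1, 2)

feat = torch.randn(2, 32, 16, 16)
print(AxialAttention2d(32)(feat).shape)          # torch.Size([2, 32, 16, 16])
```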
Introducing TF-Coder, a tool that writes tricky TensorFlow expressions for you!
TF-Coder is a program synthesis tool that helps you write TensorFlow code. First, the tool asks for an input-output example of the desired tensor transformation. Then, it runs a combinatorial search to find TensorFlow expressions that perform that transformation.
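For a flavor of the workflow, here is a hypothetical spec of the kind TF-Coder accepts, together with an expression of the kind its search produces; the snippet is written in plain TensorFlow so it runs on its own.

```python
import tensorflow as tf

# The input-output example you would hand to TF-Coder:
rows = tf.constant([10, 20, 30])
cols = tf.constant([1, 2, 3, 4])
desired = tf.constant([[11, 12, 13, 14],
                       [21, 22, 23, 24],
                       [31, 32, 33, 34]])

# An expression the combinatorial search can find for this spec:
result = tf.add(cols, tf.expand_dims(rows, 1))  # broadcasted outer sum
assert bool(tf.reduce_all(result == desired))
```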
Multi-Label Classification with Deep Learning
This tutorial covers multi-label classification. Unlike standard classification tasks, where class labels are mutually exclusive, multi-label classification requires models that can predict multiple, non-mutually-exclusive labels for each example.
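A minimal sketch of that pattern (layer sizes and hyperparameters are illustrative, not necessarily the tutorial's): a Keras network whose output layer has one sigmoid unit per label, trained with binary cross-entropy so each label is predicted independently.

```python
from sklearn.datasets import make_multilabel_classification
from tensorflow import keras

# Synthetic data where each sample can carry several labels at once.
X, y = make_multilabel_classification(n_samples=1000, n_features=10,
                                      n_classes=3, n_labels=2, random_state=1)

model = keras.Sequential([
    keras.layers.Dense(20, activation="relu", input_shape=(10,)),
    keras.layers.Dense(3, activation="sigmoid"),  # independent probability per label
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=10, verbose=0)

# Each prediction row can have several labels "on" at once.
print((model.predict(X[:3]) > 0.5).astype(int))
```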
Libraries & Code
[GitHub] VITA-Group/GAN-Slimming
An all-in-one GAN compression method integrating model distillation, channel pruning, and quantization under a unified GAN minimax optimization framework.
[GitHub] opensource9ja/danfojs
danfo.js is an open-source JavaScript library providing high-performance, intuitive, and easy-to-use data structures for manipulating and processing structured data.
Papers & Publications
What is being transferred in transfer learning?
One desired capability for machines is the ability to transfer their knowledge of one domain to another where data is (usually) scarce. Despite the wide adoption of transfer learning in various deep learning applications, we still do not understand what enables a successful transfer and which parts of the network are responsible for it. In this paper, we provide new tools and analyses to address these fundamental questions. Through a series of analyses on transferring to block-shuffled images, we separate the effect of feature reuse from learning the low-level statistics of the data and show that some of the benefit of transfer learning comes from the latter. We show that when training from pre-trained weights, the model stays in the same basin of the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
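For intuition, here is a short sketch of block-shuffling as I read the setup (block size and details are my own choices): the image is split into a grid of blocks that are then randomly permuted, which destroys object-level structure while preserving low-level statistics within each block.

```python
import numpy as np

def block_shuffle(img: np.ndarray, block: int, seed: int = 0) -> np.ndarray:
    """Split an (H, W, C) image into block x block tiles and permute them."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    gh, gw = h // block, w // block
    # Rearrange into a (gh * gw, block, block, C) stack of tiles.
    tiles = img.reshape(gh, block, gw, block, c).transpose(0, 2, 1, 3, 4)
    tiles = tiles.reshape(gh * gw, block, block, c)
    # Permute the tiles, then reassemble the image.
    rng = np.random.default_rng(seed)
    tiles = tiles[rng.permutation(len(tiles))]
    tiles = tiles.reshape(gh, gw, block, block, c).transpose(0, 2, 1, 3, 4)
    return tiles.reshape(h, w, c)

shuffled = block_shuffle(np.zeros((224, 224, 3), dtype=np.uint8), block=56)
```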
GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework
Generative adversarial networks (GANs) have gained increasing popularity in various computer vision applications and have recently started to be deployed to resource-constrained mobile devices. Like other deep models, state-of-the-art GANs suffer from high parameter complexity, which has recently motivated the exploration of compressing GANs (usually the generators). Compared to the vast literature on, and prevailing success in, compressing deep classifiers, the study of GAN compression remains in its infancy, so far leveraging individual compression techniques rather than more sophisticated combinations. We observe that, due to the notorious instability of GAN training, heuristically stacking different compression techniques yields unsatisfactory results. To this end, we propose the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming (GS). GS seamlessly integrates three mainstream compression techniques (model distillation, channel pruning, and quantization), together with the GAN minimax objective, into one unified optimization form that can be efficiently optimized end to end. Without bells and whistles, GS largely outperforms existing options in compressing image-to-image translation GANs. Specifically, we apply GS to compress CartoonGAN, a state-of-the-art style transfer network, by up to 47 times with minimal visual quality degradation. Code and pre-trained models are available in the VITA-Group/GAN-Slimming repository linked above.
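In rough notation (my paraphrase of the abstract, not the paper's exact formulation), the unified objective couples the GAN minimax loss on a quantized, channel-scaled generator with a distillation term and an L1 sparsity penalty that drives channel pruning:

```latex
\min_{G,\gamma}\ \max_{D}\;
\mathcal{L}_{\mathrm{GAN}}\!\left(q(G_{\gamma}),\, D\right)
\;+\; \lambda_{\mathrm{dist}}\, \mathcal{L}_{\mathrm{dist}}\!\left(q(G_{\gamma}),\, G_{\mathrm{teacher}}\right)
\;+\; \lambda_{s}\, \lVert \gamma \rVert_{1}
```

Here q(.) denotes weight quantization, the gamma terms are per-channel scaling factors whose sparsity induces pruning, and the teacher is the uncompressed generator used for distillation.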