Deep Learning Weekly Issue #153
Mobile AI at Etsy, Google's SpineNet, Bias in ML, CodeGuru from Amazon, and more...
This week in deep learning we bring you these lessons from the PULSE model and recent Twitter discussion, this proposed ban on government use of facial recognition software, and how COVID-19 is accelerating IoT and the need for distributed data storage.
You may also enjoy this summary of new developments in Apple's AI ecosystem, how Etsy is using Apple AI technology to help buyers visualize art on their walls, this interactive tool for learning about Convolutional Neural Networks (install and use the tool here), a new object detection architecture from Google called SpineNet, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
This article covers a Stanford PhD student’s perspective on the recent discussions of bias in machine learning sparked by tweets about the PULSE model.
The Facial Recognition and Biometric Technology Moratorium Act would explicitly ban police from using the technology.
As voice assistants like Google Assistant and Alexa increasingly make their way into internet of things devices, it’s becoming harder to track when audio recordings are sent to the cloud and who might gain access to them. To spot transgressions, researchers at the University of Darmstadt, North Carolina State University, and the University of Paris Saclay developed LeakyPick, a platform that periodically probes microphone-equipped devices and monitors subsequent network traffic for patterns indicating audio transmission.
Amazon Web Services Inc. said today its new Amazon CodeGuru service, which relies on machine learning to automatically check code for bugs and suggest fixes, is now generally available.
Creators of the 80 Million Tiny Images data set from MIT and NYU took the collection offline this week, apologized, and asked other researchers to refrain from using the data set and delete any existing copies.
Mobile + Edge
This blog post summarizes what’s new in Core ML and the other AI and ML technologies from the Apple ecosystem.
In this blog post, we show you how to leverage Firebase to enhance your deployment of TensorFlow Lite models in production.
The benefits of IoT products such as remote connected health monitoring solutions, packaging and shipping trackers, and streaming devices are more relevant in the pandemic.
Learn how Etsy built a feature that allows users to visualize wall art within their environments.
Refraction AI, a company developing semi-autonomous delivery robots, began handling select customers’ orders from Ann Arbor, Michigan’s Produce Station.
With the rise of machines to human-level performance in complex recognition tasks, a growing amount of work is directed towards comparing information processing in humans and machines. These works have the potential to deepen our understanding of the inner mechanisms of human perception and to improve machine learning. Drawing robust conclusions from comparison studies, however, turns out to be difficult. Here, we highlight common shortcomings that can easily lead to fragile conclusions.
In their recent CVPR 2020 paper “SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization”, Google AI researchers propose a meta-architecture called a scale-permuted model that enables two major improvements in backbone architecture design.
This post is a deep-dive into the interactive, TensorFlow.js-powered CNN Explainer which can help you understand how a simple CNN can be used for image classification.
Do you want to get the best version of your machine learning model with TensorFlow? Start using callbacks now.
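As a quick illustration of the idea behind that post, here is a minimal sketch of two of the most common Keras callbacks, `EarlyStopping` and `ModelCheckpoint`. The tiny model and random data are placeholders, not from the article.

```python
# A minimal sketch of Keras callbacks; the model and data are
# illustrative placeholders, not from the linked article.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

callbacks = [
    # Stop when validation loss stops improving, keeping the best weights.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True),
    # Persist the best model seen so far to disk.
    tf.keras.callbacks.ModelCheckpoint("best_model.keras",
                                       monitor="val_loss",
                                       save_best_only=True),
]

history = model.fit(x, y, validation_split=0.25, epochs=10,
                    callbacks=callbacks, verbose=0)
```

With `restore_best_weights=True`, the model ends training holding the weights from its best validation epoch, not the last one.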
Libraries & Code
A library of tested, GPU implementations of core structured prediction algorithms for deep learning applications.
This is the official PyTorch implementation of “Bottom-Up Human Pose Estimation by Ranking Heatmap-Guided Adaptive Keypoint Estimates” (paper).
Papers & Publications
Abstract: 8-bit quantization has been widely applied to accelerate network inference in various deep learning applications. There are two kinds of quantization methods: training-based quantization and post-training quantization. The training-based approach suffers from a cumbersome training process, while post-training quantization may lead to an unacceptable accuracy drop. In this paper, we present an efficient and simple post-training method via scale optimization, named EasyQuant (EQ), that can obtain accuracy comparable to the training-based method. Specifically, we first alternately optimize the scales of weights and activations for all layers, targeting the convolutional outputs, to obtain high quantization precision. Then, we lower the bit width to INT7 for both weights and activations, and adopt INT16 intermediate storage and an integer Winograd convolution implementation to accelerate inference. Experimental results on various computer vision tasks show that EQ outperforms the TensorRT method and can achieve near-INT8 accuracy with a 7-bit width post-training.
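To make the "scale optimization" idea concrete, here is a hedged NumPy sketch of symmetric linear quantization, the building block such methods tune. This is not the paper's algorithm; it only illustrates how a scale maps float weights to low-bit integers and back, and why searching over candidate scales can reduce reconstruction error.

```python
# Hedged sketch of symmetric linear quantization with a tuned scale.
# NOT EasyQuant itself -- just the underlying mechanism, illustrated
# on random weights with a simple grid search over scales.
import numpy as np

def quantize(w, scale, bits=7):
    qmax = 2 ** (bits - 1) - 1          # 63 for INT7
    return np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)

def dequantize(q, scale):
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=1000)       # stand-in for a layer's weights

# Naive scale: cover the full weight range.
naive_scale = np.abs(w).max() / 63
naive_err = np.mean((w - dequantize(quantize(w, naive_scale),
                                    naive_scale)) ** 2)

# "Scale optimization" in miniature: pick the scale minimizing
# reconstruction error (the paper instead targets conv outputs).
candidates = naive_scale * np.linspace(0.5, 1.2, 50)
errors = [np.mean((w - dequantize(quantize(w, s), s)) ** 2)
          for s in candidates]
best_scale = candidates[int(np.argmin(errors))]
best_err = min(errors)
```

A smaller scale clips outliers but represents the bulk of the weights more finely, which is why the optimized scale typically beats the naive full-range choice.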
Abstract: Initialization, normalization, and skip connections are believed to be three indispensable techniques for training very deep convolutional neural networks and obtaining state-of-the-art performance. This paper shows that deep vanilla ConvNets without normalization or skip connections can also be trained to achieve surprisingly good performance on standard image recognition benchmarks. This is achieved by enforcing the convolution kernels to be near-isometric during initialization and training, as well as by using a variant of ReLU that is shifted towards being isometric. Further experiments show that if combined with skip connections, such near-isometric networks can achieve performance on par with (for ImageNet) or better than (for COCO) the standard ResNet, even without normalization at all. Our code is available at this https URL.
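A small NumPy sketch of why near-isometric weights matter: an orthogonal matrix preserves the norm of its input, so signals neither explode nor vanish as they pass through many layers. This illustrates the general principle, not the paper's exact scheme (which also handles convolutions and a shifted ReLU).

```python
# Why isometric weights help: orthogonal layers preserve activation
# norms across depth. A toy illustration, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def orthogonal_init(n):
    # QR decomposition of a Gaussian matrix yields an orthogonal Q,
    # i.e. an exact isometry: ||Q x|| == ||x||.
    a = rng.normal(size=(n, n))
    q, _ = np.linalg.qr(a)
    return q

x = rng.normal(size=128)
h = x.copy()
for _ in range(50):                  # 50 linear "layers", no normalization
    h = orthogonal_init(128) @ h

# The norm survives all 50 layers essentially unchanged.
ratio = np.linalg.norm(h) / np.linalg.norm(x)
```

With i.i.d. Gaussian weights instead, the norm would scale by roughly a constant factor per layer, compounding to an exponential blow-up or collapse over 50 layers.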