Deep Learning Weekly Issue #172
MinDiff from Google, on-device models with PyTorch, an Autodesk acquisition, and more!
Hey folks,
This week in deep learning we bring you Google's MinDiff, a regularization technique for mitigating unfair biases; Synthesized Ltd.'s free tool for identifying and removing biased data; the research replication crisis in AI; and PyTorch support in Android's Neural Networks API for on-device AI processing.
You may also enjoy learning how neural chips helped smartphones finally eclipse pro cameras, how you can design application-specific neural networks in practice, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
Autodesk acquires Spacemaker for $240 million to boost AI development for designers
Autodesk announced that it has acquired Oslo-based Spacemaker for $240 million.
Computer vision startup Chooch.ai scores $20M Series A
Chooch.ai, a startup that helps companies apply computer vision to identify and tag visual elements at high speed, announced a $20 million Series A today.
Synthesized debuts a free tool for identifying and removing biased data
Artificial intelligence startup Synthesized Ltd. launched a tool for companies to detect and remove bias in the data they use for their AI projects.
AI is wrestling with a replication crisis
Tech giants dominate AI research, but the line between a real breakthrough and a product showcase can be fuzzy. Some scientists have had enough.
The key to smarter robot collaborators may be more simplicity
A self-driving car might learn to maneuver more nimbly among human drivers if it didn’t get lost in the details of their every twist and turn.
Mobile + Edge
System brings deep learning to “internet of things” devices
Advance could enable artificial intelligence on household appliances while enhancing data security and energy efficiency.
Android's Neural Networks API adds support for PyTorch to enable on-device AI processing
Google’s team today added support for a new prototype feature that makes it possible for developers to perform hardware accelerated inference on mobile devices using the PyTorch artificial intelligence framework.
Improving On-Device Speech Recognition with VoiceFilter-Lite
This post covers Google’s VoiceFilter-Lite, an on-device voice separation solution for overlapping speech, enabling access to voice assistance technology even under noisy conditions and with limited network access.
In 2020, neural chips helped smartphones finally eclipse pro cameras
Thanks in large part to improved sensors and the neural cores in mobile processors made by Qualcomm and Apple, this was the year smartphones surpassed standalone photo and video cameras in important ways.
Learning
Google & DeepMind Debut Benchmark for Long-Range Transformers
Google Research and DeepMind debut the Long-Range Arena (LRA) benchmark for evaluating Transformers on tasks with long sequence lengths.
Mitigating Unfair Bias in ML Models with the MinDiff Framework
Google announced the release of MinDiff, a new regularization technique available in the TensorFlow Model Remediation library for effectively and efficiently mitigating unfair biases when training machine learning models.
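A minimal sketch of the intended usage, per the announcement: wrap an existing Keras model in MinDiffModel with an MMD loss, and train on data packed together with the two example slices whose score distributions should be pulled closer. The toy random datasets below are placeholders, and exact utility names may vary by library version.

```python
# Minimal MinDiff sketch with the TensorFlow Model Remediation library
# (pip install tensorflow-model-remediation). Data here is random and purely
# illustrative; in practice the two MinDiff slices might be negatively
# labeled examples from a sensitive group and from everyone else.
import numpy as np
import tensorflow as tf
from tensorflow_model_remediation import min_diff

def toy_dataset(n, seed):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 8)).astype("float32")
    y = rng.integers(0, 2, size=(n, 1)).astype("float32")
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

original_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Pack the main training data together with the two slices to compare.
train_ds = min_diff.keras.utils.pack_min_diff_data(
    original_dataset=toy_dataset(512, 0),
    sensitive_group_dataset=toy_dataset(128, 1),
    nonsensitive_group_dataset=toy_dataset(128, 2))

# The wrapper adds an MMD penalty on the gap between the two slices'
# prediction distributions on top of the usual task loss.
model = min_diff.keras.MinDiffModel(original_model, min_diff.losses.MMDLoss())
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(train_ds, epochs=2)
```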
GANs with Keras and TensorFlow
In this tutorial you will learn how to implement Generative Adversarial Networks (GANs) using Keras and TensorFlow.
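To give a flavor of what the tutorial covers, here is a condensed toy GAN training loop in Keras/TensorFlow, using small dense networks on MNIST rather than the tutorial's exact architecture:

```python
# Toy GAN: a generator learns to fool a discriminator on MNIST digits.
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 100

generator = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(28 * 28, activation="tanh"),
    layers.Reshape((28, 28, 1)),
])

discriminator = tf.keras.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1),  # raw logits
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator: real -> 1, fake -> 0. Generator: make fakes read as 1.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") - 127.5) / 127.5  # scale to [-1, 1]
dataset = (tf.data.Dataset.from_tensor_slices(x_train[..., None])
           .shuffle(60000).batch(256))

for epoch in range(5):
    for batch in dataset:
        train_step(batch)
```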
The Goldilocks Zone: Can You Design Neural Networks Just Right?
Learn how to design application-specific neural networks in practice.
Libraries & Code
Tonic: A Deep Reinforcement Learning Library for Fast Prototyping and Benchmarking
RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation
Papers & Publications
Reverse engineering learned optimizers reveals known and novel mechanisms
Abstract: Learned optimizers are algorithms that can themselves be trained to solve optimization problems. In contrast to baseline optimizers (such as momentum or Adam) that use simple update rules derived from theoretical principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations. Although this can lead to better performance in certain settings, their inner workings remain a mystery. How is a learned optimizer able to outperform a well-tuned baseline? Has it learned a sophisticated combination of existing optimization techniques, or is it implementing completely new behavior? In this work, we address these questions by careful analysis and visualization of learned optimizers. We study learned optimizers trained from scratch on three disparate tasks, and discover that they have learned interpretable mechanisms, including momentum, gradient clipping, learning rate schedules, and a new form of learning rate adaptation. Moreover, we show how the dynamics of learned optimizers enable these behaviors. Our results help elucidate the previously murky understanding of how learned optimizers work, and establish tools for interpreting future learned optimizers.
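For context, the "simple update rules" the abstract contrasts with learned optimizers look like the hand-written step below; the paper's finding is that meta-trained update networks rediscover mechanisms of this kind. (Illustrative NumPy, not code from the paper.)

```python
# Hand-designed update rule: SGD with momentum plus gradient clipping,
# written out explicitly. A learned optimizer replaces this closed form
# with a small meta-trained network mapping gradient statistics to updates.
import numpy as np

def momentum_step(params, grad, velocity, lr=0.01, beta=0.9, clip=1.0):
    grad = np.clip(grad, -clip, clip)   # gradient clipping
    velocity = beta * velocity + grad   # momentum accumulation
    return params - lr * velocity, velocity

params = np.zeros(4)
velocity = np.zeros(4)
params, velocity = momentum_step(
    params, np.array([0.5, -2.0, 3.0, 0.1]), velocity)
```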
Emergent Reciprocity and Team Formation from Randomized Uncertain Social Preferences
Abstract: Multi-agent reinforcement learning (MARL) has shown recent success in increasingly complex fixed-team zero-sum environments. However, the real world is not zero-sum, nor does it have fixed teams; humans face numerous social dilemmas and must learn when to cooperate and when to compete. To successfully deploy agents into the human world, it may be important that they be able to understand and help in our conflicts. Unfortunately, selfish MARL agents typically fail when faced with social dilemmas. In this work, we show evidence of emergent direct reciprocity, indirect reciprocity and reputation, and team formation when training agents with randomized uncertain social preferences (RUSP), a novel environment augmentation that expands the distribution of environments agents play in. RUSP is generic and scalable; it can be applied to any multi-agent environment without changing the original underlying game dynamics or objectives. In particular, we show that with RUSP these behaviors can emerge and lead to higher social welfare equilibria both in classic abstract social dilemmas like the Iterated Prisoner's Dilemma and in more complex intertemporal environments.
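The abstract's description of RUSP as a generic environment augmentation can be pictured as a reward-shaping wrapper like the sketch below: each episode samples a random social preference over the agents' rewards, which agents observe only noisily. This is a simplified reading of the abstract, not the paper's implementation; the sampling scheme and noise model are placeholders.

```python
# Simplified sketch of the RUSP idea: mix each agent's reward with randomly
# sampled social-preference weights, observed only with noise. The Dirichlet
# sampling and Gaussian noise here are illustrative placeholders.
import numpy as np

def rusp_augment(env_rewards, rng, noise_scale=0.5):
    n = len(env_rewards)
    # Row i: how much agent i values each agent's reward (including its own).
    prefs = rng.dirichlet(np.ones(n), size=n)
    shaped_rewards = prefs @ env_rewards           # rewards agents train on
    noisy_prefs = prefs + rng.normal(0.0, noise_scale, prefs.shape)
    return shaped_rewards, noisy_prefs             # noisy_prefs joins the obs

rng = np.random.default_rng(0)
shaped, obs = rusp_augment(np.array([1.0, -0.5, 0.2]), rng)
```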