Deep Learning Weekly Issue #172
MinDiff from Google, on-device models with PyTorch, an Autodesk acquisition, and more!
Matthew Moellman | Nov 18, 2020
This week in deep learning we bring you Google's MinDiff, a regularization technique for mitigating unfair biases; Synthesized Ltd.'s free tool for identifying and removing biased data; the research replication crisis in AI; and the Android Neural Networks API's new support for PyTorch, enabling on-device AI processing.
You may also enjoy learning about how neural chips helped smartphones finally eclipse pro cameras, how you can design application-specific neural networks in practice, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Autodesk announced that it has acquired Oslo-based Spacemaker for $240 million.
Chooch.ai, a startup that hopes to bring computer vision more broadly to companies to help them identify and tag elements at high speed, announced a $20 million Series A today.
Artificial intelligence startup Synthesized Ltd. launched a tool for companies to detect and remove bias in the data they use for their AI projects.
Tech giants dominate AI research, but the line between a real breakthrough and a product showcase can be fuzzy. Some scientists have had enough.
A self-driving car might learn to maneuver more nimbly among human drivers if it didn’t get lost in the details of their every twist and turn.
Mobile + Edge
Advance could enable artificial intelligence on household appliances while enhancing data security and energy efficiency.
Google's team today added a prototype feature that lets developers perform hardware-accelerated inference on mobile devices using the PyTorch artificial intelligence framework.
This post covers Google’s VoiceFilter-Lite, an on-device voice separation solution for overlapping speech, enabling access to voice assistance technology even under noisy conditions and with limited network access.
Thanks in large part to improved sensors and the neural cores in mobile processors made by Qualcomm and Apple, this was the year when standalone photo and video cameras were surpassed by smartphones in important ways.
Google Research and DeepMind debut Long-Range Arena (LRA) benchmark for Transformer research on tasks with long sequence lengths.
Google announced the release of MinDiff, a new regularization technique available in the TensorFlow Model Remediation library for effectively and efficiently mitigating unfair biases when training machine learning models.
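The core idea behind this kind of remediation is to add a penalty to the training loss when the model's prediction distributions differ between two groups, often measured with a maximum mean discrepancy (MMD). Below is a minimal NumPy sketch of such an MMD-style penalty; it is illustrative only and not the TensorFlow Model Remediation library's actual implementation, and the group names and bandwidth are invented for the example.

```python
import numpy as np

def mmd_penalty(scores_a, scores_b, bandwidth=0.5):
    """Squared MMD (Gaussian kernel) between two sets of prediction scores.

    A MinDiff-style regularizer adds a term like this to the task loss,
    nudging the model's score distributions for two groups closer together.
    """
    def k(x, y):
        return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2.0 * bandwidth ** 2))
    return (k(scores_a, scores_a).mean()
            + k(scores_b, scores_b).mean()
            - 2.0 * k(scores_a, scores_b).mean())

# Hypothetical sigmoid scores for two groups: matched distributions give a
# near-zero penalty; a systematic gap between groups gives a large one.
rng = np.random.default_rng(0)
group_a = rng.normal(0.4, 0.05, 200)
group_b_fair = rng.normal(0.4, 0.05, 200)
group_b_skew = rng.normal(0.8, 0.05, 200)
fair_gap = mmd_penalty(group_a, group_b_fair)
skew_gap = mmd_penalty(group_a, group_b_skew)
```

In training, this penalty would be scaled by a weight and added to the usual task loss, so the optimizer trades off accuracy against the gap between groups.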
In this tutorial you will learn how to implement Generative Adversarial Networks (GANs) using Keras and TensorFlow.
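The adversarial training loop at the heart of any GAN can be shown without a deep learning framework. The sketch below is a toy 1-D GAN in NumPy, with a one-parameter generator and a logistic-regression discriminator; the architecture, data, and learning rate are invented for illustration and are far simpler than the Keras/TensorFlow models in the tutorial.

```python
import numpy as np

# Toy GAN on 1-D data: the generator G(z) = z + theta learns to shift
# noise toward the real distribution N(3, 1). Both players here are
# deliberately tiny; a real GAN uses neural networks for each.
rng = np.random.default_rng(0)
theta = 0.0          # generator parameter
w, b = 0.1, 0.0      # discriminator logits: D(x) = sigmoid(w*x + b)
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    x_real = rng.normal(3.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    x_fake = z + theta

    # Discriminator step: minimize -log D(real) - log(1 - D(fake))
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    grad_w = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    grad_b = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: minimize -log D(fake) (the non-saturating loss)
    d_fake = sigmoid(w * x_fake + b)
    grad_theta = np.mean(-(1.0 - d_fake) * w)
    theta -= lr * grad_theta

print(f"learned shift theta = {theta:.2f} (real mean is 3.0)")
```

The two alternating gradient steps are exactly the structure a Keras implementation has; the framework simply replaces the hand-derived gradients with automatic differentiation.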
Learn how to design application-specific neural networks in practice.
Libraries & Code
Tonic: A Deep Reinforcement Learning Library for Fast Prototyping and Benchmarking
RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation
Papers & Publications
Abstract: Learned optimizers are algorithms that can themselves be trained to solve optimization problems. In contrast to baseline optimizers (such as momentum or Adam) that use simple update rules derived from theoretical principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations. Although this can lead to better performance in certain settings, their inner workings remain a mystery. How is a learned optimizer able to outperform a well-tuned baseline? Has it learned a sophisticated combination of existing optimization techniques, or is it implementing completely new behavior? In this work, we address these questions by careful analysis and visualization of learned optimizers. We study learned optimizers trained from scratch on three disparate tasks, and discover that they have learned interpretable mechanisms, including: momentum, gradient clipping, learning rate schedules, and a new form of learning rate adaptation. Moreover, we show how the dynamics of learned optimizers enables these behaviors. Our results help elucidate the previously murky understanding of how learned optimizers work, and establish tools for interpreting future learned optimizers.
Abstract: Multi-agent reinforcement learning (MARL) has shown recent success in increasingly complex fixed-team zero-sum environments. However, the real world is not zero-sum nor does it have fixed teams; humans face numerous social dilemmas and must learn when to cooperate and when to compete. To successfully deploy agents into the human world, it may be important that they be able to understand and help in our conflicts. Unfortunately, selfish MARL agents typically fail when faced with social dilemmas. In this work, we show evidence of emergent direct reciprocity, indirect reciprocity and reputation, and team formation when training agents with randomized uncertain social preferences (RUSP), a novel environment augmentation that expands the distribution of environments agents play in. RUSP is generic and scalable; it can be applied to any multi-agent environment without changing the original underlying game dynamics or objectives. In particular, we show that with RUSP these behaviors can emerge and lead to higher social welfare equilibria in both classic abstract social dilemmas like Iterated Prisoner's Dilemma as well as in more complex intertemporal environments.