Deep Learning Weekly Issue #169
Adobe's new AI-powered video deblurring, making chatbots that aren't racist or sexist, new MLPerf benchmarks, & more
Hey folks,
This week in deep learning we bring you Adobe's Project Sharp Shots which uses AI to deblur your videos with one click, Kite's expansion of its AI code completion support from 2 to 13 programming languages, and how to make a chatbot that isn’t racist or sexist.
You may also enjoy Google's smart displays that activate without a wake word, Amazon's use of end-to-end models to improve Alexa’s speech recognition, the latest MLPerf performance results, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
How to make a chatbot that isn’t racist or sexist
Tools like GPT-3 are stunningly good, but they feed on the cesspits of the internet. How can we make them safe for the public to actually use?
Kite expands its AI code completions from 2 to 13 programming languages
In addition to Python and JavaScript, Kite’s AI-powered code completions now support TypeScript, Java, HTML, CSS, Go, C, C#, C++, Objective-C, Kotlin, and Scala.
Adobe's Project Sharp Shots uses AI to deblur your videos with one click
Powered by Adobe’s Sensei AI platform, Sharp Shots is a research project that uses AI to deblur videos.
Automatic signature verification software threatens to disenfranchise U.S. voters
Faulty algorithms are more likely to throw out votes for certain groups of people, especially those who have undergone a name change.
Researchers suggest AI can learn common sense from animals
AI researchers developing reinforcement learning agents could learn a lot from animals. That’s according to a recent analysis by researchers from Google’s DeepMind, Imperial College London, and the University of Cambridge comparing how AI agents and non-human animals learn.
Mobile + Edge
MLPerf Releases Over 1,200 Results for Leading ML Inference Systems and New Mobile MLPerf App
The MLPerf consortium released results for MLPerf Inference v0.7, the second round of submissions to its machine learning inference benchmark suite. The suite measures how quickly a trained neural network can process new data, for a wide range of applications and on a variety of form factors.
[Paper] TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems
Deep learning inference on embedded devices is a burgeoning field with myriad applications because tiny embedded devices are omnipresent. This paper explains the design decisions behind TF Micro and describes its implementation details.
Google tests smart displays that activate without a wake word
A new feature codenamed “Blue Steel” could allow devices to simply sense your presence, and proactively listen for commands without first needing to hear the wake word.
Amazon embraces end-to-end models to improve Alexa’s speech recognition
Alexa is now running “full-capability” speech recognition on-device, after previously relying on models many gigabytes in size that required huge amounts of memory and ran on servers in the cloud.
Learning
Google Brain Sets New Semi-Supervised Learning SOTA in Speech Recognition
Google Brain has improved the SOTA on the LibriSpeech automatic speech recognition task, achieving word error rates of 1.4 percent and 2.6 percent on the test and test-other sets, respectively.
Targeted adversarial attacks with Keras and TensorFlow
In this tutorial, you will learn how to perform targeted adversarial attacks and construct targeted adversarial images using Keras and TensorFlow (last week’s tutorial covered untargeted adversarial images).
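To illustrate what “targeted” means in practice, here is a minimal sketch of an iterative, FGSM-style targeted attack. It assumes a pretrained ImageNet classifier; the model choice, step size, and iteration count are illustrative and not taken from the tutorial:

```python
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50

model = ResNet50(weights="imagenet")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def targeted_attack(image, target_class, eps=0.005, steps=20):
    """Nudge `image` (already preprocessed, shape (1, 224, 224, 3))
    toward being classified as `target_class`."""
    adv = tf.Variable(image)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            preds = model(adv, training=False)
            # Loss against the *target* label: minimizing it pushes the
            # prediction toward the class we want. An untargeted attack
            # would instead maximize the loss on the true label.
            loss = loss_fn(tf.constant([target_class]), preds)
        grad = tape.gradient(loss, adv)
        # Signed gradient *descent* on the target-class loss.
        adv.assign_sub(eps * tf.sign(grad))
    return tf.convert_to_tensor(adv)
```

A complete implementation would also clip the perturbation so the adversarial image stays visually close to the original.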
Image Restoration with GANs
Using Generative Adversarial Networks to restore image quality.
Libraries & Code
[GitHub] lucidrains/lambda-networks
Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute.
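For context on the API, here is a quick usage sketch based on the repo's README (PyTorch); the hyperparameter values below are illustrative:

```python
import torch
from lambda_networks import LambdaLayer

layer = LambdaLayer(
    dim=32,       # input channels
    dim_out=32,   # output channels
    r=23,         # local context window for positional lambdas
    dim_k=16,     # key/query dimension
    heads=4,      # number of heads (multi-query)
    dim_u=4,      # intra-depth dimension
)

x = torch.randn(1, 32, 64, 64)  # (batch, channels, height, width)
y = layer(x)                    # spatial shape preserved: (1, 32, 64, 64)
```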
[GitHub] google-research/multilingual-t5
Multilingual T5 (mT5) is a massively multilingual pretrained text-to-text transformer model, trained following a similar recipe as T5. This repo can be used to reproduce the experiments in the mT5 paper.
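The repo builds on the Mesh TensorFlow T5 codebase. As an alternative for quick experimentation (an assumption about tooling, not part of the repo itself), the released checkpoints can also be loaded through Hugging Face's transformers library, assuming a version with mT5 support; "google/mt5-small" is the smallest released variant:

```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# mT5 is released as a pretrained (not fine-tuned) model, so outputs are
# only meaningful after fine-tuning on a downstream task; this just
# demonstrates the text-to-text plumbing.
inputs = tokenizer("summarize: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```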
Papers & Publications
Fourier Neural Operator for Parametric Partial Differential Equations
Abstract: The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation (including the turbulent regime). Our Fourier neural operator shows state-of-the-art performance compared to existing neural network methodologies and it is up to three orders of magnitude faster compared to traditional PDE solvers.
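To make the core idea concrete, here is a minimal sketch of a 1D spectral convolution layer in the spirit of the paper: transform to Fourier space, apply a learned complex weight to the lowest modes, and transform back. It assumes PyTorch with complex-tensor support, and the sizes are illustrative rather than taken from the paper:

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Fourier layer: FFT -> learned pointwise multiply on the lowest
    `modes` frequencies -> inverse FFT."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        # One learned (in_ch x out_ch) complex matrix per retained mode.
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):             # x: (batch, in_ch, n_points)
        x_ft = torch.fft.rfft(x)      # (batch, in_ch, n_points // 2 + 1)
        out_ft = torch.zeros(
            x.size(0), self.weight.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device)
        # Keep only the low frequencies; truncating higher modes is what
        # makes the operator cheap and resolution-independent.
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))

# Example: 8 samples of a 16-channel function sampled at 256 points.
layer = SpectralConv1d(16, 16, modes=12)
y = layer(torch.randn(8, 16, 256))   # -> (8, 16, 256)
```

In the full architecture, several such layers are stacked, each paired with a pointwise linear transform and a nonlinearity.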
The Turking Test: Can Language Models Understand Instructions?
Abstract: Supervised machine learning provides the learner with a set of input-output examples of the target task. Humans, however, can also learn to perform new tasks from instructions in natural language. Can machines learn to understand instructions as well? We present the Turking Test, which examines a model's ability to follow natural language instructions of varying complexity. These range from simple tasks, like retrieving the nth word of a sentence, to ones that require creativity, such as generating examples for SNLI and SQuAD in place of human intelligence workers ("turkers"). Despite our lenient evaluation methodology, we observe that a large pretrained language model performs poorly across all tasks. Analyzing the model's error patterns reveals that the model tends to ignore explicit instructions and often generates outputs that cannot be construed as an attempt to solve the task. While it is not yet clear whether instruction understanding can be captured by traditional language models, the sheer expressivity of instruction understanding makes it an appealing alternative to the rising few-shot inference paradigm.