Deep Learning Weekly

Deep Learning Weekly Issue #178

Part 2 of our 2020 deep learning recap

Matthew Moellman
Dec 30, 2020

Hey folks,

This week in deep learning we bring you Part 2 of our two-part 2020 Year in Review. We combed through every Deep Learning Weekly issue from 2020 and selected some of our favorites, along with some of the most popular stories of the year.

As always, happy reading and hacking. If you have something you think should be in the first issue of 2021, find us on Twitter: @dl_weekly.

Until next year!

Industry

Deepfakes Are Becoming the Hot New Corporate Training Tool

Coronavirus restrictions make it harder and more expensive to shoot videos. So some companies are turning to synthetic media instead.

Google’s TF-Coder tool automates machine learning model design

Researchers at Google Brain developed TF-Coder, a program-synthesis tool that writes code in machine learning frameworks like TensorFlow from input/output examples. They say it achieves better-than-human performance on some challenging development tasks.
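
TF-Coder works by program synthesis: you give it example inputs and the desired output, and it searches for a TensorFlow expression that produces that output. As a minimal illustration of the kind of task it handles (we write the synthesized-style expression directly in plain TensorFlow here, rather than calling the tool's own API), consider building a pairwise addition table:

```python
import tensorflow as tf

# Example task: given a vector of row values and a vector of column
# values, produce their pairwise addition table. TF-Coder can find a
# one-line expression like the one below from just input/output
# examples; here we simply write and check that expression by hand.
rows = tf.constant([10, 20, 30])
cols = tf.constant([1, 2, 3])

table = tf.add(cols, tf.expand_dims(rows, 1))
print(table.numpy())
# [[11 12 13]
#  [21 22 23]
#  [31 32 33]]
```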

40 Years on, PAC-MAN Recreated with AI by NVIDIA Researchers

GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.

The startup making deep learning possible without specialized hardware

GPUs have long been the chip of choice for performing AI tasks. Neural Magic wants to change that.

OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless

GPT-3 is the largest language model ever created and can generate amazingly human-like text on demand, but it won't bring us closer to true intelligence.

Shrinking deep learning's carbon footprint

Through innovation in software and hardware, researchers move to reduce the financial and environmental costs of modern artificial intelligence.

AlphaFold: a solution to a 50-year-old grand challenge in biology

In a major scientific breakthrough, the latest version of AlphaFold has been recognized as a solution to one of biology's grand challenges: the “protein folding problem”.

In firing Timnit Gebru, Google puts commercial interests ahead of ethics

Leading AI researcher Timnit Gebru was fired from her position on an AI ethics team at Google in what she claims was retaliation for sending colleagues an email critical of the company’s managerial practices.

Mobile + Edge

Blur tools for Signal

The messaging app Signal introduced a new face-blurring feature to protect people’s privacy.
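
Signal hasn't published its implementation in detail, but the general recipe (detect faces, then blur those regions) is straightforward. Here is a minimal sketch of that technique using OpenCV's bundled Haar cascade; the file names are placeholders:

```python
import cv2

# Detect faces, then blur each detected region in place.
img = cv2.imread("photo.jpg")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Replace each face with a heavily blurred copy of itself.
    img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("photo_blurred.jpg", img)
```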

Announcing Fritz AI’s Support for SnapML in Lens Studio

Enhance your Snapchat AR Lenses with machine learning.

Here’s why Apple believes it’s an AI leader—and why it says critics have it all wrong

Apple AI chief and ex-Googler John Giannandrea dives into the details with Ars Technica.

How Duolingo uses AI in every part of its app

This article takes a close look at how Duolingo uses AI across the app, including the AI behind Stories, Smart Tips, podcasts, reports, and even notifications.

The Future of Machine Learning: An Interview with Daniel Situnayake

Check out this interview for Daniel Situnayake’s perspective on all things TinyML.

In 2020, neural chips helped smartphones finally eclipse pro cameras

Thanks in large part to improved sensors and the neural cores in mobile processors made by Qualcomm and Apple, this was the year when standalone photo and video cameras were surpassed by smartphones in important ways.

Core ML and Vision Tutorial: On-device training on iOS

This tutorial introduces Core ML and Vision, two cutting-edge iOS frameworks, and shows how to fine-tune a model on-device.
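
The tutorial itself is written in Swift, but the on-device training story usually starts in Python: coremltools is used to mark layers of an existing model as updatable before it ships in the app. A rough sketch under that assumption; the model file, layer name, and output name below are placeholders for whatever your model actually uses:

```python
import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

# Mark the final dense layer of an existing .mlmodel as updatable so iOS
# can fine-tune it on-device (e.g., via MLUpdateTask).
spec = coremltools.utils.load_spec("Classifier.mlmodel")   # placeholder
builder = NeuralNetworkBuilder(spec=spec)

builder.make_updatable(["dense_1"])                        # placeholder name
builder.set_categorical_cross_entropy_loss(name="loss", input="labelProbs")
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=8))
builder.set_epochs(10)

coremltools.utils.save_spec(builder.spec, "UpdatableClassifier.mlmodel")
```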

Learning

How Big Should My Language Model Be?

One surprising scaling effect in deep learning is that bigger neural networks are actually more compute-efficient than smaller ones. Given a fixed training budget, this tool determines how big your model should be.
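
The post fits power laws relating model size, training compute, and loss. A toy version of the idea, using the roughly N_opt ∝ C^0.73 relationship reported by Kaplan et al. (2020); the prefactor here is illustrative only, since the post's calculator fits its own coefficients:

```python
# Toy compute-optimal model sizing. Exponent from Kaplan et al. (2020);
# the prefactor k is illustrative, not a fitted value.
def optimal_params(compute_pf_days, k=1.3e9, exponent=0.73):
    """Rough compute-optimal parameter count for a budget in PF-days."""
    return k * compute_pf_days ** exponent

for budget in [1, 10, 100]:  # PF-days of training compute
    print(f"{budget:>4} PF-days -> ~{optimal_params(budget):.2e} params")
```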

A Sober Look at Bayesian Neural Networks

This post asks the question: do Bayesian neural networks make sense?

[Reddit] Advanced courses update

The list of advanced ML and deep learning courses on the r/MachineLearning sidebar has been updated.

A foolproof way to shrink deep learning models

A new pruning technique from researchers at MIT is both simple and effective: train, prune, retrain, and repeat.
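
That recipe maps directly onto PyTorch's pruning utilities. A minimal sketch of the loop (training and the paper's learning-rate schedule details are stubbed out, and the 20% pruning fraction is just an example):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Iterative magnitude pruning: train, zero out the smallest-magnitude
# weights, retrain, and repeat.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def train(model):
    pass  # your usual training loop goes here

for _ in range(3):                       # three prune-retrain rounds
    train(model)                         # (re)train to convergence
    for module in model.modules():
        if isinstance(module, nn.Linear):
            # Prune 20% of the remaining weights with the smallest |w|.
            prune.l1_unstructured(module, name="weight", amount=0.2)

# Fold the pruning masks into the weights permanently.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```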

NLP and Computer Vision Tutorials on TensorFlow Hub

TensorFlow Hub tutorials to help you get started with using pre-trained machine learning models and adapting them to your needs.
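
As a taste of the workflow, here is a minimal sketch that wraps a published TF Hub text-embedding module as a Keras layer and stacks a small example classifier head on top:

```python
import tensorflow as tf
import tensorflow_hub as hub

# A pre-trained sentence embedding from TF Hub becomes an ordinary Keras
# layer; trainable=True fine-tunes it along with the new head.
embed = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                       input_shape=[], dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    embed,                                           # strings -> 50-d vectors
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g., binary sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```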

Shortcuts: How Neural Networks Love to Cheat

In this article, the authors of Shortcut Learning in Deep Neural Networks dive into the idea of “shortcut learning” and how many difficulties in deep learning can be seen as symptoms of this underlying problem.

Interpretability in Machine Learning: An Overview

This essay provides a broad overview of the sub-field of machine learning interpretability.

Papers & Publications

A Gentle Introduction to Deep Learning for Graphs

The adaptive processing of graph data is a long-standing research topic which has lately been consolidated as a theme of major interest in the deep learning community. The rapid increase in the amount and breadth of related research has come at the price of little systematization of knowledge and attention to earlier literature. This work is designed as a tutorial introduction to the field of deep learning for graphs. It favours a consistent and progressive introduction of the main concepts and architectural aspects over an exposition of the most recent literature, for which the reader is referred to available surveys. The paper takes a top-down view of the problem, introducing a generalized formulation of graph representation learning based on a local and iterative approach to structured information processing. It introduces the basic building blocks that can be combined to design novel and effective neural models for graphs. The methodological exposition is complemented by a discussion of interesting research challenges and applications in the field.
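
The "local and iterative" formulation the paper builds on is the message-passing scheme underlying most graph neural networks. A bare-bones numpy sketch of one round of neighborhood aggregation (a simplified GCN-style layer; the example graph, sizes, and ReLU choice are ours):

```python
import numpy as np

# One round of message passing on a 4-node graph: each node's new state
# aggregates its neighbors' states, then applies a learned transform.
A = np.array([[0, 1, 1, 0],                 # adjacency matrix
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                       # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # degree normalization

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                 # initial node features
W = rng.normal(size=(8, 8))                 # weights (random stand-in)

H_next = np.maximum(0, D_inv @ A_hat @ H @ W)   # aggregate, transform, ReLU
print(H_next.shape)                             # (4, 8)
```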

A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications

Generative adversarial networks (GANs) have recently become a hot research topic. GANs have been widely studied since 2014, and a large number of algorithms have been proposed. However, there are few comprehensive studies explaining the connections among different GAN variants and how they have evolved. In this paper, we attempt to provide a review of various GAN methods from the perspectives of algorithms, theory, and applications. Firstly, the motivations, mathematical representations, and structures of most GAN algorithms are introduced in detail. Furthermore, GANs have been combined with other machine learning algorithms for specific applications, such as semi-supervised learning, transfer learning, and reinforcement learning. This paper compares the commonalities and differences of these GAN methods. Secondly, theoretical issues related to GANs are investigated. Thirdly, typical applications of GANs in image processing and computer vision, natural language processing, music, speech and audio, the medical field, and data science are illustrated. Finally, future open research problems for GANs are pointed out.
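
For readers new to the area, the adversarial game the review surveys fits in a few lines. A minimal PyTorch sketch of one training step on toy 1-D data (architectures and hyperparameters are arbitrary):

```python
import torch
import torch.nn as nn

# One GAN training step: G maps noise to samples; D scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(64, 1) * 2 + 3           # toy "real" data ~ N(3, 4)

# Discriminator step: push D(real) -> 1 and D(fake) -> 0.
fake = G(torch.randn(64, 8)).detach()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: push D(G(z)) -> 1, i.e., fool the discriminator.
fake = G(torch.randn(64, 8))
loss_g = bce(D(fake), torch.ones(64, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```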

CVPR 2020 Best Papers Announced

The 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) has announced its best paper awards.
