Deep Learning Weekly Issue #163
Nvidia's ARM acquisition, Apple's A14 Bionic chip, new standards for AI clinical trials, delivery drones, & more
This week in deep learning we bring you Nvidia’s $40 billion Arm acquisition, Apple's A14 Bionic processor with 40% faster CPU and 11.8 billion transistors, new standards for AI clinical trials, high-speed autonomous delivery drones, and Microsoft's AI tool that can train models with a trillion parameters.
You may also enjoy learning about the link between Transformers and Graph Neural Networks, the role of individual units in a deep neural network, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Microsoft AI tool enables 'extremely large' models with a trillion parameters
Microsoft Corp. has released a new version of its open-source DeepSpeed tool that it says will enable the creation of deep learning models with a trillion parameters.
Google claims its AI is becoming better at recognizing breaking news and misinformation
Google says it’s using AI and machine learning techniques to more quickly detect breaking news around crises like natural disasters. In a related development, Google recently launched an update using BERT-based language understanding models to improve the matching between news stories and available fact checks.
Volansi raises $50 million for high-speed autonomous delivery drones
Volansi (formerly Volans-i), which provides vertical takeoff and landing drone delivery services for commercial and defense customers, announced a $50 million round.
New standards for AI clinical trials will help spot snake oil and hype
The guidelines ensure that medical AI research is subject to the same scrutiny as drug development and diagnostic tests.
Mobile + Edge
Nvidia’s $40 billion Arm acquisition is about bringing AI down from the cloud
Nvidia’s $40 billion acquisition of Arm is a hugely significant deal for the tech world, with implications spanning many areas of the sector that will take years to unfold.
Apple unveils A14 Bionic processor with 40% faster CPU and 11.8 billion transistors
Apple unveiled its new A14 Bionic processor with the aim of pushing ahead of other smartphone and tablet vendors on computing power and artificial intelligence processing.
Google Meet hardware promises AI features starting at $2,700
Google unveiled Google Meet Series One, meeting room hardware that brings its AI-powered video calling features to businesses. The Series One compute system and sound bar both include the company’s Coral Accelerator Module with Google Edge TPUs to drive the audio and video.
Ambiq says new processors use one-tenth the power of rival chips
Ambiq Micro Inc. introduced the fourth generation of its Apollo processor line that can enable some wearable, tracking, and healthcare devices to run for months on a single charge.
Introduction to TFLite On-device Recommendation
In this introduction to TFLite on-device recommendations, you’ll learn how to build on-device models that provide personalized, low-latency recommendations.
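The tutorial itself uses TensorFlow Lite, but the core idea behind on-device recommendation can be shown framework-free. A minimal NumPy sketch (illustrative only; the embedding table, sizes, and scoring scheme here are assumptions, not the tutorial's code): embed the user's recent item history, then rank all candidates locally by dot-product similarity.

```python
import numpy as np

# Toy on-device recommender: a small item-embedding table ships with the
# app, and ranking happens locally with no server round-trip.
rng = np.random.default_rng(2)
item_emb = rng.normal(size=(100, 16))  # 100 items, 16-dim embeddings

def recommend(history_ids, top_k=5):
    context = item_emb[history_ids].mean(axis=0)  # average the history embeddings
    scores = item_emb @ context                   # score every candidate item
    scores[history_ids] = -np.inf                 # don't re-recommend seen items
    return np.argsort(scores)[::-1][:top_k]      # highest-scoring items first

recs = recommend([3, 17, 42])
```

A production on-device model would replace the history average with a learned encoder, but the retrieval step — a local matrix-vector product over candidate embeddings — is the same low-latency idea.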
Transformers are Graph Neural Networks
In this post, the author establishes a link between Graph Neural Networks (GNNs) and Transformers.
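The central observation can be sketched in a few lines of NumPy (a simplified single-head version, not the post's code): self-attention is message passing on a fully connected graph, where the attention matrix acts as a soft, input-dependent adjacency matrix.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every token aggregates 'messages'
    from every other token -- message passing on a complete graph."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # soft adjacency matrix
    return A @ V                                  # neighbourhood aggregation

rng = np.random.default_rng(0)
n, d = 5, 8  # 5 tokens = 5 graph nodes
X = rng.normal(size=(n, d))
W = [rng.normal(size=(d, d)) for _ in range(3)]
out = self_attention(X, *W)
```

A GNN layer on a sparse graph does the same aggregation, only restricted to actual edges; a Transformer simply attends over all pairs.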
Training AI with CGI
This post shows how to train a computer vision model to identify components on a Raspberry Pi board using only synthetic training data.
Latent graph neural networks: Manifold learning 2.0?
Graph neural networks exploit relational inductive biases for data that come in the form of a graph. However, in many cases we do not have the graph readily available. Can graph deep learning still be applied in this case?
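One common baseline when no graph is given — a minimal sketch of the idea, not the post's method — is to infer a latent graph by connecting each data point to its k nearest neighbours in feature space, then run a GNN on that inferred structure.

```python
import numpy as np

def knn_graph(X, k):
    """Infer a latent graph: link each point to its k nearest
    neighbours in feature space."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]                  # k closest per node
    A = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nbrs.ravel()] = 1.0                           # directed adjacency
    return A

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))
A = knn_graph(X, k=3)
```

Latent-graph methods discussed in the post go further and learn the graph jointly with the task, but the kNN construction illustrates the setting: the adjacency is derived from the data rather than given.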
Libraries & Code
Fast, efficient, open-access datasets and evaluation metrics for Natural Language Processing and more in PyTorch, TensorFlow, NumPy and Pandas.
This repository contains the three environments introduced in “Physically Embedded Planning Problems: New Challenges for Reinforcement Learning”.
Code for the Proceedings of the National Academy of Sciences 2020 article, “Understanding the Role of Individual Units in a Deep Neural Network”
Papers & Publications
Understanding the Role of Individual Units in a Deep Neural Network
Abstract: Deep neural networks excel at finding hierarchical representations that solve complex tasks over large data sets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
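The paper's core measurement can be illustrated in miniature (a simplified sketch under assumed inputs, not the authors' implementation): threshold a unit's activation map and score its overlap with a concept's segmentation mask via intersection-over-union.

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, quantile=0.99):
    """Score how well a unit's high-activation region matches a
    concept mask, using intersection-over-union (IoU)."""
    thresh = np.quantile(activation, quantile)
    unit_mask = activation > thresh               # unit's top-activating pixels
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(3)
act = rng.random((64, 64))                        # one unit's activation map
mask = act > np.quantile(act, 0.99)               # perfectly aligned concept
score = unit_concept_iou(act, mask)               # IoU = 1.0 for perfect overlap
```

Network dissection repeats this comparison across many units, many images, and a broad dictionary of labeled concepts, assigning each unit the concept it matches best.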
COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest CT Images
Abstract: The coronavirus disease 2019 (COVID-19) pandemic continues to have a tremendous impact on patients and healthcare systems around the world. In the fight against this novel disease, there is a pressing need for rapid and effective screening tools to identify patients infected with COVID-19, and to this end CT imaging has been proposed as one of the key screening methods which may be used as a complement to RT-PCR testing, particularly in situations where patients undergo routine CT scans for non-COVID-19 related reasons, patients with worsening respiratory status or developing complications that require expedited care, and patients suspected to be COVID-19-positive but have negative RT-PCR test results. Motivated by this, in this study we introduce COVIDNet-CT, a deep convolutional neural network architecture that is tailored for detection of COVID-19 cases from chest CT images via a machine-driven design exploration approach. Additionally, we introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation comprising 104,009 images across 1,489 patient cases. Furthermore, in the interest of reliability and transparency, we leverage an explainability-driven performance validation strategy to investigate the decision-making behaviour of COVIDNet-CT, and in doing so ensure that COVIDNet-CT makes predictions based on relevant indicators in CT images. Both COVIDNet-CT and the COVIDx-CT dataset are available to the general public in an open-source and open access manner as part of the COVID-Net initiative. While COVIDNet-CT is not yet a production-ready screening solution, we hope that releasing the model and dataset will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.