Deep Learning Weekly: Issue #239
Meta AI's groundbreaking self-supervised computer vision model, Kustomer using Docker and Amazon SageMaker for a text classification pipeline, Yann LeCun's autonomous intelligence proposal, and more.
Hey Folks,
This week in deep learning, we bring you Meta AI's groundbreaking self-supervised computer vision model, Kustomer using Docker and Amazon SageMaker for a text classification pipeline, Yann LeCun's autonomous intelligence proposal, and a paper on visual attention networks.
You may also enjoy deep learning for dinosaur fossils, Netflix from an Amazon MLE's perspective, the latest StyleGAN3 notebook, a paper on a graph-based deep learning framework for drug relation prediction, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
SEER 10B: Better, fairer computer vision through self-supervised learning on diverse datasets
Meta announces new advances in SEER (SElf-supERvised), a groundbreaking self-supervised computer vision model that can learn directly from any random collection of images on the internet.
Constrained Reweighting for Training Deep Neural Nets with Noisy Labels
Google AI proposes a novel and principled method, named Constrained Instance reWeighting (CIW), with the goal of reducing the effect of potentially noisy examples.
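The post derives the example weights from a constrained optimization; as a rough illustration of the general idea only (not Google's implementation), a loss-based reweighting step might look like the sketch below, where the softmax-over-negative-losses form and the temperature are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, labels, temperature=1.0):
    # Toy instance reweighting: downweight likely-noisy (high-loss) examples.
    # Not Google's CIW implementation -- CIW derives its weights from a
    # divergence-constrained optimization; softmax over negative losses is
    # just one simple closed form that captures the flavor.
    per_example = F.cross_entropy(logits, labels, reduction="none")
    weights = F.softmax(-per_example.detach() / temperature, dim=0)
    return (weights * per_example).sum()

# Usage inside a training step:
# loss = reweighted_loss(model(x), y, temperature=0.5)
# loss.backward()
```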
Deep Learning Study Could Spark New Dinosaur Discoveries
The study, published in Frontiers in Earth Science, uses high-resolution Computed Tomography (CT) imaging combined with deep learning models to scan and evaluate dinosaur fossils.
RISC-V AI Chips Will Be Everywhere
The adoption of RISC-V, a free and open-source computer instruction set architecture first introduced in 2010, is taking off like a rocket. And much of the fuel is coming from demand for AI and machine learning.
Using artificial intelligence to find anomalies hiding in massive datasets
A new machine learning technique could pinpoint potential power grid failures or cascading traffic bottlenecks in real time.
MLOps
ML Troubleshooting Is Too Hard Today (But It Doesn't Have to Be That Way)
An article exploring performance monitoring through the lens of a machine learning engineer working on a fraud detection model.
An Amazon MLE’s comprehensive post discussing Netflix’s high-level design, API design, and component design.
Considerations for Deploying Machine Learning Models in Production
An article discussing some common considerations, common pitfalls, and ML model serving patterns that are an essential part of your journey from model development to deployment in production.
USAA and Google Cloud transform insurance operations
Google’s delivery team explores the approach, architecture design, and underlying AI methodologies that helped USAA streamline their operations.
A post talking about how Kustomer uses custom Docker images for SageMaker training and inference. With this approach, Kustomer’s business customers are automatically classifying over 50k support emails each month with up to 70% accuracy.
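The post covers the pipeline in detail; as a minimal sketch of the pattern (the image URI, IAM role, bucket, and hyperparameters below are placeholders, not values from the post), training and deploying a custom image with the SageMaker Python SDK looks roughly like this:

```python
from sagemaker.estimator import Estimator

# All identifiers below are placeholders, not values from Kustomer's setup.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/text-classifier:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    hyperparameters={"epochs": 10, "max-seq-len": 256},
)

# Train on data staged in S3; SageMaker mounts the channel into the container.
estimator.fit({"train": "s3://example-bucket/emails/train"})

# Serve the same image behind a real-time inference endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```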
The Role of Containers in MLOps
An overview of how and when to use containers to help ML model deployment. Also includes a brief overview of Docker and Kubernetes.
Learning
Yann LeCun on a vision to make AI systems learn and reason like animals and humans
Meta AI is sharing some of LeCun’s ideas in brief, including his proposal for a modular, configurable architecture for autonomous intelligence, as well as key challenges the AI research community must address to build such a system.
What Is Meta-Learning via Learned Losses (with Python Code)
An in-depth and technical introduction to Meta AI’s meta-learning via learned losses approach for optimizing models, with accompanying Python code.
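The core trick is differentiating through an inner update taken on a learned loss. A toy end-to-end sketch of that structure (a one-parameter regression model, a made-up loss network, and made-up step sizes, not the article's code):

```python
import torch
import torch.nn as nn

# Toy sketch of meta-learning via learned losses: loss_net learns to emit a
# training loss such that one inner SGD step on it reduces the true task loss.
loss_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(loss_net.parameters(), lr=1e-3)

w = torch.zeros(1, requires_grad=True)   # one-parameter "model": pred = w * x
x = torch.randn(64, 1)
y = 3.0 * x                              # ground-truth task: optimal w is 3

for step in range(200):
    # Inner step: update w using the *learned* loss (kept on the graph).
    pred = w * x
    learned_loss = loss_net(torch.cat([pred, y], dim=-1)).mean()
    (g,) = torch.autograd.grad(learned_loss, w, create_graph=True)
    w_new = w - 0.1 * g                  # differentiable SGD step

    # Outer step: the true task loss after the inner update trains loss_net.
    task_loss = ((w_new * x - y) ** 2).mean()
    meta_opt.zero_grad()
    task_loss.backward()
    meta_opt.step()

    w = w_new.detach().requires_grad_()  # carry the updated model forward
```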
Predictive Maintenance with TinyAutomator and Edge Impulse
A blog post on how a predictive maintenance solution was created using a Raspberry Pi, TinyAutomator, and Edge Impulse.
How to use Torchbearer for fitting ML models with PyTorch
An article discussing Torchbearer, a model fitting library for PyTorch.
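Torchbearer wraps the training loop in a Trial object. A minimal sketch following the library's quickstart (the model, data, and metric names here are made up; check the current docs, as the API may have changed):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchbearer import Trial

# A made-up model and dataset, just to exercise the Trial API.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
X, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=32)

# Trial bundles model, optimizer, loss, and metrics into one fit loop.
trial = Trial(model, optimizer, nn.CrossEntropyLoss(), metrics=["acc", "loss"])
trial.with_generators(train_generator=loader)
trial.run(epochs=5)  # runs training, metric aggregation, and callbacks
```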
Libraries & Code
TorchRec
TorchRec is a PyTorch domain library built to provide common sparsity and parallelism primitives needed for large-scale recommender systems (RecSys).
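As a small taste of those primitives (table and feature names below are made up; verify signatures against the current TorchRec docs), pooling variable-length sparse ID features through an EmbeddingBagCollection looks roughly like this:

```python
import torch
from torchrec import EmbeddingBagCollection, EmbeddingBagConfig, KeyedJaggedTensor

# One embedding table for a made-up "movie_id" sparse feature.
ebc = EmbeddingBagCollection(
    tables=[
        EmbeddingBagConfig(
            name="t_movie",
            embedding_dim=16,
            num_embeddings=1000,
            feature_names=["movie_id"],
        ),
    ],
    device=torch.device("cpu"),
)

# Two examples: the first has movie IDs [7, 42], the second just [3].
features = KeyedJaggedTensor.from_lengths_sync(
    keys=["movie_id"],
    values=torch.tensor([7, 42, 3]),
    lengths=torch.tensor([2, 1]),
)

pooled = ebc(features).to_dict()      # feature name -> pooled embeddings
print(pooled["movie_id"].shape)       # torch.Size([2, 16])
```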
Third Time's the Charm? StyleGAN3 Inference Notebook
A comprehensive Colab notebook for StyleGAN3 inference.
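The notebook bundles model download, alignment, and editing utilities; bare-bones generation follows the official NVlabs README pattern (the .pkl filename is a placeholder, and the stylegan3 repo must be on PYTHONPATH for unpickling to work):

```python
import pickle
import torch

# Load a pretrained generator; pickle needs the stylegan3 code importable.
with open("stylegan3-t-ffhq-1024x1024.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()   # torch.nn.Module generator

z = torch.randn([1, G.z_dim]).cuda()      # random latent code
c = None                                  # class labels (unused for FFHQ)
img = G(z, c)                             # NCHW float32 in roughly [-1, 1]
img = (img.clamp(-1, 1) + 1) * 127.5      # rescale to [0, 255] for saving
```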
Papers & Publications
Visual Attention Network
Abstract:
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
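The abstract doesn't spell out the LKA construction; per the paper's published design, a large-kernel convolution is decomposed into a depthwise convolution, a depthwise dilated convolution, and a pointwise convolution, whose output gates the input element-wise. A minimal PyTorch sketch under that assumption (kernel sizes follow the paper's decomposition; verify against the official implementation):

```python
import torch
from torch import nn

class LKA(nn.Module):
    # Large Kernel Attention sketch: a big receptive field built from a
    # 5x5 depthwise conv, a 7x7 depthwise conv with dilation 3, and a 1x1
    # conv; the result multiplicatively gates the input (the "attention").
    def __init__(self, dim: int):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, dilation=3, groups=dim)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # spatial- and channel-adaptive, linear in pixels

x = torch.randn(1, 64, 56, 56)
print(LKA(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```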
DeepDrug: A general graph-based deep learning framework for drug relation prediction
Abstract:
Computational approaches for accurate predictions of drug-related interactions such as drug-drug interactions (DDIs) and drug-target interactions (DTIs) are in high demand among biochemical researchers due to their efficiency and cost-effectiveness. Despite the fact that many methods have been proposed and developed to predict DDIs and DTIs respectively, their success is still limited due to a lack of systematic evaluation of the intrinsic properties embedded in their structure. In this paper, we develop a deep learning framework, named DeepDrug, to overcome these shortcomings by using graph convolutional networks to learn the graphical representations of drugs and proteins such as molecular fingerprints and residual structures in order to boost the prediction accuracy. We benchmark our methods in binary-class DDIs, multi-class DDIs and binary-class DTIs classification tasks using several datasets. We then demonstrate that DeepDrug outperforms other state-of-the-art published methods both in terms of accuracy and robustness in predicting DDIs and DTIs with varying ratios of positive to negative training data. Ultimately, we visualize the structural features learned by DeepDrug, which display compatible and accordant patterns in chemical properties, providing additional evidence to support the strong predictive power of DeepDrug. We believe that DeepDrug is an efficient tool in accurate prediction of DDIs and DTIs and provides a promising path in understanding the underlying mechanism of these biochemical relations.
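The framework pairs graph encoders with a relation classifier; as a generic illustration of that shape (not DeepDrug's actual architecture; built here with PyTorch Geometric's GCNConv and made-up dimensions):

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphEncoder(nn.Module):
    # Toy GCN encoder: nodes are atoms/residues, edges are bonds/contacts.
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return global_mean_pool(h, batch)   # one embedding per graph

class PairClassifier(nn.Module):
    # Score a (drug, drug) or (drug, target) pair from the two embeddings.
    def __init__(self, hidden=64, n_classes=2):
        super().__init__()
        self.encoder = GraphEncoder(hidden=hidden)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, g1, g2):
        z1 = self.encoder(g1.x, g1.edge_index, g1.batch)
        z2 = self.encoder(g2.x, g2.edge_index, g2.batch)
        return self.head(torch.cat([z1, z2], dim=-1))
```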
Practical and Private (Deep) Learning without Sampling or Shuffling
Abstract:
We consider training models with differential privacy (DP) using mini-batch gradients. The existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements on exact sampling and shuffling can be hard to obtain in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) to amplified DP-SGD, while allowing for much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification.
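In place of amplification, DP-FTRL privatizes the running gradient sum itself. A deliberately simplified sketch of the update rule (fresh noise per round instead of the paper's tree aggregation, so it illustrates the mechanics, not the paper's privacy accounting; the clip, noise scale, and regularizer values are made up):

```python
import torch

def dp_ftrl_toy(grads, clip=1.0, noise_std=0.1, lam=10.0):
    # FTRL-style update: the iterate is the regularized minimizer of a
    # noisy prefix sum of clipped gradients -- no sampling or shuffling.
    d = grads[0].numel()
    grad_sum = torch.zeros(d)
    for g in grads:
        g = g.flatten()
        g = g / max(1.0, float(g.norm()) / clip)   # per-round gradient clipping
        grad_sum += g
        noisy_sum = grad_sum + noise_std * torch.randn(d)
        # w_{t+1} = argmin_w <noisy_sum, w> + (lam / 2) * ||w||^2
        yield (-noisy_sum / lam).view_as(grads[0])
```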