Deep Learning Weekly: Issue #237
Alibaba's open-source Kernel Neural Architecture Search, an in-depth article on data distributions and monitoring, graph hypernetworks, a paper on nested hierarchical transformers, and more!
This week in deep learning, we bring you Alibaba's open-source Kernel Neural Architecture Search, an in-depth article on data distributions and monitoring, graph hypernetworks, and a paper on nested hierarchical transformers.
You may also enjoy Google's Kaggle Challenge for the Diagnosis of Prostate Cancer, a Kubeflow tutorial, practical quantization in PyTorch, a paper on frame interpolation for large motion, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
Alibaba's Open-Source Kernel Neural Architecture Search (KNAS)
Researchers at Alibaba Group and Peking University have conducted a study to investigate a green NAS solution that evaluates architectures without training. To this end, they propose and open-source Kernel Neural Architecture Search (KNAS).
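The "training-free" idea can be illustrated with a toy score: evaluate gradient statistics at a random initialization and use them as a proxy for trainability. Below is a heavily simplified NumPy sketch inspired by, but not equivalent to, the paper's gradient-kernel measure; the linear model and squared-error loss are stand-ins for an actual candidate architecture.

```python
import numpy as np

def gradient_kernel_score(X, y, seed=0):
    """Toy training-free architecture score: mean of the Gram matrix of
    per-example gradients at a random initialization (illustrative only,
    not the exact procedure from the KNAS paper)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])           # random, untrained parameters
    preds = X @ w
    grads = (preds - y)[:, None] * X          # per-example gradient of squared error
    gram = grads @ grads.T                    # pairwise gradient similarity
    return float(gram.mean())

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 16))
y = X @ rng.normal(size=16)                   # a target the model could in principle fit
score = gradient_kernel_score(X, y)
```

The score can be computed for many candidate architectures without running a single training step, which is what makes this family of NAS methods "green".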
An International Scientific Challenge for the Diagnosis and Gleason Grading of Prostate Cancer
To help accelerate and enable more research in this area, Google Health, Radboud University Medical Center and Karolinska Institute joined forces to organize a global competition, the Prostate cANcer graDe Assessment (PANDA) Challenge, on the open Kaggle platform.
Boost your model's accuracy using self-supervised learning with TensorFlow Similarity
TensorFlow Similarity now supports key self-supervised learning algorithms to help you boost your model’s accuracy when you don’t have a lot of labeled data.
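Many of these self-supervised methods rest on a contrastive objective: pull the embeddings of two augmented views of the same image together while pushing other images away. A minimal NumPy sketch of an NT-Xent-style loss follows — an illustration of the idea, not the TensorFlow Similarity API.

```python
import numpy as np

def nt_xent_pair(z_i, z_j, negatives, temperature=0.5):
    """Contrastive (NT-Xent-style) loss for one positive pair.

    z_i, z_j  : embeddings of two augmented views of the same image
    negatives : embeddings of other images in the batch
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    pos = np.exp(cos(z_i, z_j) / temperature)
    neg = sum(np.exp(cos(z_i, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.01 * rng.normal(size=8)        # near-identical augmented view
negatives = [rng.normal(size=8) for _ in range(6)]
loss_aligned = nt_xent_pair(anchor, positive, negatives)
loss_random = nt_xent_pair(anchor, rng.normal(size=8), negatives)
```

A well-aligned positive pair yields a lower loss than a random pair, which is the signal that lets the encoder learn without labels.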
MIT Research advances technology of AI assistance for anesthesiologists
A new deep learning algorithm trained to optimize doses of propofol to maintain unconsciousness during general anesthesia could augment patient monitoring.
Intel teams up with Benteler and Beep to develop self-driving shuttles
Intel Corp. announces that it’s teaming up with auto parts giant Benteler International AG and transportation startup Beep Inc. to deploy fully autonomous shuttles in the U.S.
MLOps
Convergence 2022: A New, Free-to-Attend Machine Learning Conference
In this new, one-day virtual event, attendees will discover emerging tools, approaches, and workflows that can help effectively manage ML projects from start to finish. Choose from business and technical tracks with presentations from experts in data science and machine learning.
Data Distribution Shifts and Monitoring
Chip Huyen’s in-depth article discussing the different types of distribution shifts and the monitoring tools and techniques that suit each case.
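One widely used monitoring statistic for detecting such shifts is the population stability index (PSI), which compares binned feature distributions between a reference window and production traffic. A hedged sketch follows; the thresholds in the comment are a common rule of thumb, not a universal standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample and a production sample.
    Common rule of thumb (varies by team): < 0.1 stable, 0.1-0.25 watch, > 0.25 drifted."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # floor the fractions to avoid log(0) in empty bins
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 10_000)       # training-time feature distribution
same_dist = rng.normal(0.0, 1.0, 10_000)       # production traffic, no shift
shifted = rng.normal(1.0, 1.0, 10_000)         # production traffic, mean shift
psi_same = population_stability_index(reference, same_dist)
psi_shift = population_stability_index(reference, shifted)
```

Running such a statistic per feature on a schedule is one of the simpler monitoring setups the article's taxonomy covers.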
Build your first ML pipeline in Kubeflow
An article covering Kubeflow both in theory and in practice, walking through the implementation of a pipeline from a Jupyter Notebook.
H2O.ai Adds Production Model Explanations and Enhanced Model Management
H2O.ai now provides data scientists and ML engineers with the ability to deploy model explanations in production and to use enhanced model management techniques.
Introducing MLServer 1.0: Modern and flexible model serving for machine learning at scale
Seldon announces the full release of MLServer 1.0, an open-source ML inference server for models leveraging the Scikit-Learn, XGBoost, MLlib, LightGBM, Seldon Tempo, and MLflow frameworks.
How to Write Test Code for a Data Science Pipeline
A technical blog highlighting a typical data science pipeline composed of small, dedicated functions, and showing how to write a pytest module for it.
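A pipeline built from small functions is straightforward to test. Below is a minimal pytest-style module for two hypothetical pipeline steps; `fill_missing` and `scale_minmax` are illustrative stand-ins for your own functions.

```python
# test_pipeline.py -- run with `pytest test_pipeline.py`.
# The two pipeline steps are hypothetical examples, not from the blog post.

def fill_missing(values, fill=0.0):
    """Pipeline step: replace None entries with a fill value."""
    return [fill if v is None else v for v in values]

def scale_minmax(values):
    """Pipeline step: scale values to [0, 1]; constant columns become all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def test_fill_missing_replaces_none():
    assert fill_missing([1.0, None, 3.0]) == [1.0, 0.0, 3.0]

def test_scale_minmax_bounds():
    scaled = scale_minmax([2.0, 4.0, 6.0])
    assert min(scaled) == 0.0 and max(scaled) == 1.0

def test_scale_minmax_constant_column():
    assert scale_minmax([5.0, 5.0]) == [0.0, 0.0]
```

Because each step is a pure function, every edge case (missing values, constant columns) gets its own small, readable test.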
Learning
Researchers Build AI That Builds AI
By using hypernetworks, researchers can now preemptively fine-tune artificial neural networks, saving some of the time and expense of training.
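The underlying idea is simple: one network emits the parameters of another, so candidate networks can be evaluated without training each one from scratch. A toy NumPy sketch of the concept follows — an illustration only, not the graph hypernetwork from the research.

```python
import numpy as np

rng = np.random.default_rng(0)

def hypernetwork(task_embedding, W_h):
    """Map an embedding describing a task/architecture to the weights
    of a tiny target network (here, a single 3-in, 4-out layer)."""
    flat = np.tanh(W_h @ task_embedding)      # generated parameters
    return flat.reshape(4, 3)

def target_forward(x, W_target):
    """Run the target network with the generated (not trained) weights."""
    return np.maximum(0.0, W_target @ x)      # ReLU layer

W_h = rng.normal(size=(12, 5))                # the hypernetwork's own trainable weights
task = rng.normal(size=5)                     # embedding of one candidate task/architecture
W_target = hypernetwork(task, W_h)
y = target_forward(rng.normal(size=3), W_target)
```

Only `W_h` would be trained; swapping in a new task embedding yields new target weights in a single forward pass, which is the source of the time savings.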
Practical Quantization in PyTorch
In this blog post, we’ll lay a foundation of quantization in deep learning, and then look at how each technique works in practice.
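At the heart of every scheme the post covers is the same affine mapping: a float tensor is approximated as `scale * (q - zero_point)`, with `q` stored as an 8-bit integer. A self-contained NumPy sketch of that arithmetic follows (an illustration of the math, not PyTorch's internal implementation):

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Asymmetric (affine) quantization: x is approximated as scale * (q - zero_point)."""
    qmin, qmax = 0, 2**num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from the 8-bit representation."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.linspace(-1.0, 2.0, 100).astype(np.float32)
q, scale, zp = quantize_affine(x)
x_hat = dequantize(q, scale, zp)
max_err = float(np.abs(x - x_hat).max())     # bounded by roughly scale / 2
```

The round trip loses at most about half a quantization step per element, which is why int8 inference can preserve accuracy while quartering memory traffic relative to float32.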
Fine-Tune ViT for Image Classification with HuggingFace Transformers
In this blog post, we'll walk through how to leverage Hugging Face datasets to download and process image classification datasets, and then use them to fine-tune a pre-trained ViT with Hugging Face transformers.
Improving Inference Speeds of Transformer Models
In this blog, we will look at various techniques like Mixed Precision Training, Patience Based Early Exit (PABEE), and Knowledge Distillation in order to build faster Deep Learning models.
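Of these, knowledge distillation is the easiest to sketch: the student is trained to match the teacher's temperature-softened output distribution. A NumPy illustration of a Hinton-style soft-target loss follows; the logits and temperature below are made up for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with temperature scaling."""
    z = logits / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from student to teacher on softened distributions,
    scaled by T^2 as is conventional for gradient magnitude."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s)))) * temperature**2

teacher = np.array([4.0, 1.0, 0.5])
aligned = np.array([3.8, 1.1, 0.4])   # student close to the teacher
off = np.array([0.5, 4.0, 1.0])       # student that disagrees
```

Minimizing this loss (usually mixed with the ordinary cross-entropy on hard labels) lets a smaller, faster student absorb the teacher's "dark knowledge" about relative class similarities.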
How AI is Changing Chemical Discovery
An article explaining how current deep learning techniques are driving a shift in the chemical discovery process, including molecular design and materials discovery.
Libraries & Code
google/evojax: EvoJAX: Hardware-Accelerated Neuroevolution
EvoJAX is a scalable, general purpose, hardware-accelerated neuroevolution toolkit. Built on top of the JAX library, this toolkit enables neuroevolution algorithms to work with neural networks running in parallel across multiple TPU/GPUs.
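For readers new to neuroevolution, the core loop is easy to state: sample a population of parameter perturbations, score them, and keep the best. A tiny (1+λ)-style hill-climbing sketch in NumPy follows — a conceptual illustration, not the EvoJAX API or one of its bundled algorithms.

```python
import numpy as np

def hill_climb_es(fitness, dim, pop_size=32, sigma=0.1, generations=100, seed=0):
    """(1+lambda)-style evolution strategy: each generation, sample pop_size
    Gaussian perturbations of the current best and keep any that improve fitness."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    best = fitness(theta)
    for _ in range(generations):
        candidates = theta + sigma * rng.normal(size=(pop_size, dim))
        for cand in candidates:
            f = fitness(cand)
            if f > best:
                theta, best = cand, f
    return theta

# Toy objective: recover a hidden parameter vector without any gradients.
target = np.array([1.0, -2.0, 0.5])
fitness = lambda w: -float(np.sum((w - target) ** 2))
solution = hill_climb_es(fitness, dim=3)
```

In EvoJAX the analogous population evaluation is what gets vectorized with JAX and fanned out across TPU/GPU devices, so each generation's candidates are scored in parallel.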
openai/glide-text2im: GLIDE: a diffusion-based text-conditional image synthesis model
The official codebase for running the small, filtered-data GLIDE model from the paper. This includes detailed usage examples of text2im, inpaint, and clip_guided in notebook format.
csinva/imodels: Interpretable machine learning models
A Python package for concise, transparent, and accurate predictive modeling. All sklearn-compatible and easy to use.
Papers & Publications
FILM: Frame Interpolation for Large Motion
Abstract:
We present a frame interpolation algorithm that synthesizes multiple intermediate frames from two input images with large in-between motion. Recent methods use multiple networks to estimate optical flow or depth and a separate network dedicated to frame synthesis. This is often complex and requires scarce optical flow or depth ground-truth. In this work, we present a single unified network, distinguished by a multi-scale feature extractor that shares weights at all scales, and is trainable from frames alone. To synthesize crisp and pleasing frames, we propose to optimize our network with the Gram matrix loss that measures the correlation difference between feature maps. Our approach outperforms state-of-the-art methods on the Xiph large motion benchmark. We also achieve higher scores on Vimeo-90K, Middlebury and UCF101, when comparing to methods that use perceptual losses. We study the effect of weight sharing and of training with datasets of increasing motion range. Finally, we demonstrate our model's effectiveness in synthesizing high quality and temporally coherent videos on a challenging near-duplicate photos dataset.
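The Gram matrix loss mentioned in the abstract compares channel-wise feature correlations rather than raw feature values, so it is insensitive to where a texture appears spatially. A NumPy sketch of such a loss follows; the dimensions and normalization are illustrative, and the paper's exact formulation may differ.

```python
import numpy as np

def gram_matrix(feats):
    """feats: (channels, height, width) feature map.
    Returns the (channels, channels) Gram matrix of channel correlations,
    normalized by the number of spatial positions."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (h * w)

def gram_loss(feats_a, feats_b):
    """Squared L2 distance between the Gram matrices of two feature maps."""
    return float(np.sum((gram_matrix(feats_a) - gram_matrix(feats_b)) ** 2))

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16, 16))
other = rng.normal(size=(8, 16, 16))
```

Because the Gram matrix sums over all spatial positions, shuffling a feature map spatially leaves the loss at (numerically) zero, which is why this style of loss favors crisp, texture-consistent frames over pixel-exact ones.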
Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding

Abstract:
Hierarchical structures are popular in recent vision transformers, however, they require sophisticated designs and massive datasets to work well. In this paper, we explore the idea of nesting basic local transformers on non-overlapping image blocks and aggregating them in a hierarchical way. We find that the block aggregation function plays a critical role in enabling cross-block non-local information communication. This observation leads us to design a simplified architecture that requires minor code changes upon the original vision transformer. The benefits of the proposed judiciously-selected design are threefold: (1) NesT converges faster and requires much less training data to achieve good generalization on both ImageNet and small datasets like CIFAR; (2) when extending our key ideas to image generation, NesT leads to a strong decoder that is 8× faster than previous transformer-based generators; and (3) we show that decoupling the feature learning and abstraction processes via this nested hierarchy in our design enables constructing a novel method (named GradCAT) for visually interpreting the learned model.