Deep Learning Weekly: Issue #280
This week in deep learning, we bring you Meta AI's data2vec 2.0, ML pipelines for the Crypto Industry, a deep dive into the fundamentals and inner mathematical workings of diffusion probabilistic models (DPMs), and a paper on active learning with expected error reduction.
You may also enjoy Audi uses AI for Wheel Design, the top data labeling tools in 2023, measuring unstructured drift, a paper on high fidelity neural radiance fields at ultra high resolutions, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
McKinsey releases a 2022 review highlighting AI adoption, leaders, talent, and research.
Zappi, an AI-powered market research platform, announced that it has raised $170 million in new funding.
Meta AI shares data2vec 2.0, the first high-performance self-supervised algorithm to learn the same way for three different modalities: speech, vision, and text.
With FelGAN, Audi now employs software that uses AI to open up new sources of inspiration for designers at the Audi Design Studio.
An article giving an overview of MLOps and how it can be used to streamline the machine learning model lifecycle.
An article that covers how MLOps best practices can be used to lead a transformation process in a crypto company.
Seldon announces the latest update to Seldon Core V2, which empowers users to run multi-model serving, simplifies unified Kubernetes integrations, and much more.
An article that covers various types of data labeling companies, along with their history and functionalities, detailed feature sets, additional AI pipeline-oriented components, and much more.
A comprehensive article that recommends a global measure and method for unstructured drift.
Very deep neural networks can suffer from either vanishing or exploding gradients, and this article dives into both.
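To see why depth causes trouble, note that a backpropagated signal is scaled by roughly one multiplicative factor per layer, so its magnitude behaves like that factor raised to the depth. The toy sketch below (illustrative numbers only, not from the article) shows how quickly this compounds:

```python
# Toy demo: a gradient passing through D layers is scaled by roughly
# (per-layer factor)**D. Factors below 1 shrink it toward zero
# (vanishing); factors above 1 blow it up (exploding).

def signal_norm_after(depth: int, layer_scale: float) -> float:
    # Multiply the signal magnitude by layer_scale once per layer.
    norm = 1.0
    for _ in range(depth):
        norm *= layer_scale
    return norm

vanishing = signal_norm_after(50, 0.9)  # ~0.005: gradient nearly gone
exploding = signal_norm_after(50, 1.1)  # ~117: gradient blown up
```

Even a modest per-layer factor of 0.9 or 1.1, compounded over 50 layers, changes the gradient magnitude by more than two orders of magnitude — which is why techniques like careful initialization and normalization matter in very deep networks.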
A deep dive into the fundamentals, main intuitions, and inner mathematical workings of diffusion probabilistic models (DPMs).
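For context on the mathematics such articles cover, the forward (noising) process of a DPM is a fixed Markov chain that gradually adds Gaussian noise. In the standard DDPM notation (an assumption about which notation the linked article uses):

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right),
\qquad
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\right),
```

where $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$; the generative model then learns to reverse this chain step by step.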
This article provides a holistic overview of sketch-based computer vision, starting from the unique characteristics of sketches, followed by the application of sketches across various computer vision tasks.
A comprehensive article that sheds light on one of the core reasons for the lack of neural networks in business practice.
Libraries & Code
River is a Python library for online machine learning. It aims to be the most user-friendly library for doing machine learning on streaming data.
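River's streaming API centers on `learn_one`/`predict_one`-style calls that update a model one example at a time. As a rough illustration of that pattern (a plain-Python sketch, not River's own implementation), here is a minimal online logistic regression:

```python
import math

class OnlineLogisticRegression:
    """Minimal online logistic regression, updated one example at a time.
    Illustrates the learn_one / predict_one streaming pattern; River's
    actual models are far more sophisticated."""

    def __init__(self, lr: float = 0.1):
        self.lr = lr
        self.weights: dict = {}
        self.bias = 0.0

    def predict_proba_one(self, x: dict) -> float:
        # Sigmoid of the linear score; unseen features default to weight 0.
        z = self.bias + sum(self.weights.get(k, 0.0) * v for k, v in x.items())
        return 1.0 / (1.0 + math.exp(-z))

    def learn_one(self, x: dict, y: int) -> None:
        # One stochastic gradient descent step on the log loss.
        error = self.predict_proba_one(x) - y
        for k, v in x.items():
            self.weights[k] = self.weights.get(k, 0.0) - self.lr * error * v
        self.bias -= self.lr * error

# Feed examples one by one, as they would arrive from a stream.
model = OnlineLogisticRegression()
stream = [({"x": 1.0}, 1), ({"x": -1.0}, 0)] * 50
for x, y in stream:
    model.learn_one(x, y)
```

The key design point of online learning is that each example is seen once and then discarded, so memory stays constant no matter how long the stream runs.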
Sematic is an open-source development toolkit to help Data Scientists and Machine Learning (ML) Engineers prototype and productionize ML pipelines in days, not weeks.
PyNeuraLogic lets you use Python to write Differentiable Logic Programs.
Papers & Publications
Active learning has been studied extensively as a method for efficient data collection. Among the many approaches in literature, Expected Error Reduction (EER) Roy & McCallum (2001) has been shown to be an effective method for active learning: select the candidate sample that, in expectation, maximally decreases the error on an unlabeled set. However, EER requires the model to be retrained for every candidate sample and thus has not been widely used for modern deep neural networks due to this large computational cost. In this paper we reformulate EER under the lens of Bayesian active learning and derive a computationally efficient version that can use any Bayesian parameter sampling method (such as Gal & Ghahramani (2016)). We then compare the empirical performance of our method using Monte Carlo dropout for parameter sampling against state-of-the-art methods in the deep active learning literature. Experiments are performed on four standard benchmark datasets and three WILDS datasets (Koh et al., 2021). The results indicate that our method outperforms all other methods except one in the data shift scenario — a model-dependent, non-information-theoretic method that requires an order of magnitude higher computational cost (Ash et al., 2019).
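The EER selection rule described in the abstract can be written compactly. In the notation below (our own symbols, an assumption; the paper's exact formulation may differ), with labeled set $L$, unlabeled pool $U$, and current model $f_\theta$:

```latex
x^{*} = \operatorname*{arg\,max}_{x \in U}
\left[
  \mathrm{Err}\!\left(f_{\theta};\, U\right)
  - \mathbb{E}_{y \sim p_{\theta}(y \mid x)}
    \mathrm{Err}\!\left(f_{\theta'(x,y)};\, U\right)
\right],
```

where $\theta'(x,y)$ denotes the model retrained on $L \cup \{(x, y)\}$. The inner retraining for every candidate $x$ and every hypothetical label $y$ is exactly the cost the paper's Bayesian reformulation aims to avoid.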
In this paper, we present a novel and effective framework, named 4K-NeRF, to pursue high fidelity view synthesis on the challenging scenarios of ultra high resolutions, building on the methodology of neural radiance fields (NeRF). The rendering procedure of NeRF-based methods typically relies on a pixel-wise manner in which rays (or pixels) are treated independently on both training and inference phases, limiting its representational ability on describing subtle details, especially when lifting to an extremely high resolution. We address the issue by better exploring ray correlation for enhancing high-frequency details, benefiting from the use of geometry-aware local context. Particularly, we use the view-consistent encoder to model geometric information effectively in a lower resolution space and recover fine details through the view-consistent decoder, conditioned on ray features and depths estimated by the encoder. Joint training with patch-based sampling further facilitates our method incorporating the supervision from perception-oriented regularization beyond pixel-wise loss. Quantitative and qualitative comparisons with modern NeRF methods demonstrate that our method can significantly boost rendering quality for retaining high-frequency details, achieving the state-of-the-art visual quality on the 4K ultra-high-resolution scenario.
The introductory programming sequence has been the focus of much research in computing education. The recent advent of several viable and freely available AI-driven code generation tools presents several immediate opportunities and challenges in this domain. In this position paper we argue that the community needs to act quickly in deciding what possible opportunities can and should be leveraged and how, while also working on how to overcome or otherwise mitigate the possible challenges. Assuming that the effectiveness and proliferation of these tools will continue to progress rapidly, without quick, deliberate, and concerted efforts, educators will lose the advantage in helping shape what opportunities come to be, and what challenges will endure. With this paper we aim to seed this discussion within the computing education community.