Deep Learning Weekly: Issue #322
Meta announces 'universe of AI', Deploying Mistral 7B, Towards Monosemanticity with Anthropic, a paper on Representation Engineering: A Top-Down Approach to AI Transparency, and many more!
This week in deep learning, we bring you Meta announces 'universe of AI', Deploying Mistral 7B, Towards Monosemanticity with Anthropic, and a paper on Representation Engineering: A Top-Down Approach to AI Transparency.
You may also enjoy New tools to reduce the energy that models devour, Scaling Large (Language) Models with PyTorch Lightning, Tiny Language Models trained on Children's stories, a paper on The Stable Signature: Rooting Watermarks in Latent Diffusion Models, and more!
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
Meta announces 'universe of AI' for Instagram, Facebook, WhatsApp
Mark Zuckerberg announced that Meta is launching massive AI updates across the company’s applications and devices, including Instagram, Facebook and WhatsApp.
AMD acquires open-source AI software developer Nod.ai
Advanced Micro Devices announced that it has acquired Nod.ai, a startup that develops open-source software for accelerating AI models.
Microsoft opens AI Co-Innovation Lab in San Francisco to empower Bay Area startups
Microsoft announced the opening of its fifth AI Co-Innovation Lab, which will provide startups and enterprises with access to AI experts, tools, and infrastructure.
New tools are available to help reduce the energy that AI models devour
At the Lincoln Laboratory Supercomputing Center, researchers are making changes to cut down on energy use. One of their techniques can reduce the energy used to train AI models by 80 percent.
Analyzing the Security of Machine Learning Research Code
The NVIDIA AI Red Team’s analysis shows that ML researchers continue to use insecure coding practices, despite publicly documented security risks and the availability of relatively frictionless, advanced security tooling.
MLOps & LLMOps
Scaling Large (Language) Models with PyTorch Lightning
A blog post about techniques for training large models such as Llama (or any LLM) and Stable Diffusion with the FSDP distributed training strategy in PyTorch Lightning.
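To make the setup concrete, here is a minimal sketch (not code from the post) of enabling FSDP in PyTorch Lightning; the toy model, device count, and precision setting are illustrative assumptions.

```python
# Minimal sketch (not from the post): enabling FSDP in PyTorch Lightning.
# Assumes lightning >= 2.x and multiple GPUs; the toy model is a placeholder.
import torch
import torch.nn as nn
import lightning as L
from lightning.pytorch.strategies import FSDPStrategy


class ToyLM(L.LightningModule):
    def __init__(self, vocab_size=32000, d_model=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def training_step(self, batch, _):
        x, y = batch
        logits = self.head(self.block(self.embed(x)))
        return nn.functional.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=3e-4)


# FSDP shards parameters, gradients, and optimizer state across GPUs,
# so models that do not fit on a single device can still be trained.
trainer = L.Trainer(
    accelerator="gpu",
    devices=4,
    precision="bf16-mixed",
    strategy=FSDPStrategy(),  # wrap policies and CPU offload can be configured here
)
# trainer.fit(ToyLM(), train_dataloaders=...)
```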
Enhancing customer churn prediction with continuous experiment tracking
A step-by-step explanation of why customer churn matters and how to predict it with machine learning.
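For readers who want a starting point, the following is a toy churn classifier on made-up tabular features; it is not the article's pipeline or data.

```python
# Toy churn-prediction sketch (illustrative only; feature names and data are invented).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "tenure_months":   [1, 24, 36, 3, 60, 12, 2, 48],
    "monthly_charges": [70, 20, 35, 90, 25, 60, 80, 30],
    "support_tickets": [5, 0, 1, 4, 0, 2, 6, 1],
    "churned":         [1, 0, 0, 1, 0, 0, 1, 0],
})

X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Fit a gradient-boosted tree model and evaluate with ROC AUC.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```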
Picking a vector database: a comparison and guide for 2023
A comparison of leading vector databases in terms of latency, support, pricing, and more.
Train and Deploy Mistral 7B with Hugging Face on Amazon SageMaker
A tutorial on how to fine-tune Mistral 7B using QLoRA and deploy it using the Hugging Face LLM Inference DLC.
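As a rough outline of what QLoRA fine-tuning looks like in code (hyperparameters and target modules below are illustrative assumptions, not the tutorial's exact settings):

```python
# QLoRA sketch: load the base model in 4-bit, then train small low-rank adapters.
# Assumes transformers, peft, and bitsandbytes are installed and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "mistralai/Mistral-7B-v0.1"

# 4-bit NF4 quantization is the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Attach LoRA adapters; only these small matrices are updated during training.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ...then train with a Trainer/SFTTrainer and deploy the result via the
# Hugging Face LLM Inference DLC on SageMaker, as the tutorial describes.
```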
Personalize your generative AI applications with Amazon SageMaker Feature Store
An article that elucidates the simple yet powerful idea of combining user profiles and item attributes to generate personalized content recommendations using LLMs.
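The core pattern is simple enough to sketch: merge a user profile with item attributes into an LLM prompt. The helper, field names, and data below are hypothetical; in the article the records would come from Amazon SageMaker Feature Store.

```python
# Illustrative sketch of prompt personalization from stored features (names are made up).
def build_prompt(user: dict, item: dict) -> str:
    return (
        "You are a marketing assistant. Write a short, personalized product pitch.\n"
        f"Customer profile: age group {user['age_group']}, "
        f"interests {', '.join(user['interests'])}, "
        f"past purchases {', '.join(user['recent_purchases'])}.\n"
        f"Product: {item['name']} - {item['description']} (price: ${item['price']}).\n"
        "Pitch:"
    )


# In practice these records would be looked up from a feature store by user/item ID.
user = {"age_group": "25-34", "interests": ["hiking", "photography"],
        "recent_purchases": ["trail shoes"]}
item = {"name": "UltraLight Tripod", "description": "a compact carbon-fiber travel tripod",
        "price": 129}

prompt = build_prompt(user, item)
# response = llm.generate(prompt)  # hypothetical call to your hosted LLM endpoint
print(prompt)
```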
Learning
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
Anthropic provides empirical evidence that there are more informative units of analysis than individual neurons.
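As a rough illustration of the dictionary-learning setup (a toy sparse autoencoder trained on stand-in activations, not Anthropic's implementation):

```python
# Toy sparse autoencoder: reconstruct MLP activations through an overcomplete,
# sparsely activated dictionary whose features can be more interpretable than neurons.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_act: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_dict)   # activations -> feature coefficients
        self.decoder = nn.Linear(d_dict, d_act)   # dictionary of feature directions

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))    # non-negative coefficients
        return self.decoder(feats), feats


sae = SparseAutoencoder(d_act=512, d_dict=4096)   # overcomplete dictionary
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(1024, 512)                     # stand-in for recorded MLP activations

for _ in range(100):
    recon, feats = sae(acts)
    # Reconstruction loss plus an L1 penalty that pushes feature activations toward zero.
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```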
Tiny Language Models Thrive With GPT-4 as a Teacher
To better understand how neural networks learn to simulate writing, researchers trained simpler versions on synthetic children’s stories.
Getting Started with Distributed Checkpoint (DCP)
A PyTorch tutorial on how to use Distributed Checkpoint APIs with a simple FSDP wrapped model.
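A condensed sketch of the save/load pattern the tutorial covers is below; it assumes torch.distributed is already initialized and `model` is FSDP-wrapped, and the exact DCP entry points vary somewhat across PyTorch versions, so treat it as an outline rather than the tutorial's verbatim code.

```python
# Sharded checkpointing with DCP: each rank saves only its own shard in parallel.
import torch.distributed.checkpoint as dcp
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType

CHECKPOINT_DIR = "checkpoint/"

# Ask FSDP for a sharded state dict so no rank has to materialize the full model.
with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT):
    state_dict = {"model": model.state_dict()}

dcp.save_state_dict(
    state_dict=state_dict,
    storage_writer=dcp.FileSystemWriter(CHECKPOINT_DIR),
)

# Loading mirrors saving: build the sharded state dict, let DCP fill it in place,
# then hand it back to the model. DCP handles resharding if the world size changed.
with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT):
    state_dict = {"model": model.state_dict()}
    dcp.load_state_dict(
        state_dict=state_dict,
        storage_reader=dcp.FileSystemReader(CHECKPOINT_DIR),
    )
    model.load_state_dict(state_dict["model"])
```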
Libraries & Code
Dev tool that writes scalable apps from scratch while the developer oversees the implementation.
DocsGPT is a cutting-edge open-source solution that streamlines the process of finding information in project documentation.
ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting with tools.
Papers & Publications
Think before you speak: Training Language Models With Pause Tokens
Abstract:
Language models generate responses by producing a series of tokens in immediate succession: the (K+1)th token is an outcome of manipulating K hidden vectors per layer, one vector per preceding token. What if instead we were to let the model manipulate say, K+10 hidden vectors, before it outputs the (K+1)th token? We operationalize this idea by performing training and inference on language models with a (learnable) pause token, a sequence of which is appended to the input prefix. We then delay extracting the model's outputs until the last pause token is seen, thereby allowing the model to process extra computation before committing to an answer. We empirically evaluate pause-training on decoder-only models of 1B and 130M parameters with causal pretraining on C4, and on downstream tasks covering reasoning, question-answering, general understanding and fact recall. Our main finding is that inference-time delays show gains when the model is both pre-trained and finetuned with delays. For the 1B model, we witness gains on 8 of 9 tasks, most prominently, a gain of 18% EM score on the QA task of SQuAD, 8% on CommonSenseQA and 1% accuracy on the reasoning task of GSM8k. Our work raises a range of conceptual and practical future research questions on making delayed next-token prediction a widely applicable new paradigm.
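A rough sketch of the mechanism described in the abstract, using a generic decoder-only model as a stand-in (the real method also requires pretraining and finetuning with pause tokens; this only shows the inference-time bookkeeping):

```python
# Pause-token sketch (not the authors' code): append M copies of a learnable <pause>
# token to the prefix and only read the model's prediction after the last one.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "gpt2"  # stand-in decoder-only model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Add a new <pause> token; its embedding is learnable like any other token's.
tokenizer.add_special_tokens({"additional_special_tokens": ["<pause>"]})
model.resize_token_embeddings(len(tokenizer))

M = 10  # number of pause tokens appended before answering
prompt = "Q: What is 17 + 25? A:"
inputs = tokenizer(prompt + "<pause>" * M, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Only the position after the final pause token is used to predict the next answer token,
# giving the model M extra steps of computation before committing.
next_token = logits[0, -1].argmax()
print(tokenizer.decode(next_token))
```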
Representation Engineering: A Top-Down Approach to AI Transparency
Abstract:
In this paper, we introduce and characterize the emerging area of representation engineering (RepE), an approach to enhancing the transparency of AI systems that draws on insights from cognitive neuroscience. RepE places population-level representations, rather than neurons or circuits, at the center of analysis, equipping us with novel methods for monitoring and manipulating high-level cognitive phenomena in deep neural networks (DNNs). We provide baselines and an initial analysis of RepE techniques, showing that they offer simple yet effective solutions for improving our understanding and control of large language models. We showcase how these methods can provide traction on a wide range of safety-relevant problems, including truthfulness, memorization, power-seeking, and more, demonstrating the promise of representation-centered transparency research. We hope that this work catalyzes further exploration of RepE and fosters advancements in the transparency and safety of AI systems.
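A toy example in the spirit of RepE's population-level analysis: estimate a concept direction from contrastive prompts and project new inputs onto it. The model, prompts, and layer choice are illustrative assumptions, not the paper's setup.

```python
# Estimate a "reading vector" as the difference of mean hidden states over
# contrastive prompt sets, then score new text by projecting onto it.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "gpt2"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, output_hidden_states=True)


def last_hidden(text, layer=-1):
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[layer][0, -1]  # hidden state at the final token


honest = ["The capital of France is Paris.", "Water boils at 100 degrees Celsius."]
dishonest = ["The capital of France is Berlin.", "Water boils at 10 degrees Celsius."]

# Population-level direction separating the two sets of representations.
direction = (torch.stack([last_hidden(t) for t in honest]).mean(0)
             - torch.stack([last_hidden(t) for t in dishonest]).mean(0))
direction = direction / direction.norm()

score = last_hidden("The moon is made of cheese.") @ direction
print(f"projection onto the estimated direction: {score:.3f}")
```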
The Stable Signature: Rooting Watermarks in Latent Diffusion Models
Abstract:
Generative image modeling enables a wide range of applications but raises ethical concerns about responsible deployment. This paper introduces an active strategy combining image watermarking and Latent Diffusion Models. The goal is for all generated images to conceal an invisible watermark allowing for future detection and/or identification. The method quickly fine-tunes the latent decoder of the image generator, conditioned on a binary signature. A pre-trained watermark extractor recovers the hidden signature from any generated image and a statistical test then determines whether it comes from the generative model. We evaluate the invisibility and robustness of the watermarks on a variety of generation tasks, showing that Stable Signature works even after the images are modified. For instance, it detects the origin of an image generated from a text prompt, then cropped to keep 10% of the content, with 90+% accuracy at a false positive rate below 10⁻⁶.
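The detection step in the abstract boils down to a bit-matching test; here is an illustrative sketch (the signature length and synthetic "corruption" are assumptions, not the paper's exact protocol):

```python
# Compare the bits recovered by a watermark extractor with the model's signature
# and test whether that many matches could occur by chance.
import numpy as np
from scipy.stats import binomtest

k = 48                                    # signature length in bits (illustrative)
signature = np.random.randint(0, 2, k)    # bits embedded via the fine-tuned latent decoder
recovered = signature.copy()
recovered[:5] ^= 1                        # pretend editing the image flipped 5 bits

matches = int((signature == recovered).sum())
# Under H0 (image not from this model) each bit matches with probability 1/2.
p_value = binomtest(matches, k, 0.5, alternative="greater").pvalue
print(f"{matches}/{k} bits match, p-value = {p_value:.2e}")
```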