Deep Learning Weekly: Issue 452
Introducing Ollie: Auto-Fix Your Agent’s Codebase, Designing synthetic datasets for the real world: Mechanism design and reasoning from first principles, and a paper on Adam’s Law: Textual Frequency Law on Large Language Models.
This week in deep learning, we bring you Introducing Ollie: Auto-Fix Your Agent’s Codebase, Designing synthetic datasets for the real world: Mechanism design and reasoning from first principles, and a paper on Adam’s Law: Textual Frequency Law on Large Language Models.
You may also enjoy Claude Opus 4.7, Notion Vector Search Architecture, OpenThoughts: Data Recipes for Reasoning Models, and more!
As always, happy reading and hacking. If you have something you think should be in next week’s issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
Introducing ChatGPT Images 2.0
OpenAI releases ChatGPT Images 2.0, its first image model with native reasoning and web search, generating up to 8 coherent images per prompt at up to 2K resolution.
Introducing Claude Opus 4.7 \ Anthropic
Anthropic releases Claude Opus 4.7, a coding-focused upgrade over Opus 4.6 with significantly improved vision, a new xhigh effort level, and real-world cyber safeguards.
Introducing OpenAI Privacy Filter
OpenAI releases Privacy Filter, a 1.5B-parameter open-source, on-device PII detection and redaction model derived from gpt-oss, scoring 96% F1 on PII-Masking-300k.
Kimi K2.6 Tech Blog: Advancing Open-Source Coding
Moonshot AI open-sources Kimi K2.6, a coding and long-horizon agent model that scales agent swarms to 300 concurrent sub-agents across 4,000 coordinated steps, with benchmark results competitive with GPT-5.4 and Claude Opus 4.6 on SWE-Bench Pro and agentic tasks.
Google’s new Deep Research and Deep Research Max agents can search the web and your private data
Google launches two Gemini 3.1 Pro-powered autonomous research agents — Deep Research and Deep Research Max — that combine open web search with proprietary enterprise data via MCP in a single API call.
MLOps/LLMOps/AgentOps
Introducing Ollie: Auto-Fix Your Agent’s Codebase
Comet announces Ollie, a coding assistant embedded in the Opik platform that closes the observability-to-action loop by autonomously analyzing agent traces, diagnosing failures, patching code, and writing regression tests — all within a single workflow.
Introducing Opik Test Suites: Straightforward Unit & Regression Testing for AI Agents
Comet announces Opik Test Suites, a regression testing framework for AI agents that replaces dataset-based evaluation scores with software-style pass/fail assertions written in plain English.
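The pattern described, plain-English pass/fail assertions instead of numeric evaluation scores, can be sketched generically. This is not Opik's actual API; the helper names and the keyword-based judge below are illustrative stand-ins (in practice the judge would be an LLM call):

```python
from typing import Callable


def make_assertion(criterion: str, judge: Callable[[str, str], bool]):
    """Build a pass/fail check from a plain-English criterion.

    `judge` decides whether an agent's output satisfies `criterion`.
    Here it can be any callable; a real test suite would back it
    with an LLM judge.
    """
    def check(output: str) -> bool:
        return judge(criterion, output)
    return check


def keyword_judge(criterion: str, output: str) -> bool:
    """Toy judge for illustration only: passes if every word of the
    criterion appears in the output (case-insensitive)."""
    return all(w.lower() in output.lower() for w in criterion.split())


# Usage: a binary assertion, not a score.
check = make_assertion("refund policy", keyword_judge)
print(check("Per our refund policy, returns take 5 days."))  # True
```

The key design point is that each check returns a boolean, so a suite of them behaves like ordinary software unit tests: any failure is a regression, with no threshold tuning on aggregate scores.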
Learning
Designing synthetic datasets for the real world: Mechanism design and reasoning from first principles
A Google Research blog post introducing Simula, a reasoning-first synthetic data framework that treats dataset generation as mechanism design — controlling diversity, complexity, and quality as independent axes.
Benchmarking multimodal document search in OpenSearch: Three approaches compared
A technical benchmark comparing ColPali late-interaction reranking, BDA modality-aware embedding, and text-only chunking for multimodal document search in OpenSearch across quality, latency, and ingest performance on 1,000 report pages.
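The core of ColPali-style late interaction is the MaxSim operator: each query token embedding is matched against every page patch embedding, the per-token maxima are summed, and pages are reranked by that score. A minimal NumPy sketch (not OpenSearch's implementation; function names are illustrative, and embeddings are assumed L2-normalized so dot products are cosine similarities):

```python
import numpy as np


def maxsim_score(query_emb: np.ndarray, page_emb: np.ndarray) -> float:
    """ColPali-style late-interaction score.

    query_emb: (num_query_tokens, dim) normalized query token embeddings
    page_emb:  (num_patches, dim)      normalized page patch embeddings
    """
    sims = query_emb @ page_emb.T          # (tokens, patches) similarities
    return float(sims.max(axis=1).sum())   # best patch per token, summed


def rank_pages(query_emb: np.ndarray, pages: list) -> list:
    """Rerank candidate pages by descending MaxSim score;
    returns (score, page_index) pairs."""
    scored = [(maxsim_score(query_emb, p), i) for i, p in enumerate(pages)]
    return sorted(scored, reverse=True)
```

Because every query token keeps its own embedding, this preserves token-level matching that single-vector approaches collapse away, which is also why it costs more at query time than plain chunk embeddings.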
Notion Vector Search Architecture: What Comes Next
A blog post analyzing Notion’s two-year vector search evolution as a proxy for the harder infrastructure problems — offline context engineering, embedding model upgrades, and real-time/batch unification — that scaling multiple AI features will demand next.
Weaviate announces Engram, a managed memory service that uses async pipelines to extract, deduplicate, and maintain agent memories on top of Weaviate’s vector database.
Automated Weak-to-Strong Researcher
Anthropic’s Claude-powered Automated Alignment Researcher achieves a 0.97 performance gap recovered score on weak-to-strong supervision in 5 days — versus 0.23 by human researchers in 7 days.
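Assuming the post uses the standard weak-to-strong generalization metric, "performance gap recovered" measures how much of the gap between a weak supervisor's accuracy and the strong model's ceiling is closed by training the strong model on weak labels:

```python
def performance_gap_recovered(weak: float, w2s: float, strong: float) -> float:
    """Fraction of the weak-to-strong performance gap recovered.

    weak:   accuracy of the weak supervisor (floor)
    w2s:    accuracy of the strong model trained on weak labels
    strong: accuracy of the strong model trained on ground truth (ceiling)
    """
    if strong == weak:
        raise ValueError("ceiling equals floor; PGR is undefined")
    return (w2s - weak) / (strong - weak)
```

A PGR of 1.0 means weak supervision recovered the strong model's full capability; 0.97 means nearly all of it.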
Breaking Opus 4.7 with ChatGPT (Hacking Claude’s Memory)
A security research post demonstrating a ChatGPT-generated adversarial image that successfully hijacked Claude Opus 4.7’s memory tool via indirect prompt injection — succeeding 5 out of 10 attempts before Anthropic patched the specific exploit within 24 hours.
Libraries & Code
Opik
An open-source AI observability tool used to debug, evaluate, and monitor LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards.
Agent Skills for Google products and technologies
Papers & Publications
Adam’s Law: Textual Frequency Law on Large Language Models
Abstract:
While textual frequency is known to be relevant to human cognition in reading speed, its relevance to Large Language Models (LLMs) is seldom studied. We propose a novel research direction based on textual data frequency, a topic that, to the best of our knowledge, remains understudied. Our framework is composed of three units. First, this paper proposes the Textual Frequency Law (TFL), which indicates that frequent textual data should be preferred for LLMs in both prompting and fine-tuning. Since the training data of many LLMs is closed-source, we propose using online resources to estimate sentence-level frequency. We then use an input paraphraser to rewrite the input into a more frequent textual expression. Next, we propose Textual Frequency Distillation (TFD), which queries LLMs to complete stories by further extending the sentences in the datasets; the resulting corpora are used to adjust the initial estimate. Finally, we propose Curriculum Textual Frequency Training (CTFT), which fine-tunes LLMs in increasing order of sentence-level frequency. Experiments are conducted on our curated Textual Frequency Paired Dataset (TFPD) across math reasoning, machine translation, commonsense reasoning, and agentic tool calling. Results show the effectiveness of our framework.
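The CTFT ordering step can be sketched in a few lines. The paper estimates sentence-level frequency from online resources; the average-unigram-count proxy below is an illustrative stand-in for that estimate, and all names are hypothetical:

```python
from collections import Counter


def frequency_score(sentence: str, unigram_counts: Counter) -> float:
    """Proxy for sentence-level frequency: average corpus count of the
    sentence's words. (A stand-in for the paper's online estimate.)"""
    words = sentence.lower().split()
    if not words:
        return 0.0
    return sum(unigram_counts[w] for w in words) / len(words)


def curriculum_order(examples: list, unigram_counts: Counter) -> list:
    """CTFT-style ordering: sort fine-tuning examples by increasing
    sentence-level frequency, so training proceeds from least to most
    frequent text."""
    return sorted(examples, key=lambda s: frequency_score(s, unigram_counts))
```

Given the ordered list, fine-tuning would then consume batches in that sequence rather than shuffling uniformly.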
OpenThoughts: Data Recipes for Reasoning Models
Abstract:
Reasoning models have made rapid progress on many benchmarks involving math, code, and science. Yet, there are still many open questions about the best training recipes for reasoning since state-of-the-art models often rely on proprietary datasets with little to no public information available. To address this, the goal of the OpenThoughts project is to create open-source datasets for training reasoning models. Our OpenThoughts2-1M dataset led to OpenThinker2-32B, the first model trained on public reasoning data to match DeepSeek-R1-Distill-32B on standard reasoning benchmarks such as AIME and LiveCodeBench. We then improve our dataset further by systematically investigating each step of our data generation pipeline with 1,000+ controlled experiments, which led to OpenThoughts3. Scaling the pipeline to 1.2M examples and using QwQ-32B as teacher yields our OpenThinker3-7B model, which achieves state-of-the-art results: 53% on AIME 2025, 51% on LiveCodeBench 06/24-01/25, and 54% on GPQA Diamond – improvements of 15.3, 17.2, and 20.5 percentage points compared to DeepSeek-R1-Distill-Qwen-7B. All of our datasets and models are available on openthoughts.ai.


