Qwen3-Coder-Next, Inside OpenAI’s in-house data agent, a paper on Golden Goose: A Simple Trick to Synthesize Unlimited RLVR Tasks from Unverifiable Internet Text, and many more!
Deep learning itself is built on associative networks - layers of neurons making novel associations between data. The concepts you share about hybrid models reflect how intelligent systems learn through making new connections. Wrote about the cognitive version: https://substack.com/@diegobonifacino/note/p-187407942
Solid roundup this week! The Qwen3-Coder-Next numbers are pretty impressive, especially the throughput gains for agentic workflows. I've been working with some smaller hybrid models recently and it's cool to see the direction this is heading. The Project Genie thing from Google sounds kinda neat too, but I'm curious how much compute it actually takes to run those interactive worlds. Anyway, thanks for putting these together every week.
CodeOCR is an interesting compression idea.