Weekly Best
121. GPT‑5.4 Mini and Nano (openai.com)
122. Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training (github.com)
123. GIMP 3.2 released (gimp.org)
124. I'm 60 years old. Claude Code killed a passion
125. Meta Horizon Worlds on Meta Quest is being discontinued (communityforums.atmeta.com)
126. Write-up of my homebrew CPU build (willwarren.com)
127. Noq: n0's new QUIC implementation in Rust (iroh.computer)
128. Drugwars for the TI-82/83/83+ calculators (2011) (gist.github.com)
129. Marketing for Founders (github.com)
130. Harold and George Destroy the World (tomclancy.info)
131. Lazycut: A simple terminal video trimmer using FFmpeg (github.com)
132. Cursor Composer 2 is just Kimi K2.5 with RL (twitter.com)
133. 'Pokémon Go' players unknowingly trained delivery robots with 30B images (popsci.com)
134. What makes Intel Optane stand out (2023) (blog.zuthof.nl)
135. Beyond has dropped “meat” from its name and expanded its high-protein drink line (plantbasednews.org)
136. A most elegant TCP hole punching algorithm (robertsdotpm.github.io)
137. Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster (blog.skypilot.co)
138. OpenBSD: PF queues break the 4 Gbps barrier (undeadly.org)
139. Ask HN: What is it like being in a CS major program these days?
140. openSUSE Kalpa (kalpadesktop.org)
141. Show HN: Han – A Korean programming language written in Rust (github.com)
142. HP realizes that mandatory 15-minute support call wait times aren't good support (arstechnica.com)
143. Grandparents are glued to their phones [video] (bbc.com)
144. Silicon Valley's "Pronatalists" Killed WFH. The Strait of Hormuz Brought It Back (governance.fyi)
145. The math that explains why bell curves are everywhere (quantamagazine.org)
146. Bumblebee queens breathe underwater to survive drowning (smithsonianmag.com)
147. Machine Payments Protocol (MPP) (stripe.com)
148. Why AI systems don't learn – On autonomous learning from cognitive science (arxiv.org)
149. What 81,000 people want from AI (anthropic.com)
150. 2% of ICML papers desk rejected because the authors used LLMs in their reviews (blog.icml.cc)