digest
Two threads feel especially worth watching today. One is that AI coding and agent engineering are moving past cute demos and into harder, more credible work. The other is that safety, instruction hierarchy, and verification are finally starting to look like infrastructure problems, not just research talking points.
Coding After Coders: AI-assisted programming is splitting developers into two camps (Source: Simon Willison)
Clive Thompson’s piece captures a real split in software right now: one camp sees AI as a force multiplier, while the other still treats hand-written code as a core part of the craft. Simon argues that programmers are relatively lucky because code can still be tested against reality, which makes AI more usable in software than in fields like law or consulting, where verification is much fuzzier. The more unsettling question is not whether AI can write code; it is whether companies will quietly make AI-first development the default, leaving less room for dissent.

My take: I mostly agree with Simon here. Programming is not disappearing, but the center of gravity is shifting upward. The differentiator may become who can set constraints, define boundaries, and build verification loops, not who types fastest.
13 Mar 2026
This edition covers news from Mar 8–10.
A few threads stood out today. OpenAI is moving deeper into the AI safety toolchain. Anthropic published one of the more useful pieces I’ve seen lately on how benchmark scores get distorted by infrastructure. And Simon Willison wrote the kind of database post that makes engineers want to try it immediately.
OpenAI is acquiring Promptfoo and pulling AI security closer to the core product stack (Source: OpenAI News)
Link: https://openai.com/index/openai-to-acquire-promptfoo
10 Mar 2026
Covering Feb 25 – Mar 1: OpenAI signs DoW contract, Claude memory import is just a prompt, Anthropic introspection research, Google Nano Banana 2, and more.
02 Mar 2026
This edition covers news from Feb 27–28.
🏛️ AI & Government: Trump Administration Bans Anthropic from Government Systems, Pentagon Designates Supply Chain Risk (Source: NPR)
Arguably the biggest AI story of the week. President Trump signed an executive order banning US government use of Anthropic’s products, while the Pentagon simultaneously designated Anthropic a “supply chain risk entity”, a label historically reserved for US adversaries and never before publicly applied to an American company.
28 Feb 2026
Anthropic publicly defies the Department of War over safety guardrails; Google launches Nano Banana 2 image model; Perplexity ships 19-model AI Computer; Simon Willison exposes Google API key security shift.
27 Feb 2026