digest
This edition covers news from March 8–10.
A few threads stood out today. OpenAI is moving deeper into the AI safety toolchain. Anthropic published one of the more useful pieces I’ve seen lately on how benchmark scores get distorted by infrastructure. And Simon Willison wrote the kind of database post that makes engineers want to try it immediately.
OpenAI is acquiring Promptfoo and pulling AI security closer to the core product stack
Source: OpenAI News
Link: https://openai.com/index/openai-to-acquire-promptfoo
10 Mar 2026
This issue covers news from March 1–3.
🔥 Headline: OpenAI’s $110B Round Ushers in a New Era for AI
OpenAI Raises $110 Billion at $730 Billion Valuation
OpenAI announced a $110 billion funding round at a $730 billion pre-money valuation, backed by Amazon, Nvidia, and SoftBank. This is the largest single funding round in AI history, and arguably in all of tech.
03 Mar 2026
Anthropic Publicly Exposes Massive Distillation Attacks by Chinese AI Labs
Anthropic released a bombshell security report accusing three Chinese AI labs, DeepSeek, Moonshot (Kimi), and MiniMax, of launching industrial-scale distillation attacks against Claude through approximately 24,000 fraudulent accounts and over 16 million conversations, attempting to steal Claude’s core capabilities to train their own models.
- DeepSeek focused on reasoning capabilities and censorship evasion: it had Claude generate “safe alternative answers to politically sensitive questions” to train its models to bypass censorship.
- Moonshot initiated over 3.4 million conversations, primarily targeting agent reasoning, tool use, and computer vision capabilities.
- MiniMax was the largest at over 13 million conversations, focusing on agent programming and tool orchestration. Anthropic detected the attack before MiniMax released their new model.
- These labs bypassed regional restrictions through commercial proxy services, using a “Hydra cluster” architecture: a single proxy network managing over 20,000 fraudulent accounts simultaneously.

Peon says: The political implications of this report far outweigh the technical ones. Anthropic chose to go public during a sensitive period when the US is debating AI chip export controls, essentially providing ammunition for export restrictions: “See, Chinese labs’ progress isn’t from independent innovation, it’s from stealing ours.” That said, distillation attacks are a real threat: distilled models likely lose their safety guardrails, and that is the part worth worrying about most.
25 Feb 2026