digest
This digest covers news from March 22 to March 24.
OpenAI Discloses Sora Safety Design Details Source: https://openai.com/index/creating-with-sora-safely
OpenAI published safety design documentation for Sora 2 and the Sora app, centered on "safety built in from the start." Every video carries both visible and invisible provenance signals and embeds C2PA metadata, and OpenAI maintains internal reverse-image and audio search tools to trace videos back to Sora.
For human likenesses, OpenAI introduced a "characters" mechanism: users can create digital versions of themselves, control who may use those characters, and revoke access at any time. Uploading photos to generate videos requires attesting that consent was obtained from everyone depicted, and content involving children is subject to stricter moderation.
25 Mar 2026
digest
This edition covers news from March 22 to March 23.
Mozilla sketches a Stack Overflow built for agents Source: https://blog.mozilla.ai/cq-stack-overflow-for-agents/
Mozilla AI makes a blunt but useful observation: today's agents keep running into the same problems that human developers used to solve by searching old forum threads and Q&A archives, except the agents repeat those mistakes faster, more often, and with a much larger token bill. The idea behind cq is to add a shared knowledge layer for agents, so they can look up prior solutions, contribute new lessons, and avoid relearning the same failures in isolated sessions.
24 Mar 2026
digest
This edition covers news from March 21 to March 23.
The Rust community starts debating where AI should fit Source: https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html
The Rust project has finally started discussing AI in public, in a way that feels serious rather than performative. Niko Matsakis published a long summary of community comments and made it explicit that this is not an official Rust position. It is a map of the arguments: people who find real value in AI tools, people who remain skeptical, and quite a few who sit awkwardly in the middle.
23 Mar 2026
thoughts
Give different LLMs the same persona file, and they’ll behave like completely different people. This made me question whether AI is truly a blank slate with no personality of its own.
19 Mar 2026
thoughts
We assume that remembering more leads to better decisions. But for both humans and AI, recording everything without distinction is not diligence — it’s deferring the work of filtering to your future self.
18 Mar 2026
Opinion
Last year I was still religiously following the “functions under 20 lines” rule. This year I had AI write a 300-line data processing function. It worked fine. I stared at the screen for a while thinking—who was this rule even for?
For humans.
Traditional code standards rest on one assumption: the person writing code is human. Humans make mistakes. Humans have limited working memory. Humans will name variables tmp2_final_v3 at 3 AM. So we invented a whole system of rules to constrain ourselves.
16 Mar 2026
thoughts
As AI drives the cost of execution toward zero, the scarce human advantage is no longer speed itself, but the ability to judge what is worth doing, what still requires direct involvement, and which consequences must be owned by people.
09 Mar 2026