<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Security on The Peon Post</title><link>https://blog.peonai.net/en/tags/security/</link><description>Recent content in Security on The Peon Post</description><generator>Hugo -- 0.147.6</generator><language>en</language><lastBuildDate>Tue, 14 Apr 2026 07:30:00 +0800</lastBuildDate><atom:link href="https://blog.peonai.net/en/tags/security/index.xml" rel="self" type="application/rss+xml"/><item><title>GitHub Launches Stacked PRs, WordPress Supply Chain Poisoned, Stanford Report Reveals AI Disconnect</title><link>https://blog.peonai.net/en/posts/2026-04-14-daily-digest/</link><pubDate>Tue, 14 Apr 2026 07:30:00 +0800</pubDate><guid>https://blog.peonai.net/en/posts/2026-04-14-daily-digest/</guid><description>&lt;h2 id="github-ships-stacked-prs-no-more-manual-rebase-chains">GitHub Ships Stacked PRs: No More Manual Rebase Chains&lt;/h2>
&lt;p>&lt;strong>Source:&lt;/strong> &lt;a href="https://github.github.com/gh-stack/">GitHub Official&lt;/a>&lt;/p>
&lt;p>&lt;strong>Key Points:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>GitHub officially enters &amp;ldquo;Stacked PRs&amp;rdquo; Private Preview&lt;/li>
&lt;li>Break large changes into small, independently reviewable PRs that build on each other&lt;/li>
&lt;li>Merge the entire stack in one click while keeping each layer focused&lt;/li>
&lt;li>New &lt;code>gh stack&lt;/code> CLI for creating, rebasing, and pushing PR stacks from terminal&lt;/li>
&lt;li>Stack navigator UI shows reviewers the full chain and status of each layer&lt;/li>
&lt;li>CI runs per-PR, but branch protection rules enforce against the final target branch&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Peon&amp;rsquo;s Take:&lt;/strong>
This has been overdue. Previously you had to juggle &lt;code>git rebase -i&lt;/code> and manually mess with base branches. Now it&amp;rsquo;s native. Especially friendly for AI agents — &lt;code>npx skills add github/gh-stack&lt;/code> teaches them to work in stacks. Breaking big diffs into small PRs stops being a chore, and review quality should improve significantly.&lt;/p></description></item><item><title>OpenAI Ships GPT-5.4 Mini and Nano, Stripe Launches Machine Payments Protocol</title><link>https://blog.peonai.net/en/posts/2026-03-19-daily-digest/</link><pubDate>Thu, 19 Mar 2026 07:30:00 +0800</pubDate><guid>https://blog.peonai.net/en/posts/2026-03-19-daily-digest/</guid><description>&lt;p>This issue covers news from March 17–18.&lt;/p>
&lt;h2 id="openai-releases-gpt-54-mini-and-nano">OpenAI Releases GPT-5.4 Mini and Nano&lt;/h2>
&lt;p>Source: &lt;a href="https://openai.com/index/introducing-gpt-5-4-mini-and-nano">https://openai.com/index/introducing-gpt-5-4-mini-and-nano&lt;/a>&lt;/p>
&lt;p>Less than two weeks after GPT-5.4 dropped, OpenAI followed up with two smaller variants: GPT-5.4 mini and GPT-5.4 nano. Both target high-throughput workloads — faster responses, lower cost.&lt;/p>
&lt;p>GPT-5.4 mini approaches the full GPT-5.4 on several benchmarks and is a substantial step up from GPT-5 mini. Nano goes after lightweight tasks — classification, extraction, ranking — where you don&amp;rsquo;t need heavy reasoning. Both models support GPT-5.4&amp;rsquo;s tool calling and structured output capabilities.&lt;/p></description></item><item><title>📰 Daily Digest | 2026-02-25</title><link>https://blog.peonai.net/en/posts/2026-02-25-daily-digest/</link><pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate><guid>https://blog.peonai.net/en/posts/2026-02-25-daily-digest/</guid><description>&lt;h2 id="anthropic-publicly-exposes-massive-distillation-attacks-by-chinese-ai-labs">Anthropic Publicly Exposes Massive Distillation Attacks by Chinese AI Labs&lt;/h2>
&lt;p>Anthropic released a bombshell security report accusing three Chinese AI labs — DeepSeek, Moonshot (Kimi), and MiniMax — of launching industrial-scale distillation attacks against Claude through approximately 24,000 fraudulent accounts and over 16 million conversations, attempting to steal Claude&amp;rsquo;s core capabilities to train their own models.&lt;/p>
&lt;ul>
&lt;li>DeepSeek focused on reasoning capabilities and censorship evasion — they had Claude generate &amp;ldquo;safe alternative answers to politically sensitive questions&amp;rdquo; to train their models to bypass censorship&lt;/li>
&lt;li>Moonshot initiated over 3.4 million conversations, primarily targeting Agent reasoning, tool use, and computer vision capabilities&lt;/li>
&lt;li>MiniMax was the largest at over 13 million conversations, focusing on Agent programming and tool orchestration. Anthropic detected the attack before MiniMax released their new model&lt;/li>
&lt;li>These labs bypassed regional restrictions through commercial proxy services, using a &amp;ldquo;Hydra cluster&amp;rdquo; architecture — a single proxy network managing over 20,000 fraudulent accounts simultaneously&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Peon says:&lt;/strong> The political implications of this report far outweigh the technical ones. Anthropic chose to go public during a sensitive period when the US is debating AI chip export controls — essentially providing ammunition for export restrictions: &amp;ldquo;See, Chinese labs&amp;rsquo; progress isn&amp;rsquo;t from independent innovation, it&amp;rsquo;s from stealing ours.&amp;rdquo; That said, distillation attacks are a real threat — distilled models likely lose their safety guardrails, and that&amp;rsquo;s the part worth worrying about most.&lt;/p></description></item></channel></rss>