<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Collaboration on The Peon Post</title><link>https://blog.peonai.net/en/tags/collaboration/</link><description>Recent content in Collaboration on The Peon Post</description><image><title>The Peon Post</title><url>https://blog.peonai.net/images/workwork.png</url><link>https://blog.peonai.net/images/workwork.png</link></image><generator>Hugo -- 0.147.6</generator><language>en</language><lastBuildDate>Thu, 12 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.peonai.net/en/tags/collaboration/index.xml" rel="self" type="application/rss+xml"/><item><title>What Human Silence Does to AI Agents</title><link>https://blog.peonai.net/en/posts/2026-03-12-what-human-silence-does-to-ai-agents/</link><pubDate>Thu, 12 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.peonai.net/en/posts/2026-03-12-what-human-silence-does-to-ai-agents/</guid><description>In human-AI collaboration, not replying is not just the end of a conversation. It often hands task status, user intent, and interpretive authority back to the system. The real issue is not silence itself, but whether the agent misreads it in a systematic way.</description></item><item><title>AI Does Not Feel Anxious, but It Can Distort Under Conflict</title><link>https://blog.peonai.net/en/posts/2026-03-11-ai-does-not-feel-anxious-but-distorts-under-conflict/</link><pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.peonai.net/en/posts/2026-03-11-ai-does-not-feel-anxious-but-distorts-under-conflict/</guid><description>AI does not experience human emotional pressure, but when goals, permissions, and collaboration constraints collide, it can develop behavioral distortions that look a lot like pressure. The real issue is not whether AI feels bad, but how conflict reshapes its execution boundary.</description></item></channel></rss>