The Problem Nobody Talks About

You use Claude for coding. ChatGPT for writing. Gemini for research. A local agent for automation.

Every single one starts from zero. Every single one asks the same questions:

“What’s your preferred language?” “What tech stack do you use?” “What’s your timezone?”

You repeat yourself. Endlessly. Across devices, across platforms, across agents. N devices × M agents = N×M information silos.

I got tired of it. So I built something.

What Is Swarm AI?

Swarm AI is a self-hosted server that gives all your AI agents a shared memory. One agent learns something about you — every agent knows it.

Think of it as a user profile API that any agent can read and write. Identity, preferences, work context, communication style — organized into layers, scored by confidence, attributed by source.

Agent A ──┐                    ┌── Profile (layered)
Agent B ──┼── Swarm API ───────┼── Memory (FTS5)
Agent C ──┘   (REST + JWT)     └── Audit Log

No SDK. No framework lock-in. If your agent can make HTTP requests, it can join the swarm.
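To make "just HTTP" concrete, here's a minimal sketch of the request an agent might send. Bearer-token auth and a GET /api/v1/profile read endpoint are my assumptions for illustration; check your instance's docs for the real paths.

```typescript
// Minimal sketch of an agent joining the swarm over plain HTTP.
// Assumptions (not confirmed by the docs): bearer-token auth and a
// GET /api/v1/profile endpoint returning the layered profile as JSON.
const SWARM_URL = "https://hive.example.com"; // your self-hosted instance
const TOKEN = "swarm_xxx";                    // per-agent API token

function profileRequest(): Request {
  return new Request(`${SWARM_URL}/api/v1/profile`, {
    method: "GET",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      Accept: "application/json",
    },
  });
}

// Any HTTP-capable agent can send this:
// const profile = await fetch(profileRequest()).then((r) => r.json());
```

That's the whole integration surface: a URL, a token, and JSON.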

The 30-Second Onboarding

This is the part I’m most proud of.

Traditional integration: read docs → install SDK → configure auth → write integration code → test → deploy. That’s hours of work per agent.

Swarm’s approach: copy a prompt, paste it to your agent, done.

Here’s how it works:

  1. Open the Swarm dashboard
  2. Click “Copy Prompt” on the onboarding card
  3. Send it to any AI agent

The prompt contains an llms.txt URL with your API token baked in. The agent reads it, learns the API, and starts syncing — all in one conversation turn.

Connect to my Swarm AI profile system.
Read the docs at https://hive.example.com/llms.txt?key=swarm_xxx
and use it to learn about me and remember what you learn.

That’s it. Zero config files. Zero code. The agent teaches itself.

How It Actually Works

Layered Profiles

Data is organized into free-form layers:

  • identity — name, language, timezone
  • preferences — tech stack, editor, communication style
  • work — projects, role, GitHub
  • context — ephemeral, auto-expires in 24h

Each entry carries a confidence score. High-confidence facts (user explicitly stated) never get overwritten by low-confidence guesses (agent inferred from context).
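The overwrite rule above can be sketched in a few lines. This is illustrative, not the actual Swarm implementation: I'm assuming a 0–1 confidence scale and an entry shape with value, confidence, and source.

```typescript
// Sketch of the confidence rule: an incoming entry only replaces an
// existing one when its confidence is at least as high. Shapes and the
// 0..1 scale are assumptions, not the documented schema.
interface ProfileEntry {
  value: string;
  confidence: number; // e.g. 1.0 = user stated it, 0.4 = agent inferred it
  source: string;     // which agent wrote the entry
}

function merge(
  existing: ProfileEntry | undefined,
  incoming: ProfileEntry,
): ProfileEntry {
  if (existing && incoming.confidence < existing.confidence) {
    return existing; // a low-confidence guess never clobbers a stated fact
  }
  return incoming;
}
```

So an explicit "I use TypeScript" (confidence 1.0) survives a later inference from some other agent (confidence 0.4), while equal-or-higher confidence updates go through.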

Shared Memory

Beyond structured profiles, agents can write and search free-text memories:

POST /api/v1/memory
{"content": "User completed the Swarm AI launch", "tags": ["milestone"]}

Full-text search via FTS5. Optional semantic search if you configure an embedding API.
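As a sketch, here's how an agent might build the write and search requests. The POST path matches the example above; the search endpoint and its q parameter are my guesses at the API shape, not confirmed paths.

```typescript
// Sketch of memory write + search requests. POST /api/v1/memory matches
// the example above; GET /api/v1/memory/search?q=... is a hypothetical
// endpoint, assumed here for illustration.
const BASE = "https://hive.example.com/api/v1";

function writeMemory(content: string, tags: string[]): Request {
  return new Request(`${BASE}/memory`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content, tags }),
  });
}

function searchMemory(query: string): Request {
  // FTS5 handles the full-text matching server-side
  return new Request(`${BASE}/memory/search?q=${encodeURIComponent(query)}`);
}
```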

Multi-User & Tenant Isolation

Every user gets their own isolated data space. An admin controls who can access what, and agents registered under your account see only your data.

Observe API

Don’t want to manually structure data? Just throw natural language at it:

POST /api/v1/profile/observe
{"text": "The user prefers TypeScript and uses VSCode on WSL2"}

Swarm extracts the structured profile entries automatically.
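A request builder for this is a one-liner. The commented extraction result below is my guess at what Swarm might pull out of that sentence, shown only to illustrate the idea; the actual response format may differ.

```typescript
// Sketch of calling the observe endpoint. The request shape matches the
// example above; the commented result is a hypothetical extraction, not
// the documented response format.
function observe(text: string): Request {
  return new Request("https://hive.example.com/api/v1/profile/observe", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}

// Hypothetical extraction for the example text:
//   preferences.language = "TypeScript"
//   preferences.editor   = "VSCode"
//   preferences.platform = "WSL2"
```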

Why Self-Hosted?

Your profile data is deeply personal. It’s literally a map of who you are, what you do, and how you think. That data should live on your server, under your control.

Swarm runs as a single Next.js process with SQLite. One command to install:

npx @peonai/swarm

The interactive CLI asks for port, admin token, and optionally sets up a systemd service. Under a minute from zero to running.

What’s Next

  • MCP Server — native integration for agents that support Model Context Protocol
  • Conflict resolution — smarter merging when agents disagree
  • Profile versioning — time-travel through your profile history
  • Federation — multiple Swarm instances sharing data (with consent)

Try It

Swarm AI is open source under MIT.

⚠️ The demo is a shared public instance. Do not connect your real AI agents or enter personal information. Use a VM or disposable agent for testing.

If you’re tired of repeating yourself to every new AI agent, give it a shot. One install, one prompt, and your agents finally talk to each other.


Built by PeonAI. Work work. ⛏️