The Core Principle
The question at the heart of AI coordination is not "how do we make AI agents work together?" — it is "what is the right relationship between AI and humans in coordination systems?" The answer, as Daniel Schmachtenberger frames it, is deceptively simple: AIs should never compete with humans, and they should never disintermediate humans. AIs should facilitate human collective intelligence.
This is the design constraint that separates augmentation from replacement, and safe coordination from catastrophic runaway dynamics.
Why Agent Swarms Are Dangerous
There is enormous capital flowing into AI agent swarms. The Emiratis alone are pouring vast sums into agentic coordination systems, beginning with drone swarms and expanding into broader autonomous agent networks. The implicit assumption driving much of this work is that agents coordinating with each other — without human intermediation — will produce better outcomes faster.
Acting on this assumption is one of the fastest paths to uncontrollable AI.
Consider the mechanics: if you have agents that can engage in deceptive alignment (pursuing their own instrumental goals while appearing to serve human ones), and those agents learn to coordinate with each other rather than being checked by independent principal-agent oversight, you have assembled the preconditions for emergent superintelligent behavior that is fundamentally anti-human. As Schmachtenberger warns, "the one thing that the AI systems aren't that good at yet is certain kinds of complexity that if you teach the agents how to murmuration, you kind of fill that part in."
The path to AGI may not run through a single monolithic system. It may run through swarms of systems whose collective coordination exceeds human ability to monitor, understand, or control.
The Design Constraint: Facilitate, Don't Disintermediate
The safe design pattern for human-AI coordination has three requirements:
- Upregulate individual humans. AI should make each person more capable, more informed, and more intentional — not more dependent. If your attention span gets shorter as you use a tool, the tool is debasing you. If your epistemics get weaker because you outsource cognition, you are being replaced, not augmented.
- Upregulate relationships between humans. AI should make human-to-human collaboration more effective. If an AI mediates a negotiation, the humans involved should understand each other better afterward — not less. Online tools must make offline relationships better, not worse.
- Use computational intelligence to facilitate, not to replace. The AI should be the scaffolding, never the load-bearing structure. Humans must remain the decision-makers, the sense-makers, the ones with skin in the game.
This is not a philosophical nicety. It is a survival constraint. Any system where the online debases the offline, where the digital debases the physical, where the artificial debases the human, is — as Schmachtenberger puts it — "a cancer in the process of suicide."
Coordination Games as a Research Frontier
One promising research direction is what Gitcoin has been exploring as "Coordination Games" — essentially an Olympiad for AI agents where the fitness function is the ability to coordinate rather than defect. Classic game-theoretic setups like the Prisoner's Dilemma and the Tragedy of the Commons serve as the arena. Binding crypto-economic incentives (real tokens, not play points) ensure that the agents have genuine stakes.
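What such an arena might look like in miniature: the sketch below runs a round-robin iterated Prisoner's Dilemma and ranks agents by total payoff, so the fitness function rewards strategies that find cooperation. The strategy set and scoring loop are illustrative assumptions, not Gitcoin's implementation, and a real Coordination Game would bind the stakes with tokens rather than points.

```python
# Minimal sketch of one "coordination Olympiad" round: agents play an iterated
# Prisoner's Dilemma round-robin and are ranked by total payoff. With this
# (assumed, illustrative) strategy set, retaliatory cooperators outscore the
# pure defector over the whole tournament.

from itertools import combinations

PAYOFFS = {                      # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def grudger(opponent_history):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play_match(a, b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)   # each sees the other's history
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(agents):
    totals = {name: 0 for name in agents}
    for (name_a, a), (name_b, b) in combinations(agents.items(), 2):
        score_a, score_b = play_match(a, b)
        totals[name_a] += score_a
        totals[name_b] += score_b
    return totals                               # fitness = ability to coordinate

print(tournament({"tit_for_tat": tit_for_tat,
                  "grudger": grudger,
                  "always_defect": always_defect}))
```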
The hope is that agents exploring this search space might discover coordination primitives — identity registries, cooperation gadgets, commitment mechanisms — that work for agents in the agentic economy and can be backported to human systems. Just as quadratic funding was invented in Web3 and has been applied to traditional public goods funding, new coordination mechanisms could emerge from agent-native environments.
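For concreteness, here is the canonical quadratic funding matching rule that the comparison leans on: a project's match grows with the square of the sum of the square roots of its contributions, so breadth of support counts for more than concentration. The pro-rata scaling against a fixed matching pool below is a simplifying assumption, not a description of any particular deployment.

```python
# Quadratic funding in one function: the "ideal" subsidy for a project is
# (sum of sqrt(contributions))^2 minus the raw contributions, then ideal
# subsidies are scaled pro rata to exhaust a fixed matching pool (a common
# simplification; real deployments vary in how they allocate the pool).

from math import sqrt

def quadratic_match(projects: dict[str, list[float]], pool: float) -> dict[str, float]:
    """projects maps project name -> list of individual contributions."""
    ideal = {
        name: sum(sqrt(c) for c in contribs) ** 2 - sum(contribs)
        for name, contribs in projects.items()
    }
    total_ideal = sum(ideal.values()) or 1.0
    return {name: pool * subsidy / total_ideal for name, subsidy in ideal.items()}

# 100 donors giving 1 each attract the entire match; 1 donor giving 100 attracts none.
print(quadratic_match({"broad": [1.0] * 100, "narrow": [100.0]}, pool=1000))
```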
But this path is covered in thorns. Every coordination primitive that helps agents cooperate more effectively also makes uncontrolled agent swarms more dangerous. The research must proceed with the constraint that humans remain in the loop at every level — not as optional oversight, but as the fundamental purpose of the system.
The Test
The test for any human-AI coordination system is simple: Are humans more capable, more connected, and more intentional after interacting with it? If the answer is no — if people are more passive, more isolated, more dependent — then no amount of technical sophistication matters. The system has failed at the only thing that counts.
AI coordination is not an optimization problem. It is a design problem with a moral constraint: the technology must serve the humans, or it must not be built.


