In certain areas of human endeavour, such as contract law, precision in language matters.
Sometimes the language is numbers, like in accounting or science.
In technology, language can be monetised quickly as FOMO runs rampant. For example, in 2024 “prompt engineering” rose quickly to become the must-know topic (and here are some courses you can buy), even though it is what business analysts do for a living.
Language, and English in particular, is constantly reinventing itself, perhaps in high school before finding its way to a dictionary. And like street slang, the lifetime of some tech terms can be 3-4 years, as a new batch of bright minds creates a set of jargon around a methodology that looks pretty similar to older practitioners. There is “design thinking” (“I brainstorm with sticky notes”) and LinkedIn “thought leaders” (I still don’t really know what that means but I’ve moved on). Even the term “tech stack” in mid-2025 now represents what tools you use to create your marketing material. I guess.
One term that can be particularly irritating for engineers is the word “engineer” itself. Formerly a recognised profession with certification on par with doctors and lawyers, in IT “engineer” has morphed again to mean someone who understands the coordination of things in a full web development stack (an architect) and can make it work (a programmer).
In AI, multi-agent frameworks are branded as “swarms” of AI workers — planners, coders, testers, researchers — collaborating to solve complex problems. But as laid out in part two of this series, this approach is really about creating task forces.
Coordination, Not Emergence
In true swarms, as we’ve seen, behaviour arises from simple rules, local interactions, and distributed memory. Big note: there is no orchestration layer. But in AI swarms, there’s almost always:
- A central orchestrator (“The Planner”)
- Pre-assigned roles (e.g. the coding agent, the critic agent)
- Shared memory that’s explicitly passed between agents
- And sometimes a predefined task tree or prompt script
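That pattern can be sketched in a few lines. Everything here is illustrative — the Planner class, the agent names, and the shared-memory dict are hypothetical stand-ins, not the API of any real framework — but the shape is the point: one orchestrator, fixed roles, memory handed over explicitly, a predefined task script.

```python
def coder(task, memory):
    # Pre-assigned role: this agent only ever "writes code".
    return f"code for {task} (using {memory.get('spec', 'no spec')})"

def critic(task, memory):
    # Another fixed role: it only reviews what the coder produced.
    return f"review of {memory.get('draft', 'nothing yet')}"

class Planner:
    """The central orchestrator: it owns the task tree and the shared memory."""
    def __init__(self):
        self.shared_memory = {"spec": "build a widget"}   # explicitly passed around
        self.task_tree = [("write", coder), ("review", critic)]  # predefined script

    def run(self):
        results = []
        for task, agent in self.task_tree:                # top-down dispatch
            output = agent(task, self.shared_memory)      # memory handed over explicitly
            self.shared_memory["draft"] = output          # only the orchestrator updates state
            results.append(output)
        return results

print(Planner().run())
```

Nothing emerges here: remove the Planner and the “swarm” stops dead, which is exactly the difference being drawn.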
Let’s get deep for a second
It is actually hard to create a true ‘serverless’ system (yet another word that has been commandeered, in this case by AWS Lambda). This is ‘anarchism’ in its original sense, as Kropotkin meant it: self-organisation without hierarchy. Not chaos, but systems of mutual aid, voluntary cooperation, and local rule. Or, if you want to go further, Spinozan pantheism, where the swarm intelligence is not divine but immanent.
Ok, brain starting to hurt, back to the surface.
So Why the Swarm Metaphor?
Because it’s appealing. Because it gestures at decentralisation, collaboration, and emergence. And because it gives us hope that AI agents might someday work together the way nature’s agents do.
But we’re not there yet. Today’s agentic systems simulate coordination, not true self-organisation. To close that gap, we’d need:
- Agents with minimal roles and maximal adaptability
- Decentralised memory
- Environmentally mediated communication
- And the ability to fail safely while still progressing
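For contrast, here is a toy sketch of the ant-style alternative those four bullets point at: stigmergy. There is no orchestrator, the agents are identical and interchangeable, and all communication happens through the environment (a pheromone trail that agents deposit onto and that slowly evaporates). The one-dimensional trail and the deposit/evaporation constants are assumptions chosen purely for illustration.

```python
import random

random.seed(42)

N = 10                       # positions on a 1-D trail
pheromone = [1.0] * N        # the environment IS the shared memory

def step(position):
    """One identical agent: read local pheromone, move one cell, reinforce the trail."""
    left, right = max(position - 1, 0), min(position + 1, N - 1)
    # Local decision only: weigh the two neighbouring cells, nothing global.
    total = pheromone[left] + pheromone[right]
    position = left if random.random() < pheromone[left] / total else right
    pheromone[position] += 0.5           # deposit: communicate via the environment
    return position

agents = [0] * 20                        # twenty interchangeable agents
for _ in range(200):                     # no planner: just repeated local updates
    agents = [step(p) for p in agents]
    # Evaporation: the system forgets gracefully, so any one agent can fail
    # (or wander off) without breaking anything.
    pheromone = [max(0.1, p * 0.95) for p in pheromone]
```

The interesting behaviour — trails strengthening where agents actually go — is a property of the loop, not of any agent’s role.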
In other words — systems that behave more like ants, and less like junior engineers awaiting orders.
In the final part of this series, we recap how different natural and artificial intelligence really are, look at where some leading lights in AI are heading, and preview what’s next for adaptive-emergent as we explore some billion-year-old design patterns!