Part 2: Swarms That Don't Think

In part one, we asked what it means to call something an agent.

We contrasted modern AI agents — designed to achieve explicit goals — with biological agents like bees or ants, which follow simple rules without necessarily “understanding” their purpose.

This time, we look more closely at the swarm — a system where no individual has the full picture, and yet intelligent, adaptive behaviour emerges.


Flocking Without Memory

Consider a flock of European starlings (Sturnus vulgaris) sweeping across the sky in a murmuration. Each bird adjusts its position based on the distance and direction of a few nearby neighbours. There’s:

  • No central controller
  • No map of the sky
  • No long-term memory

And yet, the group responds to predators, changes direction fluidly, and avoids collisions.

This is coordination without cognition.
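
To make “simple local rules” concrete, here is a minimal boids-style sketch in Python. The Boid class, the radii, and the weighting constants below are our own illustrative choices, not a model of real starlings; the point is that each bird steers using only three local tendencies (cohesion, alignment, separation) computed from nearby neighbours.

    import random

    NEIGHBOUR_RADIUS = 5.0    # a bird only reacts to flockmates this close
    SEPARATION_RADIUS = 1.5   # minimum comfortable distance

    class Boid:
        def __init__(self):
            self.x, self.y = random.uniform(0, 50), random.uniform(0, 50)
            self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(flock):
        updates = []
        for b in flock:
            # Local perception only: no controller, no map, no memory.
            near = [o for o in flock if o is not b
                    and (o.x - b.x)**2 + (o.y - b.y)**2 < NEIGHBOUR_RADIUS**2]
            dvx = dvy = 0.0
            if near:
                n = len(near)
                # Cohesion: drift toward the neighbours' average position.
                dvx += 0.01 * (sum(o.x for o in near) / n - b.x)
                dvy += 0.01 * (sum(o.y for o in near) / n - b.y)
                # Alignment: nudge heading toward the neighbours' average velocity.
                dvx += 0.05 * (sum(o.vx for o in near) / n - b.vx)
                dvy += 0.05 * (sum(o.vy for o in near) / n - b.vy)
                # Separation: steer away from anyone uncomfortably close.
                for o in near:
                    if (o.x - b.x)**2 + (o.y - b.y)**2 < SEPARATION_RADIUS**2:
                        dvx += 0.1 * (b.x - o.x)
                        dvy += 0.1 * (b.y - o.y)
            updates.append((dvx, dvy))
        # Apply all updates at once, so every bird reacts to the same snapshot.
        for b, (dvx, dvy) in zip(flock, updates):
            b.vx += dvx
            b.vy += dvy
            b.x += b.vx
            b.y += b.vy

    flock = [Boid() for _ in range(100)]
    for _ in range(200):
        step(flock)

No bird stores a plan or a map of the sky; the flocking lives entirely in the interactions.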


Ants: Instinct + Chemistry = Logistics

Ants are a classic example of biomimetic inspiration. Individually, they’re almost algorithmic in behaviour. But collectively, they:

  • Forage efficiently by forming supply chains
  • Build complex nests
  • Respond to danger and defend territory
  • Divide labour dynamically

How? Through simple, hard-wired rules and stigmergy — a form of communication where the environment (e.g. pheromone trails) becomes the message. Ants lay down pheromones and others follow; well-used trails are reinforced while neglected ones evaporate and are abandoned. This is loosely analogous to Hebbian learning in neural networks, where frequently co-activated connections are strengthened. The result is group memory, distributed perception, and self-organisation.
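
Here is an equally minimal sketch of stigmergy, loosely based on the classic two-route “double bridge” experiment (the route lengths, ant count, and rates below are arbitrary assumptions of ours). Ants choose a route in proportion to its pheromone level; deposits reinforce the choice, and evaporation abandons whatever stops being used.

    import random

    # Two routes from nest to food; the shorter one pays off faster.
    LENGTHS = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}   # start with no preference
    EVAPORATION = 0.05                        # fraction lost per tick

    def choose_route():
        # The trail IS the memory: choice is proportional to pheromone strength.
        r = random.uniform(0, sum(pheromone.values()))
        return "short" if r < pheromone["short"] else "long"

    for tick in range(500):
        # Evaporation: neglected trails fade and are eventually abandoned.
        for route in pheromone:
            pheromone[route] *= 1 - EVAPORATION
        # Each ant reinforces its chosen route; a shorter route earns more
        # deposit per unit time, so it wins the positive-feedback race.
        for ant in range(20):
            route = choose_route()
            pheromone[route] += 1.0 / LENGTHS[route]

    print(pheromone)   # "short" dominates, though no ant ever compared routes

After a few hundred ticks the short route wins decisively, even though no individual ant ever compares the two options. The environment did the remembering.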

In building models of both, we found the difference between bird flocking and ant trails profound; that difference will be the subject of a future article.


Okay, So What Makes a “Swarm”?

  1. No central control
  2. Simple local rules
  3. Limited or no internal memory… it is encoded in the environment itself
  4. Emergent global behaviour
  5. Robustness through redundancy… it is adaptive

If one agent fails, the system continues. If the environment changes, the rules still apply. This is a radically different design philosophy from most AI systems today.
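
This is easy to test against the flocking sketch above. Continuing (hypothetically) from its final flock, we can cull half the birds mid-run with no restart, no handover, and no orchestrator to notify:

    # Redundancy in action: remove half the flock mid-simulation.
    for _ in range(100):
        step(flock)
    flock = flock[::2]    # every second bird vanishes
    for _ in range(100):
        step(flock)       # the survivors keep flocking under the same rules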


Why This Matters for AI

Modern AI agents — especially LLM-based ones — tend to:

  • Have internal state and memory
  • Use global models or vector stores
  • Operate within a central orchestrator
  • Fail if the architecture breaks or a tool is missing

This is very un-swarm-like. The term “swarm” is now often used in AI to describe multi-agent systems, but in most cases these agents are powerful LLMs acting under central coordination; the behaviour is orchestrated, not emergent.

We are calling it a swarm, but we are building a chorus of soloists, not a hive. We are still stuck in the paradigm of centralisation. Snap out of it!


Nature’s Design Pattern

  • Simplicity over optimisation
  • Rules over goals… profound.
  • Distributed memory over internal reasoning

Swarm intelligence doesn’t need thinking minds. It needs:

  • Robustness: headless, so no mission control (a concept worth exploring for distributed identity and attack-resistant systems).
  • Redundancy: sorry bee 17,362, you can be replaced.
  • Embodiment: strong emphasis on sensorimotor integration. The opposite of “disembodied”.
  • Closed feedback loops: the body shapes perception, perception drives behaviour, and behaviour changes the environment that feeds back to the body.

Coming Next: Evolution vs Instant Learning

In Part 3, we’ll look at the central tension running through this series: natural agents evolve over generations, while AI agents are trained in hours or minutes. This changes how each system deals with risk, failure, memory, and purpose.

Nature can afford slow progress because it plays a long game. AI races ahead — but at what cost?