Part 5: Collective Memory

This is the fifth of seven articles in a series comparing how Nature and AI approach agents.

Part four is here; the full series is here.

The title of this one may lead you to science fiction, since the “hive mind” is a common trope, from Star Trek’s Borg to The Expanse’s protomolecule. It’s good for keeping us unsettled, as alien or insect intelligence is just so… alien.

Where does memory live?

In human brains, memory is electrical and chemical, held in neurons, synapses, and the chemical triggers between them. In AI systems, memory lives in vector stores, weight matrices, cache layers, or scratchpads.

But in biological collectives — swarms, colonies, or flocks — memory often has no fixed address. It isn’t stored in the agents; it emerges between them. Huh?

There are also morphic fields, a term coined in the 1980s by Rupert Sheldrake, a Cambridge biochemist who veered into parapsychology. He popularised the idea in his book Dogs That Know When Their Owners Are Coming Home, definitely a relatable concept for people with pets.

In this post, we stick to more accepted science and explore how nature distributes memory across systems, and how modern AI is only beginning to catch up.

Memory Without Neurons

No single ant “remembers” where food is. Instead, they lay down pheromone trails as they move. When another ant encounters a stronger trail, it follows it, reinforcing the path.

This is memory, but not in anyone’s head. It’s externalised, left behind in the environment. When the food runs out, the trail fades. The system forgets because evaporation outpaces reinforcement. It’s memory as a process rather than as storage.
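A minimal sketch of that loop in Python. The grid size, deposit amount, and evaporation rate are illustrative assumptions, not values from ant biology:

```python
import random

GRID = 20
pheromone = [[0.0] * GRID for _ in range(GRID)]  # the "memory" lives here, in the world

def deposit(x, y, amount=1.0):
    """An ant marks the cell it just crossed."""
    pheromone[y][x] += amount

def evaporate(rate=0.05):
    """Called once per tick: unreinforced trails fade toward zero."""
    for row in pheromone:
        for x in range(GRID):
            row[x] *= (1.0 - rate)

def next_step(x, y):
    """Pick a neighbouring cell, weighted by pheromone strength."""
    neighbours = [(nx, ny)
                  for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                  if 0 <= nx < GRID and 0 <= ny < GRID]
    # Small floor so ants still explore cells with no trail at all.
    weights = [pheromone[ny][nx] + 0.01 for nx, ny in neighbours]
    return random.choices(neighbours, weights=weights)[0]
```

No ant holds a map. The memory is the grid itself, and once deposits stop, evaporation guarantees the forgetting.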

Bees use the famous waggle dance to communicate food locations to hive-mates. The angle and duration of the dance encode direction and distance relative to the sun.

But this information isn’t archived; it doesn’t persist in a ledger or a map. It lives for a few minutes, just long enough for other bees to head for the source it relayed.

Here, memory is performative. It exists only in transmission, like a whispered message, not a recorded one.
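As a sketch, the dance is just a transient encode/decode pair. The scaling constant below is an assumption for illustration; real bees calibrate distance-per-waggle-second to their own colony and landscape:

```python
from dataclasses import dataclass

METRES_PER_DANCE_SECOND = 750  # illustrative, not a biological constant

@dataclass
class WaggleDance:
    angle_from_sun_deg: float  # direction of food relative to the sun's azimuth
    duration_s: float          # length of the waggle run encodes distance

def encode(bearing_deg, sun_azimuth_deg, distance_m):
    """The dancer translates a flight path into a performance."""
    return WaggleDance(
        angle_from_sun_deg=(bearing_deg - sun_azimuth_deg) % 360,
        duration_s=distance_m / METRES_PER_DANCE_SECOND,
    )

def decode(dance, sun_azimuth_deg):
    """A watcher translates the performance back into a flight path."""
    bearing = (sun_azimuth_deg + dance.angle_from_sun_deg) % 360
    distance = dance.duration_s * METRES_PER_DANCE_SECOND
    return bearing, distance  # the watcher flies off; nothing is stored
```

Nothing here writes to disk, so to speak: the receiver leaves with a bearing and a distance, and the “record” is gone.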

This was a realisation we had at adaptive-emergent while coding “boids” for flocking behaviour and modelling how ants and bees communicate. We started as an excuse to showcase Holochain, a fully distributed peer-to-peer network where information is stored on local “chains” connected via a DHT (distributed hash table). But we came to realise that Nature doesn’t really care about the past from an individual’s point of view, beyond basic needs like where to eat, or where to avoid being eaten. It is actually quite a Buddhist view of “now”.

Contrast this with AI Memory

Modern AI agents tend to use:

  • Short-term context (e.g. GPT’s token window)
  • External memory tools (vector databases, scratch files, conversation history)
  • Fine-tuning to encode longer-term adaptations

In multi-agent systems, memory may be shared via logs, files, or centralised stores. But each agent typically relies on access to that store, not participation in it.
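To make the contrast concrete, here is roughly what lookup-style memory boils down to. This is a toy sketch: the store, the embed() stand-in, and all the names are ours, and a real system would use an embedding model and a proper vector database.

```python
import math

store = []  # list of (embedding, text) pairs: the agent's entire "external memory"

def embed(text):
    # Stand-in for a real embedding model: a crude character histogram.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def remember(text):
    """Write a memory into the central store."""
    store.append((embed(text), text))

def recall(query, k=1):
    """Look memories up by cosine similarity; no trace is left behind."""
    q = embed(query)
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm if norm else 0.0
    return sorted(store, key=lambda item: cosine(q, item[0]), reverse=True)[:k]
```

Everything lives in one place, and nothing decays, which sets up the failure mode below.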

If the memory system fails, the agents forget, completely and instantly. You may have run up against the “context window”: the number of tokens an LLM can reach back across in a conversation.

Microsoft’s Copilot almost had what we needed; after a few tries it just had to fetch and add one more column to a table, when it unexpectedly presented the first iteration of our design (and was proud we were “done”). After our prompt of “just take the second-last attempt and add this”, it said “this time for sure” and presented its first draft again. A few more rounds, and Copilot simply stopped talking. Refreshing the browser put us back to zero. The lesson in mid-2025: keep the conversation in your own text file, like a game’s “restore point”, just in case.

In nature, forgetting is gradual. Redundancy and decay are built-in. The system remembers just long enough, and no longer.

Memory as Ecology

What Nature teaches us is that memory can be:

  • Ephemeral (a signal that fades)
  • Spatial (a trail in the world)
  • Shared (carried by multiple agents, but owned by none)

Collective memory is where the system remembers even though no individual does. It’s a resilience mechanism: you don’t have to back up a beehive. Anyone in IT who handles backups and restores from a business continuity perspective knows what a big deal that is (I still think Time Machine is my Mac’s best feature).

Can AI Systems Share Memory Like This?

Most AI agents today don’t co-construct memory; they look it up. To move closer to biological models, we will need:

  • Stigmergic memory — leaving traces in the environment for others to find
  • Embodied memory — where the agent’s state reflects its path
  • Temporal forgetting — where memory decays unless reinforced

This is about imagining memory as a dynamic ecosystem, not a static database. Or perhaps “tuning in” to the memory in the environment, like dogs who know when you are coming home.
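As a thought experiment, here is a toy sketch of those three properties for software agents: a shared trace field that any agent can reinforce or sense, with half-life decay doing the forgetting. Every name here is ours, not an existing library:

```python
import time

class TraceField:
    """A shared environment agents write into, owned by none of them."""

    def __init__(self, half_life_s=60.0):
        self.half_life_s = half_life_s
        self._traces = {}  # key -> (strength, last_touched)

    def _decayed(self, strength, last_touched):
        # Exponential decay: strength halves every half_life_s seconds.
        elapsed = time.time() - last_touched
        return strength * 0.5 ** (elapsed / self.half_life_s)

    def reinforce(self, key, amount=1.0):
        """An agent leaves or strengthens a trace as it works."""
        current = self.sense(key)
        self._traces[key] = (current + amount, time.time())

    def sense(self, key):
        """Any agent can read the current strength; near zero means forgotten."""
        if key not in self._traces:
            return 0.0
        strength, last_touched = self._traces[key]
        return self._decayed(strength, last_touched)
```

Nothing ever deletes a trace explicitly; decay does the forgetting, just as evaporation does for the ants.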

Coming Next: Imitating Swarms

In the penultimate Part 6 of this series, we’ll revisit the swarm metaphor introduced in part two’s look at Nature, but from an AI perspective. AI loves to talk about “swarms” of agents, but often these are not swarms at all. They’re tightly orchestrated teams of smart specialists.

We’ll look at how this differs from true emergent systems, and what’s still missing from our current generation of AI collectives.