Part 4: Purpose vs Emergence

In previous articles we defined terms like agent and swarm, and looked at a major difference between natural and artificial adaptation: time.


Why does an agent do what it does?

[Spoiler alert: this is not the Big Why, only the behavioural why. Maybe we’ll tackle the larger question in a future article.] 😉

In AI, this is often easy to answer: the agent was given a goal. A chatbot is told to answer helpfully; an autonomous vehicle is trained to minimise collisions; a reinforcement learning agent learns to maximise a score.

But in nature, the answer isn’t so clean. Why does a bird migrate? Why does a bee dance? Why does a slime mould explore a maze? These aren’t goals handed down from above; they are behaviours that emerge from local rules, genetic hardwiring, and environmental feedback, shaped sometimes by evolution and sometimes by sheer accident.


The AI Agent: Goals By Design

Most AI systems today are teleological — that is, their behaviour is defined by a goal.

A reinforcement learner is designed to maximise a reward function. A GPT agent might be instructed to summarise text or plan a meeting. Even in multi-agent systems, coordination is usually explicit: one agent delegates, others execute. Our machines are still modelled on the hierarchical, centralised corporate structures we have built up over the past 80 years.
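To make that designed-in purpose concrete, here is a minimal sketch of a tabular Q-learning agent whose entire notion of “good” is a reward function someone else wrote for it. The corridor world, reward values and hyperparameters are illustrative assumptions, not any particular product or framework.

```python
# A minimal sketch of "goals by design": a tiny tabular Q-learning agent whose
# whole behaviour is defined by an externally supplied reward function.
# The corridor world, reward and hyperparameters are illustrative assumptions.
import random
from collections import defaultdict

STATES = range(5)        # a 1-D corridor; state 4 is the designer's chosen goal
ACTIONS = (-1, +1)       # step left or right

def reward(state):
    return 1.0 if state == 4 else 0.0    # the designer decides what counts as success

Q = defaultdict(float)                    # learned value of each (state, action) pair
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def choose_action(s):
    """Epsilon-greedy: mostly exploit learned values, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(200):
    s = 0
    while s != 4:
        a = choose_action(s)
        s_next = min(max(s + a, 0), 4)
        # Nudge the estimate toward the observed reward plus discounted future value
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward(s_next) + gamma * best_next - Q[(s, a)])
        s = s_next

# Learned policy: after training it should favour +1 (move toward the goal) everywhere
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)})
```

Everything this agent “wants” was decided before it ever acted; the learning only sharpens how efficiently it pursues a goal it never chose.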


The Biological Agent: Goals by Emergence

Now consider the bee again. It doesn’t have a strategic plan to optimise hive yield. It follows chemical cues, dances, sunlight angles, and instincts shaped by millions of years of evolution. Yet somehow, the colony allocates resources, shifts behaviour seasonally, and adapts to threats.

Where’s the goal? It’s not in any individual. It’s not even always in the species. It’s in the collective interaction: the dance between agent, environment, and time. Professor Dawkins (see the book and film list in Part 3) argues it is the selfish gene in the DNA itself that dictates direction, and that everything life does, including falling in love, serves to perpetuate a gene’s makeup so it can “win”. Not the most romantic of academics.

The key is that Nature builds purpose from the bottom up, not the top down. This allows almost limitless scalability, without the usual API bottlenecks that spring up in computer architectures and demand ever larger data centres.


Purpose Without a Planner

I was proud to work at the startup Redgrid for four years from 2018, where we applied distributed, bottom-up network techniques to electrical grid brownout and blackout events. We also participated in the state of Victoria’s Neighbourhood Battery Initiative, and used cellular automata inspired by Conway’s venerable Game of Life to literally “keep the lights on” during extreme heatwaves. (We also had a few groundbreaking directions in smart homes; I really believe we were about five years too early before we ran out of runway.)

The Game of Life demo was shown to Telstra’s IoT group and to China Light and Power, the Hong Kong energy company. What I found fascinating was that these groups “got it”, not from a business pitch but intuitively, each individual in their gut, even in one of Asia’s most urban environments. It showed me we still have some affinity with Nature, even if it lives in a place we never visit, hours of driving away.

Our CEO Adam was looking for something to attract investors, and asked if we had IP that was patentable. We were combining existing tech in interesting ways, but really it was our methodology that stood out: letting dumb edge devices solve a problem through emergence. Not really patentable, since it had been in “the public domain” for 3.8 billion years. And how do you predictably design with it?

This approach has no architect or mission statement — but the system still works.

  • Local interactions give rise to meaningful global patterns
  • No one agent understands the purpose — but it still exists
  • Failure and adaptation shape future behaviours, slowly but cumulatively
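For readers who haven’t met it, here is the textbook Game of Life rule mentioned above, shown purely to illustrate the first two points: each cell consults only its eight neighbours, yet gliders, oscillators and stable structures emerge at the global level. This is the classic automaton, not Redgrid’s actual grid logic.

```python
# Conway's Game of Life: purely local rules, globally coherent patterns.
from collections import Counter

def step(live_cells):
    """live_cells is a set of (x, y) tuples; returns the next generation."""
    # Count how many live neighbours each candidate cell has
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step if it has exactly 3 live neighbours (birth),
    # or 2 live neighbours and is already alive (survival).
    return {c for c, n in neighbour_counts.items()
            if n == 3 or (n == 2 and c in live_cells)}

# A "glider": five cells whose purely local updates add up to coherent global motion
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))   # the same shape, shifted one cell diagonally
```

No cell “knows” it is part of a glider, yet the glider travels; that is the whole trick.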

This is very different from AI today. Our systems do what we tell them to — even if that leads to side effects we didn’t predict. They are intentional but not aware, powerful but narrow.


Can Goals Emerge in AI?

Some researchers are beginning to ask: could we let purpose emerge in AI agents?

Could we build systems that explore, interact, and adapt without a fixed goal? Could agent “desires” arise from interaction patterns — not instruction?

This would mark a profound shift in how we design intelligence.

Instead of optimising, we’d be cultivating — seeding environments, constraints, and feedback, and watching what forms. Imagine fractal building architectures.
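As a toy illustration of what “cultivating” might look like, here is an agent with no reward function at all, only a crude intrinsic pull toward states it has visited least often. The grid world and the novelty rule are my own assumptions, a stand-in for the much richer intrinsic-motivation and open-ended-learning work researchers are exploring.

```python
# "Cultivating" rather than optimising: no external goal, only an intrinsic
# pull toward unfamiliar states (a crude stand-in for novelty seeking).
# The grid world and the novelty rule are illustrative assumptions.
import random
from collections import Counter

SIZE = 10
visits = Counter()      # interaction history: how often each cell has been seen
pos = (0, 0)

def neighbours(p):
    x, y = p
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in steps if 0 <= cx < SIZE and 0 <= cy < SIZE]

for _ in range(2000):
    visits[pos] += 1
    options = neighbours(pos)
    least = min(visits[c] for c in options)
    # "Desire" emerges from interaction history: drift toward the least-visited neighbour
    pos = random.choice([c for c in options if visits[c] == least])

print(f"{len(visits)} of {SIZE * SIZE} cells explored, with no goal handed down from above")
```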

It’s slower, more uncertain, and less controllable — but maybe, ultimately, more alive.


Coming Next: Collective Memory

In Part 5, we’ll look at how memory works in distributed systems — from pheromone trails and waggle dances to vector databases and shared scratchpads. Who remembers what in a system without a mind?

And can memory be a property of the group, not the node? Generational, or even morphic?