Part 1: What Do We Mean By 'Agent'?

In 2025, the term “agent” has been dusted off and is enjoying another moment in the sun.

My first encounter with the term came from printer drivers… remember printers? A printer driver did its thing in the background, with knowledge of the particular printer you were attempting to use. We now talk about “AI agents” that browse the web, write code, and schedule meetings. But in nature, the word agent has always meant something quite different.

This article kicks off a short series exploring the contrasts between biomimetic agents — inspired by ants, bees, birds, and brains — and modern AI agents. Both are “agentic,” but the similarities are more metaphorical than real.


The AI View: Goal-Oriented Tools

In AI systems, an agent is usually defined as:

An entity that perceives its environment and takes actions to maximise its goal. – Russell & Norvig, Artificial Intelligence: A Modern Approach (AIMA), 4th ed. (2020), Chapter 2

This is standard reinforcement learning language. The agent observes a state, chooses an action, gets a reward, and updates its policy. More recently, large language models have been placed into agentic frameworks where they can browse, call APIs, or coordinate with other agents.
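To make that loop concrete, here is a minimal sketch of the perceive-act-reward-update cycle, using tabular Q-learning as an illustrative instance. The environment interface (`reset`, `step`) and all constants are assumptions, loosely modelled on the common Gym-style convention, not any particular framework.

```python
import random
from collections import defaultdict

# A minimal sketch of the perceive -> act -> reward -> update loop.
# Assumes a hypothetical environment with reset() -> state and
# step(action) -> (next_state, reward, done).

ALPHA = 0.1    # learning rate
GAMMA = 0.99   # discount factor
EPSILON = 0.1  # exploration rate
ACTIONS = [0, 1, 2, 3]

q_table = defaultdict(float)  # (state, action) -> estimated value

def choose_action(state):
    # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning update toward the bootstrapped target.
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (target - q_table[(state, action)])

def run_episode(env):
    state = env.reset()
    done = False
    while not done:
        action = choose_action(state)                 # act
        next_state, reward, done = env.step(action)   # observe outcome
        update(state, action, reward, next_state)     # update policy
        state = next_state
```

Notice how the goal is baked in from the outside: the reward signal, supplied by the designer, is the entire reason the agent does anything at all.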

These agents are powerful and highly optimised, but they are also:

  • Designed with fixed goals
  • Deployed inside centralised orchestration layers
  • Erased or reset when they fail

They are intelligent but ephemeral, brittle, and disembodied.


The Biological View: Instinctive Units in a Swarm

Contrast that with agents in nature. A bee, for example, is an agent, but it has no objective function or training loop. It operates through:

  • Simple instincts (follow light, avoid predators, seek nectar)
  • Local rules (e.g., follow the waggle dance)
  • Embodied perception (sun angle, vibration, scent)

There’s no hard-coded “purpose” like “maximise nectar yield.” And yet, from this simplicity emerges a complex system: pollination, hive construction, adaptive colony behaviour.

Biological agents don’t need to be smart individually. The intelligence is in the swarm — the colony, the flock, the network. It is emergent behaviour.
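To see what that looks like in code, here is a minimal sketch of rule-following agents with no objective function at all: a toy flock in the spirit of Reynolds’ classic boids, where each agent reacts only to nearby neighbours via separation, alignment, and cohesion. All the constants and radii are illustrative assumptions; the point is that nothing in the code says “form a flock”, yet flocking emerges.

```python
import random

# Toy boids-style flock: each agent follows three local rules
# (separation, alignment, cohesion) with no global goal, reward,
# or central controller. All constants are illustrative assumptions.

NEIGHBOUR_RADIUS = 10.0   # how far an agent can "see"
SEPARATION_RADIUS = 2.0   # personal space
STEP = 0.05               # how strongly each rule nudges the velocity

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def update(self, flock):
        neighbours = [b for b in flock if b is not self
                      and abs(b.x - self.x) < NEIGHBOUR_RADIUS
                      and abs(b.y - self.y) < NEIGHBOUR_RADIUS]
        if not neighbours:
            return
        n = len(neighbours)
        # Cohesion: drift toward the local centre of mass.
        cx = sum(b.x for b in neighbours) / n
        cy = sum(b.y for b in neighbours) / n
        self.vx += (cx - self.x) * STEP
        self.vy += (cy - self.y) * STEP
        # Alignment: match the average heading of neighbours.
        self.vx += (sum(b.vx for b in neighbours) / n - self.vx) * STEP
        self.vy += (sum(b.vy for b in neighbours) / n - self.vy) * STEP
        # Separation: steer away from anyone too close.
        for b in neighbours:
            if (abs(b.x - self.x) < SEPARATION_RADIUS
                    and abs(b.y - self.y) < SEPARATION_RADIUS):
                self.vx += (self.x - b.x) * STEP
                self.vy += (self.y - b.y) * STEP

    def move(self):
        self.x += self.vx
        self.y += self.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    for boid in flock:
        boid.update(flock)   # purely local perception and rules
    for boid in flock:
        boid.move()
```

Compare this with the Q-learning sketch above: there is no reward, no policy, and nothing to optimise. The coordinated behaviour lives in the interaction between agents, not inside any one of them.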


A Key Distinction: Purpose vs Emergence

This is the first critical divergence in our series:

  • AI agents are usually goal-seeking, designed and directed by humans.

  • Biological agents are usually rule-following, producing behaviour that emerges collectively.

That distinction raises questions we will return to throughout this series:

  • Can we design AI agents with no explicit goal, only rules?

  • Would that make them more robust — or simply unpredictable?

  • Are we trying to build individual geniuses, when nature prefers collaborative simplicity?


What’s Next?

In part two, we’ll look more deeply at natural swarms: how flocks, ant colonies, and bacterial mats create intelligent outcomes without central planning, learning loops, or memory in the usual sense.

We’ll also begin setting up the tension between evolution and instant learning — a theme that will echo through this series and point toward a more nature-aligned AI.