#49 Are Role-Based Agents the “Faster Horses” of AI?
“If I had asked people what they wanted, they would have said faster horses.”
Henry Ford is often credited with that line. He probably didn’t say it. But the story survives because it captures something true. People can usually describe their pain, but they rarely have the language, or the imagination, to come up with a solution that removes it.
And right now, in AI (or rather, in LLMs), we may be doing something very similar.
Role-based agents (aka multi-agent systems, agentic workflows, etc.) are popular. Build a virtual team with roles like CFO, CMO, designer, developer, product manager. A “team”. Work gets “delegated”. The agents “collaborate”. Etc.
It looks sensible. It also looks… suspiciously familiar.
No matter what we call it, we’re recreating our old division of labour inside the machine. It’s an AI org chart. We’re taking a new capability and wrapping it in one of our oldest structures.
Are we building faster horses?
Role-based setups can of course be very useful. They can create checks and balances, force different perspectives, catch mistakes, etc.
A simple example from software development is separating “build” from “break”. One agent writes the code, another tries to break it, and they iterate until it stops breaking.
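The build/break loop can be sketched in a few lines. This is a hypothetical illustration, not a real framework: `builder` and `breaker` stand in for LLM calls, implemented here as plain functions so the control flow is runnable.

```python
# Hypothetical sketch of a "build vs. break" loop between two agents.
# In practice, builder and breaker would each be a model call; here they
# are stubbed out so the iteration logic itself is visible.

def builder(task, feedback=None):
    """Stand-in 'build' agent: returns a candidate solution."""
    if feedback is None:
        return {"code": "def add(a, b): return a - b"}  # deliberately buggy first draft
    return {"code": "def add(a, b): return a + b"}      # revised after feedback

def breaker(candidate):
    """Stand-in 'break' agent: tries to falsify the candidate."""
    namespace = {}
    exec(candidate["code"], namespace)
    try:
        assert namespace["add"](2, 3) == 5
        return None                       # no failure found
    except AssertionError:
        return "add(2, 3) should be 5"    # structured failure report

def build_break_loop(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        candidate = builder(task, feedback)
        feedback = breaker(candidate)
        if feedback is None:
            return candidate              # breaker could not break it
    raise RuntimeError("no candidate survived the breaker")

result = build_break_loop("write add(a, b)")
print(result["code"])
```

Note that nothing in the loop depends on the agents having titles. The structure is just a generator and a falsifier passing evidence back and forth.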
What we’re really doing here isn’t “staffing”, even though we may talk about it in these terms. It’s designing tension into the process. Competing activities. Conflicting incentives. A structured way to surface errors before users do.
The question isn’t whether role-based agents work. They do. The question is whether the AI org chart is the best shape for this capability, or whether it’s just the easiest story we have for how work gets done.
Mental models help. And they also trap us.
When something new comes along, we reach for something familiar to make sense of it. That’s what mental models do. They reduce complexity and give us a common language.
The danger is that we don’t just borrow the language, we also borrow the limitations.
We’re used to splitting work across people because people are limited. Limited attention, limited knowledge or skillset, limited time, limited context, limited cognitive bandwidth. So we specialize. We divide labour. We build org charts, and we coordinate.
But an LLM isn’t a person. It doesn’t get tired, and it doesn’t need a title.
So why are we so eager to recreate our old structure inside the machine?
Is the org chart baked in?
These models are trained on huge amounts of text, much of it saturated with our institutions. Companies, roles, hierarchies, job titles, departments, meetings, approvals, memos.
In this way, we’re not only borrowing a mental model. We might also be reinforcing an entire management culture.
Efficiency. Coordination. Reporting lines. Busywork disguised as process. The idea that complex work must be sliced into functions and routed through boxes. The belief that legitimacy comes from sounding like a department.
If the model’s “native language” of work is shaped by those patterns, then role-based agents might not just feel intuitive to us. They might also be the most natural output for the model, because that’s what “real work” looked like in the material it was trained on.
So are we stuck with the faster-horses analogy not only because it’s what we can grasp as human beings… but also because it’s what these models have “learned” as normal?

What would “cars” look like?
If we stop treating this like a staffing problem, the framing changes.
Instead of starting with an AI org chart, we can ask what outcome we’re trying to create in this specific setting. What does success look like in the real world? What needs to be true for the outcome to be valuable? What do we need to avoid? What signals would tell us we’re wrong?
That’s more of an impact model than an org chart. Start from the desired effect, then work backwards to the constraints, checks, and feedback loops that make that effect likely.
And from there, you might end up with setups that don’t resemble teams at all: structured passes, self-critique loops, verification-first workflows, uncertainty-aware processes, all built around decisions and evidence rather than roles and titles.
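A verification-first setup like that can be sketched without any roles at all. This is a hypothetical example: `generate` stands in for a model call, and the checks are invented for illustration. The point is that the checks are derived from the desired outcome, not from job titles.

```python
# Hypothetical sketch of a verification-first loop: a single generate
# step gated by explicit checks, with failures fed back as critique.

def generate(prompt, critique=None):
    """Stand-in for a model call; revises its draft when given a critique."""
    draft = "launch email, no call to action"
    if critique:
        draft = "launch email, with call to action, under 100 words"
    return draft

# Checks encode "what needs to be true for the outcome to be valuable".
CHECKS = [
    ("has a call to action", lambda d: "call to action" in d and "no call" not in d),
    ("fits the length budget", lambda d: len(d) < 100),
]

def verification_first(prompt, max_rounds=3):
    critique = None
    for _ in range(max_rounds):
        draft = generate(prompt, critique)
        failures = [name for name, check in CHECKS if not check(draft)]
        if not failures:
            return draft
        critique = "; ".join(failures)   # evidence, not delegation
    raise RuntimeError("draft never passed all checks")

print(verification_first("write the launch email"))
```

There is no CMO agent here, only a loop built around a decision and the evidence needed to trust it.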
I don’t know.
But as I keep seeing these “AI teams”, especially when they replicate leadership teams, I can’t shake the feeling that we’re going about this the wrong way.
Maybe this is simply a limitation of the current technology. If so, we’re stuck, at least for now, with the management philosophy and structures of the past 100 years. Our hierarchies. Our bureaucracy. Our habit of explaining work through roles. All so deeply ingrained in the technology that we might not be able to get past it.
Or maybe role-based agents are scaffolding. Useful right now for sure. But temporary. A stepping stone.
Or maybe I’m just overthinking it. Faster horses might be fine right now.
At least until someone builds a car.
A special THANK YOU to Afra Noubarzadeh for the illustration accompanying this week’s newsletter.