Augmentation vs supervision

It’s 2025, and LLMs (“AI agents”, if you will) are capable of doing some kinds of work with little to no oversight.

Today they are barely capable of work equivalent to that of, say, a junior software engineer or marketer. But they are often much faster and can work 24/7. It is reasonable to expect that they will become even more capable in the coming years.

My feeds are full of people discussing new models and whether they live up to some arbitrary standard of “good enough”. Setting aside specific arguments and the far future, most of the predictions I’ve seen paint the near future in one of three ways:

  1. AI agents won’t replace much human work, so things will stay largely the same.
  2. AI agents are capable, but can’t work independently. We all still do the work, but far more effectively thanks to our AI co-pilots, assistants, etc.
  3. AI agents become super-intelligent and take over all work. Dystopia or utopia ensues, depending on who you ask.

Let’s unpack that middle option. Most people seem to interpret it as a kind of cyborg vision for doing work: we humans are still directly at the controls, but stronger and faster at our crafts. In this version of the future, our tools still resemble word processors, IDEs or graphic design tools.

But… what if we all end up being front-line supervisors instead? That seems much more likely to me, especially as the cost of inference drops and companies can afford to run many threads of state-of-the-art models 24/7.

In that scenario, the tools will likely look more like Trello, Asana or Slack. If that’s the future we end up in, what does that mean for how work is planned and orchestrated? And what does it mean for the role of humans in knowledge work?

Everyone’s a manager now

Even in a scenario where AI agents can perform only some tasks that are done by humans today, many of us will shift from doing the work to overseeing the work. This is arguably already happening today: Lots of people use ChatGPT or similar products to assist them with work, making those models indirect contributors. We just use them very crudely, and sometimes clandestinely.

If this continues, will there be such a thing as a “pure” individual contributor as we define it today? Or will we all transition to being “player coaches”, doing some tasks ourselves but also managing and supervising a virtual “team” of AI agents?

What if we’ll all just be… managers? Are we ready for that?

Good management is usually taught informally, over a long period of time, through mentorship and mimicry. That doesn’t scale very well. And in the long term, would the job of “AI supervisor” be enjoyable without the status, compensation and social gratification that have traditionally come with management duties? It’s reasonable to assume that some people will be left behind, either because of skill gaps or because they decide this new type of work is not for them.

There might be mental health challenges for those who commit to this new way of working: Many new managers, myself included, have struggled with burnout during the transition away from doing direct, tangible work.

Traditional management hierarchies arguably exist partly because of coordination costs and span-of-control limitations. With AI agents, could a single person effectively “manage” dozens or hundreds of them? What will be the economic, political and social effects of a potentially shrinking managerial class?

It’s hard to imagine “knowledge work” disappearing, but most of us might not like what it becomes.

The need for an AI orchestration layer

The more practical question, which I’m hearing little if anything about, is what all of this means for work management software. If we are all becoming managers, where is the tooling to help us manage these agents effectively and at scale?

The bigger an organization is, the more tooling is needed to move it forward in unison. When talking about human contributors, this is obvious: Plenty of companies and products make money on the premise that work must be managed, planned and supervised. Jira, Linear, Trello, Asana, Monday.com, GitHub, you name it. But what about managing and planning the work of AI agents?

I feel like the interfaces are mostly there already. I can imagine an MVP in which Jira tickets are assigned to a virtual employee that does some basic triage, suggests a solution, or works on a bug autonomously.
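To make that concrete, here’s a minimal sketch of such a triage agent. The Jira Cloud REST endpoints it calls (GET and POST under /rest/api/3/issue/) are real, but the instance URL, credentials, and the callModel function are placeholders for whatever LLM provider you’d actually use:

```typescript
// Minimal sketch: an "AI triage agent" for a Jira ticket.
// JIRA_BASE, the credentials, and callModel are all placeholders.

const JIRA_BASE = "https://your-org.atlassian.net"; // hypothetical instance
const AUTH =
  "Basic " + Buffer.from("you@example.com:api-token").toString("base64");

// Stand-in for your LLM provider's API; returns a canned reply here.
async function callModel(prompt: string): Promise<string> {
  return `Triage suggestion (stub) for: ${prompt.slice(0, 60)}...`;
}

async function triageTicket(issueKey: string): Promise<void> {
  // Fetch the ticket (real Jira Cloud endpoint).
  const res = await fetch(`${JIRA_BASE}/rest/api/3/issue/${issueKey}`, {
    headers: { Authorization: AUTH, Accept: "application/json" },
  });
  const issue = await res.json();

  // Ask the model for a triage suggestion based on the ticket's fields.
  const suggestion = await callModel(
    `Summarize likely cause, severity, and a next step for this ticket.\n` +
      `Summary: ${issue.fields.summary}\n` +
      `Description: ${JSON.stringify(issue.fields.description)}`
  );

  // Post the suggestion back as a comment (body is Atlassian Document Format).
  await fetch(`${JIRA_BASE}/rest/api/3/issue/${issueKey}/comment`, {
    method: "POST",
    headers: { Authorization: AUTH, "Content-Type": "application/json" },
    body: JSON.stringify({
      body: {
        type: "doc",
        version: 1,
        content: [
          { type: "paragraph", content: [{ type: "text", text: suggestion }] },
        ],
      },
    }),
  });
}

triageTicket("PROJ-123").catch(console.error);
```

The point is how little glue is needed: the ticket tracker already holds the task, the assignee, and the conversation, so the “agent” is mostly a loop between an existing API and a model.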

In the medium term, we probably need to invest in the things that uniquely set AI agents up for success (a rough sketch of one possible task record follows the list):

  • Delegation: some way to decide which work should be assigned to AI agents, and what degree of freedom they should have in solving a task.
  • Clearly stated, machine-accessible goals, both for context and so that agents can be self-starters.
  • Quality control, editing and approval. Humans will likely need to be in the loop, either as collaborators or supervisors.
  • Context management. We don’t have infinite context windows yet. Agents need access to relevant information, but also need to know what’s important vs. what’s noise. That kind of curation is rarely done today.
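Here’s that sketch: a hypothetical delegation record covering the four concerns above. Every type and field name is an assumption for illustration, not an existing API:

```typescript
// Hypothetical delegation record for handing a task to an AI agent.
// All names here are illustrative, not part of any real product.

type Autonomy = "suggest-only" | "draft-for-review" | "act-autonomously";

interface AgentTask {
  id: string;
  title: string;
  autonomy: Autonomy;   // delegation: how much freedom the agent gets
  goal: string;         // machine-accessible definition of "done"
  approvers: string[];  // quality control: humans in the loop
  context: {
    sources: string[];  // curated pointers to relevant material
    ignore?: string[];  // known noise the agent should skip
  };
}

const example: AgentTask = {
  id: "TASK-42",
  title: "Investigate login timeout bug",
  autonomy: "draft-for-review",
  goal: "A failing test reproduces the bug and a candidate fix is proposed",
  approvers: ["alice@example.com"],
  context: {
    sources: ["PROJ-1337", "docs/auth-flow.md"],
    ignore: ["legacy-auth/"],
  },
};
```

The interesting design question is the autonomy field: the same task might be safe to fully delegate in one team and need human review in another, and today’s work management tools have no first-class way to express that.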

Who will build these tools? It will be hard for new players to compete: The Atlassians and Asanas of the world have a 20+ year head start on traditional work management. AI companies have an obvious competitive edge here, too.

In the long term… who knows. If the skills that humans add to any work process become more homogeneous (because we are all just supervisors), that might open the door for a lot of consolidation in the tooling space. Maybe the AI agents will orchestrate themselves, and we’ll just stand there and watch.

Note: I wrote this as a question-asking exercise, not a crystal ball, so take it with a grain of salt. I’m just some guy who can’t shake the feeling that everything about work is about to change. Tell me what you think!