On Friday, April 29, 2026, Curtis Chambers had a flight back to San Francisco at five.

He gave me one hour and forty minutes before the airport.

That was enough time to change how I understand everything I have built.

The short version: A non-technical marketing firm owner built a 29-agent AI orchestration system in 30 days using Claude Code — on top of three years of applied AI operations inside her agency. The seventh employee and first Head of Engineering at Uber, a co-inventor of surge pricing, spent one hour and forty minutes reviewing that system on April 29, 2026. He told her she is a manager of agents, and that the role does not have a name yet because it does not exist at scale. This post is about what happened in that mentorship meeting.

Who Is Curtis Chambers?

Curtis Chambers was the seventh employee and first Head of Engineering at Uber. He was responsible for engineering as Uber scaled its early operations. He is a co-inventor on the patent for Uber's surge pricing — the algorithm that re-priced ride availability in real time based on demand, a system that had not previously existed in on-demand ride services.

He was not just building software. He was building inside a legal vacuum. The legal system at the time only recognized two categories of work — employee or contractor. Uber needed a third paradigm that did not exist yet. So they invented it. The on-demand labor model that later spread to DoorDash, Instacart, and platforms that followed was built during that period. Curtis was part of the engineering team that made it work at scale.

Before Uber, he helped build Expensify. Since Uber, he has been pursuing a PhD at Berkeley and UCSF in computational precision health while co-building a startup with his PhD advisor.

He has been my mentor since 2019. He advised and reviewed my prototype before I won first place at the Global Amazon Alexa Skills Challenge in 2021. He has been to PowerFuel Damas events. He has seen me speak. He has watched my entire entrepreneurial journey from the beginning.

On Friday, five years after that Alexa win, I showed him everything I have built since.

What Did I Bring to That Table?

This was the first time I have ever shown the full system to anyone. I needed him to see the whole arc — not just the 29 agents, but where they came from and why I built them.

I told him about the early days. When ChatGPT launched in late 2022, I was already paying attention in a way most people around me were not. I trained 13 people inside my agency on how to use it — immediately, in the first weeks. Not because I had a plan. Because I could see what it was going to do and I did not want to be behind.

I told him about the Meta AI experiment — the custom AI avatar I built inside Meta AI Studio that generated nearly a million messages. 947,000. That was not a product. That was me testing what happened when you gave people access to a personalized AI presence at scale. The answer was: they came back. Obsessively.

I told him about the 16 custom ChatGPTs I built and put into production inside my agency. Not demos. Not experiments sitting in a folder. Tools my team used. Proposal cycles compressed from 10 days to 3. An AI-native hiring pipeline. Systems that ran whether I was in the room or not.

I told him about my father. About why any of this matters. About what drove me to care about AI before it was a career move or a trend — back when it was just the thing I could not stop thinking about.

And then I showed him what I built in the last 30 days.

What Was on the Table?

I had the architecture map printed and laid flat on the table — because the laptop was too much for a conversation this dense. You cannot have a real conversation about a system while someone is clicking through it. You need to be able to point.

The map showed the full 29-agent orchestration system. Each agent is a persistent AI specialist with its own defined role, its own instruction set, its own domain, and its own memory. They do not overlap. They do not repeat each other's work. They hand off tasks through a central routing hub — agent-pm — the same way a chief of staff routes work through a team.
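To make "governance files" a little more concrete for readers who have never seen one: in Claude Code, a subagent is typically defined as a markdown file with YAML frontmatter naming the agent, describing when it is invoked, and limiting the tools it may use, followed by its instructions. A minimal sketch — the agent name, tool list, and rules below are hypothetical illustration, not Danielle's actual files:

```markdown
---
name: seo-specialist
description: Handles keyword research and on-page SEO audits. Invoked by agent-pm for SEO tasks only.
tools: Read, Grep, WebSearch
---

You are the SEO specialist. You own keyword research and on-page audits.

Rules:
- Work only on tasks routed to you by agent-pm.
- Never edit client deliverables directly; hand drafts back to agent-pm.
- Log every completed task to the daily report.
```

The frontmatter is the governance: it defines the agent's boundary before a single instruction is read.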

I walked him through three layers:

  • The orchestration layer. 29 agents in Claude Code — agents for SEO, content, brand strategy, research, quality assurance, operations, video, design, coaching, and more. Each with governance files that define what it can and cannot do. A dashboard. A daily reporting system. Context management across sessions.
  • The project management layer. A system where humans and agents are assigned tasks side by side, in the same dashboard. The agent-pm hub routes work. Agents report completions. The whole system is observable. I have not seen this built publicly yet — a hybrid team dashboard where AI agents and human owners appear in the same task queue.
  • Three years of receipts. 16 custom GPTs in production. A Meta AI avatar with 947,000 messages. Proposal cycles from 10 days to 3. An AI-native hiring pipeline. The website I built with the system at daniellevantini.com. Applied AI operations running a real agency, not a side project.
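The hybrid task queue idea can be sketched in a few lines: one queue, one assignee field that holds either a human name or an agent id, and a routing step standing in for agent-pm. Everything here — the roster, the field names, the domains — is a hypothetical toy, not the actual dashboard:

```python
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    domain: str          # e.g. "seo", "content", "client-calls"
    assignee: str = ""   # human name or agent id, same field either way
    status: str = "open"

# One roster: humans and agents side by side, keyed by domain.
ROSTER = {
    "seo": "agent-seo",          # AI agent
    "content": "agent-content",  # AI agent
    "client-calls": "Danielle",  # human owner
}

def route(task: Task) -> Task:
    """Stand-in for agent-pm: assign by domain, fall back to the human owner."""
    task.assignee = ROSTER.get(task.domain, "Danielle")
    task.status = "assigned"
    return task

queue = [Task("Keyword audit", "seo"), Task("Kickoff call", "client-calls")]
for task in queue:
    route(task)
    print(f"{task.title} -> {task.assignee} [{task.status}]")
```

The point of the sketch is the single queue: when humans and agents share one assignee field, the same dashboard can observe both.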
The architecture map — 29 agents, printed. This is what I walked in with.

What Did He Say?

Halfway through the conversation, I said something I had been working toward for a long time:

"I consider myself an orchestrator because I am orchestrating all these agents, and I am a specialist in applied operations. In AI, at least in the fields of things I have done. But I am still trying to understand what to call it."

Curtis listened. Then he said this:

"I've seen people build a one-off employee replacement. But not an orchestration layer of operations. That's what you're doing."

— Curtis Chambers, former Head of Engineering at Uber

He kept going. He told me the role I have built is new. Not new like "a new job title." New like: this category of work does not exist at scale yet in business. Most of what people are calling "AI" is a single agent doing a single task. That is not what I built. What I built is an orchestration layer running operations across an entire business — and that is something else.

"You are a manager of agents. There aren't even people."

— Curtis Chambers

I will be sitting with that sentence for a long time.

What Is the Name for What I Do?

For years, I did not have outside language for any of this. I just had the work. I knew what I was doing inside my agency. I could describe the system. I could not name the role.

Now I can.

Applied AI Operations. I am a multi-agent orchestrator running a hybrid AI stack — Claude Code for the orchestration layer, and ChatGPT, Gemini, Seedance, and whatever ranks best in the moment for the work underneath. Models and tools are changing faster than most operators can keep up with. Adapting the stack is the work.

I am not a developer. Not an engineer. Not a vibe coder. An orchestrator who knows the business cold and runs it with agents.

The conversation pointed me toward a direction — coaching, speaking, and Applied AI Operations work for non-technical founders and orchestrators. From someone who has watched my entire arc since 2019, that was the part I will carry the longest.

What Is the Legal Vacuum Parallel?

The conversation went somewhere I did not expect.

Curtis drew the parallel between what is happening in AI right now and what happened in ride-sharing fifteen years ago. AI is operating in the same kind of legal vacuum that ride-sharing operated in when Uber launched. The law had not caught up. The categories did not exist. The systems were real but the language to describe them — legally, professionally, commercially — was being invented in real time.

He watched that vacuum close from the inside. He knows what it looks like at the beginning, before the definitions arrive. My read of the conversation: we are there again. The role of manager of agents does not have a job description, a salary band, a LinkedIn category, or a university program yet. That does not mean it is not real. It means it is early.

Someone who was inside the last paradigm shift — who helped build the engineering foundation of on-demand platforms — is exactly the kind of mentor you want when the next one is the paradigm you are building inside.

Some of what he told me, I cannot share publicly. That is the nature of frontier intelligence — you protect the room. But the conversation opened possibilities I had not considered — around deployment models, around how to think about the business and curriculum architecture for teaching this to non-technical orchestrators.

What Does This Mean If You Are Not Technical?

Here is the thing about Curtis's validation that matters most to non-technical founders reading this:

He was not impressed because I wrote code. I did not write code. He was not impressed because I have a computer science background. I do not have one.

He was impressed because I understood what I was building well enough to design it. Because I had three years of operational receipts, not just a prototype. Because I knew the business problem before I picked the tool. Because I wrote the governance — the rules, the constraints, the handoffs — that made 29 agents behave like a system instead of a pile of prompts.

The orchestration layer of AI operations is not going to be built by engineers who do not understand the business. It is going to be built by people who understand the business deeply enough to translate it into systems — whether they write the code themselves or not.

That is the opening. That is where non-technical founders can compete at the frontier. Not by learning to code. By learning to orchestrate.


Thank you, Curtis. For 2019. For 2021. For Friday.

I've been here. You just didn't know.

Frequently Asked Questions

What is a manager of agents?

A manager of agents is a role that does not yet have a formal name or job description. It describes someone who designs, governs, and runs a system of AI agents that execute tasks across a business — not one agent doing one task, but an orchestration layer where multiple agents handle different functions in coordination. The person in this role is not a developer. They are the architect of the system's logic, the author of its rules, and the decision-maker over what agents do.

What is applied AI operations?

Applied AI operations is the practice of running a business using AI agents embedded into real operational workflows — not as experiments, but as the infrastructure the business depends on. It is distinct from using AI as a tool. An applied AI operations practitioner designs the system, writes the governance, manages the agents, and adapts the stack as models and capabilities change.

How is AI orchestration different from using ChatGPT?

ChatGPT is a conversation tool. You ask it something, it responds, you decide what to do with the answer. AI orchestration is a system of agents that execute tasks, pass work between themselves, report on completions, and run operations without you being in the loop for every action. The difference is between using a tool and building an operating system.

Can a non-technical person build an AI orchestration system?

Yes. Danielle Vantini built one with 29 agents without a technical background. The entry point is not coding ability. It is clarity about the business operations you want to automate, the discipline to write rules and governance for the agents, and the judgment to understand what each agent is doing and why. Technical knowledge helps. It is not the prerequisite.

What did Curtis Chambers say about Danielle's AI system?

"I've seen people build a one-off employee replacement. But not an orchestration layer of operations. That's what you're doing." He also said: "You are a manager of agents. There aren't even people." He told her the role does not have a name yet because it does not exist at scale — and that the direction she is taking, coaching and speaking to non-technical founders, is the right one.

What is Claude Code and how does it relate to AI orchestration?

Claude Code is an AI tool made by Anthropic that runs directly on your computer and can read files, write code, and take actions in your system. For orchestration purposes, it serves as the environment where agents live, receive instructions, and execute work. It is not the only layer — other models handle specific tasks underneath — but it is where the orchestration system is designed and governed. Read the full guide: What Is Claude Code — Read This First.

What is AI orchestration?

AI orchestration is the practice of designing and running a coordinated system of multiple AI agents — each with a defined role — that work together to execute business operations. Unlike using a single AI tool for one-off tasks, orchestration means the agents operate in sequence or in parallel, hand work between themselves, and produce outputs the business depends on daily. The person who designs and governs this system is called a manager of agents — a role that does not yet have a formal job title.

What is supervision of agents?

Supervision of agents is the human governance layer in an AI orchestration system. It involves setting the rules each agent follows, reviewing outputs for quality and accuracy, escalating decisions that require human judgment, and updating agent behavior as the business evolves. It is not passive monitoring — it is an active operational role. Danielle Vantini's 29-agent system runs with her as the supervising layer, adjusting agent instructions across content, strategy, client work, and business operations daily.

What is a multi-agent orchestration system?

A multi-agent orchestration system is a structured network of AI agents — each specialized for a specific function — that operate together as a unified business operating system. In Danielle's case: 29 agents covering content production, SEO, brand strategy, project management, research, client work, and operations. Each agent has its own rules, memory, and task queue. The orchestration layer routes work between them, tracks completion, and surfaces decisions that require human input. This is what Curtis Chambers called "an orchestration layer of operations" — distinct from building a single AI tool.

What is the difference between a single AI agent and a multi-agent system?

A single AI agent handles one category of tasks — one prompt, one response, one output. A multi-agent system is a coordinated network where agents specialize by function, hand work between each other, and collectively run operations that no single agent could manage alone. The difference is between a contractor and a company. A single agent is a skilled individual. A multi-agent orchestration system is the organizational infrastructure — with roles, handoffs, governance, and the ability to run in parallel across the entire business.

Danielle Vantini
Applied AI Operations · Multi-Agent Orchestrator · AI Educator

Danielle Vantini is a marketing firm operator and AI educator who built a 29-agent AI orchestration system without a technical background. First place, Global Amazon Alexa Skills Challenge 2021. San Diego Magazine Woman of the Year 2023. UN Women's Empowerment Principles signatory. She teaches non-technical founders how to run their businesses with AI agents.