# The Signals

Before starting Light Anchor, we spent years deploying AI agents into large enterprises. The pattern was always the same. The agent would do something obviously wrong, we'd dig in expecting a model problem, and it would turn out to be an operations problem. Bad data entered by hand months earlier. Outdated records from a vendor who sent the wrong file. Fields that didn't match because two systems used different naming conventions and nobody had reconciled them.

The fix was always manual. Someone had to look at it, track down the source, get the corrected version, reformat it, and upload it. These were companies with thousands of engineers. The AI worked. The operations underneath it didn't.

After watching this play out the same way across dozens of deployments, we started to notice something that, once seen, is hard to unsee. The bottleneck was never the model. It was always the messy, undocumented, human-maintained operational layer that every business runs on but nobody has bothered to formalize. And it wasn't just the companies we worked with. It seemed to be everywhere.

For every dollar companies spend on software, six dollars goes to the human labor required to operate around it. Entire teams exist not to think or create or make strategic decisions, but to keep things running: processing, reconciling, verifying, coordinating, fixing things when they break, and doing the same thing again the next day. Most people in business know this intuitively but accept it as a cost of doing business, the way people accepted that you had to go to a store to buy things before e-commerce.

Most of the current wave of AI is aimed at making this work faster. Copilots and assistants that sit alongside the person doing the job and offer suggestions. That's useful, but it seems like a strange place to stop. Many of these jobs consist entirely of tasks that AI agents can now handle end-to-end, with no human in the loop at all. Making someone 20% faster at a job that could be fully automated is a real product, but it's not the interesting one.

The reason full automation hasn't happened yet is subtle. AI models know what things look like in the abstract: what a purchase order is, how a workflow is supposed to function in theory. But they have no understanding of how any particular business actually operates in practice. They don't know how a specific company handles a vendor exception, or what happens when two systems disagree about the same record, or why one person on the operations team has been running the entire process from a system she built six years ago that nobody else understands.

There's a layer of knowledge underneath all of this that no one has ever tried to systematically capture. We call it the execution layer -- the operational understanding that experienced employees carry but can't quite articulate, that lives in undocumented processes and tribal workarounds, that evaporates every time someone leaves. It's the most valuable territory in enterprise software that nobody is exploring, because the only way to get it is to actually go in and run the operation yourself.

That's what we decided to do. And the interesting thing is what happens as a side effect. Every time an agent takes over an operation, it captures the complete structure of how that business actually functions. Not how it's documented, but how it really works: the actual rules, the real exceptions, the judgment calls that people make without thinking about them. And this knowledge compounds. Each new engagement teaches the system things that make the next one easier. After a while you end up with something surprisingly hard to replicate: an empirical map of how businesses actually operate, built not from theory but from running real operations.

No foundation model has this knowledge because it's never existed in structured form. No SaaS company has collected it because selling software doesn't produce it. The only way to get it is to operate the process. And the more of it an agent accumulates, the more it can take on. The agent that handles routine operations today becomes the agent that handles exceptions tomorrow, and eventually the agent that makes business decisions.

There's a consequence that's hard to avoid. The boundary of what agents can handle keeps expanding as they learn more about how a business works, and there's no reason it stops. Which means you eventually end up with operations that run themselves -- not because someone decided to automate everything, but because the agents just gradually got good enough that the human in the loop became optional, and then unnecessary.

The barrier to getting there isn't model capability or capital. It's the operational understanding required to make agents reliable enough that companies trust them without oversight. That understanding doesn't come from better algorithms or bigger datasets. It comes from doing the work.

It compounds slowly. Then all at once.

---

The Signals · Light Anchor Blog · 2026-03-24 · https://lightanchor.ai/blog/the-signals