I have heard this a few too many times recently: “We can’t build an AI agent yet. Our knowledge base / data catalog / … isn’t ready.”

Variations of this sentence stop many AI projects before they even begin. Support agents don’t materialize because the documentation isn’t perfect. AI analysts never appear because the data catalog isn’t complete. We assume that before an agent can be productive, it needs access to a perfectly curated set of tools.

The assumption behind all of this is simple: agents can only be as good as the information they start with. But there is another way to look at this: agents become better as we help them get access to better information.

It’s a small shift in perspective, but it fundamentally changes how AI projects begin. We don’t need a perfectly curated environment before starting. We can begin with an incomplete agent and let it improve through interaction with the people who already understand the system.

In the early stages, the agent’s most important asset is the human expert.

Take the example of an AI analyst. Early on, the agent will reach out to the data team frequently, often with simple questions: Where can I find this dataset? Which table contains this metric? What system owns this information?

Those questions might seem trivial, but each answer helps the agent understand the organization’s data landscape a little better. Over time, the agent is not just answering questions — it is implicitly building the data catalog that people thought needed to exist beforehand.

The same pattern appears elsewhere. A support agent gradually builds its knowledge base from resolved cases and expert input. An operations agent learns which tools exist and how they can be used. The environment becomes better structured because the agent is working within it.

Instead of requiring perfect structure upfront, the agent helps create that structure over time.
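The escalate-and-record loop described above can be sketched in a few lines. This is a minimal illustration, not a production design, and every name in it (`BootstrappedAgent`, `ask_expert`) is hypothetical: an agent that starts with an empty catalog, asks the human expert whenever it hits an unknown question, and keeps the answer so the expert is never asked twice.

```python
# Hypothetical sketch: an agent that builds its own catalog as a side effect
# of doing its job, instead of requiring a curated catalog upfront.

class BootstrappedAgent:
    def __init__(self, ask_expert):
        self.catalog = {}            # starts empty -- no upfront curation
        self.ask_expert = ask_expert # the human expert, reached however you like

    def answer(self, question):
        if question not in self.catalog:
            # Unknown territory: escalate to the expert once and record the answer.
            self.catalog[question] = self.ask_expert(question)
        # From then on, the accumulated catalog answers for itself.
        return self.catalog[question]

# Stub expert that counts how often it is interrupted.
expert_calls = []

def expert(question):
    expert_calls.append(question)
    return f"answer to: {question}"

agent = BootstrappedAgent(expert)
agent.answer("Which table contains monthly revenue?")  # escalated to the expert
agent.answer("Which table contains monthly revenue?")  # served from the catalog
print(len(expert_calls))  # the expert was only asked once
```

The design choice worth noticing is that the catalog is never curated directly: it is populated only by questions the agent actually needed answered, which is exactly the shift from documenting everything upfront to documenting on demand.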

This perspective also aligns with how many successful AI rollouts are described: organizations move from managing people to managing agents.

In the beginning, teams like the data group still answer many questions. But instead of responding directly to management, they increasingly help the agents access the right information so the agents can answer those questions themselves.

And as the agents become more capable, the role of the human expert evolves again. The data team spends less time responding to “Where is this information?” and more time working behind the scenes: monitoring agents, improving data access, and making sure the system continues to produce better and better answers. In doing so, the team also ensures that the AI analyst’s answers can be trusted.

The lesson is simple: the barrier to starting AI projects is often lower than we think. We don’t need a perfectly curated knowledge base, a flawless data catalog, or a fully documented toolset before we begin.

Agents shouldn’t have to wait for a perfectly organized environment — they should help create it.