Vivek Haldar

AI is a Mirror

[As this post was sitting in my drafts, Ethan Mollick published a deeply insightful post making a similar point. Go read that.]

I’ve talked to a lot of companies about building AI agents, and I’ve noticed a strange pattern. You sit down with them, ready to talk workflows, pipelines, and agents, and you ask the simple question: “What do you want to automate?”

And they don’t know.

AI acts as a mirror. You bring it in, hoping it will solve your problems, but it just stands there and reflects your own organization back at you, asking: “So, what do you actually do here? What is your process? What is your workflow?”

For a shocking number of companies, the answer is a shrug.

Companies with a clear, pre-defined workflow, one that is even ready to be considered for agentification, are extremely rare. They know they have inefficiencies. They feel the pain of manual, repetitive tasks. But they can’t articulate the precise sequence of steps, the decision points, and the logic that governs the work.

The initial enthusiasm for “AI transformation” quickly becomes a painful, soul-searching exercise in process documentation. The AI doesn’t tell them what to do; it forces them to figure it out for themselves.

The Second Battle: Escaping the “Vibes” Trap

Let’s say you get past that first hurdle. After weeks of dialogue, whiteboarding, and untangling departmental spaghetti, you finally pin down a workflow. You build an agent. It starts doing the thing.

Now comes the second battle: Is it any good?

The default metric for success in most organizations is “vibes.” Does it feel right? Does the output seem okay? This is a terrible way to build an engineered system, which is what an AI agent is. “Good enough” isn’t a spec.

This is where the mirror shows its second reflection. Not only do you have to define the process, but you also have to define what a successful outcome looks like. You need a rubric. You need properties of a good output. You need a benchmark.

Without a clear definition of “good,” you’re just tweaking prompts and hoping for the best. You can’t measure progress. You can’t have a meaningful conversation about improving the agent’s performance. You’re stuck in a subjective loop of feedback based on gut feelings.
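A rubric doesn’t have to be elaborate to beat vibes. Here’s a minimal sketch of what one could look like in code; the specific criteria and check functions are hypothetical examples for illustration, not a real spec:

```python
# A minimal rubric-based eval: replace "does it feel right?" with a number
# you can track across prompt and agent changes.
# The criteria below are hypothetical; a real rubric encodes YOUR definition
# of a good output for YOUR workflow.

def rubric_score(output: str) -> dict:
    """Score one agent output against explicit, checkable properties."""
    return {
        "non_empty": len(output.strip()) > 0,
        "under_length_limit": len(output) <= 500,
        "mentions_order_id": "order" in output.lower(),  # domain-specific check
    }

def benchmark(outputs: list[str]) -> float:
    """Fraction of rubric criteria passed across a fixed set of outputs."""
    results = [rubric_score(o) for o in outputs]
    passed = sum(sum(r.values()) for r in results)
    total = sum(len(r) for r in results)
    return passed / total if total else 0.0

score = benchmark(["Order #123 shipped.", ""])
```

Run the same fixed set of test inputs through the agent after every change, and the benchmark number turns a subjective argument into a measurable trend.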

To truly make the most of AI, organizations first have to look within.