Weekend Reflections #5 | The Speed of Doubt: When AI Outruns Trust

[Views are my own]

Last week I ran a competitive analysis that would normally have taken me half a day.

With AI, the first version took ten minutes.

That was not the strange part.

The strange part was that it was good.

I did not discard it. I did not think it was nonsense. But I checked every single point anyway.

By the time I was done, the verification had taken almost as long as doing the work manually.

There were a few misses: small mistakes, nothing material. It was good enough.

And still, I would not have trusted it without going through it myself.

I noticed that reaction because it was not simple scepticism.

Some scepticism is justified. AI can be wrong, shallow, or confidently incomplete, and that instinct has occasionally been right to flag it.

But that was not the interesting part here.

The interesting part was that the doubt remained even after the output had mostly earned confidence.

Psychology has a useful lens for part of this: the effort heuristic.

In 2004, Justin Kruger, one of the psychologists behind the Dunning-Kruger effect, and colleagues ran a simple experiment. They asked people to rate the quality of a poem. The only variable was how long the poet had reportedly spent writing it: four hours or eighteen.

The poem was identical.

The ratings were not.

The version associated with more effort was judged more favourably.

We often use visible effort as a shortcut for quality, especially when quality is hard to assess. Not because we are irrational, but because in many contexts effort used to be a useful signal of care, diligence, and seriousness.

Time was a reasonable proxy for care, and the shortcut earned its place for good reason.

And it does not only apply to art.

Think about the last time a colleague came back with a proposal in an hour when you had expected a week. Think about the discomfort of a decision that arrived too quickly, or a code review that seemed too easy. The output might have been excellent. But something nagged. We have always been suspicious of things that arrive faster than our mental model says they should.

What AI changes is the scale of that gap.

It is not that AI shaves a few hours off a task. It collapses the timeline entirely. And when the time collapses, something stranger happens: the result stops feeling plausible.

Not because it is obviously wrong.

Because it arrived faster than quality is supposed to arrive.

There is nothing to picture. No image of someone reading, cross-referencing, drafting. The result simply arrives.

Without a visible process, the brain fills the gap with the only tool it has left.

Doubt.

I call this The Speed of Doubt.

Not the suspicion itself, which makes sense given the signals we have learned to trust. But the fact that it persists even when we know better. Even after we have checked the output and found it right. Even after we have been wrong about our scepticism enough times to know we will be wrong again.

Some products have responded to this instinct by making effort visible, or at least making it appear visible.

Travel sites have long used progress messages like "checking airlines" or "comparing fares". Sometimes the point is not only computation. It is reassurance. The result feels more credible because the product performs the work in front of us.

That may be effective. But it points in the wrong direction.

The real question is not how to make AI look like it took longer.

It is how to help people build confidence in work that no longer takes the amount of time they expect serious work to take.

That may mean showing what the system read, what it compared, where it was uncertain, and what should still be checked. Not theatre. Legibility.

Because AI can compress production time without compressing accountability time.

A draft may take three minutes.

The decision to use it may still take four hours.

I have been asking myself whether the trust gap recalibrates with exposure.

For some tasks, yes. Where the cost of being wrong is low, the doubt fades quickly. I now reach for AI without hesitation for drafts, summaries, first passes at research.

But for decisions that carry real weight, the suspicion holds. A strategic analysis. An evaluation that will shape something important. The effort heuristic still applies a quiet discount to speed. A residual assumption that difficulty should be visible before the result can be trusted.

It is a bit like flying. Experienced travellers know the statistics. They have flown hundreds of times. And yet turbulence still triggers something, even briefly, even when the rational mind has already moved on. Knowing and trusting are not the same thing. They run on different timelines.

The effort heuristic explains part of the discomfort. But in organisations, the issue is not only perceived quality. It is accountability. The reviewer is not just asking, “Is this good?” They are asking, “Can I defend this decision if it matters later?”

The people most accountable for consequential decisions are often the ones whose judgment was formed in a world where time and quality were tightly linked. Those intuitions were not wrong. They were accurate for the world they were built in.

That world has changed. The heuristic has not.

And those same people are usually the ones reviewing the most consequential outputs. Which means AI's fastest outputs are not only judged by their accuracy. They are judged by whether they look like enough work happened.

I wonder what the right response to that is.

Whether we try to update the heuristic, and train ourselves and our teams to trust what we can verify rather than what looks like it took effort.

Or whether we build AI outputs that are genuinely more legible: brief, honest, showing their work without slowing it down.

Optimising for the appearance of effort rather than the reality of quality is not a design solution. It is the problem restated.

Where are you still judging AI work by how long the work appears to have taken?