Weekend Reflection #7 | The Absolution Machine

This week I made a design decision that bothered me: a personal AI skill-readiness scanner I am experimenting with must never say the skill is safe. It took me a moment to understand why the constraint felt wrong, and why the alternative is worse. A tool that hands out verdicts is not a tool. It is an alibi.

[Views are my own]

Every time I reviewed the output format, I caught myself softening it.

More confident. More complete. More like what someone would want to receive.

I was building an absolution machine.


I am experimenting with a personal prototype for PM enablement: a skill-readiness scanner that helps someone review an AI skill before sharing or using it.

The goal is enablement, not approval: help PMs experiment confidently while developing better judgment.

The last design decision I made was a constraint: the scanner must never say the skill is safe in an absolute sense.

Not "probably safe." Not "no critical issues found." Not a green checkmark.

It can say: I found no problems in what I checked. It cannot say: You are good.
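To make that concrete, here is roughly the report shape I have in mind. This is a sketch with names I am inventing for illustration, not the prototype's actual code:

    # Sketch only: illustrative names, not the prototype's real code.
    from dataclasses import dataclass, field

    @dataclass
    class Finding:
        check: str         # which check produced this, e.g. "data boundaries"
        observation: str   # what was noticed, stated as a fact about the skill
        question: str      # what the author should go look at next

    @dataclass
    class ScanReport:
        checks_run: list[str]
        findings: list[Finding] = field(default_factory=list)

        # Deliberately no `passed` or `is_safe` field anywhere in the report.
        def summary(self) -> str:
            if not self.findings:
                return ("No problems found in the areas checked: "
                        + ", ".join(self.checks_run)
                        + ". That is not the same as 'this skill is safe.'")
            return f"{len(self.findings)} findings to review before you decide."

The field that is missing is the design decision.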

That is a deliberate design choice. And it took me a moment to understand why it bothered me.


We want tools to close loops.

That is partly what makes a good tool. You put something in, you get an answer out. The loop closes. You move on.

Most of the tools we use are built around this logic. The code checker finds the bug. The test suite passes. The grammar checker signs off. The dashboard turns green.

And somewhere along the way, I think we started using that loop closure for something beyond efficiency.

We started using it for permission.

The test passed, so I can ship. The checker signed off, so I can share. The tool said yes, so the decision is no longer mine.

There is something genuinely useful about that. Accountability shared across a system is how organizations scale. You cannot have every decision reviewed by a senior person. You build tools. You set standards. You let the tools carry part of the weight.

But there is also something that slips in quietly: the desire not to carry the weight yourself.


The verdicts kept appearing in every draft.

  • "No critical findings detected."
  • "This skill appears ready to share."
  • "Passed all checks."

Each version sounded better. More useful. More like what a PM would want to receive after running the scan.

And each version was quietly shifting accountability from the PM to the scanner.

The scanner would become the thing that said yes. The PM would become the person who ran the scanner.


That is not a tool. That is an alibi.

The distinction matters because of what happens when something goes wrong.

If the scanner says no problems found and the skill later behaves unexpectedly, exposes information the author did not intend to share, or creates a consequence nobody anticipated, the first question is almost never "What did I miss?" It is "But the scanner passed it."

That sentence ends the reflection. It also ends the learning.

I have seen this pattern before, not with AI scanners, but with processes, checklists, review gates, approval workflows. The tool becomes the ceiling of accountability. Once it says yes, the human review quietly stops.

I did not build a scanner to replace judgment. I built it to sharpen it. The difference is everything.

Over time, the goal is for PMs to recognize the patterns themselves: vague scope, hidden side effects, unclear data boundaries, overconfident outputs, weak sharing assumptions. The scanner is useful only if it helps build that muscle.
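As a rough illustration of what that looks like, each of those patterns can be phrased as a question the report hands back rather than a gate it clears. Reusing the sketch types above, with placeholder heuristics standing in for the real checks:

    # Placeholder heuristics only; the real checks would inspect the skill properly.
    PATTERN_QUESTIONS = {
        "vague scope": "Could someone misread what this skill is for?",
        "hidden side effects": "Does the skill do anything its description does not mention?",
        "unclear data boundaries": "What data can this skill see, and who sees its output?",
        "overconfident outputs": "Where does the skill state guesses as facts?",
        "weak sharing assumptions": "What changes when someone outside your team runs it?",
    }

    def scan(skill_text: str) -> ScanReport:
        report = ScanReport(checks_run=list(PATTERN_QUESTIONS))
        for pattern, question in PATTERN_QUESTIONS.items():
            # Crude stand-in: flag a pattern if the skill text never mentions it at all.
            keyword = pattern.split()[-1]
            if keyword not in skill_text.lower():
                report.findings.append(Finding(
                    check=pattern,
                    observation=f"The skill text never addresses its {keyword}.",
                    question=question,
                ))
        return report

Every finding ends in a question because the next step belongs to the PM, not the scanner.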


A tool that teaches asks: What did you find? A tool that certifies asks: Can I stop thinking now?

Those two questions lead to very different people over time.

One PM uses the scanner to sharpen judgment. Another uses it to outsource judgment. Both feel like they are learning. Only one is getting better.

This is not a criticism of tools. Tools are necessary. Checklists save lives. Scanners catch mistakes humans miss. In some contexts, clear pass/fail gates are essential. But even then, the gate does not remove accountability. It only defines what has been checked.

A scanner can find patterns. It cannot find the mistake that looks like a reasonable choice in your specific context. It cannot know what your colleague will do with the output. It cannot assess what you were assuming when you wrote the skill.

Those are yours. They were always yours.

A tool that refuses to absolve you is not broken. It is honest.


I am still building this. And I still feel the pull every time I review the output format.

This would be cleaner if it just gave a verdict.

It would. It would also make the wrong thing easier.

There is a version of AI enablement that slowly becomes AI permission-granting. Not because anyone designed it that way, but because the system makes it too easy to confuse a check with a decision.

I do not want to build that.

The scanner will tell you what it found. It may recommend what to review next. But it will not make the decision for you. That part is not a bug in the design.

It is the point.


What tool are you using as permission rather than as thinking?