Weekend Reflections #3 | The Fluency Tax
[Views are my own]
This week, I noticed something in my own habits.
As an Italian living in Berlin and working mostly in English, I now default to English with AI even when it is not the language that feels most natural to me.
That felt worth paying attention to.
I started noticing a small but consistent pattern: when I use AI in English, the system feels slightly better calibrated. Not dramatically. Just enough to matter.
My own language habits are already split. I still prefer watching films in Italian. But I read mostly in English, and I take notes in English too. English is also my working language, so part of this may simply be habit.
Still, that is exactly what makes the shift worth noticing. AI is not entering a neutral linguistic space. It is reinforcing the language I already associate with precision, utility, and speed.
I think of that as a Fluency Tax: the hidden cost you pay when the language that feels most natural to you is not the one the system handles best.
For years, English was simply the language around technology. Documentation, forums, tutorials, and the best rabbit holes were usually in English. If you wanted access, you crossed that bridge.
With AI, the issue feels different. This is not only cultural dominance, where people choose what they read, watch, and follow. It is also architectural dominance, where the system itself is better calibrated to one language than another.
The first is preference. The second is protocol.
And in AI, that distinction matters.
When I want the best result from a system, I still tend to reach for English. Not because it is the language in which I think most deeply. Some ideas still arrive more clearly in Italian than they ever will in English. But English more often gives me the cleanest handoff between intent and output.
So I tested that instinct on myself.
I ran comparable prompts in Italian and English across drafting, reasoning, and translation loops.
I would frame this as observation, not proof, but the pattern felt real. The gap was small, yet consistent. The Italian flow felt slightly less precise.
It was not a translation issue. It was in the loop itself: the prompt arriving a little less sharply, the output calibrated to a default the system understood better, the gap between what I meant and what it heard. Small. Consistent. Enough to change direction.
And I made a practical choice.
English is becoming my AI working language. Not my thinking language, and not the language I use to process the world. But the one I now reach for when I want the lowest-friction handoff between intent and output.
That is a strange shift to notice in yourself.
You are no longer choosing language only for expression. You are also choosing it for yield.
And once language starts behaving like interface, it starts shaping who moves faster, who gets better results, and who feels at home using these systems in the first place.
That has real consequences inside teams.
The trap is misdiagnosis.
When some people get better results from AI, leaders may assume they are more curious, more capable, or simply more AI-native. Sometimes they are. But sometimes they are just operating closer to the system’s rewarded language. What looks like a motivation gap may partly be a fluency gap.
In my experience, English-language comfort seemed to shape early AI adoption at least as much as technical seniority.
Language is quietly becoming a capability variable.
That matters in global product teams. AI adoption may depend not only on tool access, policy, or training, but also on whether people feel fluent in the language the system rewards.
The gap is narrowing, and languages like Italian are no longer badly served. But the calibration is still uneven. English often remains the path of least resistance, while other languages, especially lower-resource ones, still sit further from the system’s center of gravity.
If that is true, then fluency stops being only a communication issue. It becomes a performance variable.
Adoption metrics become harder to read. What looks like uneven curiosity or skill may partly reflect differences in language proximity.
For leaders rolling out AI, that raises a harder question: how much of the adoption gap inside your team is really about skill or motivation, and how much is about who feels fluent in the language the system rewards?