Your Brain Is a Judgment Machine Now
AI didn’t eliminate the hard part of knowledge work. It compressed it into every minute of the day.
The dominant narrative about AI and work goes like this: AI handles the production, you handle the thinking, everyone goes home early. It is a clean story. It is also wrong in a way that matters.
A principal engineer at a large telecom posted on Reddit this week about being all-in on agentic coding for two years and thinking about quitting software engineering entirely. Not because the tools don’t work. Because they work too well. The line that stopped me: “The cost of writing code in effort / time was a throttling middleware.”
That phrase deserves to sit for a second. Writing code used to be slow enough that your brain could keep up. The effort of typing, compiling, debugging — all of that created a natural pace. You had time to sense when a pattern was wrong. Time to think through the shape of a class or the implications of an architectural choice. The slowness wasn’t a bug. It was cognitive infrastructure.
Now that infrastructure is gone. And this engineer — 13 years of experience, principal level — reports making ten whiteboard-level architectural decisions before his second cup of coffee. Decisions that used to happen once a sprint, maybe twice, gated by the slow, expensive process of actually building things. The dam broke. The decisions didn’t disappear. They accelerated.
The production layer was never the bottleneck you thought it was
When people talk about AI making work faster, they’re usually describing the production layer — the part where raw effort turns into output. Drafting, coding, formatting, researching. And yes, AI compresses that layer dramatically. What nobody accounted for is what happens to the other layer — the judgment layer — when production speeds up by an order of magnitude.
Every piece of AI-generated output requires evaluation. Is this right? Is this good enough? Does this fit the architecture? Does this solve the actual problem or just the surface symptom? Those questions existed before AI, but they arrived at a pace your brain could absorb. Production time was thinking time. The gap between “I need this” and “here it is” gave your mind room to prepare for the decision.
That gap is gone. And the result is not faster work. It’s faster judgment demands on a brain that hasn’t changed speed.
A Boston Consulting Group study published in Harvard Business Review in March 2026 put a name to this: “AI brain fry.” They surveyed 1,488 full-time U.S. workers and found that high AI oversight — the kind that requires reading, interpreting, and evaluating AI output — was associated with 14% greater mental effort, 12% greater mental fatigue, and 19% greater information overload. Workers described a fog or buzzing that forced them to physically step away from their screens. One of the study’s authors told Fortune the pattern was consistent: people were getting more done but hitting the limits of their cognitive capacity because there were simply too many decisions to make.
An eight-month study of a 200-person tech firm, led by researchers at UC Berkeley, found the same dynamic from a different angle. AI wasn’t reducing work. It was intensifying it. Employees processed more information, made more decisions, and experienced more burnout — not less — as AI adoption increased.
Decision fatigue is not a new concept. The delivery mechanism is.
Psychologists have studied decision fatigue for decades. The core finding is straightforward: the quality of your decisions degrades as you make more of them. Roy Baumeister’s ego depletion research argued that decision-making draws on a finite cognitive resource; later replication attempts have challenged the mechanism, but the practical pattern is well documented. Make enough decisions and you start defaulting to heuristics, avoiding trade-offs, or simply deferring. The average American adult reportedly makes around 35,000 decisions a day. Most of those are trivial. The ones that matter are the ones that require actual evaluation.
What AI does is change the ratio. It doesn’t increase the total number of decisions. It increases the density of consequential ones. When production was slow, your day was a mix of low-stakes mechanical work and occasional high-stakes judgment calls. The mechanical work gave your brain recovery time between the hard decisions. It was boring, but it was load-bearing.
Remove the mechanical work and what’s left is a continuous stream of judgment. Architecture choices. Quality assessments. Risk evaluations. Strategic trade-offs. All day. No recovery intervals. The Reddit poster described it precisely: running ten whiteboard-level decisions before his second cup of coffee, decisions that used to be spaced across a sprint. His brain isn’t slower than it was. The demand on it is faster.
We don’t understand the cost of running a judgment machine all day
This is the part almost nobody is talking about. We’ve spent two years celebrating AI’s ability to remove the production burden. We have not spent two minutes thinking about what happens to the human on the other side when the production burden was also a cognitive pacing mechanism.
The Reddit engineer said something else that stuck: “I feel like for the devs that have survived layoff rounds, AI has raised the bar of required skills, not lowered it.” That maps directly to the Jevons Paradox applied to AI — as AI efficiency increases, the demand for human capability doesn’t decrease. It increases. The skills that matter shift upward. The judgment, the architectural thinking, the ability to evaluate quality at speed — those become the job. And the job becomes relentlessly, uninterruptedly hard.
This isn’t a coding problem. It’s a knowledge work problem. Every profession that adopts AI tools effectively will hit this same wall. Lawyers reviewing AI-drafted contracts. Financial analysts evaluating AI-generated models. Marketers assessing AI-produced campaigns. The production layer compresses. The judgment layer concentrates. And the person in the middle has to run their brain at a sustained intensity that the old workflow never required.
We don’t have infrastructure for this yet. We don’t have pacing strategies. We don’t have cognitive load frameworks adapted for AI-augmented work. We don’t even have language for the problem — which is why “brain fry” and “throttling middleware” resonate so immediately. People recognize the feeling before anyone names it.
The work behind the work just got more urgent
The conventional response to this problem will be training. Run a workshop on managing AI output. Distribute a tip sheet on decision prioritization. That approach will fail for the same reason it always fails — it treats the symptom without touching the structure.
The actual work is harder than a workshop. It’s developing the ability to see your own workflow clearly enough to know which judgments matter and which don’t. To calibrate your trust in AI output so you’re not re-evaluating everything at full intensity. To build the signal discrimination that lets you spot the 5% of output that needs real attention and let the rest move.
That is not a training problem. That is a capability development problem. And it’s one that gets more urgent, not less, as the tools get faster.
Your brain was always a judgment machine. AI just made it the only machine that matters.
If this resonated, subscribe to get the next one directly. I write about the operational reality of AI adoption — what it actually looks like when the hype fades and the work begins.