Nobody Ever Downsized Their Way to Remarkable
The guy sitting across from me at the coffee shop was explaining, with the particular intensity of someone who'd rehearsed this in the shower, why his company had just let go of four hundred people. "We're leaning into AI," he said, and the way he said leaning into made it sound athletic, voluntary, almost fun. He was a VP of something. Operations, maybe. He kept using the word optimized until I realized he meant smaller. His latte had gone cold. He hadn't noticed. He was too busy narrating the future of his company as though reading from a script he'd written for a board meeting and then accidentally brought to a Saturday morning conversation with a near-stranger.¹
I nodded in the way you do when someone is confessing something and doesn't know it yet.
Because here's what I kept thinking, and what I didn't say, partly out of politeness and partly because I wasn't sure I could articulate it cleanly: the thing he was describing as strategy, to me, looked more like panic wearing a spreadsheet.
There's a version of the AI-adoption story everyone seems to have memorized. Company sees new technology. Company realizes new technology can do things humans used to do. Company fires humans. Company becomes "lean." Company wins. It has the clean logic of a syllogism, and yet it falls apart the moment you press on any of its assumptions, which is probably why nobody in the meetings where these decisions get made seems to press very hard.²
The problem isn't AI. The problem is the mental model people reach for when they think about what AI is for. And the model most executives reach for, almost reflexively, is subtraction. Fewer people. Lower overhead. Tighter margins. The math is seductive: if a ten-person startup can compete with a thousand-person incumbent, then surely a five-hundred-person company can do what a thousand-person company used to do, right?
Wrong. Or at least, wrong in the way most people mean it when they say it.
Consider the Jevons paradox, which I've written about previously. It began as a counterintuitive observation about coal-powered steam engines: as the engines became more efficient, total coal consumption didn't fall. It rose. Dramatically. Efficiency made coal cheaper to use, which meant people found more things to use it for. The expected savings never materialized because demand expanded faster than efficiency could compress it.³
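If you want the arithmetic spelled out, here's a toy sketch in Python. Every number in it is invented; the point is the shape of the thing, not the magnitudes:

```python
# Toy illustration of the Jevons paradox. All numbers are invented.
# An engine gets twice as efficient, so each unit of useful work
# needs less coal, and the cheaper work attracts new uses.

coal_per_unit_of_work = 10.0    # tons of coal per unit of work, before
units_of_work_demanded = 100.0  # demand at the old effective price

# Efficiency doubles: half the coal per unit of work.
new_coal_per_unit = coal_per_unit_of_work / 2

# Cheaper work finds new uses. Suppose demand triples, which is the
# paradox's whole premise: demand grows faster than efficiency compresses.
new_units_demanded = units_of_work_demanded * 3

old_total = coal_per_unit_of_work * units_of_work_demanded  # 1000.0 tons
new_total = new_coal_per_unit * new_units_demanded          # 1500.0 tons

print(f"before: {old_total} tons, after: {new_total} tons")
# Total consumption rose fifty percent despite a doubling of efficiency.
```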
The relevance to the current moment should be obvious, though somehow it isn't, at least not in the rooms where headcount decisions get made.
If AI makes each individual worker dramatically more productive, the correct response isn't to reduce the number of workers. It's to recognize that you now have a workforce capable of output you couldn't previously have imagined. Ten people with AI tools aren't just doing the old work of ten people faster. They're doing work that previously required a hundred people, or work nobody would have attempted at all because the resource cost was prohibitive.⁴ The ten-person startup isn't winning because it's small. It's winning because every person in it is operating at a scale the larger company hasn't figured out how to unlock.
So when the thousand-person company responds by cutting to five hundred, it hasn't matched the startup's advantage. It's just made itself a five-hundred-person company with the same structural limitations, the same workflows, the same assumptions about how value gets created. Except now it has half the people available to question those assumptions.
This is, I think, the part nobody wants to talk about, because it requires admitting something uncomfortable: the layoffs aren't really about AI at all. They're about a particular theory of what a company is for, and that theory predates AI by decades.
The theory goes something like this: a company is a cost structure with revenue on top. Reduce the cost structure, and you've improved the company. This theory treats humans as line items, and it treats line items as things to minimize. It is elegant in the way a diet is elegant: fewer calories in, more weight lost. Simple. Clean. And, past a certain point, self-defeating.⁵
I've spent years working inside large organizations trying to make their tools and processes better, and one thing I can tell you with confidence is this: the companies treating headcount as a cost to minimize are almost never the companies producing anything remarkable. They're the ones producing adequate work with declining morale, a combination with a surprisingly short shelf life.
There's a concept in organizational theory called organizational slack, first articulated by Richard Cyert and James March in their 1963 book A Behavioral Theory of the Firm. Slack, in their formulation, is the difference between a company's total available resources and the minimum resources needed to keep the operation running. In traditional economics, slack is waste. Something to be eliminated. Cyert and March argued the opposite: slack is what allows organizations to absorb shocks, to experiment, to adapt when the environment changes unexpectedly.⁶
Nitin Nohria and Ranjay Gulati, both at Harvard, extended this idea in a 1996 paper by showing an inverted-U relationship between slack and innovation. Too little slack, and you can't experiment because there's no margin for failure. Too much, and discipline evaporates because nothing has consequences. But in the middle, in the zone where resources exceed the bare minimum without becoming absurd, organizations do their best work. They take intelligent risks. They try things. They have enough breathing room for someone to say, "What if we did this differently?" without being told there's no budget for differently.⁷
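The shape is easy to sketch, if you'll forgive a little code. The quadratic below is invented for illustration (the real relationship is empirical and much messier), but the inverted U is the point:

```python
# Toy model of the Nohria-Gulati inverted U: innovation as a quadratic
# function of slack. The quadratic is invented; the real relationship
# is empirical. Slack is normalized to the range 0 (none) to 1 (absurd).

def innovation(slack: float) -> float:
    """Peaks at moderate slack, falls toward zero at both extremes."""
    return max(0.0, 0.25 - (slack - 0.5) ** 2)

for slack in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    bar = "#" * round(innovation(slack) * 80)
    print(f"slack={slack:.1f}  innovation={innovation(slack):.2f}  {bar}")
# The printout climbs toward slack=0.5 and falls away on both sides:
# too little and nothing can be tried, too much and nothing has stakes.
```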
Now think about what mass AI-driven layoffs do to this curve. They don't nudge companies toward the optimal middle. They shove companies hard toward the left edge, the zone of too-little-slack, where every remaining employee is stretched across the work of two or three former colleagues, where nobody has time to think about anything except today's tickets, where the institutional knowledge walked out the door with the people who were just "right-sized" into unemployment.⁸
I've watched versions of this happen in real time at companies I won't name, though you'd recognize some, and I've read accounts of others. One tech company cut thirty percent of its support organization and filled the gap with AI chatbots and automated routing. For about six months, the metrics looked fantastic. Resolution times dropped. Cost-per-ticket dropped. The dashboards were a symphony of downward-trending lines, and everyone who'd signed off on the cuts got to feel vindicated at quarterly reviews.⁹
Then the edges started fraying.
The chatbots couldn't handle ambiguity. The remaining human agents, now drowning in escalated cases they didn't have bandwidth to properly investigate, began resolving tickets by defaulting to the fastest available answer rather than the most accurate one.¹⁰ Customer satisfaction scores, which had a longer feedback loop than resolution time, started sliding. And the institutional knowledge required to train the AI models (because those models need to be trained on something, and the something is usually the accumulated expertise of the people you just fired) began degrading because the people who held it were gone.
This is the paradox nobody in the efficiency narrative wants to confront: AI systems don't replace human expertise. They amplify it. And you can't amplify something you've eliminated.
It's a bit analogous to building a speaker system, and I recognize this metaphor might not survive close scrutiny. You can have the most sophisticated amplifier in the world, the kind of thing audiophiles lose weekends to configuring. But if you've removed the source signal, if there's nothing going into the amplifier, all you get is a clean, well-powered silence. The technology is doing its job perfectly. There's just nothing for it to work with.¹¹
The smarter play, the one almost nobody seems to be making, is the opposite of subtraction. It's multiplication.
Imagine (and I don't have to imagine this, because I've seen early versions of it working) that a company with a thousand employees decides not to cut anyone. Instead, it restructures. It doesn't add AI to existing workflows; it redesigns the workflows around what AI makes possible. It trains every single employee to work with AI tools. It redistributes responsibilities not to do the same work with fewer people, but to do more and better work with the same people.
What does this company look like? It looks, functionally, the way a ten-thousand-person company used to look. Each employee, augmented by AI, is operating at a level of output and complexity that previously required a team.¹² The company's overhead hasn't changed much. Its revenue capacity has exploded. And its competitor, the one that cut to five hundred people and called it transformation, is now trying to compete against what amounts to an organization ten to twenty times its effective size.
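The back-of-the-envelope version looks something like this. Both multipliers are assumptions, not measurements (footnote 12 concedes as much), but they make the structural gap legible:

```python
# Back-of-the-envelope comparison of subtraction versus multiplication.
# Both multipliers are assumptions chosen for illustration, not data.

baseline_output_per_person = 1.0

# Strategy A: cut to 500 people, hand out AI tools, keep old workflows.
# Assume a modest 2x multiplier because nothing structural changed.
effective_size_a = 500 * baseline_output_per_person * 2     # 1,000

# Strategy B: keep all 1,000 people, redesign workflows around AI.
# Assume a 10x multiplier because the structure changed with the tools.
effective_size_b = 1000 * baseline_output_per_person * 10   # 10,000

ratio = effective_size_b / effective_size_a
print(f"B competes at {ratio:.0f}x A's effective size")
```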
I keep returning to Cyril Northcote Parkinson, who observed in a satirical 1955 essay for The Economist that work expands to fill the time available for its completion. Most people cite this as a warning against slack (and yes, I just argued for slack three paragraphs ago; hold on). But there's an underappreciated corollary: when you compress the time available, you don't just get the same work done faster. You get different work. You get work shaped by constraint, which is often more creative, more focused, and more valuable than work shaped by abundance.
The inverse is also true. When you give talented people AI tools that compress the boring parts of their jobs (the drudgery, the repetitive data entry, the ticket triage, the meeting summaries nobody reads), you don't get people sitting idle. You get people filling that freed time with the work they always wished they could do but never had bandwidth for.¹³ The senior engineer who spent forty percent of her week on documentation starts building the internal tool she's been sketching on napkins for two years. The support agent who spent hours writing the same email with minor variations starts analyzing patterns across tickets and proposing process improvements nobody asked for.
This is what multiplication looks like in practice. Not fewer people doing old work. More people doing new work.
And here's where the argument gets uncomfortable for the efficiency and cost-cutting crowd, because the thing being described isn't efficiency at all. It's something closer to what organizational theorists call absorptive capacity, the ability of an organization to recognize, assimilate, and apply new external knowledge.¹⁴ Absorptive capacity isn't a function of how lean you are. It's a function of how many people you have who understand enough about the business, the customers, the technology, and the competitive landscape to do something useful with new information when it arrives.
Fire half those people, and your absorptive capacity doesn't decline by half. It declines by more than half, because knowledge inside organizations isn't distributed evenly; it's networked. Lose a node, and you lose every connection passing through it.¹⁵
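Here's a small sketch of why the loss is superlinear, using an invented knowledge network of six people:

```python
# Why losing people costs more than their headcount share: organizational
# knowledge is networked, not evenly distributed. This six-person network
# is invented for illustration; real organizations are bigger and messier.

# Who routinely exchanges knowledge with whom (undirected edges).
edges = {
    ("ana", "ben"), ("ana", "carol"), ("ana", "dev"), ("ana", "ema"),
    ("ben", "carol"), ("carol", "dev"), ("dev", "ema"), ("ema", "fei"),
}

people = {person for edge in edges for person in edge}

def connections_through(person: str) -> int:
    """Count the knowledge-sharing paths that touch this person."""
    return sum(1 for edge in edges if person in edge)

# Lay off one well-connected person: one sixth of the headcount...
lost = connections_through("ana")
print(f"headcount lost: 1 of {len(people)} ({1 / len(people):.0%})")
print(f"connections lost: {lost} of {len(edges)} ({lost / len(edges):.0%})")
# ...and half the knowledge-sharing paths in this toy network go with her.
```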
I think about this when I read the breathless announcements. PayPal cutting nearly five thousand jobs in an "AI overhaul." Block cutting its headcount almost in half because, in Jack Dorsey's words, "intelligence tools have changed what it means to build and run a company." The language is always aspirational, always forward-looking, as though the company is upgrading rather than amputating.
But upgrades add capability. Amputations remove it. And the question nobody seems to be asking is: what capabilities just left the building?
Goodhart's Law, the observation by economist Charles Goodhart that when a measure becomes a target it ceases to be a good measure, applies here in a way I haven't seen anyone articulate clearly. Headcount became a measure of cost. Then it became a target for reduction. And the moment it became a target, it stopped being a useful measure of anything except compliance with a directive to get smaller.¹⁶ Nobody's measuring what the company lost when it hit that number, because loss is hard to quantify, and quarterly earnings calls prefer numbers you can put on a slide.
The guy at the coffee shop finished his story. His company was doing great, he said. Leaner. Faster. He said "faster" twice. I asked him what they were doing with all that speed, what new products they were building, what markets they were entering, what problems they were solving now that they had all this extra capacity.
He paused. It was a long pause. He said they were focused on integration right now. Getting the AI systems working smoothly. Ironing out the kinks.¹⁷
So they'd fired four hundred people and were now spending all their time getting the tools to work as well as the people had.
I didn't say this. I said something about how transitions are always messy. He agreed. We moved on to talking about whether the Padres had a shot this season, which was a question with a cleaner answer and lower stakes.¹⁸
But I keep coming back to it. This idea that the companies cutting their way into the AI future are going to wake up one morning and realize they've been competing against organizations with ten times their effective workforce, organizations making the less intuitive, more expensive, and ultimately more intelligent choice to keep their people and multiply their capacity rather than subtract it. The race isn't going to the lean. It's going to the dense.
And the ones who got smaller? They'll be fast, sure. Fast and empty, moving at great speed toward a destination they no longer have enough people to find.
¹ The "lean into" construction has become the corporate equivalent of "I'm not mad, I'm just disappointed." It signals effort without admitting vulnerability. Everyone leans into things now. Nobody ever leans away from things, which is sometimes the more honest direction. ↩︎
² I should note I'm describing a pattern, not a universal rule. Some companies have genuinely used AI adoption as an occasion for thoughtful restructuring. They're just not the ones making headlines, because "Company Keeps Everyone and Gets Better" is a less compelling news cycle than "Company Slashes Workforce by 40%." ↩︎
³ The coal example is almost too clean, which makes me suspicious of it in the way I'm suspicious of any historical analogy mapping perfectly onto a contemporary situation. But the underlying logic, that efficiency gains often increase total demand rather than decrease it, has been validated across enough contexts to take seriously. ↩︎
⁴ There's a version of this where I sound naively optimistic about AI's capabilities, and I want to be clear: I'm not. Current AI tools are powerful and flawed in roughly equal measure. But even their current, imperfect state is enough to dramatically expand what a single human can accomplish in a given workday. ↩︎
⁵ The diet metaphor is reductive, I know. Bodies aren't companies. But the underlying dynamic is real: past a certain point, cutting calories doesn't produce fitness. It produces a different kind of decline, one measured in lost muscle mass and metabolic slowdown rather than quarterly earnings, but the mechanism is the same. ↩︎
⁶ This is one of those ideas that feels obvious once you hear it and then gets forgotten the moment someone walks into a meeting with a cost-reduction target. The history of management theory is littered with ideas everyone agrees with in the abstract and ignores in practice. ↩︎
⁷ The inverted-U shape shows up in so many domains (exercise intensity and performance, stress and productivity, spice and flavor) that you'd think we'd have learned to look for it by now. We haven't. The instinct to optimize by pushing a variable to its maximum (or minimum) is surprisingly resistant to evidence. ↩︎
⁸ "Right-sized" remains one of the more cynical euphemisms in the corporate lexicon, rivaled only by "let go" (as though the company was holding on and generously released its grip) and "reduction in force" (which abbreviates to RIF, a word that sounds uncomfortably close to "riff," as in something you play casually). ↩︎
⁹ I want to be careful here. The people who signed off on those cuts weren't villains. They were operating inside a system rewarding short-term metric improvement and punishing long-term thinking. The problem is structural, not moral. Though the structural problem does produce moral consequences. ↩︎
¹⁰ This is a known failure mode in overtaxed support systems: when humans lack the time to investigate properly, they satisfice. They choose the answer with the highest probability of being good enough rather than the answer with the highest probability of being correct. These are two different things, and the gap between them is where customer trust goes to erode. ↩︎
¹¹ I realize I just snuck an audiophile metaphor into a business essay and I'm not apologizing for it. The metaphor is too apt: AI amplifies the signal, but someone has to generate the signal in the first place. No signal, no amplification. Just very expensive equipment doing nothing. ↩︎
¹² The math here is rough and illustrative, not precise. But the directional claim, that AI-augmented employees can produce output at a scale previously requiring larger teams, is borne out by early adopter data across multiple industries. The exact multiplier varies. The direction doesn't. ↩︎
¹³ This is the part of the argument where skeptics reasonably point out that many employees, freed from drudgery, will simply fill the time with different drudgery or with meetings. Fair. Parkinson's Law cuts both ways. The difference is whether the organization has been intentionally restructured to channel freed capacity toward valuable work, or whether it just handed out AI tools and hoped for the best. The former works. The latter doesn't. ↩︎
¹⁴ The concept of absorptive capacity was formalized by Wesley Cohen and Daniel Levinthal in a 1990 paper for Administrative Science Quarterly. It's one of those academic constructs that explains something everyone intuitively knows but rarely articulates: you can only learn new things if you already know enough related things to make the new things make sense. ↩︎
¹⁵ Network effects in organizational knowledge are real and underappreciated. The person who gets laid off doesn't just take their own expertise. They take every informal connection, every hallway conversation, every "hey, do you remember why we did it this way?" The org chart doesn't capture these connections. HR spreadsheets don't capture them. They exist only in the living tissue of the organization, and once severed, they don't grow back. ↩︎
¹⁶ There's a whole essay to be written about how the metrics dashboards that were supposed to make companies smarter have, in many cases, made them dumber by creating the illusion of understanding where none exists. But I'll save it. ↩︎
¹⁷ "Ironing out the kinks" is the corporate equivalent of "I'm fine." It means the opposite of what it says. ↩︎
¹⁸ They don't. But it was nice to talk about something where being wrong didn't result in anyone losing their job. ↩︎