What Happens When the Expensive Part Gets Cheaper, and Why Nobody Believes the Answer
So there I was in the Safeway checkout line, not doing anything particularly noteworthy, holding a bag of pre-washed arugula I almost assuredly did not need, when the man ahead of me started arguing with the self-checkout kiosk. Not arguing in the metaphorical sense, either. He was speaking to the machine in full sentences, reasoning with it, pleading his case about a coupon the scanner wouldn’t accept. “I have the coupon right here,” he said, holding his phone up to the blinking red light. “It’s valid. It says valid right on it.” And the machine, as machines do, just beeped. Twice. The same beep. It did not care about the coupon, or the man, or the distinction between valid and invalid. It was, as far as I could tell, doing its job with the serene indifference of something that had never once questioned whether its job was worth doing.
An employee appeared, badge slightly crooked, scanned her override card, and in four seconds resolved what the man had spent two minutes contesting.1 I stood there watching this and thinking about something I’d read that morning: a news item about a wearable tech company called WHOOP that had just announced plans to hire more than 600 people, nearly doubling its workforce.2 In 2026. While half the industry was laying people off.
The WHOOP thing stayed with me because of something its CEO, Will Ahmed, said in the announcement. He was asked about the tension between hiring humans and investing in artificial intelligence, and his answer was the kind of sentence that sounds boring until you actually sit with it: “We are doing both.” Not “we’re replacing people with AI” or “we’re augmenting our team” or whatever consultant-speak you’d expect from a press release. Just: we are doing both. Hiring people and building AI. Simultaneously. As though the two activities were not, in fact, locked in a zero-sum death match for organizational resources.3
Which, okay. A reasonable person might hear that and think: sure, that’s just corporate messaging. Companies say things. They say them in press releases and then do whatever the quarterly earnings call requires. And the skepticism isn’t wrong, exactly. It’s just incomplete. Because what Ahmed was describing, whether he meant to or not, was the practical application of a very old idea, one that a Victorian-era English economist named William Stanley Jevons figured out in 1865 while everyone around him was panicking about coal.4
Here’s the short version. Jevons noticed that as steam engines became more efficient (requiring less coal per unit of work), coal consumption didn’t drop. It rose. Dramatically. Because efficiency made coal-powered work cheaper, and cheaper work meant people found more things to do with it. More factories. More railways. More everything. The better the engine, the more coal Britain burned.5 This is what economists now call the Jevons paradox, and it is one of those ideas that, once you see it, keeps appearing everywhere: in LED bulbs that were supposed to reduce energy consumption but instead got installed in quantities that more than offset the savings, in wider highways that were supposed to reduce traffic but instead attracted more drivers, in the entire history of computing, where every generation of faster, cheaper processors has been answered not with “great, we can do the same things for less” but with “great, what new things can we now attempt?”6
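If you want the mechanism without the metaphors, it fits in a few lines. Here's a toy constant-elasticity model of the paradox; every number in it is invented for illustration (the elasticity values, the baseline demand), and it sketches the logic, not Victorian coal data:

```python
# Toy model of the Jevons paradox. All numbers are illustrative
# assumptions, not historical data.

def fuel_burned(efficiency, elasticity, base_work_demand=100.0):
    """Total fuel consumed once engines get more efficient.

    The price of useful work falls as 1/efficiency; demand for work
    responds with the given price elasticity; fuel consumed is the
    work demanded divided by efficiency."""
    price_of_work = 1.0 / efficiency
    work_demanded = base_work_demand * price_of_work ** (-elasticity)
    return work_demanded / efficiency

baseline = fuel_burned(efficiency=1.0, elasticity=1.5)  # 100.0

# Double the engine's efficiency. If demand for the work is elastic
# (elasticity > 1), total fuel use *rises* -- Jevons' observation:
print(fuel_burned(2.0, elasticity=1.5))  # ~141.4: more coal, not less

# With inelastic demand (elasticity < 1), efficiency does save fuel:
print(fuel_burned(2.0, elasticity=0.7))  # ~81.2
```

The whole argument about whether efficiency backfires reduces to that one exponent: fuel use scales as efficiency raised to (elasticity − 1), so the paradox bites exactly when the appetite for the underlying work is elastic, which, for coal-powered work in 1865 and arguably for cognitive work now, it is.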
So. AI is making the cost of intelligence drop. That’s real. The doomer interpretation says: cheaper intelligence means fewer humans needed to produce intelligence, which means fewer jobs, which means economic displacement on a scale we can’t fathom. And the doomer interpretation isn’t stupid. It’s a reasonable extrapolation from one set of premises. The problem is that it relies on a very specific (and, historically, very wrong) assumption about how economies respond to efficiency gains. It treats human labor as a fixed quantity, a pie with a set number of slices. Economists have a name for this assumption, too: the lump of labor fallacy. The idea that there’s only so much work to go around, so if a machine does some of it, a human somewhere must be sitting idle.7 It sounds intuitive. It feels true the way a lot of things feel true right before you discover they aren’t.
But here’s what I keep turning over. The cost reduction frame (how many fewer people do we need?) assumes a fixed pie of value and optimizes for how efficiently you capture your slice. The ambition frame (what can we do now that was previously impossible?) assumes the pie was artificially constrained by the cost of execution, and that removing the constraint creates a larger opportunity than all the savings could. The history of technology tells us which frame wins. When steel got cheap, the industry didn’t just make the same amount of steel for less money; it expanded into skyscrapers, railroads, automobiles, categories of construction that hadn’t existed when steel was expensive. When computing got cheap, it didn’t just make the same calculations faster; it created personal computing, the internet, mobile, cloud, entire civilizations of software. And when distribution got cheap, the media companies that played defense got obliterated by companies that built new categories of content nobody had imagined.8
I keep coming back to that guy at the self-checkout. Not because it’s a perfect metaphor (it isn’t) but because it captures something about the emotional texture of this moment.9 The kiosk was doing the rote work of scanning items. It was doing it fine. The man wasn’t mad at the kiosk for being incompetent; he was frustrated because the situation had become one requiring judgment, context, and a tiny bit of social negotiation, and the kiosk had no access to any of those things. The employee with the override card wasn’t “more efficient” than the machine. She was a different kind of capable. She could look at the coupon, look at the man, make a judgment call about whether the thing was valid enough, and resolve the situation in a way that left the man feeling heard rather than processed.10
This is, I think, where the conversation about AI and work keeps going sideways. We keep framing it as a question of replacement (can the machine do what the person does?) when the more operative question is one of expansion (what becomes possible when the expensive part gets cheaper?). When Satya Nadella posted about the Jevons paradox on social media in January 2025, right after DeepSeek rattled the markets, he was making exactly this point.11 If AI compresses the cost of cognitive work, demand for cognitive work doesn’t contract. It explodes. The same way steam power didn’t reduce the need for physical labor but instead created entirely new categories of physical labor that hadn’t existed before.
Consider what this actually looks like in practice, because it’s easy to stay abstract about it, and the abstraction is where the doom narrative lives most comfortably. Right now, inside most companies, there are hundreds of things nobody does because doing them is too expensive, too slow, or too dependent on scarce expertise. Market analyses that never get run. Customer segments that never get studied. Product ideas that never get prototyped.12 Not because nobody thought of them, but because the time and talent required to execute them exceeded the resources available. A company will look at a $10 million market opportunity and decline to pursue it because the engineering team costs $3 million a year. They’ll look at an R&D project with a 20% shot at success and pass because failure would cost two quarters of road map. These calculations made perfect sense when execution was expensive. They make no sense at all when execution cost drops by an order of magnitude.
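The back-of-the-envelope version of those calculations is worth writing out, because the flip is so stark. This is a sketch with made-up numbers; the three-year horizon and the cost figures are illustrative assumptions, not anything from a real company's books:

```python
# Back-of-the-envelope model of the bets described above.
# Every figure is an illustrative assumption.

def expected_value(market_size, p_success, annual_cost, years=3):
    """Expected payoff of pursuing an opportunity: the upside weighted
    by its odds, minus the cost of the team needed to execute."""
    return p_success * market_size - annual_cost * years

# The $10M market with a $3M/yr engineering team, even at certain success:
sure_bet = expected_value(10_000_000, p_success=1.0, annual_cost=3_000_000)
# 10M - 9M = 1M: barely clears the bar, so in practice it gets passed on.

# The 20%-odds R&D project at the same team cost:
risky_before = expected_value(10_000_000, p_success=0.2, annual_cost=3_000_000)
# 2M - 9M: deeply negative, an obviously irrational bet.

# Now compress execution cost by an order of magnitude:
risky_after = expected_value(10_000_000, p_success=0.2, annual_cost=300_000)
# 2M - 0.9M: positive. The same long-shot experiment is now rational,
# and you can afford to run five of them in parallel.

print(sure_bet, risky_before, risky_after)
```

Nothing about the market or the odds changed between the last two lines. Only the cost of execution did, and that alone moves the bet from irrational to rational.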
And this is where it gets (quietly, unexpectedly) hopeful.13 When you compress the cost of building something, you don’t just make the same things cheaper. You make previously irrational bets rational. The $10 million market becomes viable. The 20%-odds experiment, you can run five of them. The internal tool that some operations manager has been sketching on a whiteboard for three years, the one she knows would save her team forty hours a week but could never justify the engineering resources to build, suddenly she can describe what she needs and an AI agent can produce a working prototype in an afternoon. That’s not science fiction. Platforms are already putting production-quality development in the hands of non-coders.
Which means something enormous is shifting, something I don’t think we’ve fully reckoned with yet. We have maybe 35 or 40 million software developers in the world. And we have hundreds of millions of legitimate domain experts: the doctor who knows what software her patient panel needs, the logistics manager who can draw the warehouse routing algorithm on a whiteboard, the teacher who knows exactly what adaptive learning her students require.14 All of them have been locked out of building by what you might call the translation layer: the gap between knowing what should exist and making it exist as a piece of software. That translation has always been lossy, slow, expensive. And it’s dissolving. When the doctor can describe her needs and an agent can build the thing, we go from 40 million builders to hundreds of millions of builders practically overnight. The total surface area of human problems addressed by custom software expands by an order of magnitude. Maybe two.
And then the question, the one that actually matters for anyone trying to figure out their place in all this, shifts from “how do I compete with the machine?” to “what do I know that the machine doesn’t?” Domain expertise. Customer empathy. Contrarian market insight. Creative vision. The ability to generate good hypotheses. The ability to look at a problem and see a solution nobody else has seen.15 These are the capacities that become not just valuable but scarce in a world where execution is cheap. Today, the person with the brilliant product intuition spends 80% of her energy shepherding a single bet through the organization, navigating stakeholders, managing sprint cycles, writing specifications that get misinterpreted, waiting. Tomorrow, she’s generating and evaluating ten bets a week. The bottleneck shifts from “can we build it?” to “should we build it?” And “should we build it” is a human question.
There’s a version of this that sounds relentlessly optimistic, and I want to be careful about that, because relentless optimism about technology has a bad track record of being weaponized to dismiss legitimate concerns.16 None of which is to say that transitions are painless, or that every person displaced by automation lands gracefully in a new role. They don’t. The gap between “the economy creates new jobs” and “those new jobs go to the specific people who lost the old ones” is wide, and real people fall into it. The creative destruction that Joseph Schumpeter wrote about so admiringly has always been more comfortable to admire from a tenured university office than from a factory floor. The question isn’t whether disruption hurts; it’s whether the hurt is the whole story.
And the people making actual decisions (not the pundits, not the LinkedIn thought leaders, but the operators with P&L responsibility and boards to answer to) seem to be arriving at an answer that looks a lot less apocalyptic than the headlines suggest.17 The companies doing ambitious things aren’t choosing between AI and people. They’re choosing both, and then asking a question I think is the real one, the one that determines who thrives over the next decade: what would it take for our people to work differently, to build what we couldn’t build before? That question is sneakily radical. It assumes people still matter (which, I know, low bar, but in 2026 you apparently have to state this explicitly). It assumes the bottleneck isn’t talent but rather the absence of tools that let talent operate at its actual capacity. And it reframes the entire AI conversation from a defensive posture (how do we protect what we have?) to an offensive one (what can we go get?).
Think about what the world needs and doesn’t have. Personalized education that adapts to individual learners. Clinical decision support for individual patients. Financial planning for the roughly two billion adults worldwide who have a bank account but no financial adviser. These are unsolved economic problems, not unsolved technical problems. The cost of building the software to address them has simply been too high. The hardest work ahead isn’t a technical challenge. It’s figuring out what upskilling looks like when the job isn’t “do the same thing faster” but “do something you’ve never been asked to do before.” That’s a different world. And nobody can tell you what the specific new categories of work will be, because that’s the nature of the thing.18 In 1995, nobody looked at cheap internet bandwidth and predicted ride-sharing, social media, the gig economy. In 2005, nobody foresaw that “podcast producer” or “cloud architect” or “prompt engineer” would appear on a résumé. The new categories emerge from the collision of cheaper capabilities and human imagination, and human imagination is the one resource that has never, not once in recorded economic history, failed to expand when given room.
The arugula, by the way, wilted before I used it. It almost always does. I buy it for aspirational reasons, because the version of me who eats arugula regularly is a better version, a version with a functioning meal-prep routine and a refrigerator organized by expiration date. This version does not exist and has never existed. And yet I keep buying the arugula, each time convinced this week will be different. My wife pointed out, when I told her about this essay, that I’ve been doing this for approximately four years. Which is either a damning indictment of my capacity for self-deception or a hopeful sign about the persistence of human optimism in the face of repeated contrary evidence. She says the first. I choose the second.
Or maybe it’s just arugula. Sometimes it’s just arugula.
1. The override card, I later learned, is basically a skeleton key that exists because whoever designed the self-checkout system knew, at some level, that the system would regularly encounter situations it couldn’t resolve on its own. Which is a revealing design choice if you think about it for more than two seconds. ↩︎
2. WHOOP is the wearable health technology company based in Boston. They make the fitness band your most annoyingly fit friend won’t stop talking about. The hiring announcement was March 2026. They accept roughly 1 out of every 750 applicants, which means getting hired at WHOOP is statistically harder than getting into several Ivy League schools. Make of that what you will. ↩︎
3. This “zero-sum” framing, where AI gains necessarily mean human losses, is so pervasive it’s become the default opening for almost every mainstream media article about AI and employment. Try to find one that doesn’t use the word “replace” or “displace” within the first three paragraphs. I’ll wait. ↩︎
4. Jevons was only 29 when he published The Coal Question in 1865. It became a bestseller and reportedly influenced government policy. He drowned in 1882 at the age of 46 while swimming, which is one of those biographical details that feels like it should mean something but probably doesn’t. ↩︎
5. The specific figures are staggering. Britain’s coal consumption increased roughly tenfold between 1800 and 1860, despite (or because of) massive improvements in steam engine efficiency during the same period. Jevons’ argument wasn’t theoretical. It was observational. He was describing what had actually happened, not predicting what might. ↩︎
6. The LED example is one of my favorites because it’s so viscerally demonstrable. Walk through any modern office building and count the light fixtures, then think about how many of those fixtures would exist if each one drew the electricity of an incandescent bulb. The efficiency of the LED didn’t reduce our consumption of light. It made light so cheap that we now illuminate things we would never have bothered to illuminate before. Parking garages at 2 a.m. The underside of kitchen cabinets. The entire exterior of a Cheesecake Factory. ↩︎
7. The lump of labor fallacy has been documented and debated since at least 1891, when economist David Frederick Schloss first formalized the concept. Over a century later, it keeps resurfacing every time a new technology threatens to “take all the jobs.” We are, apparently, quite committed to this particular form of economic anxiety. ↩︎
8. The steel-to-skyscrapers example is one of those historical rhymes that’s almost too clean. Andrew Carnegie didn’t just make steel cheaper; he made an entirely new built environment possible. The Bessemer process didn’t optimize the existing market for iron. It created markets that couldn’t have existed before. The people who benefited most weren’t the ones who figured out how to use less steel. They were the ones who figured out what to do with more of it. ↩︎
9. I realize comparing a grocery store interaction to the future of global labor markets is a stretch. I’m making it anyway. ↩︎
10. There’s a whole body of research on what organizational theorists call “tacit knowledge”: the stuff humans know how to do but can’t fully articulate or encode. The employee at Safeway wasn’t running an algorithm when she decided the coupon was close enough to valid. She was exercising a judgment that drew on social cues, store policy she’d internalized, and a basic calculation about customer satisfaction that no one had ever written down for her. This kind of capability is very hard to automate and very easy to undervalue. ↩︎
11. Nadella posted “Jevons paradox strikes again!” on both LinkedIn and X on January 27, 2025, linking directly to the Wikipedia page for the paradox. This was, depending on your perspective, either a sincere intellectual observation or a very sophisticated bit of investor relations spin. Possibly both. The fact that he posted it at approximately 1 a.m. suggests either genuine enthusiasm or a social media team with unusual working hours. ↩︎
12. I’ve seen estimates suggesting that most knowledge workers spend 60% or more of their time on “work about work”: status updates, coordination meetings, searching for information, translating between formats. If AI could reclaim even a fraction of that time for actual problem-solving, the output capacity of existing teams changes dramatically. Not fewer people. The same people, doing more of the work that matters. ↩︎
13. I want to be careful with the word “hopeful” because hope, as someone I was recently listening to put it, is a plan we don’t have validation for. What’s different about the Jevons argument isn’t that it’s hopeful. It’s that it’s structural. The pattern is observable, repeatable, and has held across every major efficiency improvement in recorded economic history. That’s not hope. That’s a bet with evidence behind it. Which, now that I say it out loud, sounds a lot like hope with a spreadsheet. ↩︎
14. The number of domain experts who have been, effectively, locked out of building is something I think about a lot in my own work. Every company I’ve worked in has had dozens of people who could describe exactly what tool they needed, in detail, with edge cases, and then watched helplessly as the request entered a product backlog and died there. The backlog is where domain expertise goes to be forgotten. That’s changing. ↩︎
15. One pattern I notice in myself: my instinct when I hear “domain expertise is the new bottleneck” is to feel vaguely reassured, and then immediately suspicious of the reassurance. Am I believing this because it’s true, or because it’s comforting? Probably both, which is uncomfortable. But the historical record is fairly consistent on this point: when execution gets cheap, the people who understand the problem space become more valuable, not less. The accountants after spreadsheets. The bank tellers after ATMs. The same pattern, every time. Though I could be wrong about this. ↩︎
16. This is the part where I resist the urge to say something about “retraining programs” or “safety nets,” not because those things don’t matter but because the impulse to immediately pivot to policy solutions often serves as a way of not sitting with the actual human cost for even one uncomfortable moment. So: people get hurt in transitions. That’s real. I don’t want to skip past it. ↩︎
17. One of the patterns I keep noticing: the most confident predictions about AI’s impact come from people furthest from the actual decision-making. The executives I see grappling with this in real time are noticeably less certain and noticeably more pragmatic than the commentariat. They’re not performing for analysts. They’re trying to figure out what their people need in order to work differently. The gap between the media conversation and the boardroom conversation is, frankly, enormous. ↩︎
18. This uncertainty is, I realize, deeply unsatisfying. People want specifics. They want someone to say “the twelve jobs of the future are…” and then list them. But anyone who claims to know what specific jobs will emerge from a technology still actively being invented is either lying or confused. Probably both. What we can say, based on every previous efficiency revolution, is that the new categories will emerge. That the total amount of work will grow. And that the work will be different in ways we cannot fully predict from inside the old world. Which sounds like a cop-out, and maybe it is, but it’s also what the actual evidence supports. ↩︎