The Bullet Holes You Can See Are Not the Ones That Killed You, or, What I Finally Learned About Success by Watching a Stranger Lose at a Claw Machine
There was a kid at the Peppermill in Reno last Tuesday, maybe ten or eleven, working a claw machine with the kind of focused intensity you normally associate with surgeons or bomb technicians. He'd clearly been at it a while. His mother was fifteen feet away, scrolling through her phone with the practiced indifference of someone who'd already lost this argument. The kid fed in another dollar, repositioned the claw with tiny adjustments, hit the button, and watched the claw descend, close loosely around a stuffed bear, lift it two inches, and drop it. He didn't react. He just fed in another dollar.1
I watched him do this four more times. Same precision. Same result. And what struck me wasn't the futility of it (we've all been that kid) but the absolute certainty on his face. He wasn't gambling. He believed, with total conviction, that he was solving a problem. That the machine operated according to discoverable rules, and that once he identified those rules, the bear would be his. He was applying a theory of success to a system specifically engineered to resist it.
I thought about that kid for the rest of the week, because I'd just spent several hours doing essentially the same thing, except my claw machine was a podcast.2
I'm going to state this plainly even though the plainness of it makes it sound either banal or grandiose, and I can't tell which: the entire architecture of modern success advice is built on a series of well-documented cognitive errors so fundamental that the advice isn't just unhelpful. It's the problem wearing the mask of the solution.
I need to be specific, because vague criticism of self-help is itself a genre, and a boring one. So let me start with the experts. Philip Tetlock, a psychologist now at the University of Pennsylvania, spent twenty years tracking the predictions of 284 experts across multiple domains (economics, political science, national security) and collected roughly 28,000 forecasts.3 The results, published in 2005, were not ambiguous: the experts barely outperformed random chance. Some did worse. And the finding that matters most for our purposes is this: the experts who were most confident, who spoke in clean declarative sentences and offered tidy frameworks, were consistently the least accurate.
Tetlock borrowed a distinction from Isaiah Berlin to classify these thinkers. Hedgehogs know One Big Thing. They have a framework, a formula, a central organizing principle, and they squeeze the world through it. Foxes know many small things. They're comfortable with contradiction, with uncertainty, with the phrase "it depends."4 The foxes were better forecasters by a significant margin. But foxes make terrible podcast guests, because they say things such as "well, there are multiple contributing factors and I'm honestly not sure." Nobody subscribes to a podcast for calibrated uncertainty. We subscribe for the person who says "Here are my seven principles" with the confidence of someone who has never once been wrong about anything.
And this connects to something David Dunning and Justin Kruger identified in 1999, which has since been popularized to the point of near-meaninglessness but still contains a genuinely uncomfortable truth.5 Their research demonstrated that in any given domain, the people with the least competence tend to have the most inflated self-assessments, not because they're stupid but because competence and the ability to recognize competence rely on the same underlying skills.6 If you don't know enough about forecasting to forecast well, you also don't know enough to recognize that you're forecasting poorly. Expertise breeds doubt. Ignorance breeds conviction. And conviction is what we reward with book deals and speaking fees and millions of downloads.
I think about this every time I listen to an interview with someone explaining their success. The confidence itself is a data point, and it's pointing in the opposite direction from where you'd expect. The person who can give you the cleanest, most compelling, most certain account of how they got where they are is, by virtue of that certainty, the person whose account you should trust the least. Not because they're dishonest. Because the clarity of their narrative is inversely proportional to its accuracy. The mess of competing factors, of timing, of accidents, of structural advantages they didn't earn and can't see: all of that has been compressed into a tidy story by a brain that evolved to find patterns whether or not patterns exist.
But (and this is where it gets genuinely strange) even when the people dispensing advice are legitimately accomplished, even when they have the credentials and the track record and the tax returns to prove it, their advice is still unreliable. Not because they're lying, but because success itself is far more random than any of us are comfortable admitting.
In 2006, Duncan Watts and his colleagues at Columbia University designed an experiment so elegant it makes me a little jealous. They created an online music platform called Music Lab, recruited over 14,000 participants, and gave them access to 48 songs by unknown bands. Participants could listen, rate, and download the songs. One control group made decisions independently, seeing only song names. The other groups were split into separate "worlds" where participants could see how many times songs had been downloaded within their own world.7
If success were primarily about quality, you'd expect the same songs to rise to the top in every world. A banger is a banger. The cream rises. Except it didn't. Not even close. A song that ranked 26th in the independent group finished first in one social world and 40th in another. Same song. Same quality. Completely different outcomes, determined almost entirely by the random sequence of who downloaded what first, which created a cascade of social influence that amplified tiny initial differences into massive gaps.8
Quality established a range. Luck determined the position within it. And the range was enormous.
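If you want to feel that mechanism rather than just read about it, here is a toy simulation. To be clear: this is my sketch of cumulative advantage, not the Salganik–Dodds–Watts code, and it's crude by design. Forty-eight songs of literally identical quality; each listener picks a song with probability proportional to its current download count; the only difference between "worlds" is the random seed.

```python
import random

def run_world(n_songs=48, n_listeners=14000, seed=0):
    """One social-influence 'world': popularity feeds on itself."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # every song starts with identical appeal
    for _ in range(n_listeners):
        # pick a song weighted by how popular it already is
        song = rng.choices(range(n_songs), weights=downloads)[0]
        downloads[song] += 1
    return downloads

# Run three parallel worlds and see which song "wins" in each.
winners = []
for world in range(3):
    downloads = run_world(seed=world)
    winners.append(downloads.index(max(downloads)))
```

In most runs the three worlds crown different number-one songs, and the gap between the top and bottom of each chart is enormous, even though every song is by construction exactly as good as every other. That's the Watts result stripped to its skeleton: quality held perfectly constant, outcomes all over the map.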
Now here's the haunting part. None of the participants in the social-influence groups knew they were being influenced. From inside each world, success appeared to be a straightforward function of quality. The number-one song felt like the best song. It seemed obvious. Of course it was number one; listen to it, it's clearly the best. The randomness that produced the outcome was invisible from within the outcome. Which means when the artist of the number-one song goes on a podcast and explains the artistic choices and production decisions and work ethic that led to their success, they're telling a story that makes perfect sense to them. They're not lying. They genuinely believe their choices were the determinative factor. Because the human brain is, among other things, a machine for making randomness feel like intention.
And we know this at the neurological level now, thanks to Michael Gazzaniga's decades of research on split-brain patients.9 Gazzaniga identified what he calls the "left-brain interpreter", a module in the left hemisphere whose job is not to make decisions but to explain decisions after they've already been made. The conscious mind, it turns out, is not "The Room Where it Happens." It's the press office, issuing post-hoc rationalizations for actions that were initiated by processes it has no access to.10 We don't do things and then understand why. We do things and then invent why. And we believe our own press releases completely.
So when a successful founder sits down and reconstructs the causal chain of their success, what they're actually producing is a confabulation. A sophisticated, sincere, deeply felt confabulation, but a confabulation nonetheless. Their brain's press office has been drafting this particular release for years, smoothing out the randomness, connecting dots that were never connected in real time, editing out the luck and the timing and the cousin who happened to know somebody. And we sit there with our earbuds in, nodding, taking notes, believing we're receiving a transmission from the mountaintop when we're actually reading a press release from an office that doesn't know what happened in "The Room Where it Happens" any more than we do.
Which brings us to Abraham Wald and the bomber planes, because this is an image I keep coming back to.11 During World War II, American bombers were returning from missions riddled with bullet holes. The military brass, sensible people operating on sensible logic, said: let's reinforce the areas where the planes are getting hit. Wald, a mathematician at Columbia's Statistical Research Group, said no. Reinforce the areas where the returning planes aren't damaged. Because the planes you're looking at are the ones that survived. The ones with damage in those other areas never made it home. You're studying the survivors and assuming they represent the whole population. They don't.
This is survivorship bias, and it is the foundational error of every success-focused podcast, book, and keynote address. We interview the planes that landed. We catalog their damage patterns. We build entire philosophies around their bullet holes. But the planes that got hit in the engine, the fuel line, the cockpit (the ones whose damage patterns would actually tell us something about what causes failure), those planes are scattered across occupied Europe. They don't get podcast deals.12
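Wald's logic is easy to demonstrate with another toy simulation, which is mine, not his. Every plane takes three hits spread evenly across four zones, but engine hits are usually fatal. Then we count bullet holes the way the brass did: only on the planes that made it home.

```python
import random

ZONES = ["fuselage", "wings", "tail", "engine"]
# probability a plane survives a single hit to each zone (invented numbers)
SURVIVES_HIT = {"fuselage": 0.9, "wings": 0.9, "tail": 0.9, "engine": 0.2}

def count_holes_on_returners(n_planes=10000, hits_per_plane=3, seed=1):
    rng = random.Random(seed)
    holes = {zone: 0 for zone in ZONES}
    for _ in range(n_planes):
        hits = [rng.choice(ZONES) for _ in range(hits_per_plane)]
        if all(rng.random() < SURVIVES_HIT[zone] for zone in hits):
            for zone in hits:  # we only ever inspect the survivors
                holes[zone] += 1
    return holes

holes = count_holes_on_returners()
```

The engine takes exactly as many hits as any other zone in the air, yet shows the fewest holes in the hangar, because the planes hit there rarely came back. Count holes on returners and the most dangerous place to be shot looks like the safest. That inversion is the whole bias.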
A genuine investigation of success would require studying failure at least as carefully (probably more carefully). It would require asking not just "What did you do right?" but "What did the ten thousand people who did the same things and failed do differently?" and being willing to accept the uncomfortable answer, which is often: nothing. They did nothing differently. They just got a different roll of the dice.13
But there is no money in that answer. There is no content strategy built around "success is largely random and your sense of personal agency, while emotionally necessary, is substantially an illusion." That doesn't sell supplements. It doesn't sell books. It doesn't create the anxiety loop (you are insufficient, here is the formula for sufficiency, the formula didn't quite work, perhaps you need the advanced formula) that keeps subscribers coming back.14 The entire economic model of the success-advice industry depends on you remaining in a state of optimistic dissatisfaction: convinced enough that a formula exists to keep consuming, dissatisfied enough with your results to keep paying.
And I want to be careful here, because there's a version of this argument that's just nihilism in an academic coat, and I don't think that's right either.15 Hard work matters. Skill matters. Making good decisions under uncertainty (which is different from following a formula) matters. Being kind to people, showing up when you said you would, doing the thing you're afraid of: all of this matters. It matters for its own sake, for the texture and meaning it adds to a life, even if it doesn't reliably produce the particular flavor of success that comes with podcast invitations and net-worth listicles.
What doesn't matter, what has never mattered, is the formula. The five habits. The morning routine. The framework. Because complex adaptive systems (which is what economies and careers and human lives are) don't have formulas. They have initial conditions, feedback loops, path dependencies, and emergent properties, none of which reduce to a numbered list.16 The formula is always retrospective. It's always the press office's best attempt to explain what happened. And the press office, bless its heart, is just making it up.
I went back to the Peppermill two days later. Different errand. The claw machine was still there, obviously, because claw machines are permanent. A different kid was at it, older, maybe fourteen, and she was doing something I hadn't expected. She was watching. She put her hands in her pockets and watched other people play. She watched the claw's grip strength. She watched which stuffed animals were wedged against the walls versus balanced on top of others. She watched the timing of the mechanism. She was, in the language of the research, being a fox. Gathering data rather than applying a theory.
After maybe five minutes of observation, she fed in a dollar, positioned the claw carefully, and hit the button. The claw descended, gripped a small green frog, lifted it, swung toward the chute, and lost its grip halfway there. She missed. The frog tumbled back into the pile. She looked at the machine for another few seconds, then turned and walked away.
She wasn't angry. She wasn't even disappointed. She just looked like someone who'd gathered enough information to understand what she was dealing with. And decided to spend her dollar somewhere else.
1 The claw machine thing is possibly the most efficient metaphor for late capitalism I've encountered. You pay for the illusion of agency. The machine is calibrated to lose. And yet you keep playing because this time you've figured out the angle. The claw machine industry reportedly generates over $500 million annually in the US alone, which is an almost perfect ratio of hope to structural impossibility. ↩︎
2 I realize describing listening to podcasts as "consumption of advice" sounds clinical, and maybe condescending. But I mean it descriptively. There is a reason the verb we use for media is the same one we use for food. Both involve taking something in, and both can leave you feeling full without having received any actual nourishment. ↩︎
3 Tetlock's original study, published in 2005 as Expert Political Judgment, tracked predictions from 1984 to 2003. The sheer duration is part of what makes the findings so devastating. This wasn't a snapshot. It was a time-lapse of overconfidence. ↩︎
4 The hedgehog/fox distinction comes originally from a fragment by the Greek poet Archilochus and was popularized by Isaiah Berlin in his 1953 essay The Hedgehog and the Fox. Berlin meant it as a classification of intellectual temperaments, not a hierarchy. Tetlock turned it into one, which is a very hedgehog thing to do. ↩︎
5 The irony of the Dunning-Kruger effect becoming pop-culture shorthand for "stupid people don't know they're stupid" is itself a kind of Dunning-Kruger effect. The actual paper is about domain-specific metacognitive failure, not general intelligence. But explaining that takes longer than a meme, so here we are. ↩︎
6 I should note that recent statistical critiques have questioned whether the Dunning-Kruger effect is partly a mathematical artifact of regression to the mean. The debate is ongoing and (predictably) the people most confident it's debunked are the ones who've read the fewest papers about it. ↩︎
7 The Music Lab experiment was published in Science in 2006. If you want the full paper, search for Salganik, Dodds, and Watts. It's one of those studies that, once you understand it, quietly restructures how you think about everything from Billboard charts to startup valuations to which of your tweets performs well. ↩︎
8 This is not to say quality doesn't matter at all. Watts and his colleagues found that the very best songs rarely did terribly and the worst songs rarely did brilliantly. Quality establishes a floor and a ceiling. But the space between that floor and ceiling is enormous, and what happens inside it is governed by dynamics that have nothing to do with merit. ↩︎
9 Gazzaniga's split-brain experiments, conducted beginning in the 1960s with Roger Sperry at Caltech, remain some of the most unsettling research in neuroscience. In one classic demonstration, a patient's right hemisphere was shown the word "walk." The patient stood up and began walking. When asked why, the left hemisphere (which hadn't seen the word) instantly fabricated an explanation: "I wanted to get a Coke." The speed of the confabulation is the disturbing part. There's no hesitation. ↩︎
10 Jonathan Haidt's metaphor, borrowed from Gazzaniga's research. Haidt uses it in The Happiness Hypothesis and again in The Righteous Mind to argue that moral reasoning works the same way: we have gut reactions first, then build justifications afterward. The press office is always drafting retroactive memos. ↩︎
11 Abraham Wald was a Hungarian mathematician working at Columbia University's Statistical Research Group during World War II. He died in a plane crash in India in 1950, at age 48. The fact that a man who saved countless lives by thinking clearly about airplanes died in an airplane is the kind of thing I'd normally call ironic, except I've been told I misuse that word. ↩︎
12 The Bureau of Labor Statistics puts the five-year survival rate for new businesses at roughly 50%. The ten-year survival rate is closer to 35%. If you're starting a business, you are statistically more likely to fail than succeed, which is information that no podcast hosted by a successful person will ever lead with. ↩︎
13 The psychologist Daniel Kahneman called this tendency to construct clean narratives from messy data the "narrative fallacy" (though the term was coined by Nassim Nicholas Taleb). Kahneman's point, which he made in Thinking, Fast and Slow, is that our need for coherent stories is so powerful it overrides our ability to tolerate ambiguity, even when ambiguity is the more honest position. ↩︎
14 The supplement and optimization industry, which is adjacent to (and frequently subsidized by) the success-advice industry, is projected to reach $300 billion globally by 2028. Much of it is marketed through the exact anxiety cycle described here: you are suboptimal, this product addresses the suboptimality, the relief is temporary, please reorder. ↩︎
15 I'm aware of the irony of writing an essay about the failures of success advice. This is, structurally, a piece of advice about why advice doesn't work, which means either I'm being usefully meta or I'm the snake eating its own tail. Probably both. Definitely both. ↩︎
16 Robert K. Merton, the sociologist who coined the term "self-fulfilling prophecy," also coined "role model" and "unintended consequences." He basically invented the vocabulary we use to talk about social dynamics and then watched everybody misuse it. There might be a lesson there. ↩︎