The Notebook You Can't Take With You, or, What Happens When Your Prosthetic Memory Belongs to Someone Else

So my colleague Shelley sends me a Slack message on a Wednesday afternoon, something about a support ticket he can’t find, and I watch myself do something I’ve never done before: instead of searching for the ticket myself, I type a quick prompt into an AI assistant that, over the past four months, has accumulated enough context about my team’s workflows to know which Salesforce queue Shelley probably means, which customer he’s probably referencing (based on a complaint thread from last week that I’d forgotten about but the tool hadn’t), and where the ticket probably got mislabeled. The answer comes back in six seconds. And here’s what stops me: the answer is correct, and I have no idea how I would have arrived at it on my own. Not because the information was hidden, but because the path to finding it required remembering a sequence of small decisions I’d made over several weeks, decisions I’d already flushed from active memory the way you forget the specific turns you took on a familiar drive.1

I want to be careful about what I’m saying here, because this isn’t a story about AI being magical or terrifying or whatever adjective the discourse has settled on this week. It’s about something much more ordinary and (I think) more unsettling: the slow, almost imperceptible process by which your tools start to know you, and what it means when those tools belong to somebody else.

There’s a thought experiment in philosophy that’s been kicking around since 1998, when Andy Clark and David Chalmers published a paper asking what sounds, on first hearing, like a trick question: where does the mind stop and the rest of the world begin?2 Their answer, which came to be called the extended mind thesis, was that it doesn’t stop where you think it stops. They described a character named Otto, who has Alzheimer’s and uses a notebook to store everything he needs to remember. The notebook, Clark and Chalmers argued, functions as part of Otto’s mind. Not metaphorically. Functionally. The information in the notebook plays the same cognitive role as the information in your biological memory; the fact that it lives outside your skull is, from a functional standpoint, irrelevant.

This was a provocation, and it landed as one. But twenty-seven years later, it reads less as philosophy and more as a description of Tuesday.3 Because here’s what’s happened in the intervening decades: the notebook has gotten very, very sophisticated. It doesn’t just store what you write in it. It watches what you do. It notices patterns you don’t notice yourself. It learns which emails you respond to in five minutes and which ones you ignore for days. It knows you always reschedule your Thursday 2 p.m. It knows your VP’s messages get a different response cadence than your direct reports’ messages. And the notebook (this is the part that would have given Clark and Chalmers pause, I think) doesn’t belong to you.4

I’ve been thinking about this because of something that leaked recently from one of the major AI companies: internal documentation for a persistent agent system designed to run as an always-on sidebar in your digital life.5 Not a chatbot you summon when you have a question, but something closer to a second brain that stays awake while you sleep, monitoring your communication channels, drafting responses, cross-referencing documents, preparing you for meetings based on patterns it’s observed over months of watching you work. The demos, naturally, are stunning. You wake up. The agent has already triaged your inbox. It’s noticed a thread in your engineering Slack where someone asked about authentication architecture, pulled context from a design doc you reviewed last month, and drafted a reply. It knows which emails are routine and which ones need your actual eyes. You haven’t typed a word.

What nobody talks about in the demos (and I say this as someone who genuinely finds this technology useful, who uses these tools daily for work, who is not a Luddite or a doomer or whatever the current term is for people who express reservations)6 is what happens when you try to leave.

There’s a concept in economics called vendor lock-in, and it’s about as elegant as its name suggests. You buy into a system. The system accumulates your data. Leaving means abandoning your data, or spending enormous resources migrating it. Microsoft figured this out with Active Directory at the turn of the millennium. Salesforce figured it out with customer records. Slack figured it out with communication history. These are all instances of what economists call switching costs, and they share a common feature: the locked-in asset is stuff. Files. Records. Messages. Stuff is painful to migrate, sometimes ruinously so, but it’s at least conceptually portable. You can export a CSV. You can hire a consultant. The switching cost is measured in months and money.7

But what these persistent AI agents lock in is something different. Not your files, but the patterns the agent learned by watching you use them. Not your Slack messages, but the understanding of which messages you respond to quickly and which you let sit. Not your calendar, but the accumulated knowledge of how you actually manage your time versus how you claim to manage it.8 There’s no CSV for that. There’s no export function for the model-of-you an agent has built over six months of continuous observation. When you switch, you don’t lose a tool. You lose the six months of compounding that made the tool useful. You’re back to (as one commentator put it) a brilliant stranger you have to explain everything to.
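
If that sounds abstract, here’s the asymmetry as a toy sketch in Python. To be clear: every name and field below is mine, invented for illustration, not any vendor’s actual schema. The point is only the shape of the two layers.

```python
# A toy sketch of the two layers. Nothing here reflects any vendor's
# actual schema; every name and field is invented for illustration.

import csv

# Layer one: "stuff." Painful to migrate, but conceptually portable.
messages = [
    {"sender": "shelley", "subject": "missing ticket", "minutes_to_reply": 4},
    {"sender": "vp", "subject": "q3 planning", "minutes_to_reply": 11},
    {"sender": "newsletter", "subject": "weekly digest", "minutes_to_reply": 2880},
]

with open("messages.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(messages[0]))
    writer.writeheader()
    writer.writerows(messages)  # there is a CSV for this

# Layer two: the patterns an agent inferred by watching layer one.
# This is the part with no export function. In a real system it lives
# as weights, embeddings, and heuristics inside someone else's runtime.
model_of_you = {
    "reply_priority": {"vp": "within_15_min", "newsletter": "ignore"},
    "thursday_2pm": "reschedules more often than not",
    "claimed_focus_hours": "9 to 12",
    "actual_focus_hours": "9 to 11 p.m.",
}
# No schema, no standard, no CSV. Switch platforms and this is gone.
```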

I want to pause here because I can feel myself sliding into the kind of techno-dystopian register that’s easy to inhabit and hard to make useful. So let me ground this in something personal.

Last year I set up a knowledge management system for myself, organized across different areas of my life, with a dedicated file in each area that tells whatever AI assistant I’m using how to behave in that context.9 Over time, these systems accumulated something I can only describe as residue. Not data, exactly. More the accretion of small decisions about how information should be organized, which connections matter, what my priorities actually are versus what I say they are. When I recently tried to migrate part of this system to a different platform, I discovered something uncomfortable: the organizational logic had become so entangled with the specific tool’s way of representing information that the “knowledge” didn’t survive the transfer. What moved over was the text. What didn’t move was the meaning.
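
For the curious, and with the caveat of footnote 9 (this is less sophisticated than it sounds), the mechanics look roughly like this. The layout and file names are my own convention, not any tool’s standard:

```python
# A minimal sketch of the setup footnote 9 describes: one folder per
# area of life, each with a standing-instructions file that gets
# prepended to whatever an AI tool is asked in that context.

from pathlib import Path

KNOWLEDGE_ROOT = Path("~/knowledge").expanduser()

def build_prompt(area: str, question: str) -> str:
    """Prepend an area's standing instructions to an ad-hoc question."""
    instructions = (KNOWLEDGE_ROOT / area / "INSTRUCTIONS.md").read_text()
    return f"{instructions}\n\n---\n\n{question}"

# e.g. build_prompt("work/support", "Where did that mislabeled ticket go?")
#
# The residue problem: over months, INSTRUCTIONS.md fills up with one
# platform's vocabulary (its link syntax, its idea of a "project," its
# retrieval quirks). Copy the text elsewhere and the words survive,
# but the organizational logic they encoded doesn't.
```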

What persistent AI agents do is sort of the same thing, inverted:10 they close loops you didn’t even know were open. They complete cognitive tasks you hadn’t consciously started. And in doing so, they relieve a tension you’d been carrying without realizing it, which means you stop maintaining the neural pathways that would have let you do the work yourself.11

Annie Murphy Paul, in her book The Extended Mind, describes how “extra-neural” resources (the physical spaces we work in, the movements of our bodies, the minds of the people around us) participate in cognition in ways we systematically undercount.12 Her argument updates Clark and Chalmers for an era in which the “notebook” is no longer a notebook but an entire computational ecosystem. And the thing about ecosystems is they have owners.

Here’s what I keep circling back to. The technology industry appears to have converged, across multiple companies simultaneously, on a single strategic insight: the model is a loss leader. The money isn’t in making the AI smarter (though they’ll keep doing that). The money is in owning the persistent layer, the always-on agent that holds your memory, your context, your workflows, your patterns. Whoever owns that layer has lock-in at a depth that makes Microsoft’s 1990s playbook look quaint. Not because the product is better (it might be, it might not), but because the switching cost is measured in something we don’t have units for yet. Months of behavioral context. The accumulated understanding of how you think.13

And the playbook, if you’re watching, follows a pattern so familiar it probably has a Wikipedia page: Step one, observe what the open-source community builds. Step two, build your own version of it inside your platform. Step three, make your version free or subsidized. Step four, make the external version expensive or impossible. Step five (and this is new), ship a proprietary extension format so the ecosystem builds for your surface, not the open one.14 It’s the same dynamic that played out between the open web and native mobile apps starting around 2008. The open standard (in this case, something called the Model Context Protocol, which multiple companies adopted as a universal connector between AI tools and data sources) provides the foundation and the credibility of openness. The proprietary layer on top provides the commercial advantage. It’s the Google Play Services pattern: Android is open source, sure, but the valuable stuff (maps, payments, push notifications, the app store) lives in a layer Google controls. You can technically build without it. In practice, nobody does.15
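
If you want to see the shape of the move in miniature: below is a simplified sketch of a tool declaration under the open protocol, next to a hypothetical vendor envelope. The open half is a rough simplification of how MCP actually describes tools; the x_vendor half I made up entirely, to show what step five looks like in practice.

```python
# Roughly the shape of a tool declaration under the open protocol
# (simplified from the Model Context Protocol's tools/list response),
# next to a hypothetical vendor envelope. The "x_vendor" block is
# invented to illustrate the pattern; it is not a real format.

open_tool = {
    "name": "search_tickets",
    "description": "Search the support queue by keyword.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

# Step five of the playbook: wrap the open declaration in an envelope
# that only one agent runtime understands.
locked_tool = {
    **open_tool,
    "x_vendor": {
        "requires_runtime": "vendor-agent >= 3.1",   # hypothetical
        "memory_access": "user_behavioral_context",  # the lock-in
        "distribution": "vendor_marketplace_only",   # the app store move
    },
}
# Strip the envelope and the tool still runs; without the accumulated
# behavioral context, though, it's a brilliant stranger again.
```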

I notice, as I’m writing this, that I keep wanting to offer solutions, and then catching myself, because the honest answer is I don’t know what the solution is. Or rather, I know what it should be in principle (behavioral context should be portable; the model-of-you an agent builds should belong to you and travel with you) but I also know the historical success rate of “should” when opposed by “convenient” is not encouraging.16

What I do know is this: there’s a category of personal growth nobody writes self-help books about, because it’s boring and granular and doesn’t make for good Instagram content.17 It’s the growth that comes from paying attention to the infrastructure of your thinking. Not the thoughts themselves, but the systems in which thinking happens. Where does your memory live. What tools shape your attention. Which patterns are you developing intentionally and which are being developed for you by systems whose incentives may not align with yours.

The philosopher Clark, in his later work, argued that we are “natural-born cyborgs,” organisms so fundamentally inclined to merge with our tools that the merger itself is part of what makes us human. I believe this. I think it’s correct and beautiful and worth celebrating. But the Otto thought experiment, the one with the Alzheimer’s patient and the notebook, has always contained a problem the philosophical literature tends to gloss over: what if someone takes Otto’s notebook?18 Not his data, which maybe you could reconstruct. Not his files, which maybe you could export. But the six months of carefully organized, personally meaningful, idiosyncratically structured annotations that represent not his information but his relationship to that information. What then?

The question isn’t rhetorical. It’s becoming operational. The leaked documentation I mentioned earlier included details about an extension format that sits on top of the open protocol, creating a proprietary compatibility layer. Tools built for this format work inside the agent’s environment and nowhere else. This is the app store move. This is the moment when the platform transitions from “we provide a surface for your tools” to “your tools are our tools.” And if you’ve been building your extended mind inside that surface, well.

I don’t think there’s a way to think about personal growth in 2026 that doesn’t include thinking about this. Not because technology is destiny (it isn’t) but because the question of where your mind stops and the world begins is no longer academic. It’s a question with terms of service attached. It’s a question whose answer is being drafted, right now, in product roadmaps and enterprise procurement processes and extension format specifications, by people who may be well-intentioned but who are also, if we’re being honest, running a business.

Here’s what Clark and Chalmers got exactly right about Otto’s notebook, the thing that makes their thought experiment more relevant now than it was in 1998: the notebook only works as an extension of Otto’s mind because he trusts it. Because it’s constantly and immediately accessible. Because when he writes something in it, he endorses it as his own. The trust is the mechanism. Without it, it’s just a book with writing in it.

And trust, unlike data, is not something you can port between systems. You can’t export your trust. You can’t migrate your endorsement. You build it slowly, through hundreds of small interactions, and it becomes the invisible scaffolding on which the entire extended-mind architecture rests.

On a walk around the house yesterday, I realized I’d left my phone in my office and felt a specific kind of vertigo that wasn’t about missing calls or texts. It was the feeling of a phantom limb. A cognitive prosthetic, temporarily removed. And the vertigo wasn’t really about the phone. It was about the slow dawning recognition that I couldn’t quite reconstruct, from biological memory alone, several things I needed to know for the next morning. Not because I’m forgetful. Because I’d long ago stopped trying to remember them.

1 This is the part where, if I were being honest with my therapist (if I had a therapist), I’d admit that the six-second answer produced a feeling closer to relief than gratitude. The labor of remembering had been, without my noticing, slowly migrating from my brain to a system I’d been feeding context to for months. ↩︎

2 The paper was called “The Extended Mind” and was published in Analysis, Vol. 58, No. 1. It’s one of those papers that philosophy graduate students either love or hate, with very little middle ground. ↩︎

3 Or maybe more accurately, a description of Tuesday for a certain subset of knowledge workers who spend their days in Slack and Salesforce and Google Docs and who have, over the past eighteen months, gradually incorporated AI assistants into their workflows without any formal decision to do so. If you work in, say, forestry or plumbing, this might all sound very remote. Which is worth noting. ↩︎

4 I realize this sounds alarmist. I want to be clear: I’m not opposed to these tools. I use them. They make my work measurably better. The question I’m trying to articulate is not “should we use them?” but “what are we agreeing to when we do?” ↩︎

5 I’m being deliberately vague about the specifics because the details matter less than the pattern, which is being replicated across multiple companies simultaneously. If you follow AI industry news, you know which leak I’m referring to. If you don’t, the specifics aren’t necessary for the argument. ↩︎

6 The discourse around AI seems to permit exactly two positions: breathless enthusiasm or apocalyptic dread. The position “this is a useful tool with genuinely troubling ownership implications” doesn’t generate enough clicks to sustain a newsletter, apparently. ↩︎

7 The switching cost literature in economics is actually quite rich and goes back decades. The key insight, which I think applies here but in a distorted form, is that switching costs aren’t just financial. They’re psychological. They’re the cognitive load of having to relearn something you’d already internalized. ↩︎

8 That gap, between how you claim to manage your time and how you actually manage it, is one of the most interesting things a persistent agent can observe. And one of the most valuable things it can sell. ↩︎

9 This sounds more sophisticated than it is. It’s basically a folder structure with some markdown files that give instructions to AI tools about how to handle content in that area. But over time, the instructions got layered and specific in ways I didn’t anticipate. ↩︎

10 “Sort of” is doing a lot of work in that sentence, and I want to acknowledge that. The analogy isn’t perfect. ↩︎

11 There’s a use-it-or-lose-it quality to cognitive skills that neuroscientists have documented extensively. The brain is metabolically expensive, and it’s quite ruthless about pruning pathways that aren’t being used. If you outsource a cognitive function to an external system, the neural infrastructure that supported that function will, over time, be repurposed. This is not speculation; it’s basic neuroscience. ↩︎

12 Paul’s book is excellent and I recommend it without reservation, which is not something I say about many books that use the word “brain” on the cover. ↩︎

13 Someone will inevitably respond to this by pointing out that you could just, you know, take notes. Write things down. Maintain your own records of how you work. And they’d be right, technically. But they’d also be ignoring the very real fact that the whole value proposition of these tools is that they save you from having to do exactly that. You can’t simultaneously benefit from the convenience and maintain independence from it. Or you can, but it’s expensive in a way most people won’t sustain. ↩︎

14 If you were around in the early days of the web, this cycle is so familiar it’s almost boring. Embrace, extend, extinguish, as a certain company from Redmond used to say (or allegedly say, depending on which antitrust deposition you read). ↩︎

15 Amazon tried this with the Fire phone and Fire tablets: same open-source Android kernel, minus the Google proprietary layer. It flopped spectacularly. Not because the hardware was bad, but because the ecosystem had organized around Google’s layer, and an open kernel without the proprietary services was, for practical purposes, an empty room. ↩︎

16 The track record of “should” versus “convenient” in technology adoption: the open web should have won against native apps. Open-source should have beaten proprietary formats. Interoperable standards should have prevented platform lock-in. In each case, the architecturally correct answer lost to the answer that was easier to use on a Tuesday afternoon when you just needed to get something done. ↩︎

17 The self-help industrial complex has a blind spot for infrastructure-level personal growth, probably because “audit the ownership structure of your cognitive tools” is harder to turn into a morning routine than “journal for five minutes.” ↩︎

18 Clark and Chalmers do address this, briefly, by noting that Otto’s notebook should be considered as important to him as a biological organ, something he’d want to protect from harm. But they don’t address the scenario in which the notebook is a subscription service that can change its terms of use. Which, in fairness, was not really a foreseeable concern in 1998. ↩︎