Nobody Loved the Ticket (and the Ticket Won Anyway)

I was updating a Jira ticket last Wednesday, one of those mid-afternoon maintenance moments where you're toggling a status field from "In Progress" to "In Review" and wondering, not for the first time, whether the cumulative hours you've spent performing this exact gesture over the past decade add up to time you could have spent learning the cello. The white-noise hum of an air purifier was doing its usual thing. My coffee had gone cold in a way I wouldn't notice for another twenty minutes. And I had this thought (which probably says more about my current headspace than any grand insight): the ticket I was updating had been open for eleven days, touched by four people, and its comment thread told a more honest story about our team's decision-making process than any retrospective I've ever sat through.1 I closed the browser tab and moved on to the next one. But the thought stuck.

Because here is something nobody predicted, or at least nobody I was paying attention to predicted: the most boring category of software in the enterprise stack, the one developers have complained about with more sustained passion than almost any other, is turning out to be the load-bearing infrastructure for the entire AI agent era. And I don't mean in a hand-wavy, "everything is connected" sense. I mean specifically, structurally, in a way you can trace through a series of events in early 2026 so cleanly it almost feels scripted.

In March, Karri Saarinen, the CEO and co-founder of Linear, published an open letter declaring, in so many words, that issue tracking is dead.2 His argument was clean and, honestly, hard to argue with on its own terms. Issue trackers were built for a world where the bottleneck was coordination between humans: someone scopes the work, someone else picks it up, the system tracks the handoff. When AI agents can interpret context directly, the translation step (human observes reality, human compresses reality into a ticket with a title and a description and a status) becomes friction. The ceremony shrinks. The old model of a person spending half their week turning messy reality into well-behaved records, Saarinen said, is not our end state.3

He was right about the interface. He was (I think) wrong about the infrastructure underneath it. Which is a distinction worth sitting with, because it turns out to matter for a lot more than software project management.

About a month later, OpenAI published Symphony, an open-source orchestration spec whose central idea is to use an issue tracker, specifically a Linear board, as the control plane for autonomous coding agents.4 Every open task gets a dedicated agent. Agents run continuously. Humans review the results. Some internal teams reported a fivefold increase in merged pull requests within three weeks.5 The thing Saarinen had just eulogized was now the substrate making all of it work. Not the pretty part. Not the part humans interact with. The part underneath: the records, the states, the ownership fields, the transition rules, the audit trail.

The ritual of humans grooming tickets may be dying. The state machine itself is not going anywhere.

And this, I think, is one of those accidental revelations where, if you follow the thread, you end up somewhere you didn't expect. Not about software. About what it means to build something for one purpose and discover, years or decades later, it was quietly serving another purpose the whole time. About the gap between what we think we're doing and what we're actually doing. About the strange dignity of boring work.


The genre starts with Bugzilla, in 1998.6 Terry Weissman wrote it for Mozilla to replace the in-house defect tracker Netscape had been using. It was originally written in Tcl, ported to Perl before its public release, with MySQL underneath. The first deployment hit a Mozilla server on April 6, 1998. The Bugzilla team's stated design philosophy, remarkable in its narrowness, was to focus on tracking software defects and nothing else. They could have turned it into a task management tool, a technical support system, a project management platform. They chose not to.

What came out of that narrowness, almost as a side effect, was a small set of structural primitives worth naming. Persistent state outside any single person's memory: a bug existed in a database row, not in someone's inbox. A state machine with defined transitions: NEW, ASSIGNED, RESOLVED, VERIFIED, CLOSED, plus the famously cynical WONTFIX, which remains one of the most emotionally honest software states ever invented.7 Ownership as a first-class property: the assignee field made it unambiguous whose turn it was. Defined verbs: create, comment, assign, resolve, reopen, mark duplicate, block another bug. Dependencies as queryable objects. And audit history by default, every change logged with timestamp and actor.
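If you want to see how small that set of primitives really is, here's a minimal sketch in Python. This is not Bugzilla's actual schema (that was Tcl, then Perl over MySQL); it's just the structural shape it codified, and the names here are my own illustration: a legal-transition table, an assignee field, verbs that log themselves, and audit history by default.

```python
# A minimal sketch of the primitives above -- not Bugzilla's real schema,
# just the structural shape every subsequent tracker has copied.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class State(Enum):
    NEW = "NEW"
    ASSIGNED = "ASSIGNED"
    RESOLVED = "RESOLVED"
    VERIFIED = "VERIFIED"
    CLOSED = "CLOSED"
    WONTFIX = "WONTFIX"      # the emotionally honest one


# The state machine: which transitions are legal from each state.
TRANSITIONS = {
    State.NEW: {State.ASSIGNED, State.WONTFIX},
    State.ASSIGNED: {State.RESOLVED, State.NEW, State.WONTFIX},
    State.RESOLVED: {State.VERIFIED, State.NEW},   # reopen goes back to NEW
    State.VERIFIED: {State.CLOSED, State.NEW},
    State.CLOSED: {State.NEW},
    State.WONTFIX: {State.NEW},
}


@dataclass
class Bug:
    title: str
    assignee: str | None = None        # ownership as a first-class field
    state: State = State.NEW
    blocks: set[int] = field(default_factory=set)   # dependencies as data
    history: list[tuple[datetime, str, str]] = field(default_factory=list)

    def _log(self, actor: str, event: str) -> None:
        # Audit history by default: every change records timestamp and actor.
        self.history.append((datetime.now(timezone.utc), actor, event))

    def transition(self, actor: str, to: State) -> None:
        if to not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state.name} -> {to.name}")
        self._log(actor, f"{self.state.name} -> {to.name}")
        self.state = to

    def assign(self, actor: str, assignee: str) -> None:
        self.assignee = assignee       # unambiguous: whose turn is it
        self._log(actor, f"assigned to {assignee}")
        if self.state is State.NEW:
            self.transition(actor, State.ASSIGNED)


bug = Bug(title="Crash on startup under dial-up latency")
bug.assign(actor="terry", assignee="dev1")
bug.transition(actor="dev1", to=State.RESOLVED)
```

That's the whole genre, more or less: a row, a legal-transition table, an owner, and a log.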

None of this was designed for AI. There was no AI in any meaningful sense. Weissman was solving a coordination problem for a few hundred developers scattered across time zones, working asynchronously, many of them on dial-up. The constraints he was responding to were purely human: limited memory, handoff ambiguity, accountability gaps, the basic problem of people needing to know whose turn it was and what state things were in when the information couldn't just live in someone's head.

Here is where we start to feel a kind of vertigo. Because those human constraints, the ones Weissman built his system to compensate for, turn out to be functionally identical to the constraints AI agents face in 2026.8 Agents need durable state outside the context window (the context window gets reset, summarized, truncated, lost). Agents need handoff semantics (who owns this right now, is it the agent or the human, is it blocked, is it ready for review). Agents need a coordination layer for parallel work (without one, they hold locks, throttle each other, become risk-averse, and pick small safe tasks instead of hard end-to-end work). Agents need audit history humans can review when something goes wrong. Agents need permissioned access to underlying systems.

Every single one of those requirements was already encoded. In 1998. In a bug tracker built for humans coordinating over dial-up.

The easy version of this observation is "Wow! What a coincidence!" and the easy version is wrong. Or at least incomplete. We designed agents to compensate for the same weaknesses we designed issue trackers to compensate for. Limited memory, ambiguous handoffs, no native accountability. The overlap isn't accidental in the way a meteorite hitting a particular field is accidental. It's more the way you'd discover, years later, the foundation you poured for a garage happens to support the weight of a second story, because both structures need to resist the same forces (gravity, wind, the tendency of heavy things to fall). We built agents in our own image, at least structurally, and then handed them tools shaped for our own limitations.9 Of course they fit.

Which doesn't make it less surprising. Just less random.

The commercial evolution of the ticket tells its own story about the relationship between flexibility and honesty. Jira shipped in 2002, took Bugzilla's structural model, and added everything enterprises wanted: configurable workflows, custom fields, project hierarchy, role-based permissions, integration with everything. Jira became universal in part because it was infinitely flexible. Each company could shape it to its specific organizational structure, which was Jira's commercial genius and the source of its terrible reputation. Every Jira deployment became its own local maze. The underlying primitives were sound, but the configuration surface was so large the tool absorbed every organizational dysfunction around it.10

Linear arrived in 2019 with a different philosophy. Saarinen, who had been a principal designer at Airbnb, was tired of how bad project management tools were. Linear was built around a single opinionated model: issues live inside cycles, cycles ladder up to projects, the customization surface is deliberately narrow. You don't configure Linear to match your org chart. You change your workflow to match Linear.

I've spent real time inside both, and the speed difference alone explains most of the switching. But the deeper thing, the thing no one would have called strategic at the time, is what happened to the data. When people hate a tool, they work around it. They leave fields blank. They put important decisions in Slack. They use fake statuses. They create tickets after the work is done, to satisfy a process nobody believes in. The tracker stops reflecting reality and starts reflecting the minimum viable compliance with a system everyone resents.

When people use a tool voluntarily, more of the real work migrates into the system. The state gets cleaner. The ownership stays current. The descriptions are better. The dependencies reflect what's actually happening rather than what someone guessed three sprints ago.

Linear was a UX win. The UX win became a data win, because people used it honestly. And the data win turns out to matter enormously once agents arrive, because an agent doesn't care whether your project management tool feels elegant. It cares whether the state inside it is reliable enough to act on.11

This is, I think, a principle with applications well beyond software. The quality of any system of record depends on whether the people feeding it information believe the system deserves their honesty. A doctor's notes are only as good as the doctor's faith the notes will be read. A company's financial records are only as accurate as the accountants' belief someone will care about the accuracy. A relationship is only as healthy as each person's willingness to say the uncomfortable thing rather than maintain a pleasant fiction. The incentive to tell the truth is always downstream of the experience of being heard.12

I know this all sounds obvious, but consider how rarely we design for it. We design for compliance, for reporting, for auditability, for coverage. We build systems to capture information and then wonder why the information is bad. The answer is almost always the same: the people closest to reality decided, at some point, it wasn't worth the effort to be precise. Not because they're lazy. Because the system made precision feel pointless.

The Symphony spec is worth reading even if you never plan to run autonomous coding agents, because it makes the "substrate hypothesis" concrete in a way no amount of theorizing can. Symphony watches a Linear board, creates a dedicated workspace for every issue, runs agents continuously against those workspaces, and lets humans review the results. It defines polling, per-issue workspaces, active and terminal states, retries, observability, concurrency limits, and handoff states.13 The issue tracker, in Symphony's world, did not die. It got promoted. It stopped being the only user interface for human coordination and became the data layer for agent coordination.
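To make that list of spec concerns concrete, here's a toy version of the loop it describes: polling, a per-issue workspace, a concurrency cap, retries, and a handoff state for human review. The state names and helpers are my own illustration, not the spec's actual schema, and the reference implementation is Elixir rather than Python.

```python
# A toy sketch of a Symphony-style control loop. Everything here is
# illustrative: real implementations poll a tracker API, not a list.
import concurrent.futures
import tempfile
import time

MAX_CONCURRENT = 4   # concurrency limit
MAX_RETRIES = 2


def fetch_ready_issues(board):
    """Stand-in for polling the tracker for issues ready for an agent."""
    return [i for i in board if i["state"] == "todo"]


def run_agent(issue):
    """Stand-in for the real work: one agent, one dedicated workspace."""
    with tempfile.TemporaryDirectory(prefix=f"issue-{issue['id']}-") as workspace:
        # ... clone the repo here, let the agent edit, run the tests ...
        return f"PR prepared from {workspace}"


def control_loop(board, poll_seconds=30, once=False):
    with concurrent.futures.ThreadPoolExecutor(MAX_CONCURRENT) as pool:
        while True:
            for issue in fetch_ready_issues(board):
                issue["state"] = "agent_running"         # active state

                def write_back(future, issue=issue):
                    try:
                        future.result()
                        issue["state"] = "needs_review"  # handoff to a human
                    except Exception:
                        issue["retries"] = issue.get("retries", 0) + 1
                        issue["state"] = ("todo" if issue["retries"] <= MAX_RETRIES
                                          else "failed")  # terminal state
                pool.submit(run_agent, issue).add_done_callback(write_back)
            if once:
                return   # leaving the with-block waits for in-flight agents
            time.sleep(poll_seconds)


board = [{"id": 101, "state": "todo"}, {"id": 102, "state": "todo"}]
control_loop(board, once=True)
print(board)   # both issues should land in needs_review
```

Notice what the loop never invents: the states, the ownership, the write-back target all come from the tracker. The orchestrator is thin because the substrate is thick.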

And once you see the pattern, it starts showing up everywhere.

CRMs are issue trackers for revenue. Salesforce and HubSpot have accounts, contacts, opportunities, owners, stages, next steps, history, and permissions. A deal moves from prospecting to qualification to proposal to negotiation to closed-won or closed-lost. An agent can research an account, draft a follow-up, update fields, flag risk, prepare the next meeting, ask for human approval before sending something external. The CRM is already a durable state layer; it just doesn't know it yet.14
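The one structural piece an agent layer adds on top of what the CRM already has is the approval gate: internal state changes applied freely, anything customer-facing parked until a human signs off. A minimal sketch, with invented stage and action names rather than any real CRM's API:

```python
# Illustrative only: the Action/Opportunity shapes and stage names are
# made up, not Salesforce's or HubSpot's actual objects.
from dataclasses import dataclass, field

STAGES = ["prospecting", "qualification", "proposal",
          "negotiation", "closed-won", "closed-lost"]


@dataclass
class Action:
    verb: str        # e.g. "update_field", "send_email"
    payload: dict
    external: bool   # does this leave the building?


@dataclass
class Opportunity:
    account: str
    stage: str = "prospecting"
    fields: dict = field(default_factory=dict)
    pending_approval: list[Action] = field(default_factory=list)

    def apply(self, action: Action) -> str:
        if action.external:
            # Handoff: external actions wait for a human, like a ticket
            # sitting in "In Review".
            self.pending_approval.append(action)
            return "queued for human approval"
        self.fields.update(action.payload)
        return "applied"


opp = Opportunity(account="Acme Corp", stage="qualification")
print(opp.apply(Action("update_field", {"risk": "renewal unclear"}, external=False)))
print(opp.apply(Action("send_email", {"draft": "Following up on..."}, external=True)))
print(len(opp.pending_approval), "action(s) awaiting review")
```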

Service desks are issue trackers for customer problems. ERPs are issue trackers for business process. Calendars are issue trackers for time. Source control is an issue tracker for code change. HR information systems are issue trackers for employees and roles. Procurement tools are issue trackers for spend. The pattern repeats: if a system was built to coordinate people asynchronously around important work, it probably has the bones of an agent substrate.

Even the weaker candidates are revealing. Email has state and history and permissions, but the verbs are conversational rather than structural. There is no "assign" or "resolve" in email, just reply. Slack and Teams have even less structure; the state of a thread is the messages in it, which is a transcript rather than a database. Documentation tools sit in the middle: they have versioning and permissions, but ownership is fuzzy and the verbs (edit, comment, share) are too weak to serve as a control plane.15

You can run a diagnostic on any tool in your stack with five questions.

  • Does it have records or just content?
  • Does it have a state machine or just labels?
  • Is ownership a field or an implication?
  • Are the verbs structural or conversational?
  • Is the history queryable or just visible?

Tools scoring well on all five are about to become much more strategic than they look. Tools scoring poorly become context sources at best, and in many cases become places where someone else builds the real substrate around them.
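If you want to make the diagnostic mechanical, it's a dozen lines. The tools and scores below are made up for illustration; the point is the questions are binary enough to sort a stack with.

```python
# A toy scoring of the five diagnostic questions. Tools and answers are
# invented for illustration, not an audit of any real stack.
QUESTIONS = [
    "records_not_content",
    "state_machine_not_labels",
    "ownership_is_a_field",
    "structural_verbs",
    "queryable_history",
]

stack = {
    "issue_tracker": {q: True for q in QUESTIONS},
    "crm":           {q: True for q in QUESTIONS},
    "email": {
        "records_not_content": False,
        "state_machine_not_labels": False,
        "ownership_is_a_field": False,
        "structural_verbs": False,   # reply is conversational, not structural
        "queryable_history": True,
    },
    "chat": {q: False for q in QUESTIONS},
}

for tool, answers in sorted(stack.items(), key=lambda kv: -sum(kv[1].values())):
    score = sum(answers.values())
    verdict = "substrate candidate" if score >= 4 else "context source at best"
    print(f"{tool:14s} {score}/5  {verdict}")
```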

I find myself thinking about this framework when I look at my own work, and not just the software parts. The question "is the state clean" applies to team processes, to personal systems, to relationships. Where is the real state of your project? In the tracker, or in three people's heads and a Slack thread from two weeks ago nobody bookmarked? Where is the real state of your household finances? In the spreadsheet, or in a series of approximations everyone involved quietly agrees to treat as facts?16 The difference between the stated system and the actual system is always a measure of how much friction the official system introduces, and how much faith its users have in its value.

There is a larger thing happening here, and it makes me uncomfortable in a way I haven't fully sorted out. The Atlassian moves of the past year read differently through this lens. They shipped their Remote MCP Server, branded Rovo, exposing Jira and Confluence to any MCP-compatible client, with Anthropic as the first official partner.17 They signed a multi-year sponsorship deal with the Williams Formula 1 team, co-branded with Anthropic. And then, in late April 2026, came rumors (unconfirmed, no SEC filing, treat as speculation) that Anthropic might acquire Atlassian at a premium. The rumor itself is a tell, regardless of whether the deal materializes: a few years ago, "frontier AI lab buys the issue tracker company" would have sounded absurd. Now the logic is obvious enough that people take it seriously.

The issue tracker is no longer just a ticketing product. It's a map of how work happens inside the enterprise. It knows the projects, the dependencies, the owners, the history, the approvals, which work matters and which work is blocked. That is exactly the kind of context agents need to operate.

The real headline isn't "Anthropic might buy Atlassian." The real headline is: in 2026, it became reasonable to model issue trackers as strategic AI assets. That repricing is the news, regardless of what any specific deal does or doesn't do.

And here's the uncomfortable part. This means the boring work was always the valuable work. The years of grooming backlogs, maintaining clean data, filling in fields nobody seemed to read, updating statuses nobody seemed to check. The twenty-five years I and millions of others spent performing what felt, on most days, like ritual more than productive effort. All of it was building a substrate whose value was invisible until something new came along to consume it. We were, without knowing it, constructing the operating layer for a technology nobody had built yet.

There's a version of this where you feel vindicated and a version where you feel used.18 I think I feel both, which is probably the honest answer. The person who spent a decade keeping Jira fields accurate wasn't wasting their time. They were building infrastructure. They just didn't know for whom.

I think back to Weissman, writing Tcl in 1998 for a few hundred Mozilla contributors, deciding a bug needed to have an owner and a state and a history. Building the most emotionally honest software state ever invented (WONTFIX) because sometimes the right answer is to acknowledge the problem exists and declare, formally, you're not going to fix it. The narrowness of the design was the point. He wasn't trying to build a platform for everything. He was trying to make it unambiguous whose turn it was and what state things were in.

Twenty-eight years later, an autonomous coding agent reads a ticket from a Linear board, spins up a workspace, writes code, runs tests, prepares a pull request, and writes back to the ticket with what happened. The agent doesn't know about Weissman. It doesn't care about the ceremony, the fluorescent-lit afternoons of status updates, the decades of developers complaining about the tool. It just needs the state to be clean, the ownership to be clear, the transitions to be defined, and the history to be there.

My coffee is cold again. The ticket updated on Wednesday has moved to "Done," touched now by a fifth person, its comment thread a little longer. Outside my window the afternoon light is doing something I'll forget by tomorrow. The ticket won't forget anything. Tickets never do.

1. There's a whole sub-genre of organizational honesty research about how comment threads on work tickets tell truer stories than official retrospectives. The official version gets smoothed. The ticket thread preserves the hesitations, the reversals, the moments where someone said "wait, are we sure about this?" and nobody answered for three days. ↩︎

2. The phrase was "issue tracking is dead," published on Linear's blog on March 24, 2026, and accompanied by the launch of Linear Agent, a built-in AI agent with skills and automations. Saarinen's framing was more sophisticated than the headline suggested; he wasn't saying issues themselves would disappear, but the human ceremony around managing them was contracting. ↩︎

3. For context: Saarinen reported coding agents were already installed in over 75% of Linear's enterprise workspaces, and agent work volume had grown fivefold in three months. By early 2026, roughly one in four new issues in those workspaces was being created by agents, not humans. ↩︎

4. Symphony was released under the Apache 2.0 license. The reference implementation is written in Elixir, leveraging the BEAM virtual machine's concurrency primitives for managing hundreds of concurrent agents. OpenAI explicitly described it as a reference implementation and spec, not a standalone product they plan to maintain, which is itself a telling statement about the future of open-source: "software as a spec" rather than software as a maintained codebase. ↩︎

5. This number deserves the usual caveats about self-reported metrics from a company with an incentive to make its tools sound good. No baseline was published. "Merged pull requests" is not the same as "shipped value." But even if you discount the number heavily, the directional signal is clear: routing work through the tracker rather than through individual sessions changed the output meaningfully. ↩︎

6. I keep wanting to write "it all starts with Bugzilla," but the truth is bug tracking existed before Bugzilla; Netscape's internal system preceded it, and people were tracking defects in spreadsheets and email lists before anyone formalized the practice. What Bugzilla did was codify a particular structural shape, the one every subsequent tracker has copied, and release it as open source. ↩︎

7. WONTFIX deserves its own essay. Most software states are aspirational or procedural: "in progress" means someone is working on it, "resolved" means someone fixed it. WONTFIX is neither. It says: we see the problem, we acknowledge the problem, and we have decided, deliberately, not to address it. There's something clarifying about a system giving you the language to make a decision rather than to leave it ambiguous. Most organizations don't have a WONTFIX for their real problems. They have "we'll get to it eventually," which is WONTFIX without the honesty. ↩︎

8. The technical framing here is borrowed from Cursor's research on scaling autonomous coding agents, published in early 2026, which documented what happens when you run hundreds of agents on large coding projects without proper coordination infrastructure. Short version: they become conservative, hoard resources, avoid hard problems, and pick small safe tasks. Which, if you think about it, is also what happens to humans in poorly coordinated organizations. ↩︎

9. There's a philosophical thread here about convergent design worth pulling on. In biology, eyes evolved independently in multiple lineages because vision solves the same environmental problem regardless of which organism is trying to solve it. In software, state machines and ownership fields keep re-emerging because asynchronous coordination imposes the same structural requirements regardless of whether the coordinators are humans or agents. The substrate fits because the problem fits. ↩︎

10. I once spent a full day helping a team untangle a Jira workflow where a ticket could be in a state called "Ready for Development" and simultaneously in a state called "Not Ready," due to a custom workflow someone had built three years earlier for a specific sprint process that no longer existed. The person who built it had left the company. Nobody remembered why both states existed. This is Jira's gift and curse in miniature: it's flexible enough to encode your organization's dysfunction permanently. ↩︎

11. An IDC study from 2025 found 88% of AI pilot projects never reached production. One of the most common failure modes was a lack of investigability: the agent did something wrong and nobody could reconstruct what it saw, what it decided, what it changed. Audit history, the most boring feature of any tracker, turns out to be the difference between "we can fix this" and "we have to shut this down." ↩︎

12. I realize this sounds grandiose for a point about Jira fields. But I've watched teams go from treating their tracker as a compliance burden (filling in fields because someone made them) to treating it as a genuine source of truth (filling in fields because the information is useful), and the difference in output quality is staggering. It's the same people, the same work, the same tool. The only change is whether anyone believes the data matters. ↩︎

13. The spec itself is model-agnostic; OpenAI asked Codex to implement Symphony in TypeScript, Go, Rust, Java, and Python to refine the specification. The Elixir implementation is the reference, but the spec is the real product. Zach Brock, a member of technical staff at OpenAI, described this as "software as a spec" rather than software as a maintained codebase, which is worth chewing on as a model for how open-source might work in an era when code generation is cheap. ↩︎

14. I work in software, and the CRM-as-agent-substrate idea is not abstract for me: my company is currently building an AI initiative whose entire architecture depends on the quality of the data already living in our case systems. Clean case records, clear ownership, well-defined states. The agent isn't replacing the system of record; it's consuming it. ↩︎

15. Spreadsheets are the strangest middle case. They have rows, columns, formulas, structure. But the schema is user-defined and often implicit. A well-designed spreadsheet can be incredibly structured; a personal scratchpad spreadsheet is a maze of merged cells and color-coded meaning no one else can decode. The agent has to infer the schema before it can act, which is a different problem than operating within a schema someone already defined. ↩︎

16. I'm thinking here of any household where a couple maintains a shared budget spreadsheet both treat as authoritative despite the fact it contains at least three categories where the numbers are, let's say, directionally correct. The gap between the spreadsheet and reality is a measure of how much effort it would take to be precise and how much either believes the precision would change their behavior. Usually: not much, on both counts. ↩︎

17. The MCP server is now generally available as of February 2026, with OAuth 2.1, granular scopes, IP allowlisting, and admin controls. The initial beta launched in mid-2025. In a detail worth noting, the Atlassian blog post about the launch carried quotes from both their CTO and a product lead at Anthropic, which is an unusual level of coordination for a third-party integration announcement. ↩︎

18. There's a more cynical reading available: the value of your meticulous data entry was latent until a technology arrived to extract it, and you won't see any of the upside. The person who maintained clean Jira fields for a decade helped build the training ground for the system now automating their work. I don't think the cynical reading is the whole story, but I don't think it's wrong either. The honest answer is probably somewhere between vindication and exploitation, which is where honest answers about labor and technology usually land. ↩︎