
The Rise of Middle-Stage Artifacts in AI Coding

I recently came across an article in HBR attempting to make sense of how people use AI. Honestly, it felt a bit superficial, like feeding a few reports into an LLM and calling it a day. It missed the deep behavioral analysis we really need right now.

But it did spark a conversation with a friend about what’s actually changing in the daily life of a developer. And there is a lot going on.

[Image: middle-stage artifact]

The Decline of "Small" Open Source

One of the most interesting immediate consequences of LLMs is the potential decline of small open source projects. Nolan Lawson wrote a great piece on this recently.

Historically, small libraries were how developers made a name for themselves. You built a "slam dunk" project, it became a fundamental brick of bigger software, and you gained recognition. Many top researchers and engineers got hired through this channel.

Now? AI is reducing the need for these small libraries to zero. Why import a dependency for left-pad or blob-util when an LLM can just write the utility function for you, tailored exactly to your needs? The long-term consequences of this are unclear, but it’s a fascinating shift.
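To make this concrete, here is the kind of tiny utility an LLM can generate inline instead of pulling in a dependency. This is a minimal sketch in Python (left-pad itself was a JavaScript package), and in practice Python already has `str.rjust` for exactly this:

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad s on the left with fill characters until it reaches width."""
    if len(s) >= width:
        return s
    return fill * (width - len(s)) + s

print(left_pad("42", 5, "0"))  # 00042
```

A few lines, tailored to the call site, no dependency to audit or update. That trade-off is exactly what is eroding the niche these small libraries used to occupy.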

From "What" to "Why" and "How"

So, what matters for developers in the age of AI-assisted coding?

The short answer: spending developer attention on the Why, co-designing the How with AI, and letting AI handle the What (the code, data, infrastructure, and deployment).

The "co-design" part is critical. There's a trend right now called "vibe coding," where people expect a few input tokens (a vague prompt) to produce high-quality output. Sometimes you get lucky. But if you draw a chart representing the ratio of input tokens to output tokens, there's a massive discrepancy. That gap is where entropy lives—and where bugs and misunderstandings are born.

The Case for Middle-Stage Artifacts

The future lies in middle-stage artifacts. These are the things that reduce the entropy between an idea described with a few tokens and a well-executed product. They drive the "How" through co-design.

We're starting to see this idea become a product reality. Google's Antigravity (recently discussed by Simon Willison) introduces concepts like:

  • Task Lists: Structured breakdowns of what needs to happen.
  • Walkthroughs: Proof of what was done.

I've been exploring this concept myself with my gitsummary project. The idea is to create durable metadata that lives alongside the code—artifacts that describe developer intention, implementation decisions, and user-visible impact.
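As a sketch of what such durable metadata could look like, here is a hypothetical artifact shape in Python. The field names (`intent`, `decisions`, `user_impact`) are my illustration of the three dimensions mentioned above, not gitsummary's actual schema:

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical middle-stage artifact: one record per change,
# stored alongside the code it describes.
@dataclass
class ChangeSummary:
    intent: str                                          # why the change exists
    decisions: list[str] = field(default_factory=list)   # implementation choices made
    user_impact: str = ""                                # what users will notice

summary = ChangeSummary(
    intent="Speed up search indexing",
    decisions=["Batch index writes", "Cache tokenizer output"],
    user_impact="Large repositories index noticeably faster",
)

# Serialize next to the commit so both humans and AI agents can read it back.
print(json.dumps(asdict(summary), indent=2))
```

Because the record is structured rather than free-form prose, an AI agent can consume it directly when planning the next change, which is the co-design loop in action.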

These artifacts aren't just documentation; they are the bridge between human intent and AI execution. They are the new source of truth.

What's next?

We are moving away from a world where the code itself is the only artifact that matters. In a world where code is cheap and abundant, the structure around that code—the plans, the task lists, the intent definitions—becomes the premium asset.

The developers who thrive won't just be the ones who can prompt well. They will be the ones who can master these middle-stage artifacts to guide AI reliably from a vague "Why" to a perfect "What".