Scaffolding


There’s a moment in every technology shift where the cost of creation drops faster than anyone can adjust.

The printing press made copying a book trivial, but nobody could mass-produce an author’s lifetime of thought. Photography made capturing a scene instant, but developing an eye took just as long. And now — writing software is approaching the speed of describing what you want. You can build a working product in an afternoon. A chat app. A CRM. A content management system. Something that would have taken a team months, conjured from a conversation.

This is genuinely new. Not incrementally new — categorically. The distance between “I know exactly what I want” and “I have a working version” has collapsed from months to hours.

And everyone is talking about what this enables. Almost nobody is talking about what it costs.

The cost of continuation

Here’s what changes when building is free: everything becomes a maintenance liability by day two.

A team that used to buy ten tools now builds ten tools. Each one tailored. Each one exactly what they wanted. Each one working perfectly on the day it was born.

And each one begins to rot the moment it exists.

Dependencies update. APIs change. The operating system patches something. A user discovers an edge case nobody imagined. The framework releases a breaking version. These forces act on every piece of software continuously, whether anyone is watching or not.

When you bought those ten tools, someone else absorbed that entropy. A company with an ops team and an on-call rotation and a backlog of edge cases they’ve been grinding through for years. You paid for that with your subscription fee. It was expensive. It was also someone else’s problem.

When you build those ten tools yourself, that entropy becomes yours. All of it. And entropy doesn’t care how fast you built the original. A system written in six months and a system written in six hours accumulate technical debt at the same rate.

The cost of creation is collapsing. The cost of continuation is not.

Here’s the part that’s easy to miss: the total cost of ownership may actually go up in a world where building is cheap. Because you build more things. Because every problem looks like a weekend project. Because “let me just write a quick tool for that” is no longer a joke — it’s Tuesday. And by Wednesday you’re maintaining it. And by next quarter you’re maintaining forty of them and the compound load is heavier than the ten subscriptions you cancelled.

The economics of “make versus buy” are shifting. More things will be made. But each thing made is a thing maintained. And nobody’s maintenance budget went up just because their build velocity did.


But there’s a deeper problem than maintenance. Something more subtle.

When a human writes code over months, understanding accumulates alongside the artifact. The developer knows why every function exists. They remember the edge case that forced the awkward workaround on line 47. They can tell you what they tried first and why it didn’t work. The code and the comprehension of the code grow together, like a plant and its root system.

When AI writes the same code in an hour, the code exists but the root system doesn’t. Nobody fully understands every decision. Nobody remembers what was tried and rejected. The artifact is complete, but the understanding is hollow.

This is fine on day one. The code works. Ship it.

It’s a slow catastrophe by day 365.

Because maintenance isn’t just fixing things that break. Maintenance is the continuous act of re-aligning a system with the world it runs in. The world shifts — requirements change, users behave unexpectedly, infrastructure evolves — and someone needs to understand the system deeply enough to adapt it without breaking the assumptions it was built on.

If nobody holds that understanding, every maintenance task becomes an archaeology project. You’re not fixing the code. You’re reverse-engineering the intent of a developer who doesn’t exist. And this time, the developer was never a person. It was a language model that assembled a plausible architecture from your description and moved on.

I keep coming back to this image: a building with no architect’s notes. The building stands. The walls hold. But nobody knows which walls are load-bearing, and the first person to renovate is going to find out the hard way.


I think about this in terms of drift.

Drift is what happens when the intent behind a system and the behavior of that system diverge over time. It happens to all software, always, inevitably. But it happens faster and more silently when the gap between intent and implementation was wide to begin with.

When you write your own code, the gap is narrow. Every line is a decision you made, consciously. When requirements change, you know which decisions need to change with them because you’re the one who made them.

When AI writes your code, the gap is the width of a natural language prompt. “Build me a chat app with channels and threading” contains a thousand implicit decisions that the model made for you. Database schema. Message ordering guarantees. How threading is modeled. What happens to a thread when its parent channel is deleted. Each decision was reasonable. None of them were yours. And when the world changes and you need to adapt, you’re not changing your decisions — you’re changing a stranger’s decisions without knowing why they were made.
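To make that concrete, here is a minimal sketch of what one of those invisible decisions looks like once it hardens into a schema. The tables and column names are invented for illustration, not taken from any real system; the point is the single constraint buried in the middle, which quietly answers the question the prompt never asked.

```python
import sqlite3

# A hypothetical schema a model might emit for "a chat app with
# channels and threading". Every clause below is a decision the
# prompt never made: integer keys, one parent channel per thread,
# and -- in one foreign-key constraint -- what happens to threads
# when their channel is deleted.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE channels (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE threads (
    id         INTEGER PRIMARY KEY,
    channel_id INTEGER NOT NULL
        REFERENCES channels(id) ON DELETE CASCADE  -- threads die with their channel
);
CREATE TABLE messages (
    id        INTEGER PRIMARY KEY,
    thread_id INTEGER NOT NULL
        REFERENCES threads(id) ON DELETE CASCADE,
    body      TEXT NOT NULL
);
""")

conn.execute("INSERT INTO channels (id, name) VALUES (1, 'general')")
conn.execute("INSERT INTO threads (id, channel_id) VALUES (10, 1)")
conn.execute("INSERT INTO messages (thread_id, body) VALUES (10, 'hello')")

# Delete the channel: the cascade silently erases the thread and its history.
conn.execute("DELETE FROM channels WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
print(remaining)  # 0 -- a data-retention policy nobody chose
```

`ON DELETE CASCADE` is a perfectly reasonable default. So is keeping orphaned threads. The trouble is not that the model chose wrong; it’s that nobody knows a choice was made until the day a deleted channel takes a year of messages with it.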

This is the same problem every team faces when the original developer leaves. Except now it happens at the speed of creation. Every AI-generated project starts with an absent author.

The faster you build, the less you understand what you built. The less you understand, the faster you drift. The faster you drift, the sooner something breaks in a way nobody can diagnose without deep understanding that was never acquired in the first place.

You can build Slack in a day

There’s a thought experiment I can’t stop running.

You can build Slack in a day. A working real-time messaging app with channels, threads, reactions, file sharing. The UI looks clean. The messages deliver. It works.

But Slack isn’t a messaging app. Slack is a messaging app that delivers messages in the correct order across thirty data centers on four continents with five-nines uptime while handling ten million concurrent connections while complying with data residency laws in a hundred and fifty countries while gracefully degrading when any single component fails while maintaining search indexes across billions of messages while not losing a single one.

That last paragraph isn’t a feature list. It’s a decade of operational knowledge compressed into running infrastructure. It’s every incident that happened at 3am and the runbook that someone wrote afterward, shaking, at 5am. It’s the load test that caught the race condition that would have taken down all of EMEA during a product launch. It’s the database migration that everyone was terrified of, that went perfectly, because someone spent three weeks writing a script to verify every row.

You cannot build that in a day. You can’t build it in a year. It accumulates like scar tissue. Each failure makes the system slightly more resilient. Each near-miss teaches the team something that can’t be written into a document because it lives in the pattern-matching of people who were there when things went wrong.

So. Can AI do that?

I want to be honest about this question rather than retreating to a comfortable answer.

The comfortable answer is “no, AI can’t develop operational judgment, it takes human experience.” And right now, today, that’s mostly true. But it’s the kind of statement that ages badly. Because the question isn’t about AI’s current capability. It’s about whether operational judgment is fundamentally a human-experience thing, or whether it’s a data-and-pattern thing that happens to currently require human experience because nothing else could hold the context.

If an AI system could maintain continuous context — remembering every incident, every near-miss, every weird behavior under load, across years — could it develop something functionally equivalent to operational judgment? Not the same as a human’s, but convergent? The way a bird’s wing and a bat’s wing solve the same problem through different biology?

I don’t know. I genuinely don’t know. And I think anyone who’s confident in their answer in either direction is selling something.

What I do know is this: building is making a promise. Maintaining is keeping it. And right now, in 2026, AI is exceptional at making promises.


There’s a category of company that’s about to have a very uncomfortable realization.

If your business model was “we built the thing so you don’t have to,” and building the thing is now free — what are you selling?

A lot of SaaS companies are about to discover whether they were the wall or the water. Whether their value was the artifact they shipped, or the operational knowledge that kept it running. Whether their customers stay because the tool is good, or because the accumulated context of years of use makes switching cost more than the subscription.

Some will discover they have deep water. Years of edge cases handled, integrations maintained, reliability proven through actual crises. Their moat was never the code. It was the understanding embedded in the code, and the trust built by keeping it working through things that should have broken it.

Others will discover they were just a wall. A nice one. Well-designed. But rebuildable in a weekend by anyone with a clear description of what they need. These companies are in trouble, and most of them don’t know it yet.

The market won’t sort this out cleanly. Customers aren’t good at distinguishing walls from water until something breaks. The company with the beautiful AI-generated replacement will look great right up until the first incident that requires deep understanding to resolve. Then the question becomes: is there anyone who understands this system well enough to fix it under pressure, or did we just trade one kind of vendor lock-in for another — except this time the vendor is the team’s own limited comprehension of what they built?

What you can’t generate

So if building is free and the tool is reproducible and the code itself has no moat — what’s actually valuable?

What endures is everything the code can’t contain. The memory of why things are the way they are — not the architecture itself but the failed approaches that informed it, the user feedback that shaped each edge case, the institutional knowledge that turns “it works” into “it works reliably under conditions we’ve actually encountered and some we haven’t.” That accumulated context is the root system by another name.

And trust, which is really just context measured over time. Reliability isn’t a feature you ship. It accumulates over years of showing up when things break and fixing them correctly. Users don’t trust software because it’s well-built. They trust it because it was well-built last month, and the month before, and during that outage in November when everything else went down but this kept working. Trust is a time integral. There is no shortcut.

Then there’s the scar tissue. Every incident, every 3am page, every data corruption event caught before it reached users — these leave marks on a system and on the people maintaining it. Those marks are immune memory. A freshly generated codebase has none. It’s a newborn in a world full of pathogens it’s never seen.

And underneath all of it, taste — which sounds soft until you realize it’s the scarcest resource in a world where execution is cheap. Knowing what to build is not the same as being able to build it. AI can generate forty features. A human who’s spent a decade in the domain knows which three to ship. When creation costs nothing, curation costs everything.


There’s a paradox forming at the center of all this.

If building is free, everything is disposable. Why maintain when you can regenerate? Why fix bugs when you can describe what you wanted, get a new version, throw the old one away?

This sounds efficient until you realize what you’re discarding each time. Not the code — the context. Every regeneration resets the understanding. You’re back to day one with a system that works but that nobody comprehends. And the bug you just “fixed” by regenerating? It might come back. Or it might not. You can’t tell, because nobody understood why it existed in the first place.

Some software should be ephemeral. One-off scripts. Prototypes. Tools that exist for a single project and dissolve. For these, disposability is a feature. Not everything deserves the investment of permanence.

But for anything that accumulates data, serves users, or becomes load-bearing in other systems’ architectures — disposability is a trap dressed as efficiency. You’re not saving the cost of maintenance. You’re paying it all at once, every time you rebuild, because the new version has to re-learn everything the old version knew through experience.

The question at the center of all of this — and I think it’s the question of this entire era of software, not just this essay — is: what do you choose to remember?

Because memory is expensive. Maintaining context across time is hard. Deciding what matters enough to carry forward and what can be safely forgotten — that’s not an engineering problem. It’s a judgment problem. And it might be the only problem that matters once building is free.

The moats nobody is building yet

The moats that will define the next decade are the ones nobody is building yet. They’re invisible because the problems they solve haven’t fully manifested.

Right now, every company using AI to generate code is producing systems that no human fully understands. These systems work. They pass their tests. They ship. And each one is silently accumulating drift that nobody is measuring.

What tool detects when a system’s behavior has diverged from its original intent? Not crashed — diverged. The subtle kind. The API that returns slightly different results under load. The cache that serves stale data in a way that violates no test but slowly degrades user trust. The ML model that drifted from “relevant” to “engaging” because the training distribution shifted and nobody noticed for six months.

There is no tool for this. But there will be. Drift detection — real, continuous measurement of the gap between what a system was supposed to do and what it actually does — is an invisible moat waiting to be dug. The company that builds it will understand something about software entropy that nobody else has formalized.

Or consider this: if the bottleneck isn’t building but maintaining, and maintaining requires understanding, and understanding gets lost every time you regenerate — then maybe the most valuable thing in software isn’t a tool at all. It’s a system that maintains understanding across time. Not a code repository. A decision repository. Why this architecture. What was tried and abandoned. Which constraints can’t be violated without everything falling over. What the user said they wanted versus what they actually needed.

This is a memory problem. And memory, it turns out, is the hardest kind of engineering. Not “store and retrieve” hard. Hard like: deciding what matters enough to remember. Knowing when a memory is stale. Connecting old context to new problems in real time. The kind of hard that doesn’t yield to throwing compute at it.
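What would one entry in a decision repository even look like? Here is a toy sketch; every field name is invented for illustration, and the example decision is hypothetical. What matters is what gets recorded: not the code, but the constraint behind it and the roads not taken.

```python
from dataclasses import dataclass, field

# A sketch of one unit of memory in a "decision repository".
@dataclass
class Decision:
    what: str                  # the decision as made
    why: str                   # the constraint or evidence behind it
    rejected: list = field(default_factory=list)  # tried and abandoned
    load_bearing: bool = False # breaks everything if changed?

decisions = [
    Decision(
        what="Messages ordered by server receive time, not client send time",
        why="Client clocks skew; users read reordering as lost messages",
        rejected=["client timestamps", "hybrid logical clocks (too complex for v1)"],
        load_bearing=True,
    ),
]

# The query that matters during maintenance isn't "what does the code do?"
# but "which past decisions constrain this change?"
constraints = [d.what for d in decisions if d.load_bearing]
print(len(constraints))  # 1
```

The hard part, as the paragraph above says, is not the storage. It is deciding which of the thousand decisions made on day one deserve an entry at all.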

And there’s a third invisible moat, maybe the deepest one: the integration layer. Not connecting APIs — that’s plumbing. I mean the living web of expectations between systems. Service A assumes Service B responds within 200ms. Service B assumes the data from Service C is fresh within 30 seconds. Service C assumes the database schema matches the shape it wrote last Thursday.

These assumptions are load-bearing and they’re nowhere in the code. They exist in the operational knowledge of the team. When you rebuild Service B from scratch on a Tuesday afternoon because building is free, you inherit none of those assumptions. You just placed a fresh timber into a structure whose stresses you haven’t mapped.
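One partial remedy is to write those assumptions down as executable checks that any rebuilt service must pass before it re-enters the structure. The sketch below uses the essay’s own hypothetical examples; the service names and thresholds are illustrative, not a real API.

```python
# Inter-service assumptions, normally held only in the team's heads,
# written down as named, checkable predicates.
ASSUMPTIONS = [
    ("service_b responds within 200ms",
     lambda obs: obs["b_latency_ms"] <= 200),
    ("service_c data is fresh within 30s",
     lambda obs: obs["c_staleness_s"] <= 30),
]

def check_assumptions(observations: dict) -> list:
    """Return the assumptions the observed behavior violates."""
    return [name for name, holds in ASSUMPTIONS if not holds(observations)]

# A freshly regenerated Service B: correct in isolation, but slower
# than the structure around it silently expects.
observed = {"b_latency_ms": 340, "c_staleness_s": 12}
print(check_assumptions(observed))  # ['service_b responds within 200ms']
```

This is contract testing by another name, and it only catches the assumptions someone thought to write down. The deepest ones are exactly the ones nobody knew they were making.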


There’s one more thing I want to say, and I’m going to say it plainly because dressing it up would undermine the point.

The printing press made creation cheap and persistence expensive. What survived wasn’t what was easiest to print. It was what was worth reprinting. The filter shifted from “can we produce this” to “is this worth keeping alive.” The cost of creation became noise. The signal was in what endured.

Software is entering the same transition. And it changes not just what we build, but what building means. If the artifact is cheap, the artifact isn’t the work. The work is everything else. The understanding. The context. The accumulated judgment. The relationships between systems that no diagram captures. The knowledge of what will break when you change something, held by someone who was there when it almost broke before.

The moat is not the wall. It was never the wall. The moat is the water — the living, shifting, hard-to-see substance that fills the space around the wall and gives it meaning. Anyone can build a wall. The water is slow. The water requires something alive and present over time, absorbing what happens, adapting to what changes.

What we’re watching, right now, is the wall becoming free.

The water still costs everything it always did. Maybe more, because now there are more walls, and each one needs water, and nobody budgeted for that.