Since ChatGPT appeared on the scene, we’ve known that big changes were coming to computing. But it’s taken a few years for us to understand what they were. Now, we’re beginning to understand what the future will look like. It’s still hazy, but we’re starting to see some shapes, and the shapes don’t look like “we won’t need to program anymore.” But what will we need?
Martin Fowler recently described the force driving this change as the biggest change in the level of abstraction since the invention of high-level languages, and that’s a good place to start. If you’ve ever programmed in assembly language, you know what that first change means. Rather than writing individual machine instructions, you could write in languages like Fortran or COBOL or BASIC or, a decade later, C. While we now have much better languages than early Fortran and COBOL (and both languages have evolved, gradually acquiring the features of modern programming languages), the conceptual difference between Rust and an early Fortran is much, much smaller than the difference between Fortran and assembler. There was a fundamental change in abstraction. Instead of using mnemonics to abstract away hex or octal opcodes (to say nothing of patch cables), we could write formulas. Instead of testing memory locations, we could control execution flow with for loops and if branches.
The change in abstraction that language models have brought about is every bit as big. We no longer need to use precisely specified programming languages with small vocabularies and syntax that limited their use to specialists (whom we call “programmers”). We can use natural language, with a huge vocabulary, flexible syntax, and plenty of ambiguity. The Oxford English Dictionary contains over 600,000 words; the last time I saw a complete English grammar reference, it was four very large volumes, not a page or two of BNF. And we all know about ambiguity. Human languages thrive on ambiguity; it’s a feature, not a bug. With LLMs, we can describe what we want a computer to do in this ambiguous language rather than writing out every detail, step by step, in a formal language. That change isn’t just about “vibe coding,” although it does allow experimentation and demos to be developed at breathtaking speed. And that change won’t be the disappearance of programmers because everybody knows English (at least in the US), not in the near future, and probably not even in the long run. Yes, people who have never learned to program, and who won’t learn to program, will be able to use computers more fluently. But we will continue to need people who understand the transition between human language and what a machine actually does. We’ll still need people who understand how to break complex problems into simpler parts. And we will especially need people who understand how to manage the AI when it goes off track: when the AI starts producing nonsense, when it gets stuck on an error that it can’t fix. If you follow the hype, it’s easy to believe that these problems will vanish into the dustbin of history. But anyone who has used AI to generate nontrivial software knows that we’ll be stuck with these problems, and that it will take experienced programmers to solve them.
The change in abstraction does mean that what software developers do will change. We’ve been writing about that for the past few years: more attention to testing, more attention to up-front design, more attention to reading and analyzing computer-generated code. The lines continue to shift, as simple code completion turned into interactive AI assistance, which changed into agentic coding. But there’s a seismic change coming from the deep layers beneath the prompt, and we’re only now beginning to see it.
A few years ago, everybody talked about “prompt engineering.” Prompt engineering was (and remains) a poorly defined term that often meant using tricks as simple as “tell it to me with horses” or “explain it to me like I’m five years old.” We don’t do that much anymore. The models have gotten better. We still need to write prompts that are used by software to interact with AI. That’s a different, and more serious, side of prompt engineering that won’t disappear as long as we’re embedding models in other applications.
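To see the difference, consider what an embedded prompt looks like: it’s a piece of software, written once, filled in by the application, and maintained like any other code. Here’s a minimal, hypothetical sketch; the template, function, and use case are invented for illustration.

```python
# A minimal sketch of a prompt as a software artifact (all names are hypothetical).
# The template is filled in by the application, not typed by a user, so it needs
# the same care as any other interface: versioning, tests, and review.

SUMMARIZE_TICKET_PROMPT = """\
You are summarizing a customer support ticket for an engineer.
Summarize the ticket below in three bullet points.
Flag any mention of data loss or security issues.

Ticket:
{ticket_text}
"""

def build_summary_prompt(ticket_text: str) -> str:
    """Return the prompt the application will send to the model."""
    return SUMMARIZE_TICKET_PROMPT.format(ticket_text=ticket_text.strip())

if __name__ == "__main__":
    print(build_summary_prompt("Customer reports intermittent 500 errors since Tuesday."))
```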
More recently, we’ve realized that it’s not just the prompt that’s important. It’s not just telling the language model what you want it to do. Lying beneath the prompt is the context: the history of the current conversation, what the model knows about your project, what the model can look up online or discover through the use of tools, and even (in some cases) what the model knows about you, as expressed in all of your interactions. The task of understanding and managing the context has recently become known as context engineering.
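To make that concrete, here’s a rough sketch of the layers that go into a single model call. The system/user/assistant message structure is the common chat convention; the helper function and its fields are illustrative, not any particular vendor’s API.

```python
# A rough sketch of the layers that make up the context for a single model call.
# The chat-message structure (system / user / assistant roles) is the common
# convention; everything else here is illustrative, not a specific vendor's API.

def build_context(project_notes: str,
                  retrieved_docs: list[str],
                  history: list[dict],
                  user_prompt: str) -> list[dict]:
    """Assemble the full context the model will actually see."""
    messages = [
        # Standing instructions and what the model "knows" about the project.
        {"role": "system", "content": "You are a coding assistant.\n" + project_notes},
    ]
    # Material looked up online or discovered through tools (e.g., RAG results).
    for doc in retrieved_docs:
        messages.append({"role": "system", "content": "Reference:\n" + doc})
    # The history of the current conversation.
    messages.extend(history)
    # Finally, the prompt itself -- often the smallest part of the context.
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

Everything except the final user message is context, and all of it competes for the model’s attention.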
Context engineering must account for what can go wrong with context. That will certainly evolve over time as models change and improve. And we’ll also need to deal with the same dichotomy that prompt engineering faces: A programmer managing the context while generating code for a substantial software project isn’t doing the same thing as someone designing context management for a software project that involves an agent, where errors in a chain of calls to language models and other tools are likely to multiply. These tasks are related, certainly. But they differ as much as “explain it to me with horses” differs from reformatting a user’s initial request with dozens of documents pulled from a retrieval system (RAG).
Drew Breunig has written an excellent pair of articles on the subject: “How Long Contexts Fail” and “How to Fix Your Context.” I won’t enumerate (maybe I should) the context failures and fixes that Drew describes, but I’ll describe some problems I’ve observed:
- What happens when you’re working on a program with an LLM and suddenly everything goes sour? You can tell it to fix what’s wrong, but the fixes don’t make things better and often make them worse. Something is wrong with the context, but it’s hard to say what, and even harder to fix it.
- It’s been observed that, with long context models, the beginning and the end of the context window get the most attention. Content in the middle of the window is likely to be ignored. How do you deal with that? (One crude mitigation is sketched after this list.)
- Web browsers have accustomed us to fairly good (if not perfect) interoperability. But different models use their context and respond to prompts differently. Can we have interoperability between language models?
- What happens when hallucinated content becomes part of the context? How do you prevent that? How do you clean it up?
- At least when using chat frontends, some of the most popular models are implementing conversation history: They will remember what you said in the past. While this can be a good thing (you can say “always use 4-space indents” just once), again, what happens if it remembers something that’s incorrect?
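None of these problems has a clean answer yet. As a very rough illustration of the kind of mechanical mitigation people reach for today, the sketch below drops context items that have been flagged as suspect and pushes the most important material toward the edges of the window, where it’s most likely to get attention. The data structure and scoring are invented for illustration; real tools weigh relevance far more carefully.

```python
# An illustrative sketch of two common context mitigations:
#   1. drop items that have been flagged as stale or hallucinated
#   2. place the most important items at the start and end of the window,
#      since content in the middle tends to get the least attention.
# The ContextItem structure and the scoring are invented for illustration.

from dataclasses import dataclass

@dataclass
class ContextItem:
    text: str
    importance: float        # e.g., 0.0 - 1.0, assigned by the application
    suspect: bool = False    # flagged as possibly hallucinated or stale

def arrange_context(items: list[ContextItem], max_items: int) -> list[ContextItem]:
    # 1. Remove anything flagged as suspect rather than letting it poison later turns.
    clean = [item for item in items if not item.suspect]
    # 2. Keep only the most important items if the window is getting full.
    clean.sort(key=lambda item: item.importance, reverse=True)
    kept = clean[:max_items]
    # 3. Put the two most important items at the edges of the window, the rest in between.
    edges, middle = kept[:2], kept[2:]
    return edges[:1] + middle + edges[1:2]
```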
“Quit and start again with another model” can solve many of these problems. If Claude isn’t getting something right, you can go to Gemini or GPT, which will probably do a good job of understanding the code Claude has already written. They’re likely to make different mistakes, but you’ll be starting with a smaller, cleaner context. Many programmers describe bouncing back and forth between different models, and I’m not going to say that’s bad. It’s similar to asking different people for their perspectives on your problem.
But that can’t be the end of the story, can it? Despite the hype and the breathless pronouncements, we’re still experimenting and learning how to use generative coding. “Quit and start again” might be a solution for proof-of-concept projects or even single-use software (“voidware”) but hardly sounds like a solution for enterprise software, which, as we all know, has lifetimes measured in decades. We rarely program that way, and for the most part, we shouldn’t. It sounds too much like a recipe for repeatedly getting 75% of the way to a finished project only to start over, to find out that Gemini solves Claude’s problem but introduces its own. Drew has interesting suggestions for specific problems, such as using RAG to determine which MCP tools to use so the model won’t be confused by a large library of irrelevant tools. At a higher level, we need to think about what we really need to do to manage context. What tools do we need to understand what the model knows about any project? When we need to quit and start again, how do we save and restore the parts of the context that are important?
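Drew’s tool-selection suggestion gives a feel for what this kind of engineering looks like. Here’s a minimal sketch, assuming a plain dictionary of tool descriptions and a naive word-overlap score standing in for a real embedding search; the tool names and functions are hypothetical.

```python
# An illustrative sketch of RAG-style tool selection: pick a few tools whose
# descriptions best match the user's request, instead of exposing the whole
# tool library to the model. The word-overlap scoring is a naive stand-in
# for a real embedding search; tool names are hypothetical.

def score(query: str, description: str) -> int:
    """Crude relevance score: how many query words appear in the description."""
    return len(set(query.lower().split()) & set(description.lower().split()))

def select_tools(query: str, tools: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the top_k tools most relevant to the query."""
    ranked = sorted(tools, key=lambda name: score(query, tools[name]), reverse=True)
    return ranked[:top_k]

tools = {
    "run_tests": "run the project's test suite and report failures",
    "query_database": "run a read-only SQL query against the staging database",
    "create_ticket": "create an issue in the bug tracker",
    "deploy_service": "deploy a service to the staging environment",
}

# Only the selected subset, not the whole library, gets described to the model.
print(select_tools("run the test suite and report failures", tools))
```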
A few years ago, O’Reilly author Allen Downey suggested that in addition to a source code repo, we need a prompt repo to save and track prompts. We also need an output repo that saves and tracks the model’s output tokens: both its discussion of what it has done and any reasoning tokens that are available. And we need to track anything that’s added to the context, whether explicitly by the programmer (“here’s the spec”) or by an agent that’s querying everything from online documentation to in-house CI/CD tools and meeting transcripts. (We’re ignoring, for now, agents where context must be managed by the agent itself.)
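What might that tracking look like? Here’s a minimal sketch, assuming a JSON-lines log that lives next to the source repo; the record layout and file name are assumptions, not an existing tool or standard.

```python
# A sketch of a simple "prompt repo": every prompt, every model response, and
# every item added to the context gets appended to a JSON-lines log that can be
# versioned alongside the source code. The record layout is an assumption,
# not an existing tool or standard.

import json
import time
from pathlib import Path

LOG_PATH = Path("context_log.jsonl")   # hypothetical location, next to the code

def record(kind: str, content: str, source: str) -> None:
    """Append one context event: a prompt, a model output, or added material."""
    entry = {
        "timestamp": time.time(),
        "kind": kind,        # "prompt", "output", "reasoning", "context"
        "source": source,    # "programmer", "agent", a model name, a tool name...
        "content": content,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage:
record("context", "here's the spec: ...", source="programmer")
record("prompt", "Implement the parser described in the spec.", source="programmer")
record("output", "I added parser.py with a recursive descent parser.", source="claude")
```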
But that just describes what needs to be saved; it doesn’t tell you where the context should be saved or how to reason about it. Saving context in an AI provider’s cloud seems like a problem waiting to happen; what are the consequences of letting OpenAI, Anthropic, Microsoft, or Google keep a transcript of your thought processes or the contents of internal documents and specs? (In a short-lived experiment, ChatGPT chats were indexed and findable by Google searches.) And we’re still learning how to reason about context, which may well require another AI. Meta-AI? Frankly, that sounds like a cry for help. We know that context engineering is important. We don’t yet know how to engineer it, though we’re starting to get some hints. (Drew Breunig said that we’ve been doing context engineering for the past year, but we’ve only started to understand it.) It’s more than just cramming as much as possible into a large context window; that’s a recipe for failure. It will involve understanding how to find the parts of the context that aren’t working, and ways of retiring those ineffective parts. It will involve figuring out what information will be the most valuable and helpful to the AI. In turn, that may require better ways of observing a model’s internal logic, something Anthropic has been researching.
Whatever is required, it’s clear that context engineering is the next step. We don’t think it’s the last step in understanding how to use AI to assist software development. There are still problems like discovering and using organizational context, sharing context among team members, creating architectures that work at scale, designing user experiences, and much more. Martin Fowler’s observation that there’s been a change in the level of abstraction is likely to have huge consequences: benefits, surely, but also new problems that we don’t yet know how to think about. We’re still negotiating a route through uncharted territory. But we need to take the next step if we plan to get to the end of the road.
AI tools are quickly moving beyond chat UX to sophisticated agent interactions. Our upcoming AI Codecon event, Coding for the Future Agentic World, will highlight how developers are already using agents to build innovative and effective AI-powered experiences. We hope you’ll join us on September 9 to explore the tools, workflows, and architectures defining the next era of programming. It’s free to attend.

