The following is Part 3 of 3 from Addy Osmani's original post "Context Engineering: Bringing Engineering Discipline to Prompts." Part 1 can be found here and Part 2 here.
Context engineering is crucial, but it's only one component of a larger stack needed to build full-fledged LLM applications, alongside things like control flow, model orchestration, tool integration, and guardrails.
In Andrej Karpathy's words, context engineering is "one small piece of an emerging thick layer of non-trivial software" that powers real LLM apps. So while we've focused on how to craft good context, it's important to see where that fits in the overall architecture.
A production-grade LLM system typically has to address many concerns beyond just prompting. For example:
- Problem decomposition and control flow: Instead of treating a user query as one monolithic prompt, robust systems often break the problem down into subtasks or multistep workflows. For instance, an AI agent might first be prompted to outline a plan, then in subsequent steps be prompted to execute each step. Designing this flow (which prompts to call in what order; how to decide branching or looping) is a classic programming task, except the "functions" are LLM calls with context. Context engineering fits here by making sure each step's prompt has the information it needs, but the decision to have steps at all is a higher-level design. This is why you see frameworks where you essentially write a script that coordinates multiple LLM calls and tool uses.
- Model selection and routing: You might use different AI models for different jobs. Perhaps a lightweight model for simple tasks or preliminary answers, and a heavyweight model for final solutions. Or a code-specialized model for coding tasks versus a general model for conversational tasks. The system needs logic to route requests to the appropriate model. Each model might have different context length limits or formatting requirements, which the context engineering must account for (e.g., truncating context more aggressively for a smaller model). This aspect is more engineering than prompting: think of it as matching the tool to the job.
- Tool integrations and external actions: If your AI can perform actions (like calling an API, querying a database, opening a web page, or running code), your software needs to manage those capabilities. That includes providing the AI with a list of available tools and instructions on their usage, as well as actually executing those tool calls and capturing the results. As we discussed, the results then become new context for further model calls. Architecturally, this means your app often has a loop: prompt model → if model output indicates a tool to use → execute tool → incorporate result → prompt model again. Designing that loop reliably is a challenge.
- User interaction and UX flows: Many LLM applications involve the user in the loop. For example, a coding assistant might propose changes and then ask the user to confirm applying them. Or a writing assistant might offer a few draft options for the user to pick from. These UX decisions affect context too. If the user says "Option 2 looks good but shorten it," you need to carry that feedback into the next prompt (e.g., "The user chose draft 2 and asked to shorten it."). Designing a smooth human-AI interaction flow is part of the app, though not directly about prompts. Still, context engineering supports it by ensuring each turn's prompt accurately reflects the state of the interaction (like remembering which option was chosen or what the user edited manually).
- Guardrails and safety: In production, you have to consider misuse and errors. This might include content filters (to prevent toxic or sensitive outputs), authentication and permission checks for tools (so the AI doesn't, say, delete a database because it was in the instructions), and validation of outputs. Some setups use a second model or a set of rules to double-check the first model's output. For example, after the first model generates an answer, you might run another check: "Does this answer contain any sensitive information? If so, redact it." These checks themselves can be implemented as prompts or as code. In either case, they often add extra instructions into the context (a system message like "If the user asks for disallowed content, refuse" is part of many deployed prompts). So the context might always include some safety boilerplate. Balancing that (ensuring the model follows policy without compromising helpfulness) is yet another piece of the puzzle.
- Evaluation and monitoring: Suffice to say, you need to constantly monitor how the AI is performing. Logging every request and response (with user consent and privacy in mind) allows you to analyze failures and outliers. You might incorporate real-time evals, e.g., scoring the model's answers on certain criteria, and if the score is low, automatically having the model try again or routing to a human fallback. While evaluation isn't part of generating a single prompt's content, it feeds back into improving prompts and context strategies over time. Essentially, you treat the prompt and context assembly as something that can be debugged and optimized using data from production.
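The prompt → tool → result loop from the tool-integration point above can be sketched as follows. `call_model` is a stand-in for a real LLM API (it returns canned responses so the control flow is visible), and the tool registry is hypothetical:

```python
# Hypothetical tool registry: name -> callable. In a real app these would
# hit APIs or databases; here they are simple stand-ins.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def call_model(messages):
    """Stand-in for an LLM API call. A real model decides whether to emit
    a tool call; this stub requests the weather tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"answer": "It's sunny in Paris today."}

def run_agent(user_query, max_steps=5):
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):  # bound the loop so a confused model can't spin forever
        output = call_model(messages)
        if "tool" in output:
            result = TOOLS[output["tool"]](**output["args"])
            # The tool result becomes new context for the next model call.
            messages.append({"role": "tool", "content": result})
        else:
            return output["answer"]
    raise RuntimeError("agent exceeded max_steps")

print(run_agent("What's the weather in Paris?"))
```

The `max_steps` bound is the "designing that loop reliably" part: without it, a model that keeps requesting tools would loop forever.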
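Model routing, likewise, is mostly ordinary dispatch logic plus per-model context budgets. A minimal sketch, where the model names and token limits are illustrative and `len(text) // 4` is a deliberately crude token estimate:

```python
# Illustrative registry: each model gets a context budget (in tokens).
MODELS = {
    "small-fast": {"max_tokens": 4_000},
    "large-capable": {"max_tokens": 128_000},
}

def estimate_tokens(text):
    return len(text) // 4  # rough heuristic: ~4 characters per token

def route(task_type, context):
    # Simple tasks go to the cheap model; everything else to the big one.
    name = "small-fast" if task_type == "simple" else "large-capable"
    budget = MODELS[name]["max_tokens"]
    # Truncate context more aggressively for the smaller model's budget.
    if estimate_tokens(context) > budget:
        context = context[: budget * 4]
    return name, context

model, ctx = route("simple", "summarize this support ticket: ...")
# model is now "small-fast", with ctx trimmed to its budget if needed.
```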
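The second-pass safety check from the guardrails point can be plain code, a second model call, or both. Here is a minimal rule-based sketch of the "redact sensitive information" idea; the regex and the sample answer are invented for illustration:

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US social security numbers

def redact_sensitive(answer):
    """Post-process the first model's output before it reaches the user.
    A production system might also route the answer through a second model
    with a prompt like 'Does this contain sensitive information?'."""
    return SSN_RE.sub("[REDACTED]", answer)

print(redact_sensitive("The customer's SSN is 123-45-6789."))
```

The important design point is the choke point itself: every answer passes through one place where rules or a judge model can veto or rewrite it.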
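And the score-then-retry idea from the evaluation point can be sketched as a wrapper around the model call. `score_answer` is a trivial keyword check standing in for a real eval, and the stub generator simulates a model that fails once before citing a source:

```python
def score_answer(answer):
    """Stand-in eval: a real one might be a rubric prompt to a judge model
    or a set of programmatic checks. Here: does the answer cite a source?"""
    return 1.0 if "source:" in answer.lower() else 0.0

def answer_with_retry(question, generate, threshold=0.5, max_attempts=3):
    best = None
    for attempt in range(max_attempts):
        answer = generate(question, attempt)
        if score_answer(answer) >= threshold:
            return answer
        best = answer
    # Still low-scoring after all attempts: in production, route to a human fallback.
    return best

# Stub generator: fails on the first attempt, then produces a cited answer.
def generate(question, attempt):
    return "Paris." if attempt == 0 else "Paris. Source: CIA World Factbook."

print(answer_with_retry("What is the capital of France?", generate))
```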
We're really talking about a new kind of application architecture. It's one where the core logic involves managing information (context) and adapting it through a series of AI interactions, rather than just running deterministic functions. Karpathy listed elements like control flows, model dispatch, memory management, tool use, verification steps, etc., on top of context filling. All together, they form what he jokingly calls "an emerging thick layer" for AI apps, thick because it's doing a lot! When we build these systems, we're essentially writing metaprograms: programs that choreograph another "program" (the AI's output) to solve a task.
For us software engineers, this is both exciting and challenging. It's exciting because it opens up capabilities we didn't have, e.g., building an assistant that can handle natural language, code, and external actions seamlessly. It's challenging because many of the techniques are new and still in flux. We have to think about things like prompt versioning, AI reliability, and ethical output filtering, which weren't standard parts of app development before. In this context, context engineering lies at the heart of the system: If you can't get the right information into the model at the right time, nothing else will save your app. But as we've seen, even perfect context alone isn't enough; you need all the supporting structure around it.
The takeaway is that we're moving from prompt design to system design. Context engineering is a core part of that system design, but it lives alongside many other components.
Conclusion
Key takeaway: By mastering the assembly of full context (and coupling it with solid testing), we can improve the chances of getting the best output from AI models.
For experienced engineers, much of this paradigm is familiar at its core (it's about good software practices) but applied in a new domain. Think about it:
- We always knew garbage in, garbage out. Now that principle manifests as "bad context in, bad answer out." So we put more work into ensuring quality input (context) rather than hoping the model will figure it out.
- We value modularity and abstraction in code. Now we're effectively abstracting tasks to a high level (describe the task, give examples, let the AI implement it) and building modular pipelines of AI + tools. We're orchestrating components (some deterministic, some AI) rather than writing all the logic ourselves.
- We practice testing and iteration in traditional development. Now we're applying the same rigor to AI behaviors, writing evals and refining prompts as one would refine code after profiling.
In embracing context engineering, you're essentially saying, "I, the developer, am responsible for what the AI does." It's not a mysterious oracle; it's a component I need to configure and drive with the right data and rules.
This mindset shift is empowering. It means we don't have to treat the AI as unpredictable magic; we can tame it with solid engineering techniques (plus a bit of creative prompt artistry).
Practically, how can you adopt this context-centric approach in your work?
- Invest in data and knowledge pipelines. A big part of context engineering is having the data to inject. So build that vector search index of your documentation, or set up that database query your agent can use. Treat knowledge sources as core features in development. For example, if your AI assistant is for coding, make sure it can pull in code from the repo or reference the style guide. Much of the value you'll get from an AI comes from the external knowledge you supply to it.
- Develop prompt templates and libraries. Rather than writing ad hoc prompts, start creating structured templates for your needs. You might have a template for "answer with citation" or "generate code diff given error." These become like functions you reuse. Keep them in version control. Document their expected behavior. This is how you build up a toolkit of proven context setups. Over time, your team can share and iterate on these, just as they would on shared code libraries.
- Use tools and frameworks that give you control. Avoid "just give us a prompt, we do the rest" solutions if you need reliability. Opt for frameworks that let you peek under the hood and tweak things, whether that's a lower-level library like LangChain or a custom orchestration you build. The more visibility and control you have over context assembly, the easier debugging will be when something goes wrong.
- Monitor and instrument everything. In production, log the inputs and outputs (within privacy limits) so you can analyze them later. Use observability tools (like LangSmith, etc.) to trace how context was built for each request. When an output is bad, trace back and see what the model saw: Was something missing? Was something formatted poorly? This will guide your fixes. Essentially, treat your AI system as a somewhat unpredictable service that you need to monitor like any other, with dashboards for prompt usage, success rates, etc.
- Keep the user in the loop. Context engineering isn't just about machine-to-machine information; it's ultimately about solving a user's problem. Often, the user can provide context if asked the right way. Think about UX designs where the AI asks clarifying questions or where the user can provide extra details to refine the context (like attaching a file, or selecting which section of a codebase is relevant). The term "AI-assisted" goes both ways: AI assists the user, but the user can assist the AI by supplying context. A well-designed system facilitates that. For example, if an AI answer is wrong, let the user correct it and feed that correction back into the context for next time.
- Train your team (and yourself). Make context engineering a shared discipline. In code reviews, start reviewing prompts and context logic too. ("Is this retrieval grabbing the right docs? Is this prompt section clear and unambiguous?") If you're a tech lead, encourage team members to surface issues with AI outputs and brainstorm how tweaking the context might fix them. Knowledge sharing is key because the field is so new; a clever prompt trick or formatting insight one person discovers can likely benefit others. I've personally learned a ton just from reading others' prompt examples and postmortems of AI failures.
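The "build that vector search index" advice can start very small. Here is a toy retrieval sketch using bag-of-words cosine similarity in place of real embeddings; a production system would use an embedding model and a vector store, and the sample docs are invented:

```python
import math
from collections import Counter

DOCS = [  # stand-ins for chunks of your team's documentation
    "Style guide: functions use snake_case and must have docstrings.",
    "Deploys run through CI; never push directly to main.",
    "Database migrations live in the migrations directory.",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Return the k most similar doc chunks; these get prepended to the prompt."""
    q = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

print(retrieve("what naming style do functions use?"))
```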
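Prompt templates really can live in version control like any other code. A minimal sketch of the "answer with citation" template idea, with the template text and field names invented for illustration:

```python
# A versioned, documented prompt template, e.g. prompts/answer_with_citation.py
ANSWER_WITH_CITATION = """\
You are a support assistant. Answer the question using ONLY the context below.
Cite the document ID for every claim, e.g. [doc-1].

Context:
{context}

Question: {question}
"""

def build_prompt(question, context_chunks):
    """Assemble the final prompt. Expected behavior: every chunk appears
    with an ID the model can cite; a missing field fails fast with KeyError."""
    context = "\n".join(f"[doc-{i}] {c}" for i, c in enumerate(context_chunks, 1))
    return ANSWER_WITH_CITATION.format(context=context, question=question)

print(build_prompt("How do I deploy?", ["Deploys run through CI."]))
```

Because the template is a plain artifact, it can be diffed, reviewed, and rolled back just like code.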
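Instrumentation can likewise start as a thin wrapper around the model call that records exactly what the model saw. A sketch under the assumption of a simple in-memory log (tools like LangSmith provide this, plus tracing, out of the box):

```python
import time

LOG = []  # stand-in for a real log sink (file, database, observability tool)

def logged_call(model_fn, prompt, **metadata):
    """Call the model and record the prompt it actually saw, the response,
    latency, and any metadata (e.g. user ID, template version)."""
    start = time.time()
    response = model_fn(prompt)
    LOG.append({
        "prompt": prompt,
        "response": response,
        "latency_s": round(time.time() - start, 3),
        **metadata,
    })
    return response

# Stub model for demonstration.
response = logged_call(lambda p: "stub answer", "Hello?", template="greeting-v2")
print(response)
```

When an output is bad, the log entry tells you exactly what context was assembled for that request, which is where debugging starts.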
As we move forward, I expect context engineering to become second nature, much like writing an API call or a SQL query is today. It will be part of the standard repertoire of software development. Already, many of us don't think twice about doing a quick vector similarity search to grab context for a question; it's just part of the flow. In a few years, "Have you set up the context properly?" will be as common a code review question as "Have you handled that API response properly?"
In embracing this new paradigm, we don't abandon the old engineering principles; we reapply them in new ways. If you've spent years honing your software craft, that experience is incredibly valuable now: It's what allows you to design sensible flows, spot edge cases, and ensure correctness. AI hasn't made those skills obsolete; it's amplified their importance in guiding AI. The role of the software engineer is not diminishing; it's evolving. We're becoming directors and editors of AI, not just writers of code. And context engineering is the process by which we direct the AI effectively.
Start thinking in terms of what information you provide to the model, not just what question you ask. Experiment with it, iterate on it, and share your findings. By doing so, you'll not only get better results from today's AI but also be preparing yourself for the even more powerful AI systems on the horizon. Those who understand how to feed the AI will always have the advantage.
Happy context-coding!
I'm excited to share that I've written a new AI-assisted engineering book with O'Reilly. If you've enjoyed my writing here, you may be interested in checking it out.
AI tools are quickly moving beyond chat UX to sophisticated agent interactions. Our upcoming AI Codecon event, Coding for the Agentic World, will highlight how developers are already using agents to build innovative and effective AI-powered experiences. We hope you'll join us on September 9 to explore the tools, workflows, and architectures defining the next era of programming. It's free to attend. Register now to save your seat.

