A few years into the AI shift, the gap between engineers isn't expertise. It's coordination: shared norms and a shared language for how AI fits into everyday engineering work. Some teams are already getting real value. They've moved past one-off experiments and started building repeatable ways of working with AI. Others haven't, even when the motivation is there. The reason is often simple: The cost of orientation has exploded. The landscape is saturated with tools and advice, and it's hard to know what matters, where to start, and what "good" looks like when you care about production realities.
The missing map
What's missing is a shared reference model. Not another tool. A map. Which engineering activities can AI responsibly support? What does quality mean for those outputs? What changes when part of the workflow becomes probabilistic? And what guardrails keep integration safe, observable, and accountable? Without that map, it's easy to drown in novelty, and easy to confuse widespread experimentation with reliable integration. Teams with the least time, budget, and local support pay the highest price, and the gap compounds.
That gap is now visible at the organizational level. More organizations are trying to turn AI into business value, and the difference between hype and integration is showing up in practice. It's easy to ship impressive demos. It's much harder to make AI-assisted work reliable under real-world constraints: measurable quality, controllable failure modes, clear data boundaries, operational ownership, and predictable cost and latency. This is where engineering discipline matters most. AI doesn't remove the need for it; it amplifies the cost of missing it. The question is how we move from scattered experimentation to integrated practice without burning cycles on tool churn. To do that at scale, we need shared scaffolding: a public model and shared language for what "good" looks like in AI-native engineering.
We've seen why this kind of shared scaffolding matters before. In the early internet era, promise and noise moved faster than standards and shared practice. What made the internet durable was not a single vendor or methodology but a cultural infrastructure: open knowledge sharing, global collaboration, and a shared language that made practices comparable and teachable. AI-native engineering needs the same kind of cultural infrastructure, because integration only scales when the industry can coordinate on what "good" means. AI doesn't remove the need for careful engineering. On the contrary, it punishes the absence of it.
A public scaffold for AI-native engineering
Within the second half of 2025, I started to note rising unease amongst engineers I labored with and associates in IT. There was a transparent sense that AI would change our work in profound methods, however far much less readability on what that really meant for an individual’s function, abilities, and day by day apply. There was no scarcity of trainings, guides, blogs, or instruments, however the extra sources appeared, the tougher it turned to guage what was related, what was helpful, and the place to start. It felt overwhelming. How are you aware which matters actually matter to you when abruptly every part is labeled AI? How do you progress from hype to helpful integration?
I used to be feeling a lot of that very same uncertainty myself. I used to be attempting to make sense of the shift too, and for some time I feel I used to be ready for a clearer construction to emerge from elsewhere. It was solely when associates began reaching out to me for assist and steering that I spotted I may need one thing significant to contribute. I don’t take into account myself an AI professional. I’m discovering my means by these adjustments similar to many different engineers. However over time, I had turn out to be recognized for my work in IT workforce improvement, talent and functionality frameworks, and engineering excellence and enablement. I understand how to assist folks navigate complexity in a sensible and sustainable means, and I get pleasure from bringing readability to chaos.
That’s what led me to begin engaged on the AI Flower as a pastime challenge in early October 2025, constructing on frameworks and strategies I already had expertise with.
Once I started sharing it with associates in IT to collect suggestions, I noticed how a lot it resonated. It helped them make sense of the complexity round AI, suppose extra clearly about their very own upskilling, and start shaping AI adoption methods of their very own. That’s once I realized this informal experiment held actual worth, and determined I needed to publish it so it might assist empower different engineers and IT organizations in the identical means it had helped my associates.
With the AI Flower, I’m providing a public scaffold for AI-native engineering work: a shared reference mannequin that helps engineers, groups, and organizations undertake and combine AI sustainably and reliably. It’s meant to steer and set up the dialog round AI-assisted engineering, and to ask focused suggestions on what breaks, what’s lacking, and what “good” ought to imply in actual manufacturing contexts. It’s not meant to be excellent. It’s meant to be helpful, freely obtainable, open to contribution, and formed by the strongest useful resource our trade has: collective intelligence.
Open information sharing and collaboration can’t be elective. If AI is turning into a part of how we design, construct, function, safe, and govern methods, we’d like greater than instruments and enthusiasm. Many people work on methods folks depend on every single day. When these methods fail, the impression is actual. That’s why we owe it to the individuals who depend upon these methods to do that with care, and why we received’t get there in isolation. We’d like the trade, globally, to converge on shared requirements for reliable apply.
Concerning the AI Flower
The AI Flower maps the core actions that make up engineering work throughout the principle engineering disciplines. For every exercise, it defines what attractiveness like, primarily based on practices that ought to already really feel acquainted to engineers. It then helps folks discover how AI can help these actions in apply, offering steering on how one can start utilizing AI in that work, sharing hyperlinks to helpful studying sources, and outlining the principle dangers, trade-offs, and mitigations.
However the AI panorama is altering shortly. This activity-based method helps engineers perceive how AI can help core engineering duties, the place dangers could come up, and how one can begin constructing sensible expertise. However by itself, it isn’t sufficient as a long-term mannequin for AI adoption.
As AI capabilities evolve, many engineering actions will turn out to be extra abstracted, extra automated, or absorbed into the infrastructure layer. Which means engineers might want to do greater than discover ways to use AI inside at this time’s actions. They can even must work with rising approaches similar to context engineering and agentic workflows, that are already reshaping what we take into account core engineering work. An idea I name the Ability Fossilization Mannequin captures that development. It reveals how each engineering abilities and AI-related abilities evolve over time, and the way a few of them turn out to be much less seen as work strikes to the next degree of abstraction. Collectively, the AI Flower and the Ability Fossilization Mannequin are supposed to assist engineers keep adaptable as the sphere continues to shift.
The principle goal of the AI Flower is to assist engineers discover their means by these fast adjustments and develop with them. Whereas I present content material for every part and exercise, the actual worth lies within the framework and construction itself. To turn out to be actually precious, it’s going to want the perception, care, and contribution of engineers throughout disciplines, views, and areas.
I genuinely consider the AI Flower, as an open and freely obtainable framework, can function a scaffold for that work. That is my contribution to a altering trade. However it’s going to solely be helpful—it’s going to solely “bloom”—if the group assessments it, challenges it, and improves it over time.
And if any trade can flip open critique and contribution into shared requirements at a world scale, it’s ours, isn’t it?
Be a part of me at AI Codecon to be taught extra
If the AI Flower resonates and also you need the complete walkthrough, I’ll be presenting it at O’Reilly’s upcoming AI Codecon. (Registration is free and open to all.)
