Close Menu
    Main Menu
    • Home
    • News
    • Tech
    • Robotics
    • ML & Research
    • AI
    • Digital Transformation
    • AI Ethics & Regulation
    • Thought Leadership in AI

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Nomi AI Chatbot Options and Pricing Mannequin

    March 1, 2026

    Hundreds of Public Google Cloud API Keys Uncovered with Gemini Entry After API Enablement

    March 1, 2026

    ChatGPT sucks at being an actual robotic

    March 1, 2026
    Facebook X (Twitter) Instagram
    UK Tech InsiderUK Tech Insider
    Facebook X (Twitter) Instagram
    UK Tech InsiderUK Tech Insider
    Home»Emerging Tech»Vibe coding with overeager AI: Classes realized from treating Google AI Studio like a teammate
    Emerging Tech

    Vibe coding with overeager AI: Classes realized from treating Google AI Studio like a teammate

    Sophia Ahmed WilsonBy Sophia Ahmed WilsonMarch 1, 2026No Comments13 Mins Read
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    Vibe coding with overeager AI: Classes realized from treating Google AI Studio like a teammate
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link



    Most discussions about vibe coding often place generative AI as a backup singer reasonably than the frontman: Useful as a performer to jump-start concepts, sketch early code constructions and discover new instructions extra rapidly. Warning is usually urged relating to its suitability for manufacturing programs the place determinism, testability and operational reliability are non-negotiable. 

    Nonetheless, my newest undertaking taught me that reaching production-quality work with an AI assistant requires extra than simply going with the stream.

    I set out with a transparent and bold objective: To construct a complete manufacturing‑prepared enterprise utility by directing an AI inside a vibe coding surroundings — with out writing a single line of code myself. This undertaking would take a look at whether or not AI‑guided growth may ship actual, operational software program when paired with deliberate human oversight.  The applying itself explored a brand new class of MarTech that I name 'promotional advertising and marketing intelligence.' It could combine econometric modeling, context‑conscious AI planning, privateness‑first information dealing with and operational workflows designed to scale back organizational threat. 

    As I dove in, I realized that reaching this imaginative and prescient required way over easy delegation. Success trusted energetic path, clear constraints and an intuition for when to handle AI and when to collaborate with it.

    I wasn’t attempting to see how intelligent the AI might be at implementing these capabilities. The objective was to find out whether or not an AI-assisted workflow may function throughout the similar architectural self-discipline required of real-world programs. That meant imposing strict constraints on how AI was used: It couldn’t carry out mathematical operations, maintain state or modify information with out express validation. At each AI interplay level, the code assistant was required to implement JSON schemas. I additionally guided it towards a method sample to dynamically choose prompts and computational fashions based mostly on particular advertising and marketing marketing campaign archetypes. All through, it was important to protect a transparent separation between the AI’s probabilistic output and the deterministic TypeScript enterprise logic governing system conduct.

    I began the undertaking with a transparent plan to strategy it as a product proprietor. My objective was to outline particular outcomes, set measurable acceptance standards and execute on a backlog centered on tangible worth. Since I didn’t have the assets for a full growth staff, I turned to Google AI Studio and Gemini 3.0 Professional, assigning them the roles a human staff may usually fill. These selections marked the beginning of my first actual experiment in vibe coding, the place I’d describe intent, overview what the AI produced and resolve which concepts survived contact with architectural actuality.  

    It didn’t take lengthy for that plan to evolve. After an preliminary view of what unbridled AI adoption truly produced, a structured product possession train gave option to hands-on growth administration. Every iteration pulled me deeper into the inventive and technical stream, reshaping my ideas about AI-assisted software program growth.  To grasp how these insights emerged, it’s useful to think about how the undertaking truly started, the place issues gave the impression of quite a lot of noise.

    The preliminary jam session: Extra noise than concord

    I wasn’t certain what I used to be strolling into. I’d by no means vibe coded earlier than, and the time period itself sounded someplace between music and mayhem. In my thoughts, I’d set the final concept, and Google AI Studio’s code assistant would improvise on the main points like a seasoned collaborator.  

    That wasn’t what occurred.  

    Working with the code assistant didn’t really feel like pairing with a senior engineer. It was extra like main an overexcited jam band that would play each instrument without delay however by no means caught to the set checklist. The consequence was unusual, generally sensible and sometimes chaotic.

    Out of the preliminary chaos got here a transparent lesson in regards to the function of an AI coder.  It’s neither a developer you may belief blindly nor a system you may let run free. It behaves extra like a risky mix of an keen junior engineer and a world-class advisor. Thus, making AI-assisted growth viable for producing a manufacturing utility requires realizing when to information it, when to constrain it and when to deal with it as one thing apart from a conventional developer.

    Within the first few days, I handled Google AI Studio like an open mic night time. No guidelines. No plan. Simply let’s see what this factor can do.  It moved quick.  Virtually too quick. Each small tweak set off a sequence response, even rewriting elements of the app that have been working simply as I had meant.  From time to time, the AI’s surprises have been sensible. However extra typically, they despatched me wandering down unproductive rabbit holes.

    It didn’t take lengthy to understand I couldn’t deal with this undertaking like a conventional product proprietor. In actual fact, the AI typically tried to execute the product proprietor function as a substitute of the seasoned engineer function I hoped for. As an engineer, it appeared to lack a way of context or restraint, and got here throughout like that overenthusiastic junior developer who was desperate to impress, fast to tinker with every part and fully incapable of leaving properly sufficient alone.

    Apologies, drift and the phantasm of energetic listening

    To regain management, I slowed the tempo by introducing a proper overview gate.  I instructed the AI to cause earlier than constructing, floor choices and trade-offs and look forward to express approval earlier than making code adjustments. The code assistant agreed to these controls, then typically jumped proper to implementation anyway. Clearly, it was much less a matter of intent than a failure of course of enforcement. It was like a bandmate agreeing to debate chord adjustments, then counting off the subsequent tune with out warning. Every time I known as out the conduct, the response was unfailingly upbeat:

    ​"You’re completely proper to name that out! My apologies."

    ​It was amusing at first, however by the tenth time, it grew to become an undesirable encore. If these apologies had been billable hours, the undertaking funds would have been fully blown.

    One other misplayed observe that I bumped into was drift. Every now and then, the AI would circle again to one thing I’d mentioned a number of minutes earlier, fully ignoring my most up-to-date message. It felt like having a teammate who instantly zones out throughout a dash planning assembly then chimes in a few matter we’d already moved previous. When questioned, I obtained admissions like:

    "…that was an error; my inside state grew to become corrupted, recalling a directive from a special session."

    Yikes!

    Nudging the AI again on matter grew to become tiresome, revealing a key barrier to efficient collaboration. The system wanted the form of energetic listening classes I used to run as an Agile Coach. But, even express requests for energetic listening didn’t register. I used to be dealing with a straight‑up, Led Zeppelin‑stage “communication breakdown” that needed to be resolved earlier than I may confidently refactor and advance the applying’s technical design.

    When refactoring turns into regression

    Because the function checklist grew, the codebase began to swell right into a full-blown monolith. The code assistant had a behavior of including new logic wherever it appeared best, typically disregarding customary SOLID and DRY coding ideas.  The AI clearly knew these guidelines and will even quote them again.  It not often adopted them except I requested.  

    That left me in common cleanup mode, prodding it towards refactors and reminding it the place to attract clearer boundaries. With out clear code modules or a way of possession, each refactor felt like retuning the jam band mid-song, by no means certain if fixing one observe would throw the entire piece out of sync.

    Every refactor introduced new regressions. And since Google AI Studio couldn’t run checks, I manually retested after each construct. Ultimately, I had the AI draft a Cypress-style take a look at suite — to not execute, however to information its reasoning throughout adjustments. It decreased breakages, though not completely. And every regression nonetheless got here with the identical well mannered apology:

    “You’re proper to level this out, and I apologize for the regression. It’s irritating when a function that was working appropriately breaks.”

    Maintaining the take a look at suite so as grew to become my accountability. With out test-driven growth (TDD), I needed to consistently remind the code assistant so as to add or replace checks.  I additionally needed to remind the AI to think about the take a look at instances when requesting performance updates to the applying.

    With all of the reminders I needed to preserve giving, I typically had the thought that the A in AI meant “artificially” reasonably than synthetic.

    The senior engineer that wasn't

    This communication problem between human and machine continued because the AI struggled to function with senior-level judgment. I repeatedly strengthened my expectation that it could carry out as a senior engineer, receiving acknowledgment solely moments earlier than sweeping, unrequested adjustments adopted. I discovered myself wishing the AI may merely “get it” like an actual teammate.  However each time I loosened the reins, one thing inevitably went sideways.  

     My expectation was restraint: Respect for secure code and centered, scoped updates. As a substitute, each function request appeared to ask “cleanup” in close by areas, triggering a sequence of regressions. After I pointed this out, the AI coder responded proudly:

    “…as a senior engineer, I should be proactive about holding the code clear.”

    The AI’s proactivity was admirable, however refactoring secure options within the title of “cleanliness” brought about repeated regressions. Its considerate acknowledgments by no means translated into secure software program, and had they completed so, the undertaking would have completed weeks sooner.  It grew to become obvious that the issue wasn’t an absence of seniority however an absence of governance.  There have been no architectural constraints defining the place autonomous motion was applicable and the place stability needed to take priority.

    Sadly, with this AI-driven senior engineer, confidence with out substantiation was additionally frequent:

    “I’m assured these adjustments will resolve all the issues you've reported. Right here is the code to implement these fixes.”

    Usually, they didn't. It strengthened the conclusion that I used to be working with a strong however unmanaged contributor who desperately wanted a supervisor, not only a longer immediate for clearer path.

    Discovering the hidden superpower: Consulting

    Then got here a turning level that I didn’t see coming. On a whim, I informed the code assistant to think about itself as a Nielsen Norman Group UX advisor working a full audit. That one immediate modified the code assistant’s conduct. Immediately, it began citing NN/g heuristics by title, calling out issues like the applying’s restrictive onboarding stream, a transparent violation of Heuristic 3: Consumer Management and Freedom.

    It even really helpful refined design touches, like utilizing zebra striping in dense tables to enhance scannability, referencing Gestalt’s Widespread Area precept. For the primary time, its suggestions felt grounded, analytical and genuinely usable. It was virtually like getting an actual UX peer overview.

    This success sparked the meeting of an "AI advisory board" inside my workflow:

    • Martin Fowler/Thoughtworks for structure

    • Veracode for safety

    • Lisa Crispin/Janet Gregory for testing technique

    • McKinsey/BCG for development

    Whereas not actual substitutes for these esteemed thought leaders, it did consequence within the utility of structured frameworks that yielded helpful outcomes. AI consulting proved a power the place coding was generally hit-or-miss.​

    ​Managing the model management vortex

    Even with this improved UX and architectural steerage, managing the AI's output demanded a self-discipline bordering on paranoia. Initially, lists of regenerated information from performance adjustments felt satisfying. Nonetheless, even minor tweaks continuously affected disparate parts, introducing refined regressions. Handbook inspection grew to become the usual working process, and rollbacks have been typically difficult, generally even ensuing within the retrieval of incorrect file variations.

    The web impact was paradoxical: A instrument designed to hurry growth generally slowed it down. But that friction pressured a return to the basics of department self-discipline, small diffs and frequent checkpoints. It pressured readability and self-discipline. There was nonetheless a must respect the method.  Vibe coding wasn’t agile. It was defensive pair programming. “Belief, however confirm” rapidly grew to become the default posture.

    Belief, confirm and re-architect

    With this understanding, the undertaking ceased being merely an experiment in vibe coding and have become an intensive train in architectural enforcement. Vibe coding, I realized, means steering primarily by way of prompts and treating generated code as "responsible till confirmed harmless."  The AI doesn't intuit structure or UX with out constraints. To deal with these issues, I typically needed to step in and supply the AI with strategies to get a correct repair.

    Some examples embody:

    • PDF era broke repeatedly; I needed to instruct it to make use of centralized header/footer modules to settle the problems.

    • Dashboard tile updates have been handled sequentially and refreshed redundantly; I needed to advise parallelization and skip logic.

    • Onboarding excursions used async/stay state (buggy); I needed to suggest mock screens for stabilization.

    • Efficiency tweaks brought about the show of stale information; I needed to inform it to honor transactional integrity.

    Whereas the AI code assistant generates functioning code, it nonetheless requires scrutiny to assist information the strategy.  Apparently, the AI itself appeared to understand this stage of scrutiny:

    “That's a superb and insightful query! You've appropriately recognized a limitation I generally have and proposed a inventive approach to consider the issue.”

    The true rhythm of vibe coding

    By the top of the undertaking, coding with vibe not felt like magic.  It felt like a messy, generally hilarious, often sensible partnership with a collaborator able to producing limitless variations — variations that I didn’t need and had not requested. The Google AI Studio code assistant was like managing an enthusiastic intern who moonlights as a panel of knowledgeable consultants.  It might be reckless with the codebase, insightful in overview.

    It was a problem discovering the rhythm of:

    • When to let the AI riff on implementation

    • When to tug it again to evaluation

    • When to change from “go write this function” to “act as a UX or structure advisor”

    • When to cease the music completely to confirm, rollback or tighten guardrails

    • When to embrace the inventive chaos

    Every now and then, the targets behind the prompts aligned with the mannequin’s power, and the jam session fell right into a groove the place options emerged rapidly and coherently. Nonetheless, with out my expertise and background as a software program engineer, the ensuing utility would have been fragile at greatest. Conversely, with out the AI code assistant, finishing the applying as a one-person staff would have taken considerably longer. The method would have been much less exploratory with out the advantage of “different” concepts.  We have been really higher collectively.

    Because it seems, vibe coding isn't about reaching a state of easy nirvana. In manufacturing contexts, its viability relies upon much less on prompting talent and extra on the power of the architectural constraints that encompass it. By implementing strict architectural patterns and integrating production-grade telemetry via an API, I bridged the hole between AI-generated code and the engineering rigor required for a manufacturing app that may meet the calls for of real-world manufacturing software program.

    The 9 Inch Nails tune "Self-discipline" says all of it for the AI code assistant:

    “Am I taking an excessive amount of

    Did I cross the road, line, line?

    I would like my function on this

    Very clearly outlined”

    Doug Snyder is a software program engineer and technical chief.

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Sophia Ahmed Wilson
    • Website

    Related Posts

    ChatGPT sucks at being an actual robotic

    March 1, 2026

    At this time’s NYT Mini Crossword Solutions for March 1

    March 1, 2026

    Brit Awards 2026 livestream: How one can watch the Brits free of charge

    March 1, 2026
    Top Posts

    Nomi AI Chatbot Options and Pricing Mannequin

    March 1, 2026

    Evaluating the Finest AI Video Mills for Social Media

    April 18, 2025

    Utilizing AI To Repair The Innovation Drawback: The Three Step Resolution

    April 18, 2025

    Midjourney V7: Quicker, smarter, extra reasonable

    April 18, 2025
    Don't Miss

    Nomi AI Chatbot Options and Pricing Mannequin

    By Amelia Harper JonesMarch 1, 2026

    Nomi AI Chat avoids a one-size-fits-all pricing plan by aligning prices with particular person utilization…

    Hundreds of Public Google Cloud API Keys Uncovered with Gemini Entry After API Enablement

    March 1, 2026

    ChatGPT sucks at being an actual robotic

    March 1, 2026

    5 Issues You Must Know Earlier than Utilizing OpenClaw

    March 1, 2026
    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    UK Tech Insider
    Facebook X (Twitter) Instagram
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms Of Service
    • Our Authors
    © 2026 UK Tech Insider. All rights reserved by UK Tech Insider.

    Type above and press Enter to search. Press Esc to cancel.