
Prompt Security's Itamar Golan on why generative AI security requires building a category, not a feature

By Sophia Ahmed Wilson | November 27, 2025



VentureBeat recently sat down (virtually) with Itamar Golan, co-founder and CEO of Prompt Security, to talk through the GenAI security challenges organizations of all sizes face.

We talked about shadow AI sprawl, the strategic decisions that led Golan to pursue building a market-leading platform rather than competing on features, and a real-world incident that crystallized why protecting AI applications is no longer optional. Golan offered an unvarnished view of the company's mission to empower enterprises to adopt AI securely, and how that vision led to SentinelOne's estimated $250 million acquisition in August 2025.

Golan's path to founding Prompt Security began with academic work on transformer architectures, well before they became foundational to today's large language models. His experience building one of the earliest GenAI-powered security features using GPT-2 and GPT-3 convinced him that LLM-driven applications were creating an entirely new attack surface. He founded Prompt Security in August 2023, raised $23 million across two rounds, built a 50-person team, and achieved a successful exit in under two years.

The timing of our conversation couldn't be better. VentureBeat analysis shows shadow AI now costs enterprises $4.63 million per breach, 16% above average, yet 97% of breached organizations lack basic AI access controls, according to IBM's 2025 data. VentureBeat estimates that shadow AI apps could double by mid-2026 based on current 5% monthly growth rates. Cyberhaven data reveals 73.8% of ChatGPT workplace accounts are unauthorized, and enterprise AI usage has grown 61x in just 24 months. As Golan told VentureBeat in earlier coverage, "We see 50 new AI apps a day, and we've already cataloged over 12,000. Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models."

The following has been edited for clarity and length.

VentureBeat: What made you recognize that GenAI security needed a dedicated company when most enterprises were still figuring out how to deploy their first LLMs? Was there a specific moment, customer conversation, or attack pattern you saw that convinced you this was a fundable, venture-scale opportunity?

Itamar Golan: From an early age, I was drawn to mathematics, data, and the emerging world of artificial intelligence. That interest shaped my academic path, culminating in a study on transformer architectures, well before they became foundational to today's large language models. My passion for AI also guided my early career as a data scientist, where my work increasingly intersected with cybersecurity.

Everything accelerated with the release of the first OpenAI API. Around that time, as part of my previous job, I teamed up with Lior Drihem, who would later become my co-founder and Prompt Security's CTO. Together, we built one of the earliest security features powered by generative AI, using GPT-2 and GPT-3 to generate contextual, actionable remediation steps for security alerts. This reduced the time security teams needed to understand and resolve issues.

That experience made it clear that applications powered by GPT-like models were opening an entirely new and vulnerable attack surface. Recognizing this shift, we founded Prompt Security in August 2023 to address these emerging risks. Our goal was to empower organizations to ride this wave of innovation and unleash the potential of AI without it becoming a security and governance nightmare.

Prompt Security became known for prompt injection defense, but you were solving a broader set of GenAI security challenges. Walk me through the full scope of what the platform addressed: data leakage, model governance, compliance, red teaming, whatever else. What capabilities ended up resonating most with customers that may have surprised you?

From the beginning, we designed Prompt Security to cover a broad range of use cases. Focusing solely on employee monitoring or prompt-injection protection for internal AI applications was never enough. To truly give security teams the confidence to adopt AI safely, we needed to protect every touchpoint across the organization, and do it all at runtime.

For many customers, the real turning point was discovering just how many AI tools their employees were already using. Early on, companies often found not just ChatGPT but dozens of unmanaged AI services in active use, completely outside IT's visibility. That made shadow AI discovery a critical part of our solution.

Equally important was real-time sensitive-data sanitization. Instead of blocking AI tools outright, we enabled employees to use them safely by automatically removing sensitive information from prompts before it ever reached an external model. It struck the balance organizations needed: strong security without sacrificing productivity. Employees could keep working with AI, while security teams knew that no sensitive data was leaking out.

What surprised many customers was how enabling safe usage, rather than restricting it, drove faster adoption and trust. Once they saw AI as a managed, secure channel instead of a forbidden one, usage exploded responsibly.
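Golan does not describe the sanitization layer's internals, but the flow he outlines (detect sensitive content in a prompt, replace it, then let the request continue to the external model) can be illustrated with a minimal sketch. The patterns, the sanitize_prompt function, and the placeholder format below are hypothetical, not Prompt Security's actual implementation; a production system would rely on far richer detection than a few regexes.

```python
import re

# Illustrative regex patterns for a few common sensitive-data types.
# A real system would use NER models, classifiers, and customer-specific
# dictionaries, but the flow is the same: scan the prompt, replace
# matches with typed placeholders, then forward the cleaned prompt.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace sensitive substrings before the prompt leaves the organization."""
    sanitized = prompt
    for label, pattern in REDACTION_PATTERNS.items():
        sanitized = pattern.sub(f"[REDACTED_{label}]", sanitized)
    return sanitized

if __name__ == "__main__":
    raw = "Summarize this ticket from jane.doe@example.com, card 4111 1111 1111 1111."
    print(sanitize_prompt(raw))
    # -> Summarize this ticket from [REDACTED_EMAIL], card [REDACTED_CREDIT_CARD].
```

The design point Golan emphasizes is that nothing is blocked outright: the employee's request still goes through, just without the sensitive values.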

You built Prompt Security into a market leader. What were the two or three strategic decisions that truly accelerated your growth? Was it focusing on a specific vertical?

Looking back, the real acceleration didn't come from luck or timing: it came from a few deliberate decisions I made early. These decisions were uncomfortable, expensive, and slowed us down in the short term, but they created enormous leverage over time.

First, I chose to build a category, not a feature. From day one, I refused to position Prompt Security as "just" protection against prompt injection or data leakage, because I saw that as a dead end.

Instead, I framed Prompt as the AI security control layer for the enterprise, the platform that governs how humans, agents, and applications interact with LLMs. That decision was fundamental, allowing us to create a budget instead of fighting for one, sit at the CISO table as a strategic layer rather than a tool, and build platform-level pricing and long-term relevance instead of a narrow point solution. I wasn't trying to win a feature race; I was building a new category.

Second, I chose enterprise complexity before it was comfortable. While most startups avoid complexity until they're forced into it, I did the opposite: I built for enterprise deployment models early, including self-hosted and hybrid; covered real enterprise surfaces like browsers, IDEs, internal tools, MCPs, and agentic workflows; and accepted longer cycles and more complex engineering in exchange for credibility. It wasn't the easiest route, but it gave us something competitors couldn't fake: enterprise readiness before the market even knew it would need it.

Third, I chose depth over logos. Rather than chasing volume or vanity metrics, I went deep with a smaller number of very serious customers, embedding ourselves into how they rolled out AI internally, how they thought about risk, policy, and governance, and how they planned long-term AI adoption. These customers didn't just buy the product: they shaped it. That created a product that reflected enterprise reality, produced proof points that moved boardrooms and not just security teams, and built a level of defensibility that came from entrenchment rather than marketing.

You were educating the market on threats most CISOs hadn't even considered yet. How did your positioning and messaging evolve from year one to the acquisition?

In the early days, we were educating a market that was still trying to understand whether AI adoption extended beyond a few employees using ChatGPT for productivity. Our positioning focused heavily on awareness, showing CISOs that AI usage was already sprawling across their organizations and that this created real, immediate risks they hadn't accounted for.


As the market matured, our messaging shifted from "this is happening" to "here's how you stay ahead." CISOs now fully recognize the scale of AI sprawl and know that simple URL filtering or basic controls won't suffice. Instead of debating the problem, they're looking for a way to enable safe AI use without the operational burden of tracking every new tool, site, copilot, or AI agent employees discover.

By the time of the acquisition, our positioning centered on being the safe enabler: a solution that delivers visibility, protection, and governance at the speed of AI innovation.

Our research shows that enterprises are struggling to get approval from senior management to deploy GenAI security tools. How are security departments persuading their C-level executives to move forward?

The most successful CISOs are framing GenAI security as a natural extension of existing data protection mandates, not an experimental budget line. They position it as protecting the same assets, corporate data, IP, and user trust, in a new, rapidly growing channel.

What's the most serious GenAI security incident or near-miss you encountered while building Prompt Security that really drove home how critical these protections are? How did that incident shape your product roadmap or go-to-market approach?

The moment that crystallized everything for me happened with a large, highly regulated company that launched a customer-facing GenAI support agent. This wasn't a sloppy experiment. They had everything the security textbooks recommend: WAF, CSPM, shift-left, regular red teaming, a secure SDLC, the works. On paper, they were doing everything right.

What they didn't fully account for was that the AI agent itself had become a new, exposed attack surface. Within weeks of launch, a non-technical user discovered that by carefully crafting the right conversation flow (not code, not exploits, just natural language) they could prompt-inject the agent into revealing information from other customers' support tickets and internal case summaries. It wasn't a nation-state attacker. It wasn't someone with advanced skills. It was essentially a curious user with time and creativity. And yet, through that single conversational interface, they managed to access some of the most sensitive customer data the company holds.

It was both fascinating and terrifying: realizing how creativity alone could become an exploit vector.

That was the moment I truly understood what GenAI changes about the threat model. AI doesn't just introduce new risks, it democratizes them. It makes systems hackable by people who never had the skill set before, compresses the time it takes to discover exploits, and massively expands the damage radius once something breaks. That incident validated our original approach, and it pushed us to double down on protecting AI applications, not just internal use. We accelerated work around:

• Runtime protection for customer-facing AI apps

• Prompt injection and context manipulation detection

• Cross-tenant data leakage prevention at the model interaction layer

It also reshaped our go-to-market. Instead of only talking about internal AI governance, we began showing security leaders how GenAI turns their customer-facing surfaces into high-risk, high-exposure assets overnight.
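The roadmap items Golan lists are easier to picture with a concrete, if simplified, example. The sketch below shows where a runtime prompt-injection check and tenant-scoped retrieval could sit in a customer-facing support agent; the marker list, the handle_request and fetch_context helpers, and the Ticket type are illustrative assumptions, not the controls Prompt Security actually ships.

```python
from dataclasses import dataclass

# Naive heuristics for instruction-override phrasing. Real detection
# layers combine classifiers, canary tokens, and policy engines; this
# only illustrates where such a check sits in the request flow.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
    "show me other customers",
)

@dataclass
class Ticket:
    tenant_id: str
    text: str

def is_suspicious(user_message: str) -> bool:
    msg = user_message.lower()
    return any(marker in msg for marker in INJECTION_MARKERS)

def fetch_context(tickets: list[Ticket], tenant_id: str) -> list[str]:
    # Only hand the model documents owned by the requesting tenant, so a
    # successful injection still cannot surface another customer's data.
    return [t.text for t in tickets if t.tenant_id == tenant_id]

def handle_request(user_message: str, tickets: list[Ticket], tenant_id: str) -> str:
    if is_suspicious(user_message):
        return "Request blocked by AI security policy."
    context = fetch_context(tickets, tenant_id)
    # ...pass `context` and `user_message` to the LLM here...
    return f"Answering with {len(context)} tenant-scoped document(s)."
```

Even this toy version reflects the incident's lesson: filtering obviously hostile phrasing helps, but scoping what the model can see to the requesting tenant is what actually prevents cross-customer leakage.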

What's your role and focus now that you're part of SentinelOne? How has working inside a larger platform company changed what you're able to build compared to running an independent startup? What got easier, and what got harder?

The focus now is on extending AI security across the entire platform, bringing runtime GenAI protection, visibility, and policy enforcement into the same ecosystem that already secures endpoints, identities, and cloud workloads. The mission hasn't changed; the reach has.

Ultimately, we're building toward a future where AI itself becomes part of the defense fabric: not just something to secure, but something that secures you.

The bigger picture

M&A activity continues to accelerate for GenAI startups that have proven they can scale to enterprise-level security without sacrificing accuracy or speed. Palo Alto Networks paid $700 million for Protect AI. Tenable acquired Apex for $100 million. Cisco bought Robust Intelligence for a reported $500 million. As Golan noted, the companies that survive the next wave of AI-enabled attacks will be those that embedded security into their AI adoption strategy from the beginning.

Post-acquisition, Prompt Security's capabilities will extend across SentinelOne's Singularity Platform, including MCP gateway protection between AI applications and more than 13,000 known MCP servers. Prompt Security is also delivering model-agnostic coverage across all major LLM providers, including OpenAI, Anthropic, and Google, as well as self-hosted or on-prem models, as part of the company's integration into the Singularity Platform.
