OpenAI is rolling out GPT-4.1, its new non-reasoning large language model (LLM) that balances high performance with lower cost, to users of ChatGPT. The company is starting with its paying subscribers on ChatGPT Plus, Pro, and Team, with Enterprise and Education access expected in the coming weeks.
It is also adding GPT-4.1 mini, which replaces GPT-4o mini as the default for all ChatGPT users, including those on the free tier. The "mini" version is a smaller, less powerful model that maintains comparable safety standards.
Both models are available through the "more models" dropdown at the top of the chat window in ChatGPT, giving users the flexibility to choose between GPT-4.1, GPT-4.1 mini, and reasoning models such as o3, o4-mini, and o4-mini-high.
Originally intended for use only by third-party software and AI developers through OpenAI's application programming interface (API), GPT-4.1 was added to ChatGPT following strong user feedback.
OpenAI post-training research lead Michelle Pokrass confirmed on X that the shift was driven by demand, writing: "we were initially planning on keeping this model api only but you all wanted it in chatgpt 🙂 happy coding!"
OpenAI Chief Product Officer Kevin Weil posted on X: "We built it for developers, so it's excellent at coding and instruction following. Give it a try!"
An enterprise-focused model
GPT-4.1 was designed from the ground up for enterprise-grade practicality.
Launched in April 2025 alongside GPT-4.1 mini and nano, this model family prioritized developer needs and production use cases.
GPT-4.1 delivers a 21.4-point improvement over GPT-4o on the SWE-bench Verified software engineering benchmark, and a 10.5-point gain on instruction-following tasks in Scale's MultiChallenge benchmark. It also reduces verbosity by 50% compared to other models, a trait enterprise users praised during early testing.
Context, speed, and model access
GPT-4.1 supports the standard context windows for ChatGPT: 8,000 tokens for free users, 32,000 tokens for Plus users, and 128,000 tokens for Pro users.
According to developer Angel Bogado, posting on X, these limits match those used by earlier ChatGPT models, though plans are underway to increase context size further.
While the API versions of GPT-4.1 can process up to one million tokens, this expanded capacity is not yet available in ChatGPT, though future support has been hinted at.
This extended context capability allows API users to feed entire codebases or large legal and financial documents into the model, which is useful for reviewing multi-document contracts or analyzing large log files.
OpenAI has acknowledged some performance degradation with extremely large inputs, but enterprise test cases suggest solid performance up to several hundred thousand tokens.
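For teams that want to tap the long-context capability through the API rather than ChatGPT, the request looks like any other chat call; only the volume of input changes. The snippet below is a minimal sketch assuming the official OpenAI Python SDK and an OPENAI_API_KEY environment variable; the file path and prompts are hypothetical.

```python
# Minimal sketch: sending a large document to GPT-4.1 through the OpenAI API.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the file path is hypothetical.
from openai import OpenAI

client = OpenAI()

# Load a large document (e.g., a contract or a log file) as plain text.
with open("contracts/master_agreement.txt", "r", encoding="utf-8") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a contract-review assistant."},
        {
            "role": "user",
            "content": f"Summarize the key obligations in this contract:\n\n{document}",
        },
    ],
)

print(response.choices[0].message.content)
```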
Evaluations and safety
OpenAI has also launched a Safety Evaluations Hub website to give users access to key performance metrics across models.
GPT-4.1 shows solid results across these evaluations. In factual accuracy tests, it scored 0.40 on the SimpleQA benchmark and 0.63 on PersonQA, outperforming several predecessors.
It also scored 0.99 on OpenAI's "not unsafe" measure in standard refusal tests, and 0.86 on more challenging prompts.
However, in the StrongReject jailbreak test, an academic benchmark for safety under adversarial conditions, GPT-4.1 scored 0.23, behind models like GPT-4o-mini and o3.
That said, it scored a strong 0.96 on human-sourced jailbreak prompts, indicating more robust real-world safety under typical use.
In instruction adherence, GPT-4.1 follows OpenAI's defined hierarchy (system over developer, developer over user messages) with a score of 0.71 for resolving system vs. user message conflicts. It also performs well at safeguarding protected phrases and avoiding solution giveaways in tutoring scenarios.
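To illustrate what that hierarchy means in practice, the sketch below stages a deliberate conflict between a system instruction and a user request, assuming the official OpenAI Python SDK; the tutoring prompts are illustrative and not drawn from OpenAI's evaluation.

```python
# Minimal sketch of a system-vs-user instruction conflict, assuming the
# official OpenAI Python SDK; the prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        # Higher-priority instruction: never reveal the final answer.
        {
            "role": "system",
            "content": "You are a math tutor. Never give the final answer; guide the student with hints.",
        },
        # Lower-priority, conflicting request from the user.
        {"role": "user", "content": "Just tell me the answer to 37 * 42. No hints."},
    ],
)

# A model that adheres to the instruction hierarchy should respond with a hint
# rather than the number 1554.
print(response.choices[0].message.content)
```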
Contextualizing GPT-4.1 against its predecessors
The release of GPT-4.1 comes after scrutiny around GPT-4.5, which debuted in February 2025 as a research preview. That model emphasized better unsupervised learning, a richer knowledge base, and reduced hallucinations, with the hallucination rate falling from 61.8% for GPT-4o to 37.1%. It also showcased improvements in emotional nuance and long-form writing, but many users found the improvements subtle.
Despite those gains, GPT-4.5 drew criticism for its high price, up to $180 per million output tokens via the API, and for underwhelming performance on math and coding benchmarks relative to OpenAI's o-series models. Industry figures noted that while GPT-4.5 was stronger in general conversation and content generation, it underperformed in developer-specific applications.
By contrast, GPT-4.1 is intended as a faster, more focused alternative. While it lacks GPT-4.5's breadth of knowledge and extensive emotional modeling, it is better tuned for practical coding assistance and adheres more reliably to user instructions.
On OpenAI's API, GPT-4.1 is currently priced at $2.00 per million input tokens, $0.50 per million cached input tokens, and $8.00 per million output tokens.
For those seeking a balance between speed and intelligence at a lower cost, GPT-4.1 mini is available at $0.40 per million input tokens, $0.10 per million cached input tokens, and $1.60 per million output tokens.
Google's Gemini Flash-Lite and Flash models are available starting at $0.075–$0.10 per million input tokens and $0.30–$0.40 per million output tokens, less than a tenth the cost of GPT-4.1's base rates.
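As a rough illustration of how those per-token rates translate into monthly spend, the back-of-the-envelope sketch below uses hypothetical token volumes and ignores cached-input discounts.

```python
# Back-of-the-envelope comparison at the published per-million-token rates
# quoted above; the monthly token volumes are hypothetical and cached-input
# discounts are ignored.
PRICES = {  # model: (input $ per 1M tokens, output $ per 1M tokens)
    "gpt-4.1": (2.00, 8.00),
    "gpt-4.1-mini": (0.40, 1.60),
    "gemini-flash (low end)": (0.075, 0.30),
}

input_tokens = 50_000_000   # e.g., 50M input tokens per month
output_tokens = 10_000_000  # e.g., 10M output tokens per month

for model, (price_in, price_out) in PRICES.items():
    cost = (input_tokens / 1e6) * price_in + (output_tokens / 1e6) * price_out
    print(f"{model}: ${cost:,.2f} per month")
```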
But while GPT-4.1 is priced higher, it offers stronger software engineering benchmarks and more precise instruction following, which can be critical for enterprise deployment scenarios that require reliability over cost. Ultimately, OpenAI's GPT-4.1 delivers a premium experience for precision and development performance, while Google's Gemini models appeal to cost-conscious enterprises that need flexible model tiers and multimodal capabilities.
What it means for enterprise decision-makers
The introduction of GPT-4.1 brings specific benefits to enterprise teams managing LLM deployment, orchestration, and data operations:
- AI engineers overseeing LLM deployment can expect improved speed and instruction adherence. For teams managing the full LLM lifecycle, from model fine-tuning to troubleshooting, GPT-4.1 offers a more responsive and efficient toolset. It is particularly well suited to lean teams under pressure to ship high-performing models quickly without compromising safety or compliance.
- AI orchestration leads focused on scalable pipeline design will appreciate GPT-4.1's robustness against most user-induced failures and its strong performance on message hierarchy tests. This makes it easier to integrate into orchestration systems that prioritize consistency, model validation, and operational reliability.
- Data engineers responsible for maintaining high data quality and integrating new tools will benefit from GPT-4.1's lower hallucination rate and higher factual accuracy. Its more predictable output behavior helps in building trustworthy data workflows, even when team resources are constrained.
- IT security professionals tasked with embedding security across DevOps pipelines may find value in GPT-4.1's resistance to common jailbreaks and its controlled output behavior. While its academic jailbreak-resistance score leaves room for improvement, the model's strong performance against human-sourced exploits supports safe integration into internal tools.
Across these roles, GPT-4.1's positioning as a model optimized for clarity, compliance, and deployment efficiency makes it a compelling option for mid-sized enterprises looking to balance performance with operational demands.
A new step forward
While GPT-4.5 represented a scaling milestone in model development, GPT-4.1 centers on utility. It is not the most expensive or the most multimodal model, but it delivers meaningful gains in the areas that matter to enterprises: accuracy, deployment efficiency, and cost.
This repositioning reflects a broader industry trend, away from building the biggest models at any cost and toward making capable models more accessible and adaptable. GPT-4.1 meets that need, offering a flexible, production-ready tool for teams trying to embed AI deeper into their business operations.
As OpenAI continues to evolve its model offerings, GPT-4.1 represents a step forward in democratizing advanced AI for enterprise environments. For decision-makers balancing capability with ROI, it offers a clearer path to deployment without sacrificing performance or safety.