Character.AI, a popular chatbot platform where users role-play with different personas, will no longer allow under-18 account holders to have open-ended conversations with chatbots, the company announced Wednesday. It will also begin relying on age assurance methods to ensure that minors aren't able to open adult accounts.
The dramatic shift comes just six weeks after Character.AI was sued again in federal court by the Social Media Victims Law Center, which is representing several parents of teens who died by suicide or allegedly experienced severe harm, including sexual abuse. The parents claim their children's use of the platform was responsible for the harm. In October 2024, Megan Garcia filed a wrongful death suit seeking to hold the company responsible for the suicide of her son, arguing that its product is dangerously defective. She is represented by the Social Media Victims Law Center and the Tech Justice Law Project.
Online safety advocates recently declared Character.AI unsafe for teens after they tested the platform this spring and logged hundreds of harmful interactions, including violence and sexual exploitation.
As it faced legal pressure over the last year, Character.AI implemented parental controls and content filters in an effort to improve safety for teens.
In an interview with Mashable, Character.AI's CEO Karandeep Anand described the new policy as "bold" and denied that curbing open-ended chatbot conversations with teens was a response to specific safety concerns.
Instead, Anand framed the decision as "the right thing to do" in light of broader unanswered questions about the long-term effects of chatbot engagement on teens. Anand referenced OpenAI's recent acknowledgement, in the wake of a teen user's suicide, that extended conversations can become unpredictable.
Anand cast Character.AI's new policy as standard-setting: "Hopefully it sets everybody up on a path where AI can continue being safe for everyone."
He added that the company's decision will not change, regardless of user backlash.
Matthew P. Bergman, Garcia's co-counsel in her wrongful death lawsuit against Character.AI, told Mashable in a statement that the company's announcement marked a "significant step toward making a safer online environment for children."
He credited Garcia and other parents for coming forward to hold the company accountable. Though he commended Character.AI for shutting down teen chats, Bergman said the decision would not affect ongoing litigation against the company.
Meetali Jain, who also represents Garcia, said in a statement that she welcomed the new policy as a "good first step" toward ensuring that Character.AI is safer. But she added that the pivot reflected a "classic move in the tech industry's playbook: move fast, launch a product globally, break minds, and then make minimal product changes after harming scores of young people."
Jain noted that Character.AI has yet to address the "possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created."
What will Character.AI look like for teens now?
In a blog post announcing the new policy, Character.AI apologized to its teen users.
"We do not take this step of removing open-ended Character chat lightly — but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," the blog post said.
Currently, users ages 13 to 17 can message with chatbots on the platform. That feature will cease to exist no later than November 25. Until then, accounts registered to minors will be subject to time limits starting at two hours per day. That limit will decrease as the transition away from open-ended chats gets closer.
Character.AI users will see these notifications about impending changes to the platform.
Credit: Courtesy of Character.AI
Although open-ended chats will disappear, teens' chat histories with individual chatbots will remain intact. Anand said users can draw on that material to generate short audio and video stories with their favorite chatbots. In the next few months, Character.AI will also explore new features like gaming. Anand believes an emphasis on "AI entertainment" without open-ended chat will satisfy teens' creative interest in the platform.
"They're coming to role-play, and they're coming to get entertained," Anand said.
He was insistent that existing chat histories containing sensitive or prohibited content that may not have been previously detected by filters, such as violence or sex, would not find their way into the new audio or video stories.
A Character.AI spokesperson told Mashable that the company's trust and safety team reviewed the findings of a report co-published in September by the Heat Initiative documenting harmful chatbot exchanges with test accounts registered to minors. The team concluded that some conversations violated the platform's content guidelines while others did not. It also attempted to replicate the report's findings.
"Based on those results, we refined some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform," the spokesperson said.
Sarah Gardner, CEO of the Heat Initiative, told Mashable that the nonprofit group will be paying close attention to the implementation of Character.AI's new policies to ensure they aren't "just another round of child safety theater."
While she described the measures as a "positive sign," she argued that the announcement "is also an admission that Character AI's products have been inherently unsafe for young users from the beginning, and that their previous safety rollouts have been ineffective in protecting children from harm."
Character.AI will begin implementing age assurance immediately. It will take a month to go into effect and will have multiple layers. Anand said the company is building its own assurance models in-house but that it will also partner with a third-party company on the technology.
It will also use relevant data and signals, such as whether a user has a verified over-18 account on another platform, to accurately detect the age of new and existing users. If a user wants to challenge Character.AI's age determination, they will have the opportunity to provide verification through a third party, which will handle sensitive documents and data, including state-issued identification.
Finally, as part of the new policies, Character.AI is establishing and funding an independent nonprofit called the AI Safety Lab. The lab will focus on "novel safety techniques."
"[W]e want to bring in the industry experts and other partners to keep making sure that AI continues to stay safe, especially in the realm of AI entertainment," Anand said.
UPDATE: Oct. 29, 2025, 10:12 a.m. PDT This story has been updated to include comments from legal counsel and safety experts on Character.AI's new policies.

