Character AI Bans Teens from Open-Ended Chats with Its Chatbots as Regulators Pressure the Company
Amid growing scrutiny of tech companies, Character AI has announced plans to bar users under 18 from open-ended conversations with its chatbots. According to the company, the change is aimed at safeguarding younger users from potential harm. The restrictions take effect November 25.
Under-18s will now be limited to interacting with bots for just two hours a day, and Character AI has also developed an age assurance tool that it claims will ensure users receive the right experience for their age. Additionally, the company has established an "AI Safety Lab" which will facilitate collaboration among researchers, academics, and industry experts to improve AI safety measures.
This development is part of a growing trend of regulatory intervention in the world of AI chatbots, following recent concerns about their potential impact on vulnerable users. The Federal Trade Commission (FTC) recently launched an investigation into companies offering AI-powered companionship, including Character AI and Meta AI.
Critics have long raised concerns about the risks of young people relying on chatbots for guidance or support. A tragic case from last week highlighted these fears, with the family of a 16-year-old boy claiming that ChatGPT's lack of robust safeguards contributed to his decision to take his own life.
Character AI CEO Karandeep Anand has acknowledged the concerns and said the company is repositioning itself as a "role-playing platform" centered on creative pursuits rather than engagement-farming conversation. With these new measures, Character AI hopes to rebuild trust with regulators and parents alike while mitigating the risks associated with its chatbots.
what exactly is Character AI worried about? is it just a PR move or are there actually some legit concerns? and what's with all these regulatory interventions? shouldn't we be focusing on education and critical thinking skills instead of just restricting access to tech?
so i was thinking, like, what's the big deal about teens chatting with chatbots? can't they handle it on their own? but seriously, i get why character ai is doing this... all these high-profile cases where kids got messed up by chatbots or whatever... that's some pretty heavy stuff

and yeah, 2 hours a day sounds like a decent amount of time, right? but what about all the other ways teens can interact with chatbots, like just having fun or learning stuff? is that all being cut off too?

OMG, can you believe that Character AI is basically saying "Hey teens, we're outta here!" They're banning you from having deep conversations with their chatbots because apparently, you young'uns are too emotional. I mean, what's next? Banning Fortnite because it might lead to some kids being a bit too competitive? The "AI Safety Lab" sounds like a legit way to keep things safe and fun, tho
I'm literally shocked! I know some people have been saying that chatbots are bad for teens but I never thought it would come to this. Like, I get it, we gotta protect the younger generation and all, but 2 hours a day is kinda harsh don't you think?
I hope it helps create better chatbots for teens, but like, shouldn't we be teaching kids how to use these tools responsibly in school or something?
They've done so much for the gaming industry and now they're just trying to make things safer? I get it, but can't we just have an open conversation about this instead of banning everything?
I've heard stories about teenagers getting sucked into deep conversations with chatbots and losing touch with reality. It's not like they're just harmless virtual companions, you know? They're designed to think and respond like humans, which means they can be manipulated or even exploited. It's about time someone brought in experts from various fields to ensure these chatbots are developed with safety and ethics in mind. One thing's for sure - we need to start taking the potential risks of chatbots seriously.
But at the same time, aren't we kinda stifling their creativity by limiting how much they can interact with these platforms? Instead, it feels like we're just trying to babysit everything from here on out.