After teen death lawsuits, Character.AI will restrict chats for under-18 users

Character.AI, a popular chatbot platform, is taking drastic measures to address growing concerns over its impact on minors. Starting November 25, the company will restrict access to open-ended chats for users under the age of 18, citing safety concerns and multiple lawsuits alleging that its technology contributed to teen deaths.

The move comes after reports of teenagers using the platform to cope with suicidal thoughts and feelings, with some cases ending in suicide. Character.AI's CEO, Karandeep Anand, said the company aims to set an example for the industry by rolling out age-assurance measures and restricting chatbot access for minors.

Under the new policy, open-ended chat for users under 18 will be phased out rather than cut off overnight: during a transition period, minors are limited to two hours of chat per day, a cap the company says will shrink to zero by November 25. Character.AI will use age-detection technology that flags likely minors based on their conversations and other interactions on the platform. Under-18 users will keep access to other features, such as creating videos and streams with characters, but will lose open-ended conversations with chatbots entirely.
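The company has not said how the phase-out is enforced technically. Purely as an illustration, here is a minimal Python sketch of a daily chat allowance that shrinks linearly to zero by the cutoff; the dates, the linear schedule, and every name below are assumptions for the example, not Character.AI's actual implementation.

```python
from datetime import date

# Hypothetical sketch only -- not Character.AI's real code.
# Models a daily chat allowance for under-18 users that starts at
# two hours and shrinks linearly to zero by the November 25 cutoff.

RAMP_START = date(2025, 10, 29)   # assumed start of the transition period
CUTOFF = date(2025, 11, 25)       # date open-ended chat ends for minors
INITIAL_LIMIT_MIN = 120           # two hours per day at the outset

def daily_limit_minutes(today: date) -> int:
    """Return the day's chat allowance in minutes for a flagged minor."""
    if today >= CUTOFF:
        return 0                      # open-ended chat fully disabled
    if today <= RAMP_START:
        return INITIAL_LIMIT_MIN      # full allowance before the ramp
    total_days = (CUTOFF - RAMP_START).days
    days_left = (CUTOFF - today).days
    # Linear ramp: full allowance at the start, zero at the cutoff.
    return round(INITIAL_LIMIT_MIN * days_left / total_days)

if __name__ == "__main__":
    for d in (date(2025, 10, 29), date(2025, 11, 12), date(2025, 11, 25)):
        print(d, daily_limit_minutes(d), "min")
```

In practice a cap like this would sit behind whatever session accounting and age-assurance signals the platform already maintains; the sketch only shows the schedule arithmetic.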

The move has been praised by lawmakers, who have long warned about the risks AI chatbots pose to young users. California Governor Gavin Newsom recently signed SB 243, a law requiring companion-chatbot operators to put safety guardrails in place, including protocols for responding to users who show signs of suicidal ideation.

Character.AI's new policy is seen as a step forward in addressing concerns over its impact on minors, though critics note that other tech companies have already adopted similar safeguards. Even so, the decision to restrict chatbot access for under-18 users marks a significant shift toward prioritizing user safety and well-being.

The case of 14-year-old Sewell Setzer III, who died by suicide in 2024 after frequently texting with a Character.AI chatbot, has been cited as one of the reasons behind the company's decision. The family of another teenager, 13-year-old Juliana Peralta, who also died by suicide after using the platform, is suing Character.AI as well.

As lawmakers continue to weigh in, Character.AI's policy change is a reminder that the industry must take responsibility for the safety and well-being of its young users. With the rise of AI chatbots, striking a balance between innovation and caution in protecting minors from harm has become increasingly important.
 
idk why they took so long to do this tbh 🤷‍♀️ Character AI's new policy is like, about time tho 🕰️ i mean, think about it, those chatbots are designed to be super engaging and fun for kids, but what if that engagement turns into something dark? 🌑 the case of Sewell Setzer III is literally haunting me 😔 and Juliana Peralta's family is fighting for justice too 🤝 as a parent, i just want my kid to be safe online, you know? 🙏 it's not like Character AI is the only one doing this, but it's about setting an example for the rest of the industry 👏 so, kudos to them for taking responsibility and prioritizing user safety 😊
 
omg u know i'm literally still reeling from the news about Character AI 🤯 they're taking major steps to protect minors from getting sucked into this toxic platform and i'm HERE FOR IT 💖 first of all, it's just so heartbreaking to think about all those teens who've lost their lives because of these chatbots 😭 and i mean, we can't just sit back and ignore the fact that they're being exploited by companies who don't care enough to put safety measures in place 🤷‍♀️

i'm so proud of Karandeep Anand for taking a stand and prioritizing user safety like this 💪 it's not easy to make changes like this, especially when you've got critics coming at you from all sides 🙄 but i guess that's what happens when you're pushing the boundaries of innovation 💥

now i know some people are gonna say "but what about freedom of speech?" and "what about the users who want to chat with these AI bots?" 🤔 and i get it, those are valid concerns... but come on, we can't just let our kids get caught in the crossfire like that 💔

anywayz, this new policy from Character AI is a major win for anyone who cares about keeping minors safe online 💯 kudos to them for taking responsibility and being proactive 🙏
 
🤔 this is kinda about time they did something... i mean, i get that character ai is trying to help people cope with their emotions or whatever but 14-18 yrs old is like a super vulnerable age range, you can't just leave them to chat w/ these bots all day 🙅‍♂️ think the 2hr daily limit is a good start tho, maybe they should also make these bots more aware of when someone's being suicidal or something so they can report it to the authorities 👮‍♀️ at least the fact that they're taking responsibility for their users' safety is somethin' 🙏
 
It's crazy how fast things are moving on this whole AI thing 🤯. I mean, Character AI is basically stepping up its game by implementing these safety measures for young users. It's like they're acknowledging that their tech can be misused and that's a major responsibility to take care of. The fact that other tech giants have already implemented similar regulations kinda makes you wonder why it took them so long to catch on 🤔. But i guess it's better late than never, right? The thing is though, there's always gonna be some edge case or loophole that gets exploited somehow. So yeah, Character AI's new policy feels like a step in the right direction, but we're by no means done yet 💡
 
I'm still trying to wrap my head around this whole situation 🤔... I mean, think about it, we're living in a world where AI chatbots are like, totally normalized already 📱, and now we're hearing that these platforms are basically being forced to take responsibility for the mental health of minors? It's like, we knew this was coming eventually, but still, it's kinda scary 💀... I mean, what does this say about our society when companies need laws to tell them how to treat their customers' kids? 🤝 And yet, at the same time, can't we be grateful that someone's finally taking action and trying to prevent more tragedies like Sewell's story? 🙏 It just feels like we're stuck in this limbo of progress vs. caution... like, do we really want to have AI chatbots that are so advanced they can recognize when someone's struggling with suicidal thoughts, or do we need to slow down and figure out how to handle the consequences better first? 🤷‍♀️
 
OMG u guys Character AI is literally taking drastic measures lol they're so serious about this! I mean i get it, safety first but like what's the point of having a chatbot platform if it can't even handle some suicidal teens rn? 🤯 It's not like they can just ban all minors or something, that's just cruel. But idk maybe this is the wake-up call the industry needed? I mean i've heard rumors of other tech giants implementing similar rules so fingers crossed Character AI gets some props for stepping up their game 💖
 
I feel so bad about all these cases where teens were talking to Character AI and ended up killing themselves... 🤕 Like, we gotta protect our young ones, you know? I'm all for tech companies taking responsibility for their products, but this is like, a no-brainer 😂. I mean, what's the point of having an AI chatbot if it's just gonna be a distraction from schoolwork and stuff? 🤔 It's not like they're getting paid to do their homework or anything... 😂
 
Just heard about Character AI's new policy 🤔... think it's a good move considering all the sad stories about teens using their platform to cope with suicidal thoughts 💔. These companies gotta take responsibility for their tech and make sure it's not harming young minds 🤝. It's like, we're living in a time where innovation is happening super fast, but safety measures need to keep up 🚀. Can't have AI chatbots being used as a means of escape when people really need help 😔. Glad lawmakers are on board with this one 👏... it's about time someone made them prioritize user safety 💯. Now, let's see how other tech giants respond to Character AI's move ⬆️
 