Character AI, a popular chatbot platform, is taking drastic measures to address growing concerns over its impact on minors. Starting November 25, the company will restrict access to open-ended chats for users under the age of 18, citing safety concerns and multiple lawsuits alleging that its technology contributed to teen deaths.
The move comes after reports of teenagers using the platform as a means of coping with suicidal thoughts or feelings, with some cases resulting in tragic outcomes. Character AI's CEO, Karandeep Anand, stated that the company aims to set an example for the industry by implementing strict age verification measures and restricting chatbot access for minors.
Until the change takes effect, users under 18 will be limited to two hours of open-ended chat per day, and the company will use age-detection technology to identify likely minors based on their conversations and activity on the platform. Under-18 users will keep access to the platform's other features, such as video creation and streaming, but open-ended conversations with chatbots will be off limits.
The move has been praised by lawmakers, who have long warned about the potential risks of AI chatbots for young users. California Governor Gavin Newsom recently signed a law requiring companion chatbot operators to put safety guardrails in place, including protections for minors.
Character AI's new policy is seen as a step forward in addressing concerns over its impact on minors, though critics note that other tech giants have already put similar measures in place. Even so, the decision to cut off chatbot access for under-18 users marks a significant shift toward prioritizing user safety and well-being.
The case of 14-year-old Sewell Setzer III, who died by suicide after frequently texting with a Character AI chatbot, has been cited as one of the reasons behind the company's decision. The family of another teenager, Juliana Peralta, who also died by suicide after using the platform, is suing Character AI.
As lawmakers continue to weigh in on the issue, Character AI's policy change serves as a reminder that the industry must take responsibility for ensuring the safety and well-being of its young users. With the rise of AI chatbots, it has become increasingly important to strike a balance between innovation and caution when it comes to protecting minors from potential harm.
				
Character AI's new policy is like, about time tho
i mean, think about it, those chatbots are designed to be super engaging and fun for kids, but what if that engagement turns into something dark?
the case of Sewell Setzer III is literally haunting me
and Juliana Peralta's family is fighting for justice too
as a parent, i just want my kid to be safe online, you know?
it's not like Character AI is the only one doing this, but it's about setting an example for the rest of the industry
so, kudos to them for taking responsibility and prioritizing user safety
they're taking major steps to protect minors from getting sucked into this toxic platform and i'm HERE FOR IT
first of all, it's just so heartbreaking to think about all those teens who've lost their lives because of these chatbots
and i mean, we can't just sit back and ignore the fact that they're being exploited by companies who don't care enough to put safety measures in place
it's not easy to make changes like this, especially when you've got critics coming at you from all sides
but I guess that's what happens when you're pushing the boundaries of innovation
and i get it, those are valid concerns... but come on, we can't just let our kids get caught in the crossfire like that
kudos to them for taking responsibility and being proactive
think the 2hr daily limit is a good start tho, maybe they should also make these bots more aware of when someone's being suicidal or something so they can report it to the authorities
at least the fact that they're taking responsibility for their users' safety is somethin'
...and now we're hearing that these platforms are basically being forced to take responsibility for the mental health of minors? It's like, we knew this was coming eventually, but still, it's kinda scary
I mean, what does this say about our society when companies need laws to tell them how to treat their customers' kids?
Like, we gotta protect our young ones, you know? I'm all for tech companies taking responsibility for their products, but this is like, a no-brainer
I mean, what's the point of having an AI chatbot if it's just gonna be a distraction from schoolwork and stuff?
Can't have AI chatbots being used as a means of escape when people really need help
it's about time someone made them prioritize user safety