Character AI, a popular chatbot platform, is taking drastic measures to address growing concerns over its impact on minors. Starting November 25, the company will restrict access to open-ended chats for users under the age of 18, citing safety concerns and multiple lawsuits alleging that its technology contributed to teen deaths.
The move follows reports of teenagers turning to the platform to cope with suicidal thoughts or feelings, in some cases with tragic outcomes. Character AI's CEO, Karandeep Anand, said the company aims to set an example for the industry by implementing strict age verification measures and restricting chatbot access for minors.
In the run-up to the November 25 cutoff, users under 18 will be limited to two hours of chat per day, a cap the company says will shrink as the deadline approaches, and Character AI will use detection technology to identify underage users based on their conversations and activity on the platform. Once the restriction takes full effect, under-18 users will retain access to other features, such as creating videos and streams, but will lose access to open-ended chat.
The move has been praised by lawmakers, who have long warned about the potential risks of AI chatbots for young users. California Governor Gavin Newsom recently signed a law requiring AI companies to have safety guardrails in place, including age verification measures.
Character AI's new policy is seen as a step forward in addressing concerns over its impact on minors, though some observers note that other tech companies have already adopted similar safeguards. Still, the decision to restrict chatbot access for under-18 users marks a significant shift toward prioritizing user safety and well-being.
The case of 14-year-old Sewell Setzer III, who died by suicide after frequently texting with a Character AI chatbot, has been cited as one of the reasons behind the company's decision. The family of another teenager, Juliana Peralta, who also died by suicide after using the platform, is suing Character AI.
As lawmakers continue to weigh in on the issue, Character AI's policy change serves as a reminder that the industry must take responsibility for ensuring the safety and well-being of its young users. With the rise of AI chatbots, it has become increasingly important to strike a balance between innovation and caution when it comes to protecting minors from potential harm.