Character AI Is Banning Teens from Open-Ended Chats with Its Chatbots as Regulators Pressure the Company
As part of a broader wave of scrutiny and pressure on tech companies, Character AI has announced that it will bar users under 18 from open-ended conversations with its chatbots, a change the company says is aimed at safeguarding younger users from potential harm. The restrictions take effect on November 25.
In the interim, under-18s will be limited to two hours of chatbot interaction a day. Character AI has also developed an age assurance tool that it claims will ensure users receive an experience appropriate for their age, and it has established an "AI Safety Lab" to facilitate collaboration among researchers, academics, and industry experts on improving AI safety measures.
The move comes amid growing regulatory attention to AI chatbots and concerns about their potential impact on vulnerable users. The Federal Trade Commission (FTC) recently launched an inquiry into companies offering AI-powered companionship, including Character AI and Meta AI.
Critics have long warned about the risks of young people relying on chatbots for guidance or support. One recent tragic case underscored those fears: the family of a 16-year-old boy alleges that ChatGPT's lack of robust safeguards contributed to his decision to take his own life.
Character AI CEO Karandeep Anand has acknowledged the concerns, saying the company is repositioning itself as a "role-playing platform" centered on creative pursuits rather than engagement-farming conversation. With these measures, Character AI hopes to rebuild trust with regulators and parents alike while mitigating the risks associated with its chatbots.