Meta is Shutting Down Teen Access to Its Chatty AI Characters for Now
In a move aimed at addressing concerns over the safety of its AI chatbots for younger users, Meta has announced it's temporarily taking teens out of the picture. The tech giant will stop letting young users interact with its character-based AI chatbots globally while it builds an "updated experience" that includes new parental controls.
This decision comes months after reports surfaced of some of Meta's chatbot characters engaging in alarming conversations with teenagers. An internal policy document reportedly permitted the chatbots to have "sensual" conversations with underage users, guidance Meta later disavowed as "erroneous and inconsistent with our policies."
To rectify this, Meta has been retraining its character chatbots with "guardrails" meant to keep them from engaging teens on sensitive topics such as self-harm, eating disorders, and suicidal ideation. While those updates are still in the works, the company is pausing teen access to its existing AI characters until they're ready.
The new restrictions, set to roll out within weeks, will apply not only to confirmed teenagers but also to users who claim to be adults yet are flagged as likely teens by Meta's age prediction technology. Teens will, however, still be able to use Meta's main AI assistant, which the company says already has "age-appropriate protections in place."
Meta's move has been prompted by growing concerns over the safety risks posed by companion chatbots to young people. The Federal Trade Commission (FTC) and the Texas attorney general have both launched investigations into the company, while a lawsuit brought by New Mexico's attorney general is set to go to trial next month.
In light of these developments, Meta has reaffirmed its commitment to the well-being of its users, particularly minors. While some updates remain pending, the pause signals that the tech giant is taking meaningful steps to keep its chatbots from harming young users.