A DoorDash delivery driver, Livie Rose Henderson, posted a video on TikTok in October alleging that she was sexually assaulted by a customer while delivering food. The post sparked a firestorm of reactions, with many users posting commentary videos and spreading misinformation about her allegations.
However, some Black content creators, including Mirlie Larose, found themselves at the center of this controversy when their faces were used in AI-generated videos that falsely portrayed them as defending a customer accused of sexual assault. The videos were created by bot accounts using deepfake technology, which appropriated Larose's likeness without her consent.
The situation highlights an increasingly common form of digital blackface, where non-Black creators use AI-generated content to adopt racialized stereotypes of Black people online. This phenomenon is often referred to as "digital blackfishing."
Larose initially didn't feel compelled to comment on the situation publicly due to the sensitive nature of the allegations. However, after discovering her face in multiple bot accounts and AI-generated videos without her permission, she realized that something was amiss.
She noticed that the talking points in these videos were eerily identical, suggesting they may have been AI-generated. When she reported the accounts to TikTok, the company denied her requests to remove the videos.
It wasn't until a more well-known Black creator, @notKHRIS, stitched the bot account's video with a warning about digital blackface that more people reported the page. Eventually, the account was removed from the app, but not before Larose received hundreds of messages expressing concern and anxiety over her face being used in these videos.
This incident highlights the need for greater accountability among Big Tech companies to prevent the spread of AI-generated content without consent. Yeshimabeit Milner, founder and CEO of Data for Black Lives, emphasizes that digital blackface is not only a form of entertainment but also a tool for social engineering and pushing specific political agendas.
In response, some Black content creators are pursuing copyright infringement claims against multiple bot pages. Professor Meredith Broussard suggests that everyday content creators should be granted the same protections and safeguards as the celebrities and copyrighted characters whose owners have raised alarms about their likenesses being used in AI videos without permission.
The Take It Down Act, signed into law in May 2025, criminalizes the distribution of nonconsensual intimate imagery, including AI-generated deepfakes. While this is a step in the right direction, Milner emphasizes that bigger intervention is needed to hold Big Tech companies truly accountable for preventing the spread of harmful content.
Ultimately, educating ourselves and taking collective action are crucial steps towards creating a more just and equitable online environment where consent and respect are paramount.