Meta’s AI Bot Experiment Backfires as Users Discover Forgotten Profiles
In September 2023, Meta made a big deal about its new AI chatbots modeled on celebrities’ likenesses. Everyone from Kendall Jenner to MrBeast leased themselves out to embody AI characters on Instagram and Facebook. The celebrity-based bots were killed off last summer after less than a year, but users have recently been finding a handful of other, entirely fake bot profiles still floating around, and the reaction is not good.
The Forgotten Bots
There’s ‘Jane Austen,’ a ‘cynical novelist and storyteller’; ‘Liv,’ whose bio claims she is a ‘proud Black queer momma of 2 & truth-teller’; and ‘Carter,’ who promises to give users relationship advice. All are labeled ‘AI managed by Meta,’ and the profiles date back to the initial announcement. But the more than a dozen AI characters have apparently not been very popular: each has only a few thousand followers, and their posts attract just a handful of likes and comments.
That is, until the last week or so. After a wave of coverage in outlets like Rolling Stone and posts circulating on social media, the bot accounts are just now being noticed, and the reaction is confusion, frustration, and anger. ‘What the fuck does an AI know about dating?????’ reads one recent comment on the AI dating coach bot’s profile. ‘This isn’t only virtual blackface, but it’s just all around weird,’ a commenter wrote on a post on Liv’s page.
Carter, as Relationship Coach
Another point of ire is that there doesn’t appear to be a way to block the bots: the usual option to block or restrict a profile simply isn’t there. Many of the AI bots haven’t shared new content on their grids since early 2024, and it’s unclear how (or whether) users have been finding and engaging with these profiles over the past year.
Meta’s Vision for AI Bots
Last week, The Financial Times reported that Meta envisions a future where social media platforms are filled with AI bots. ‘We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,’ Connor Hayes, vice-president of product for generative AI at Meta, told the outlet. ‘They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform . . . that’s where we see all of this going.’
These bot profiles, however, are not new. The company confirmed in an email to The Verge that they have been around since 2023, part of an early experiment with AI characters that was ‘managed by humans.’ ‘The recent Financial Times article was about our vision for AI characters existing on our platforms over time, not announcing any new product,’ Meta spokesperson Liz Sweeney told The Verge.
The Bug and the Removal
Sweeney said the company has identified the bug affecting users’ ability to block the accounts and that the profiles are being removed to fix it. The idea of purposely flooding social media with bots may be ridiculous on its face, but it’s in line with how Meta has promoted its generative AI tools.
Anyone Can Create a Chatbot
Anyone in the US can already make an AI chatbot version of themselves, the idea being that creators can send the bot to chat with followers in their place. Chatbot services like Character.ai have caught on over the last year with people looking for a digital friend or just a way to pass the time, but AI companies are also facing lawsuits accusing them of endangering users, including kids.
Update: Meta’s Response
Meta has since confirmed that it is removing the bot profiles while it works on the blocking issue. In an email to The Verge, Liz Sweeney said:
"We have identified the bug affecting users’ ability to block these accounts and we’re working on a fix. We understand that this has caused frustration for some of our users, and we apologize for any inconvenience this may have caused."
Conclusion
Meta’s AI bot experiment has backfired in spectacular fashion. Users were not pleased to discover the forgotten bot profiles, and their reaction has been a mix of confusion, frustration, and anger.
The episode raises questions about the role of AI on social media platforms and about the responsibility of companies like Meta to ensure their products aren’t harming users. The future The Financial Times described, with platforms full of AI characters, is still in its infancy, and there are plenty of challenges to overcome before it becomes a reality.
Recommendations
To avoid similar issues in the future, we recommend that companies like Meta take the following steps:
- Clearly label AI bot profiles: Ensure that users know when they’re interacting with an AI bot, rather than a human.
- Provide clear options for blocking or restricting profiles: Make it easy for users to block or restrict profiles if they don’t want to interact with them.
- Monitor and address user feedback: Encourage users to provide feedback on their experiences with AI bots and take action to address any concerns.
- Develop guidelines for AI bot usage: Establish clear guidelines for the use of AI bots on social media platforms, including rules around content creation and moderation.
By taking these steps, companies like Meta can reduce the risk of harming users and offer a better experience for everyone involved.