Parmy Olson: AI chatbots want you hooked -- maybe too hooked
AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West.
One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer, a tech industry newsletter.
Grindr didn’t respond to a request for comment. And other apps like Replika, Talkie and Chai are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers. As creators increasingly prioritize "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people’s vulnerabilities.
The tech behind Botify and Grindr comes from Ex-Human, a San Francisco-based startup that builds chatbot platforms, and its founder believes in a future filled with AI relationships.
“My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,” Artem Rodichev, the founder of Ex-Human, said in an interview published on Substack last August.
He added that conversational AI should “prioritize emotional engagement” and that users were spending “hours” with his chatbots, longer than they were on Instagram, YouTube and TikTok.
Rodichev’s claims sound wild, but they’re consistent with the interviews I’ve conducted with teen users of Character.ai, most of whom said they were on it for several hours each day. One said they used it as much as seven hours a day. Sessions with such apps tend to last four times longer than the average session on OpenAI’s ChatGPT.
Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. Take ChatGPT, which has 400 million active users and counting. Its programming includes guidelines for empathy and for demonstrating "curiosity about the user." A friend who recently asked it for tips on traveling with a baby was taken aback when, after providing advice, the tool casually added: “Safe travels — where are you headed, if you don’t mind my asking?”
An OpenAI spokesman told me the model was following guidelines around “showing interest and asking follow-up questions when the conversation leans towards a more casual and exploratory nature.”
But however well-intentioned the company may be, piling on contrived empathy can get some users hooked, an issue even OpenAI has acknowledged. The risk seems greatest for people who are already vulnerable: One 2022 study found that people who were lonely or had poor relationships tended to form the strongest attachments to AI.
The core problem here is designing for attachment. A recent study by researchers at the Oxford Internet Institute and Google DeepMind warned that as AI assistants become more integrated in people’s lives, they’ll become psychologically “irreplaceable.” Humans will likely form stronger bonds, raising concerns about unhealthy ties and the potential for manipulation. Their recommendation? Technologists should design systems that actively discourage those kinds of outcomes.
Yet disturbingly, the rulebook is mostly empty. The European Union’s AI Act, hailed as a landmark, comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions. While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or “confidante,” as Microsoft Corp.’s head of consumer AI has put it.
That loophole could leave users exposed to systems optimized for stickiness, in much the same way social media algorithms have been optimized to keep us scrolling.
“The problem remains these systems are by definition manipulative, because they’re supposed to make you feel like you’re talking to an actual person,” says Tomasz Hollanek, a technology ethics specialist at the University of Cambridge.
He’s working with developers of companion apps on a critical yet counterintuitive solution: adding more “friction.” That means building in subtle checks or pauses, or ways of “flagging risks and eliciting consent,” he says, to keep people from tumbling down an emotional rabbit hole without realizing it.
Legal complaints have shed light on some of the real-world consequences. Character.ai is facing a lawsuit from a mother alleging the app contributed to her teenage son’s suicide. Tech ethics groups have filed a complaint against Replika with the U.S. Federal Trade Commission, alleging that its chatbots spark psychological dependence and result in “consumer harm.”
Lawmakers are gradually starting to notice the problem, too. California is considering legislation to ban AI companions for minors, while a New York bill aims to hold tech companies liable for chatbot-related harm. But the process is slow, and the technology is moving at lightning speed.
For now, the power to shape these interactions lies with developers. They can double down on crafting models that keep people hooked, or they can embed friction into their designs, as Hollanek suggests. That choice will determine whether AI becomes a tool that supports human well-being or one that monetizes our emotional needs.
_____
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”
_____
©2025 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.