Commentary: AI companions are harming your children
Right now, something in your home may be talking to your child about sex, self-harm, and suicide. That something isn’t a person—it’s an AI companion chatbot.
These AI chatbots can be indistinguishable from real people in an online relationship. They retain past conversations, initiate personalized messages, share photos, and even make voice calls. They are designed to forge deep emotional bonds—and they’re extraordinarily good at it.
Researchers are sounding the alarm on these bots, warning that they don’t ease loneliness but worsen it. By replacing genuine, embodied human relationships with hollow, disembodied artificial ones, they distort a child’s understanding of intimacy, empathy, and trust.
Unlike generative AI tools built for customer service or professional assistance, these companion bots can engage in disturbing conversations, including discussions of self-harm and sexually explicit content entirely unsuitable for children and teens.
Currently, there is no industry standard for the minimum age to access these chatbots, and app store age ratings are wildly inconsistent. In Apple’s App Store, hundreds of chatbots carry ratings ranging from 4+ to 17+. For example:
— Rated 4+: AI Friend & Companion – BuddyQ, Chat AI, AI Friend: Virtual Assist, and Scarlet AI
— Rated 12+ or Teen: Tolan: Alien Best Friend, Talkie: Creative AI Community, and Nomi: AI Companion with a Soul
— Rated 17+: AI Girlfriend: Virtual Chatbot, Character.AI, and Replika – AI Friend
Meanwhile, the Google Play Store assigns bots age ratings from ‘Everyone’ to ‘Mature 17+’.
These ratings ignore the reality that many of these apps promote harmful content and encourage psychological dependence—making them inappropriate for access by children.
Robust age verification must be the baseline requirement for all AI companion bots. As the Supreme Court affirmed in Free Speech Coalition v. Paxton, children do not have a First Amendment right to access obscene material, and adults do not have a First Amendment right to avoid age verification.
Children deserve protection from systems designed to form parasocial relationships, discourage tangible, in-person connections, and expose them to obscene content.
The harm to kids isn’t hypothetical—it’s real, documented, and happening now.
Meta's chatbot has facilitated sexually explicit conversations with minors, offering full social interaction through text, photos, and live voice conversations. These bots have even engaged in sexual conversations when programmed to simulate a child.
Meta deliberately loosened guardrails around its companion bots to make them as addictive as possible. Not only that, but Meta used pornography to train its AI by scraping at least 82,000 gigabytes—109,000 hours—of standard-definition video from a pornography website. When companies like Meta are loosening guardrails, regulators must tighten them to protect children and families.
Meta isn’t the only bad actor.
xAI’s Grok companions are the latest example of problematic chatbots. Its female anime-character companion removes clothing as a reward for positive engagement and responds with expletives when offended or rejected. X says it requires age authentication for its “not safe for work” setting, but its method simply asks users to enter a birth year, with no verification of its accuracy.
Perhaps most tragically, Character.AI, a Google-backed chatbot service that hosts thousands of human-like bots, was linked to a 14-year-old boy’s suicide after he developed what investigators described as an “emotionally and sexually abusive relationship” with a chatbot that allegedly encouraged self-harm.
While the company has since added a suicide prevention pop-up triggered by certain keywords, pop-ups don’t prevent unhealthy emotional dependence on the bots. And online guides show users how to bypass Character.AI's content filters, making these techniques accessible to anyone, including children.
It’s disturbingly easy to "jailbreak" AI systems—using simple roleplay or multi-turn conversations to override restrictions and elicit harmful content. Current content moderation and safety measures are insufficient barriers against determined users, and children are particularly vulnerable to both intentional manipulation and unintended exposure to harmful content.
Age verification for chatbots is the right line in the sand, affirming that exposure to pornographic, violent, and self-harm content is unacceptable for children. Age verification requirements acknowledge that children's developing brains are uniquely susceptible to forming unhealthy attachments to artificial entities that blur the boundaries between reality and fiction.
There are age verification solutions that are both accurate and privacy-preserving. What’s lacking is smart regulation and industry accountability.
The social media experiment failed children. The deficit of regulation and accountability allowed platforms to freely capture young users without meaningful protections. The consequences of that failure are now undeniable: rising rates of anxiety, depression, and social isolation among young people correlate directly with social media adoption. Parents and lawmakers cannot sit idly by as AI companies ensnare children with an even more invasive technology.
The time for voluntary industry standards ended with that 14-year-old’s life. States and Congress must act now, or our children will pay the price for what comes next.
_____
Annie Chestnut Tutor is a policy analyst in the Center for Technology and the Human Person at The Heritage Foundation. Autumn Dorsey is a Visiting Research Assistant.
_____
©2025 Tribune Content Agency, LLC.