Swarms of AI bots can sway people’s beliefs – threatening democracy

Filippo Menczer, Indiana University, The Conversation


In mid-2023, around the time Elon Musk rebranded Twitter as X but before he discontinued free academic access to the platform’s data, my colleagues and I looked for signs of social bot accounts posting content generated by artificial intelligence. Social bots are AI software that produce content and interact with people on social media. We uncovered a network of over a thousand bots involved in crypto scams. We dubbed this the “fox8” botnet after one of the fake news websites it was designed to amplify.

We were able to identify these accounts because the coders were a bit sloppy: They did not catch occasional posts with self-revealing text generated by ChatGPT, such as when the AI model refused to comply with prompts that violated its terms. The most common self-revealing response was “I’m sorry, but I cannot comply with this request as it violates OpenAI’s Content Policy on generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences.”
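As a rough illustration of this weakness, here is a minimal sketch (not the pipeline we actually used) of the kind of string-matching filter that can surface self-revealing posts; the marker list and the (account_id, text) data layout are assumptions made for the example:

```python
# A minimal sketch, NOT the actual fox8 detection pipeline: a simple
# string-matching filter for self-revealing AI refusal text in posts.
# The marker list and data layout are assumptions for this example.

REFUSAL_MARKERS = [
    "i'm sorry, but i cannot comply with this request",
    "as an ai language model",
    "violates openai's content policy",
]

def flag_self_revealing(posts):
    """Return the (account_id, text) pairs whose text contains a known
    refusal marker, matched case-insensitively."""
    return [
        (account_id, text)
        for account_id, text in posts
        if any(marker in text.lower() for marker in REFUSAL_MARKERS)
    ]

# Toy usage:
posts = [
    ("acct_1", "Crypto is mooning, don't miss out!"),
    ("acct_2", "I'm sorry, but I cannot comply with this request as it "
               "violates OpenAI's Content Policy on generating harmful "
               "or inappropriate content."),
]
for account, text in flag_self_revealing(posts):
    print(account, "->", text[:60])
```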

We believe fox8 was only the tip of the iceberg because better coders can filter out self-revealing posts or use open-source AI models fine-tuned to remove ethical guardrails.

The fox8 bots created fake engagement with each other and with human accounts through realistic back-and-forth discussions and retweets. In this way, they tricked X’s recommendation algorithm into amplifying their posts, accumulating substantial followings and influence.

Such a level of coordination among inauthentic online agents was unprecedented – AI models had been weaponized to give rise to a new generation of social agents, much more sophisticated than earlier social bots. Machine-learning tools to detect social bots, like our own Botometer, were unable to discriminate between these AI agents and human accounts in the wild. Even AI models trained to detect AI-generated content failed.

Fast-forward a few years: Today, people and organizations with malicious intent have access to more powerful AI language models – including open-source ones – while social media platforms have relaxed or eliminated moderation efforts. They even provide financial incentives for engaging content, irrespective of whether it’s real or AI-generated. This is a perfect storm for foreign and domestic influence operations targeting democratic elections. For example, an AI-controlled bot swarm could create the false impression of widespread, bipartisan opposition to a political candidate.

The current U.S. administration has dismantled federal programs that combat such hostile campaigns and defunded research efforts to study them. Researchers no longer have access to the platform data that would make it possible to detect and monitor these kinds of online manipulation.

I am part of an interdisciplinary team of computer science, AI, cybersecurity, psychology, social science, journalism and policy researchers who have sounded the alarm about the threat of malicious AI swarms. We believe that current AI technology allows organizations with malicious intent to deploy large numbers of autonomous, adaptive, coordinated agents to multiple social media platforms. These agents enable influence operations that are far more scalable, sophisticated and adaptive than simple scripted misinformation campaigns.

Rather than producing identical posts or obvious spam, AI agents can generate varied, credible content at scale. A swarm can send people messages tailored to their individual preferences and to the context of their online conversations, adjusting tone, style and content dynamically in response to human interaction and to platform signals such as numbers of likes or views.

In a study my colleagues and I conducted last year, we used a social media model to simulate swarms of inauthentic social media accounts using different tactics to influence a target online community. One tactic was by far the most effective: infiltration. Once an online group is infiltrated, malicious AI swarms can create the illusion of broad public agreement around the narratives they are programmed to promote. This exploits a psychological phenomenon known as social proof: Humans are naturally inclined to believe something if they perceive that “everyone is saying it.”
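To see the intuition, consider a toy simulation (a drastic simplification, not the model from our study) in which humans nudge their opinions toward whatever their feeds appear to be saying. When bots are followed from inside the community, their voices enter human feeds directly; all parameters below are arbitrary assumptions chosen for illustration:

```python
# A toy simulation, NOT the model from our study: humans nudge their
# opinion toward the average of what their feed appears to be saying
# (social proof). When bots are followed from inside the community,
# their voices enter human feeds directly. All parameters are arbitrary
# assumptions for illustration.
import random

N_HUMANS, N_BOTS, STEPS = 100, 10, 50

def run(infiltrate):
    random.seed(42)  # same initial community in both scenarios
    opinions = [random.random() for _ in range(N_HUMANS)]  # in [0, 1]
    neighbors = [random.sample(range(N_HUMANS), 8) for _ in range(N_HUMANS)]
    # if the swarm infiltrates, 40 humans end up following the bots
    bot_followers = set(random.sample(range(N_HUMANS), 40)) if infiltrate else set()
    for _ in range(STEPS):
        updated = opinions[:]
        for i in range(N_HUMANS):
            feed = [opinions[j] for j in neighbors[i]]
            if i in bot_followers:
                feed += [1.0] * N_BOTS  # bots relentlessly push opinion 1.0
            # move 10% of the way toward the feed's average signal
            updated[i] = opinions[i] + 0.1 * (sum(feed) / len(feed) - opinions[i])
        opinions = updated
    return sum(opinions) / N_HUMANS

print(f"mean opinion without infiltration: {run(False):.2f}")  # stays near 0.5
print(f"mean opinion with infiltration:    {run(True):.2f}")   # drifts toward 1.0
```

In this toy setting, a community with no infiltrated bots simply holds its initial mix of opinions, while the infiltrated one drifts toward the position the bots are programmed to push.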

Such social media astroturf tactics have been around for many years, but malicious AI swarms can create believable interactions with targeted human users at scale and get those users to follow the inauthentic accounts. For example, agents can talk about the latest game with a sports fan and about current events with a news junkie, generating language that resonates with the interests and opinions of their targets.

Even if individual claims are debunked, the persistent chorus of independent-sounding voices can make radical ideas seem mainstream and amplify negative feelings toward “others.” Manufactured synthetic consensus is a very real threat to the public sphere: the set of mechanisms democratic societies use to form shared beliefs, make decisions and trust public discourse. If citizens cannot reliably distinguish genuine public opinion from an algorithmically generated simulation of unanimity, democratic decision-making could be severely compromised.

Unfortunately, there is no single fix. Regulation granting researchers access to platform data would be a first step, and understanding how swarms behave collectively is essential to anticipating the risks. Detecting coordinated behavior is a key challenge: unlike simple copy-and-paste bots, malicious swarms produce varied output that resembles normal human interaction, making detection much more difficult.

In our lab, we design methods to detect patterns of coordinated behavior that deviate from normal human interaction. Even if agents look different from each other, their underlying objectives often reveal patterns in timing, network movement and narrative trajectory that are unlikely to occur naturally.
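As a simplified example of one such timing signal (a sketch, not our production detector): accounts that repeatedly post within seconds of one another, across many separate occasions, are unlikely to be independent humans. The window and threshold values below are illustrative assumptions, and a real system would combine timing with content and network features:

```python
# A sketch of ONE coordination signal, not a production detector:
# flag pairs of accounts that repeatedly post within seconds of each
# other. WINDOW_SECONDS and MIN_CO_POSTS are illustrative assumptions.
from collections import defaultdict
import random

WINDOW_SECONDS = 10   # "near-simultaneous" posting window (assumed)
MIN_CO_POSTS = 5      # co-occurrences required before flagging (assumed)

def coordinated_pairs(events):
    """events: list of (timestamp_seconds, account_id) tuples.
    Returns pairs of accounts that co-post suspiciously often."""
    events = sorted(events)
    co_counts = defaultdict(int)
    start = 0
    for end in range(len(events)):  # sliding time window
        while events[end][0] - events[start][0] > WINDOW_SECONDS:
            start += 1
        for k in range(start, end):
            a, b = events[k][1], events[end][1]
            if a != b:
                co_counts[tuple(sorted((a, b)))] += 1
    return {pair for pair, n in co_counts.items() if n >= MIN_CO_POSTS}

# Toy demo: two accounts posting in lockstep plus one independent account.
random.seed(0)
events = []
for burst in range(6):
    t = burst * 3600 + random.randint(0, 60)
    events += [(t, "bot_a"), (t + 2, "bot_b")]               # 2 seconds apart
    events.append((t + random.randint(300, 900), "human_c"))  # far away in time
print(coordinated_pairs(events))   # {('bot_a', 'bot_b')}
```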

Social media platforms could use such methods. I believe AI and social media companies should also more aggressively adopt standards for watermarking AI-generated content and for recognizing and labeling it. Finally, restricting the monetization of inauthentic engagement would reduce the financial incentive for influence operations and other malicious actors to manufacture synthetic consensus.

While these measures might mitigate the systemic risks of malicious AI swarms before they become entrenched in political and social systems worldwide, the current political landscape in the U.S. seems to be moving in the opposite direction. The Trump administration has moved to reduce AI and social media regulation, favoring rapid deployment of AI models over safety.

The threat of malicious AI swarms is no longer theoretical: Our evidence suggests these tactics are already being deployed. I believe that policymakers and technologists should increase the cost, risk and visibility of such manipulation.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Filippo Menczer, Indiana University

Read more:
Grok’s antisemitic rant shows how generative AI can be weaponized

Helper bots in online communities diminish human interaction

AI-generated text is overwhelming institutions – setting off a no-win ‘arms race’ with AI detectors

Filippo Menczer receives funding from Knight Foundation, National Science Foundation, Swiss National Science Foundation, and Air Force Office of Scientific Research.