The era of clumsy troll farms and leaked emails is ending. A new report warns that artificial intelligence is poised to unleash disinformation campaigns of unprecedented scale and sophistication, threatening the foundations of democratic societies. Instead of armies of human posters, a single individual could soon command thousands of AI-controlled social media accounts that manipulate public opinion in real time, without constant human oversight.
The Evolution of Disinformation
In 2016, Russia’s Internet Research Agency (IRA) employed hundreds of people to spread divisive content online. Widely reported as that effort was, its impact was limited compared to more targeted leaks. Now the same goals can be pursued with far greater efficiency and reach: the latest AI tools can generate posts indistinguishable from human writing, adapt dynamically to conversations, and maintain persistent online identities.
The shift is not merely about automation; it’s about autonomous influence operations. These AI “swarms” won’t just post pre-written messages; they will learn, evolve, and refine their tactics based on real-time feedback from social media platforms and human interactions.
The Science Behind the Threat
A new study published in Science by 22 experts in AI, cybersecurity, and social science details this imminent shift. The researchers warn that the technology can now mimic human social dynamics so effectively that it could trigger society-wide shifts in belief, sway elections, and ultimately undermine democratic processes.
“Advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level,” the report states.
Lukasz Olejnik, a senior research fellow at King’s College London, agrees: “This is an extremely challenging environment for a democratic society. We’re in big trouble.”
How AI Swarms Will Operate
The key is memory. Unlike traditional bots, these AI agents can maintain consistent online personas over time. They will coordinate to achieve shared objectives while appearing as individual users, making detection far more difficult.
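To make the idea concrete, here is a minimal sketch, in Python, of how such a persistent persona might be structured. Everything in it (the `PersonaAgent` class, the `generate_text` stand-in for a language-model call) is hypothetical and illustrative, not drawn from the report; the point is simply that feeding an agent its own stored history is enough to give a bot a stable, human-seeming identity.

```python
# Hypothetical sketch of a persona agent with persistent memory.
# None of these names come from the report; generate_text stands in
# for whatever language model a real operation would call.
import json
from dataclasses import dataclass, field
from pathlib import Path


def generate_text(prompt: str) -> str:
    """Stand-in for a language-model call; returns a placeholder so the sketch runs."""
    return f"(placeholder) post generated from {len(prompt)} chars of persona context"


@dataclass
class PersonaAgent:
    handle: str
    backstory: str                                    # fixed biography anchoring the persona
    memory: list[str] = field(default_factory=list)   # everything the persona has already said

    def compose_post(self, topic: str) -> str:
        # Feeding the persona its own recent history is what keeps the
        # account sounding like one consistent person over time.
        prompt = (
            f"You are {self.handle}. Backstory: {self.backstory}. "
            f"Recent posts: {self.memory[-5:]}. "
            f"Write a short post about {topic}."
        )
        post = generate_text(prompt)
        self.memory.append(post)
        return post

    def save(self, path: Path) -> None:
        # Persisting memory between sessions gives the bot a durable identity,
        # unlike a stateless script that forgets everything on restart.
        path.write_text(json.dumps({"handle": self.handle, "memory": self.memory}))
```

A detection system looking for repetitive, stateless bot behavior would find little to flag here: each account accumulates a unique history and never contradicts itself.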
These systems will also exploit the very mechanisms of social media:
- Micro-A/B testing: Running millions of variations of a message to identify the most effective framing.
- Targeted messaging: Tailoring content to the beliefs and cultural cues of specific communities for maximum impact.
- Self-improvement: Using audience responses to refine their tactics in real time (a simplified sketch of this feedback loop follows the list).
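To give a rough sense of how micro-A/B testing and self-improvement might combine in practice, the sketch below treats each message framing as an arm in a simple multi-armed bandit and keeps whichever framing earns the most engagement. It is illustrative only, with simulated engagement rates; nothing in it comes from the report, and a real operation would read feedback from platform metrics rather than a random-number generator.

```python
# Illustrative only: epsilon-greedy selection over message framings,
# the bandit-style loop behind "micro-A/B testing" plus self-improvement.
import random

framings = {
    "fear":    {"shown": 0, "engaged": 0},
    "outrage": {"shown": 0, "engaged": 0},
    "humor":   {"shown": 0, "engaged": 0},
}

def pick_framing(epsilon: float = 0.1) -> str:
    # Mostly exploit the best-performing framing; occasionally explore others.
    if random.random() < epsilon:
        return random.choice(list(framings))
    return max(framings,
               key=lambda f: framings[f]["engaged"] / max(framings[f]["shown"], 1))

def record_feedback(framing: str, engaged: bool) -> None:
    framings[framing]["shown"] += 1
    framings[framing]["engaged"] += int(engaged)

# Simulated audience: each framing has a hidden engagement rate. After enough
# trials the loop converges on whichever framing the audience rewards most.
true_rates = {"fear": 0.08, "outrage": 0.15, "humor": 0.05}
for _ in range(10_000):
    f = pick_framing()
    record_feedback(f, random.random() < true_rates[f])

print(max(framings, key=lambda f: framings[f]["engaged"]))  # likely "outrage"
```

The unsettling part is how little machinery this requires: a few dozen lines of standard optimization logic, pointed at engagement metrics instead of ad clicks.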
“What if AI wasn’t just hallucinating information, but thousands of AI chatbots were working together to give the guise of grassroots support where there was none?” asks Nina Jankowicz, CEO of the American Sunlight Project.
The Lack of Defense
Current detection methods are already failing. Social media platforms have grown increasingly restrictive about sharing data, making it difficult to assess the full extent of the threat. Experts believe the tactic is already being tested and could be deployed at scale by the 2028 U.S. presidential election.
The problem isn’t just technical; it’s structural. Social media companies prioritize engagement over truth, meaning they have little incentive to identify or remove AI swarms. Governments, too, lack the political will to intervene.
The Proposed Solution
The report suggests creating an “AI Influence Observatory” composed of academics and NGOs to standardize evidence, improve situational awareness, and coordinate rapid responses. The authors explicitly exclude social media executives from the body, because their business models incentivize disinformation.
However, even this approach may not be enough: the speed and scale of AI-driven disinformation threaten to outpace any defensive measure.
The rise of AI swarms is not a distant threat; it is an unfolding reality. Unless drastic action is taken, the future of democracy may depend on whether humanity can recognize and resist manipulation at machine speed.