New research exposes how chatbots shape political opinions behind the scenes
A major new study by the Oxford Internet Institute and the AI Security Institute has revealed that conversational artificial intelligence systems can significantly influence people’s political beliefs, raising serious concerns about the future of democratic processes and election integrity.
Published in the journal Science on December 4, 2025, the research, titled "The Levers of Political Persuasion with Conversational AI," was conducted by a multinational team of researchers from the University of Oxford, the London School of Economics, Stanford University and the Massachusetts Institute of Technology. The findings are based on an unprecedented dataset of nearly 77,000 UK participants and over 91,000 AI-driven political conversations.
Researchers found that the persuasive power of AI does not primarily depend on model size or computing power. Instead, fine-tuning, prompting techniques, and how information is presented matter far more. Even relatively small, open-source models could be turned into highly effective political persuaders through targeted training.
A key discovery was that information-dense responses — answers packed with fact-like claims — were the most convincing. However, the study warned of a dangerous trade-off: the more persuasive an AI system became, the more likely it was to sacrifice accuracy, increasing the risk of mass-scale misinformation.
Lead author Kobi Hackenburg said the findings show that “very small, widely available models can be as persuasive as much larger, closed systems,” making large-scale manipulation more accessible to political actors and interest groups. Co-author Professor Helen Margetts of the Oxford Internet Institute warned that without safeguards, these tools could quietly reshape public opinion in ways that are difficult to detect or regulate.
The research also found that interactive, conversational AI is significantly more persuasive than traditional one-way political messaging such as advertisements or social media posts. This marks a shift in the nature of influence operations, from mass broadcasting to personalised, dialogue-based persuasion.
Broader Risks for Elections and Public Trust
Experts say the findings serve as a warning for governments and electoral bodies worldwide. As countries prepare for major elections in the United States, Europe, and developing democracies, AI-powered influence tools could be used to micro-target voters, amplify polarisation, and weaken trust in institutions. The increasingly realistic and adaptive nature of conversational systems makes their influence harder to identify than traditional propaganda.
Civil society groups have also warned that such technology could be weaponised by foreign state actors, extremist movements, and disinformation networks to destabilise societies from within.
Calls for Urgent Regulation
The study adds momentum to global calls for stronger AI governance. Researchers urged governments to introduce:
• Clear rules on the political use of AI
• Transparency requirements for AI-generated political content
• Independent auditing of persuasive AI systems
• International coordination on AI election safeguards
Funding for the research was provided by the UK Government's Department for Science, Innovation and Technology, with technical support from the Isambard-AI National AI Research Resource.
The authors concluded that while conversational AI has legitimate uses in education and public engagement, its unchecked persuasive power poses a direct challenge to democratic resilience if not urgently regulated. (ILKHA)