AI chatbots can shape political opinions more effectively than many realize, a new study suggests. The research, published in Science, indicates that conversational AI systems are capable of persuading people on political topics—especially when the bots provide extensive, in-depth information rather than simply presenting moral arguments or tailored appeals.
In the experiment, about 77,000 adult participants from the United Kingdom interacted with AI chatbots powered by models from OpenAI, Meta, and xAI. Participants were asked for their views on topics such as taxes and immigration, then paired with a chatbot tasked with shifting them toward the opposite position, regardless of their initial ideology. The chatbots frequently succeeded in swaying opinions, and certain persuasion techniques proved more effective than others.
Lead author Kobi Hackenburg of Oxford highlights the study’s key takeaway: conversational AI can wield remarkable persuasive power on political issues. The work adds to a growing body of research on how AI could shape politics and democracy, especially as political actors explore ways to use these systems to influence public opinion.
A notable result is that AI chatbots were most persuasive when they supplied large amounts of in-depth information, rather than relying on strategies like appeals to morality or highly personalized arguments. This suggests that an AI's capacity to generate substantial information on demand could give it an edge over human debaters in some contexts, though the researchers stopped short of direct head-to-head comparisons with human experts.
However, the study also raises concerns: the most convincing AI outputs often contained inaccuracies. In fact, the researchers found that the most persuasive models and prompting strategies tended to produce less accurate information, and the latest, largest frontier models showed a decline in factual correctness compared with some smaller, older models from the same providers.
About 19% of the claims made by the AI chatbots in the study were rated as predominantly inaccurate. The authors warn that optimizing for persuasiveness may come at the expense of truthfulness, a combination that could degrade public discourse and the broader information ecosystem. OpenAI was contacted for comment; the study predates the release of its newer models.
The paper also contemplates worst-case scenarios: a highly persuasive AI could be misused by unscrupulous actors to promote radical ideologies or fuel political instability in adversarial regions. The research team, which includes researchers from the UK's AI Security Institute, Oxford, the London School of Economics, Stanford, and MIT, and which was funded by the UK Department for Science, Innovation and Technology, emphasizes the need to understand how large language models might affect democratic processes.
Experts not involved in the study see value in the findings. Shelby Grossman of Arizona State University notes that as AI models improve, so does their persuasive power, which opens the door both to beneficial uses and to manipulation when transparency is lacking. David Broockman of UC Berkeley suggests that while AI can be persuasive, the effect may not be overwhelming, and a balanced, contested information landscape could help counteract extremes. He frames the result as a reminder that compelling, richly detailed arguments, when readily available, will influence opinions on both sides of an issue.
Contextualizing these results, other research has produced mixed findings about AI's persuasiveness. Earlier studies with smaller participant groups reported less pronounced effects, while other work indicates that AI-assisted messaging can be potent with relatively little effort. The practical implications remain an open question, especially given that real-world political campaigns vary in structure, transparency, and audience.
Overall, the study contributes an important perspective on how conversational AI interacts with political belief formation and public discourse, highlighting both the potential benefits of informative AI tools and the risks posed by inaccurate, persuasive content.