How AI Is Quietly Rewriting the Rules of Political Power
The Silent Revolution of Persuasion
Political influence is no longer just a matter of who speaks loudest, but of who programs best.
Generative AI now enables political campaigns to produce compelling emails, ads, and even “authentic-sounding” deepfake videos with astonishing scale and speed. What used to take a media team and millions of dollars can now be done by a fine-tuned language model with a smart prompt.
In 2025, we’ve crossed a threshold: AI isn’t just assisting political messaging—it’s actively shaping political outcomes. This article explores how that transformation is unfolding across five domains: campaign strategy, voting behavior, misinformation, deepfakes, and trust in public institutions.

AI-Driven Campaigns: From Targeting to Tailoring
Large language models (LLMs) now allow campaigns to run micro-targeted persuasion experiments at scale.
Recent research (2025, arXiv) shows that AI-generated policy messages can outperform human-written ones in terms of persuasiveness—particularly when tuned post-hoc using reinforcement learning. But there’s a catch: models fine-tuned for persuasion tend to generate less factual content (arxiv.org).
A Stanford-led experiment also found that LLMs marginally increase persuasion when microtargeting is aligned with the recipient’s prior beliefs—but the gains are modest (arxiv.org). Still, in swing states or close races, even fractional shifts can tilt outcomes.
Bottom line: AI doesn’t need to swing every voter—it just needs to flip the right ones, faster than you can fact-check.
Voting Behavior: Small Effects, Big Consequences
A growing body of behavioral data suggests that LLMs can subtly influence voting behavior—particularly in undecided voters or low-information populations.
In the U.S. 2024 primaries, experiments showed that exposure to AI-personalized policy messaging slightly increased the likelihood of voting for the intended candidate. However, the most effective messages were not the most accurate—they were the most emotionally resonant (arxiv.org).
That raises uncomfortable questions about ethical boundaries: Should persuasion trump accuracy? Who decides?
Misinformation and Deepfakes: The Liar’s Dividend in Action
Deepfake technology has moved from viral entertainment into weaponized influence.
In 2024, an AI-generated robocall mimicking Joe Biden discouraged voters in New Hampshire from casting ballots—a move that led to federal criminal charges (voanews.com). It was a proof of concept for AI-powered voter suppression.
The broader concern isn’t just deception—it’s doubt. As deepfakes grow more convincing, citizens begin to question even real footage, creating what experts call the “liar’s dividend” (brennancenter.org).
This erosion of epistemic confidence undermines the entire democratic process. If nothing can be trusted, then everything is suspect—including valid election results.
Public Trust: Collateral Damage
Public confidence in democratic institutions is already under pressure, and generative AI is accelerating the decay.
A U.S.-based study (PMC, 2024) found that exposure to deepfake videos depicting infrastructure failures decreased participants’ trust in government, particularly among individuals with lower education levels (pmc.ncbi.nlm.nih.gov).
Another report by HKS noted that deepfake exposure in low-trust environments—like the Slovak 2023 elections—has an outsized effect. Context matters: the less trust citizens have in the media or political systems, the more susceptible they are to AI manipulation (misinforeview.hks.harvard.edu).
Legislation: States Are Moving, Slowly
Governments are waking up.
As of July 2025, 25 U.S. states have passed laws targeting AI-generated deepfakes in elections (citizen.org). Most require disclosure of synthetic media in campaign ads, while others impose criminal penalties for voter deception using AI tools.
The U.S. Senate held a high-profile hearing (S.Hrg. 118-573) on “AI & Election Deepfakes,” focusing on proactive regulation rather than post-election panic (congress.gov).
But progress is uneven. Europe’s AI Act contains some provisions, but enforcement lags. In most jurisdictions, the laws are reactive—playing catch-up with the tech.
Conclusion: We’re Not Ready
Democracy is vulnerable not just to what AI can do—but to what it makes people believe is possible.
Generative AI is not (yet) tearing down electoral systems. But it is quietly eroding the assumptions that hold them together: trust in media, shared reality, and the idea that persuasion should be honest.
If we wait until a synthetic scandal tips an election, it will be too late.
The time to regulate, educate, and inoculate is now.