AI has had little influence on voters so far – with one notable exception

But the expected chaos has not materialized. Instead of deepfakes of political candidates fooling voters and creating fact-checking nightmares, AI has been used by supporters primarily to generate obvious meme art.

In fact, AI’s biggest impact this year may simply have been prompting Taylor Swift to publicly endorse Democratic presidential candidate Kamala Harris.

In an Instagram post announcing her endorsement on Tuesday, the megastar said her decision was influenced in part by an AI image Trump had posted, which showed her wearing a ridiculous oversized American flag hat alongside the phrase “Taylor wants you to vote for Donald Trump.”

“It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift wrote in her post. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”

And AI-skeptic Swift is in good company – experts and media are raising the alarm that AI could trigger a “technology-enabled Armageddon,” that we have only seen the “tip of the iceberg,” and that “deepfakes could turn global elections on their head.”

But while there have been some attempts to use artificial intelligence to influence voters – such as a fake robocall in New Hampshire imitating Joe Biden’s voice or a deepfaked campaign video of Kamala Harris – they don’t seem to have fooled many people.

Many AI creations have appeared as fairly obvious memes and satirical videos shared on social media, and fact-checkers – including crowdsourced efforts like X’s Community Notes – were quick to debunk any AI content that was even remotely convincing.

Even the more malicious attempts by foreign actors to use AI to spread disinformation appear to have been less effective than feared.

For example, Meta wrote in its recent Adversarial Threat Report that while Russian, Chinese and Iranian disinformation campaigns used AI, their “GenAI-powered tactics” had “led to only incremental productivity and content generation gains.”

And Microsoft, in its latest Threat Intelligence Report from August, likewise cast doubt on the idea that AI has made foreign influence campaigns more effective.

Microsoft writes that when it identified Russian and Chinese influence operations, it found that both “have used generative AI, but with limited to no effect.” It adds that another Russian operation, which the company first reported on in April, “has repeatedly used generative AI in its campaigns, but with little effect.”

“Overall,” Microsoft continued in its report, “we have observed that almost all actors are attempting to incorporate AI content into their activities, but recently many actors have resorted to techniques that have proven effective in the past – simple digital manipulation, misrepresentation of content, and the use of trusted labels or logos over false information.”

And this is not just true of the US; recent elections around the world show little sign of significant AI influence.

The Australian Strategic Policy Institute, which analyzed cases of AI-generated disinformation around the UK general election in July, found in a recent report that voters never faced the feared “tsunami of AI fakes targeting political candidates.”

“In the UK, there were only a handful of examples of such content going viral during the campaign period,” said ASPI researcher Sam Stockwell.

However, he added that while there is no evidence these examples influenced large numbers of votes, there were “spikes in online harassment of those affected by the fakes” as well as “confusion among viewers about whether the content was authentic.”

In a study published in May, the UK’s Alan Turing Institute found that only 19 of the 112 national elections held or scheduled since the beginning of 2023 showed signs of AI interference.

“Examples of AI abuse in elections are few and far between, and these are often amplified by the mainstream media,” the paper’s authors write. “This risks reinforcing public fears and increasing the perceived threat of AI to electoral processes.”

But while the researchers found that the “current influence of AI on certain election outcomes is limited,” these uses nevertheless show “signs of damage to the broader democratic system.”