Protecting elections in the age of generative AI

Dr. Chris Tenove, assistant director at the Centre for the Study of Democratic Institutions, discusses the risks and potential positive uses of generative AI in elections.

The impact of generative AI tools such as ChatGPT on democratic elections has become a pressing concern.

A new report from the Centre for the Study of Democratic Institutions (CSDI) at UBC’s School of Public Policy and Global Affairs investigates the growing threat that emerging, insidious tactics such as deepfake robocalls and AI-generated harassment campaigns pose to democracy, while also calling for a balanced response.

We spoke with co-lead author Dr. Chris Tenove, CSDI’s assistant director, about the risks and potential positive uses of generative AI in elections.

What motivated this report?

Our centre has studied digital threats to the quality of elections for several years. It’s increasingly clear that generative AI will have a major impact on political discourse and on election-related communications. In the lead-up to the many elections held around the world in 2024, some were already billing this as the “year of deepfake elections.” Policymakers around the world are trying to figure out what to do, so we thought this was an important issue to tackle.

Generative AI isn’t all bad, but you found that the bad currently outweighs the good when it comes to elections. Why is that?

Using new technology to improve elections is hard, because good political participation and fair electoral competition are hard to promote. For instance, it’s harder to facilitate meaningful engagement among citizens than it is to create toxic content that you can pump out online.

We hope that as generative AI systems develop, they will become better at facilitating some of the most important aspects of democratic elections. But looking at what has happened in the last year or so, there are many examples of harmful uses and relatively few positive ones.

Can you provide some examples of harmful uses?

One of the most obvious examples was the cloning of Joe Biden’s voice before the New Hampshire primary. An AI-generated version of his voice was distributed through robocalls to thousands of people, encouraging them not to vote for him in the upcoming primary but to wait for the election in November.

Another widespread and concerning use of AI is to harass candidates, particularly women, by creating sexually explicit deepfake videos and images of them. In the lead-up to the recent U.K. election, an investigation found a website hosting hundreds of synthetic images of women candidates. But this is a global problem.

Why does your report advise against overreacting to such risks?

Research tells us that if people believe there is widespread disinformation or use of deepfakes, they are more likely to believe the entire information system, including news media, is untrustworthy — or maybe even that democratic institutions and election outcomes themselves are untrustworthy.

It’s good for people to be critical and skeptical, but not to feel disempowered or cynical. People can and do find accurate information. Elections can be conducted with integrity. We need to be vigilant in addressing some of the risks that generative AI poses, but not fatalistic.

How do we address these risks?

Big AI companies certainly have a role to play, but other sectors also have important roles. Social media platforms, and even telecoms, need to help keep people from being deceived by synthetic or deepfake content. Information providers like journalists and election monitoring bodies such as Elections Canada need to get good information out there, including corrections to falsehoods.

The other thing we highlight is accountability for individuals, political parties, and companies that are being deceptive or contributing to harassment. In New Hampshire, we quickly learned who was responsible for the Biden voice-cloning, and charges were laid and fines imposed. But in Slovakia last year, a misleading audio deepfake of a party leader circulated on the eve of the election, and it’s still not known who did it. No one has been held to account. And unfortunately, globally, I think that latter case is more the rule.

We don’t think generative AI is going to destroy democracy or anything like that. But we do think governments and citizens in Canada should be ready for its use and misuse in elections.