Though AI has made it easier for Excel monkeys to crunch numbers and for programmers to generate and revise code with unprecedented speed and scale, it has also made it much easier for bad actors to influence politics and elections.
Videos of notable politicians, from Justin Trudeau to Ron DeSantis, have surfaced online: the former shows what seems to be Canada’s Prime Minister dissing himself, while the latter shows DeSantis prematurely announcing he was dropping out of the Republican presidential race.
Deepfakes, or AI-altered videos, run rampant in short-form media. Initially a growing challenge in the realm of cyberbullying and identity fraud, deepfakes are now becoming more prevalent in politics.
Seeing a video that’s blatantly false is one thing: it would take a certain suspension of disbelief to believe Justin Trudeau would ever say he “stole freedom.”
But it’s another matter entirely for someone, on election day, to stumble across an Instagram reel or TikTok video of what seems to be a notable public figure announcing that, due to unforeseen circumstances, voting stations have been moved to another location.
Multiply this hypothetical yet entirely feasible scenario across tens, if not hundreds of thousands of Canadians, and the problem becomes crystal clear.
While one deepfake is a nuisance, a horde of “spamfakes” can have disastrous implications for informed citizenship and electoral proceedings, should a bad actor wish to deploy such mass content online.
It’s the multi-partisan responsibility of the Canadian government to tackle the growing need for public education on misinformation in a digital age and to actively enact policies to mitigate the propagation of malicious AI content.
The spread of misinformation on social media isn’t solely the fault of bad actors and foreign interference; our governing bodies also bear responsibility. Regulation and censorship aren’t the right solution. Equipping Canadians with the digital skills and skepticism necessary to navigate democracy on social media is.
An example of policy direction from our neighbours down south can be found in an Executive Order from President Joe Biden, directing the Department of Commerce to develop guidance for content authentication and the watermarking of AI-generated content.
Canadian Conservative MP Michelle Rempel Garner recently stated that our government isn’t moving fast enough, and that a watermarking initiative was something that could be implemented quite quickly.
In tandem with policy action, the government must deploy educational resources that address online misinformation. Some materials already exist through Elections Canada, but they’re limited in scope and won’t reach the populations most consequential to an election’s outcome.
Actions of this sort are already being taken in the United States, with initiatives planned to generate action-oriented ideas in both the public and private sector.
The reality is that Canada is severely behind in addressing this urgent issue. With a federal election to be held anytime between now and 2025, the time to act is now.
All final editorial decisions are made by the Editor(s)-in-Chief and/or the Managing Editor. Authors should not be contacted, targeted, or harassed under any circumstances. If you have any grievances with this article, please direct your comments to journal_editors@ams.queensu.ca.