Deepfakes in the 2025 Canadian Election: Prevalence, Partisanship, and Platform Dynamics
Concerns about AI-generated political content are growing, yet there is limited empirical evidence on how deepfakes actually appear and circulate across social platforms during major events in democratic countries. In this study, we present one of the first in-depth analyses of how these realistic synthetic media shape the political landscape online, focusing specifically on the 2025 Canadian federal election. By analyzing 187,778 posts from X, Bluesky, and Reddit with a high-accuracy detection framework trained on a diverse set of modern generative models, we find that 5.86% of election-related images were deepfakes. Right-leaning accounts shared them more frequently, with 8.66% of their posted images flagged compared to 4.42% for left-leaning users, often with defamatory or conspiratorial intent. Yet, most detected deepfakes were benign or non-political, and harmful ones drew little attention, accounting for only 0.12% of all views on X. Overall, deepfakes were present in the election conversation but their reach was modest; realistic fabricated images, although less common, drew higher engagement, underscoring growing concerns about their potential misuse.
💡 Research Summary
This paper presents a comprehensive empirical analysis of the prevalence, partisanship, and platform dynamics of AI-generated images, or “deepfakes,” during the 2025 Canadian federal election. Addressing the lack of real-world data on how synthetic media circulates in democratic events, the study analyzes 187,778 image-containing posts collected from X, Bluesky, and Reddit over the critical week surrounding the election (April 24-29).
The methodology employed a multi-layered approach. First, a high-precision deepfake detection model was used, built on a ConvNeXt-V2-Base architecture and trained on a diverse dataset (OpenFake) encompassing outputs from state-of-the-art generators like SDXL, DALL-E, and Flux. This detector was optimized for real-world political contexts, achieving an F1-score of 0.852 on a held-out evaluation set. Second, for each detected deepfake, a vision-language model (Qwen3-VL) was prompted to classify the communicative intent of the post into one of seven categories: Defamatory, Conspiratorial, Propaganda, Fabricated (realistic mimics), Benign, Hate, or Non-political. Third, the political leaning of each post’s author was inferred by analyzing their five most recent tweets with a large language model (Llama 3.3 70B), categorizing them as left-leaning, right-leaning, or unknown.
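The pipeline can be pictured as three independent labeling passes over each image-containing post: a deepfake detector, a vision-language intent classifier, and an author-leaning classifier. The sketch below illustrates that flow in Python; the checkpoint name, prompt wording, label mapping, and the `classify_with_llm` stub are assumptions made for illustration, not the authors' released code.

```python
# Minimal sketch of the three-stage labeling pipeline described above.
# The checkpoint name, prompts, and inference stub are placeholders, not the paper's artifacts.

from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

# --- Stage 1: deepfake detection with a ConvNeXt-V2 image classifier ------------------
# "org/convnextv2-base-openfake" is a hypothetical identifier for a checkpoint
# fine-tuned on the OpenFake dataset; substitute the real one if released.
DETECTOR_ID = "org/convnextv2-base-openfake"
processor = AutoImageProcessor.from_pretrained(DETECTOR_ID)
detector = AutoModelForImageClassification.from_pretrained(DETECTOR_ID)

def is_deepfake(image_path: str, threshold: float = 0.5) -> bool:
    """Return True if the detector scores the image as synthetic above `threshold`."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = detector(**inputs).logits.softmax(dim=-1)[0]
    # The label-to-index mapping is checkpoint-specific; "fake" at index 1 is an assumption.
    fake_idx = detector.config.label2id.get("fake", 1)
    return probs[fake_idx].item() >= threshold

# --- Stage 2: intent classification of detected deepfakes with a VLM (Qwen3-VL) -------
INTENT_LABELS = [
    "Defamatory", "Conspiratorial", "Propaganda",
    "Fabricated", "Benign", "Hate", "Non-political",
]
INTENT_PROMPT = (
    "Given this election-related post and its attached AI-generated image, classify the "
    f"communicative intent as exactly one of: {', '.join(INTENT_LABELS)}. "
    "Answer with the label only."
)

# --- Stage 3: author political-leaning inference with an LLM (Llama 3.3 70B) ----------
LEANING_PROMPT = (
    "Based on the following five recent posts by one account, classify the author as "
    "'left-leaning', 'right-leaning', or 'unknown'. Answer with the label only.\n\n{posts}"
)

def classify_with_llm(prompt: str, image_paths: list[str] | None = None) -> str:
    """Placeholder for a call to the VLM (intent) or LLM (leaning).
    Any chat-completion backend (local server or hosted API) could slot in here."""
    raise NotImplementedError("wire up your preferred inference backend")
```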
The key findings are multifaceted. Overall, 5.86% of all election-related images across platforms were detected as deepfakes, with prevalence varying significantly: X had the highest rate at 7.9%, followed by Reddit (3.3%) and Bluesky (2.2%). A clear partisan divide emerged: right-leaning accounts shared deepfakes more frequently, with 8.66% of their posted images flagged as synthetic compared to 4.42% for left-leaning accounts. Furthermore, the intent of deepfakes shared by right-leaning users skewed more towards defamatory and conspiratorial content, whereas left-leaning users shared a higher proportion of non-political deepfakes.
Despite their presence, the reach and impact of deepfakes were found to be modest. Analyzing view counts on X six months after posting revealed that deepfake posts consistently received fewer views than non-deepfake posts across all political leanings. Deepfakes accounted for only 0.52% of all views on X during the period, and politically harmful deepfakes (defamatory, conspiratorial, hate) constituted a mere 0.12% of total views. Benign or non-political content accounted for the largest share of detected deepfakes (together 49.41% of cases). The spread was primarily driven by accounts with smaller follower counts, indicating diffusion within smaller, fringe communities rather than amplification by major influencers.
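To make the view-share accounting concrete, the short pandas sketch below shows how such percentages can be computed from a per-post table; the column names and toy rows are assumptions for illustration, not the study's actual schema or data.

```python
# Hedged sketch of the engagement accounting: share of all X views captured by
# deepfake posts overall and by politically harmful deepfakes specifically.
import pandas as pd

# Illustrative rows only; in the study each row would be one X post with its view count,
# detector verdict, and (for deepfakes) the VLM-assigned intent label.
posts = pd.DataFrame({
    "views":       [1200, 300, 45, 9000, 150],
    "is_deepfake": [False, True, True, False, True],
    "intent":      [None, "Benign", "Defamatory", None, "Fabricated"],
})

total_views = posts["views"].sum()

# Share of all views captured by deepfake posts (reported as 0.52% in the study).
deepfake_share = posts.loc[posts["is_deepfake"], "views"].sum() / total_views

# Share captured by politically harmful deepfakes only (reported as 0.12%).
harmful = {"Defamatory", "Conspiratorial", "Hate"}
harmful_share = posts.loc[posts["intent"].isin(harmful), "views"].sum() / total_views

print(f"deepfake view share: {deepfake_share:.2%}")
print(f"harmful deepfake view share: {harmful_share:.2%}")
```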
An important exception and cause for concern was the “Fabricated” intent category. Although less common, these highly realistic synthetic images that mimic news or evidentiary content received higher median view counts than other political categories (except benign/non-political). This suggests that while the volume of impactful deepfakes is currently low, the potential for harm is disproportionately high for credible, targeted fabrications that could be mistaken for real evidence in tense political moments.
The study concludes that while deepfakes are a nascent but established part of the online political ecosystem, their direct influence on large-scale attention during the 2025 Canadian election was limited. The greater threat lies not in a flood of low-quality fakes, but in the strategic deployment of few, highly plausible synthetic images that can exploit credibility gaps. The authors advocate for the development of robust, publicly accessible detection tools to help preserve trust in democratic information as synthetic media technology advances.