AAU Student Projects
A master's thesis from Aalborg University


Maskinlæring som demokratisk værn - Cybersikkerhed, deepfakes og politisk legitimitet i europæiske valg

Translated title

Machine Learning as a Democratic Safeguard - Cybersecurity, Deepfakes, and Political Legitimacy in European Elections

Semester

4th semester

Year of publication

2002

Submitted

Number of pages

52

Abstract

Democratic elections constitute a foundational element of contemporary European societies and serve as the primary mechanism through which political legitimacy is established and reproduced. As political processes and public communication have become increasingly digitalised, electoral processes have simultaneously become more vulnerable to a wide range of digital threats. These threats include both traditional cyberattacks targeting election-related infrastructure and more complex forms of information manipulation, such as coordinated disinformation campaigns and the use of AI-generated media, commonly referred to as deepfakes. This development challenges not only the technical security of elections, but also raises fundamental questions concerning political legitimacy, citizens’ trust, and the democratic space in Europe. This thesis examines how machine learning is applied in practice by European election authorities and related actors to protect electoral processes from digital threats, and how the use of such technologies affects democratic legitimacy and public trust. The analysis adopts an interdisciplinary approach that integrates political theory, public administration, qualitative case study analysis, and technical insights from machine learning research. Empirically, the thesis is based on a comparative analysis of two European cases: the use of AI-generated deepfakes and disinformation during the Slovak parliamentary election in 2023, and cyberattacks targeting digital election-related infrastructure during the 2024 European Parliament elections. The two cases represent distinct but analytically comparable threat domains—information-based and infrastructural—and allow for an examination of how machine learning is deployed under different institutional and governance conditions. 
The analysis demonstrates that machine learning already plays a significant role in safeguarding electoral processes, but that its application is uneven and highly context-dependent. In the infrastructural domain, machine learning-based cybersecurity solutions appear relatively mature and institutionally embedded, contributing to high output legitimacy by ensuring the stability and continuity of election administration under cyberattacks. In contrast, the use of machine learning to counter deepfakes and election-related disinformation remains far less institutionalised, resulting in fragmented governance structures and challenges to both input and throughput legitimacy.

The thesis further includes a technical demonstration involving the construction of a convolutional neural network for deepfake detection. Rather than aiming to optimise model performance, this demonstration is used reflexively to illustrate how machine learning models operate, as well as their limitations, including issues of bias, generalisation, probabilistic decision-making, and limited explainability. These technical characteristics have direct democratic implications, as they affect transparency, accountability, and the ability of citizens and decision-makers to understand and contest algorithmically supported decisions.

Overall, the thesis concludes that while machine learning can contribute substantially to protecting elections against digital threats, it does not constitute a neutral or sufficient solution in itself. Legitimate and effective use of machine learning in electoral contexts requires clear institutional frameworks, transparency, and meaningful human oversight. By analysing machine learning as a form of governance technology rather than merely a technical tool, this thesis contributes to existing research on AI, cybersecurity, and democracy, and highlights the normative trade-offs involved in deploying AI to safeguard democratic processes in Europe.
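The kind of convolutional neural network described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration, not the thesis's actual model: the input resolution (64×64 RGB), layer widths, and class names are assumptions made here for demonstration purposes. The sigmoid output shows the probabilistic decision-making the abstract highlights as democratically significant, since the model returns a confidence score rather than a verdict.

```python
# Illustrative sketch only: a small CNN for binary real-vs-fake image
# classification. Architecture and input size are assumptions, not the
# thesis's actual configuration.
import torch
import torch.nn as nn

class DeepfakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # one logit; sigmoid gives P(image is fake)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = DeepfakeCNN()
batch = torch.randn(4, 3, 64, 64)    # four dummy 64x64 RGB "images"
probs = torch.sigmoid(model(batch))  # probabilities, not binary verdicts
print(probs.shape)                   # torch.Size([4, 1])
```

Note that the model only ever emits a probability between 0 and 1; a human or policy layer must still decide where to set the decision threshold, which is one concrete place where the abstract's call for meaningful human oversight applies.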