A master's thesis from Aalborg University


Detecting Synthetic Media: Generative AI and its Impact on Cybersecurity

Author

Term

4th semester

Publication year

2026

Submitted on

Abstract


This thesis examines whether deep learning can serve as a reliable defense against AI‑generated synthetic media (deepfakes) and how such tools should be integrated into broader cybersecurity practice. Three architectures—ResNet50, EfficientNet‑B0, and Vision Transformer—were trained on a balanced dataset of real faces from FFHQ and synthetic faces from StyleGAN. Performance was evaluated on in‑distribution tests, out‑of‑distribution samples from unseen generators, a human‑versus‑model trial (“Deepfake Game”), and a demographic bias audit. ResNet50 delivered the strongest results with 94.85% accuracy and 95.31% recall on the test set. In human visual verification, participants achieved 57.00% accuracy, whereas the best model reached 90.32%. At the same time, key weaknesses emerged: accuracy dropped to 62.95% on unseen generation methods, and the bias audit revealed concerning disparities, particularly for EfficientNet‑B0 with a 17% accuracy gap between majority and minority groups. The thesis concludes that AI detection can significantly aid security teams but should be deployed within a Human‑in‑the‑Loop workflow to manage brittleness and bias.
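The headline metrics above (accuracy, recall, and the majority/minority accuracy gap from the bias audit) can be sketched as follows. This is a minimal illustration of how such figures are computed; the toy labels, predictions, and group assignments are invented for demonstration and are not the thesis's data:

```python
# Sketch of the evaluation metrics reported in the abstract.
# Convention assumed here: 1 = synthetic face, 0 = real face.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Fraction of true positives (synthetic faces) correctly flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

def accuracy_gap(y_true, y_pred, groups, majority, minority):
    """Accuracy difference between a majority and a minority subgroup,
    as audited for demographic bias."""
    def group_acc(g):
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, groups) if gr == g]
        return accuracy([t for t, _ in pairs], [p for _, p in pairs])
    return group_acc(majority) - group_acc(minority)

# Illustrative toy data (hypothetical, not from the thesis)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "B", "B", "A", "A", "B", "B"]

print(accuracy(y_true, y_pred))                        # 0.75
print(recall(y_true, y_pred))                          # 0.75
print(accuracy_gap(y_true, y_pred, groups, "A", "B"))  # 0.5
```

In practice the thesis's numbers (e.g. 94.85% accuracy, 95.31% recall, a 17% subgroup gap) would come from applying these same definitions to the full test sets rather than to a toy sample.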

[This abstract was generated with the help of AI]