A master's thesis from Aalborg University


Understanding Privacy Threats in Machine Learning as a Service (MLaaS)

Author

Term

4th semester

Publication year

2025

Abstract

This thesis examines privacy risks in Machine Learning as a Service (MLaaS) through the lens of membership inference attacks (MIAs), where an adversary seeks to determine whether a specific record was used to train a model. It asks how differential privacy and regularization influence model generalization and privacy leakage, and how these effects vary with dataset type and training setup. The study evaluates three practical black-box attack settings—score-based, one-shadow-model, and label-only transfer attacks—on both natural image datasets (CIFAR-10, CIFAR-100) and medical image datasets (OCTMNIST, RetinaMNIST, PathMNIST). Models are trained under four regimes: non-private baselines, regularized training, differentially private training with DP-SGD, and DP-SGD combined with regularization, with transfer learning considered for more complex tasks. Across experiments, overfitting is observed to amplify privacy leakage, while differential privacy and regularization reduce it by promoting stability and better generalization. Transfer learning improves robustness on complex datasets, and on smaller medical datasets, the noise introduced by DP can even improve utility. Overall, the results show that privacy and utility in MLaaS are jointly shaped by dataset complexity, model design, and training configuration, underscoring the need for balanced training strategies to deploy models securely and reliably.
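To make the score-based attack setting concrete, the following is a minimal sketch in Python, not the thesis's exact attack pipeline: it assumes black-box access to the target model's raw softmax scores and uses an illustrative threshold tau. The idea is that overfit models are systematically more confident on their own training records, so high-confidence queries are flagged as likely members.

```python
# Minimal score-based membership inference sketch (illustrative assumptions:
# predict-proba style access to softmax scores, hand-picked threshold tau).
import numpy as np

def score_based_mia(confidences: np.ndarray, tau: float = 0.9) -> np.ndarray:
    """Guess membership from the model's top softmax confidence.

    Overfit models tend to assign higher confidence to training (member)
    records, so samples whose maximum class probability exceeds tau are
    flagged as members.
    """
    top_conf = confidences.max(axis=1)  # highest class probability per record
    return top_conf >= tau              # True -> predicted training member

# Toy usage: 3 records scored by a 4-class model; only the very confident
# first record is flagged as a likely training-set member.
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],
    [0.40, 0.30, 0.20, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])
print(score_based_mia(probs))  # [ True False False]
```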
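On the defense side, DP-SGD clips each example's gradient and adds calibrated Gaussian noise before every optimizer step. Below is a minimal training sketch using PyTorch with the Opacus library; this is an assumption for illustration only, as the abstract does not name the thesis's DP implementation, and the toy model, data, and hyperparameters (noise_multiplier, max_grad_norm) are placeholders rather than the experimental configuration.

```python
# Minimal DP-SGD training sketch with Opacus (illustrative; not the thesis's
# actual models, datasets, or privacy parameters).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy data standing in for a small image dataset.
X = torch.randn(256, 3 * 28 * 28)
y = torch.randint(0, 5, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=32)

model = nn.Sequential(nn.Linear(3 * 28 * 28, 64), nn.ReLU(), nn.Linear(64, 5))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# PrivacyEngine wraps the model, optimizer, and loader so each step clips
# per-sample gradients to max_grad_norm and adds Gaussian noise scaled by
# noise_multiplier.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,  # more noise -> stronger privacy, lower utility
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

for epoch in range(3):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

# Report the privacy budget spent so far (epsilon at a fixed delta).
print(f"epsilon = {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```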


[This abstract has been generated with the help of AI directly from the project's full text]