Testing of AI Models for Air-Interface Applications
Author
Ivanovic, Filip
Term
4th term
Education
Publication year
2023
Submitted on
2023-05-30
Pages
123
Abstract
Artificial intelligence (AI) is rapidly entering wireless communications, especially the air interface—the wireless link that carries signals between devices and networks. As AI is adopted, there is a parallel need to test it to understand its behavior and performance. This thesis, conducted in collaboration with Keysight Technologies and using their xpl[AI]ned framework, examines the usability and relevance of several testing methods for AI models designed for the air interface domain. The methods were researched and applied to two AI models chosen to be representative of potential real-world implementations. Two main methods were evaluated: Monte Carlo dropout and the Fast Gradient Sign Method (FGSM) for adversarial robustness testing. Monte Carlo dropout—running the model multiple times with dropout to approximate uncertainty—was useful for assessing how efficiently the models were constructed, but its uncertainty estimates were not useful in this context. FGSM—which adds small, targeted perturbations to inputs to expose vulnerabilities—was extremely useful for showing whether a model is susceptible to such perturbations and for generating adversarial examples that help further analyze model characteristics. Additional methods and test paths were also explored for further insight. Overall, the results strongly support the xpl[AI]ned concept and clarify which testing approaches are most relevant for air-interface AI models.
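The two testing methods described above can be sketched on a toy model. The snippet below is an illustrative stand-in only — a hand-coded logistic classifier with fixed, made-up weights, not one of the thesis's air-interface models — showing (a) Monte Carlo dropout as repeated stochastic forward passes whose spread approximates uncertainty, and (b) FGSM as a one-step perturbation of the input in the direction of the sign of the loss gradient:

```python
import math
import random

# Hypothetical toy model: a logistic classifier p(y=1|x) = sigmoid(W.x + B).
# The weights are arbitrary illustrative values, not from the thesis models.
W = [0.8, -0.5, 1.2]
B = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, dropout_p=0.0, rng=None):
    """Forward pass; with dropout_p > 0, input features are randomly
    dropped (and the survivors rescaled), as in MC dropout."""
    rng = rng or random
    z = B
    for wi, xi in zip(W, x):
        if dropout_p and rng.random() < dropout_p:
            continue  # this feature is dropped for this pass
        z += wi * xi / (1.0 - dropout_p) if dropout_p else wi * xi
    return sigmoid(z)

def mc_dropout(x, n=200, p=0.3, seed=0):
    """Monte Carlo dropout: n stochastic forward passes; the mean is the
    prediction, the standard deviation approximates model uncertainty."""
    rng = random.Random(seed)
    preds = [predict(x, dropout_p=p, rng=rng) for _ in range(n)]
    mean = sum(preds) / n
    var = sum((q - mean) ** 2 for q in preds) / n
    return mean, math.sqrt(var)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: perturb x by eps in the sign direction
    of the input gradient of the binary cross-entropy loss."""
    p = predict(x)
    # For sigmoid + cross-entropy, dL/dx_i = (p - y) * w_i (analytic gradient).
    grad = [(p - y) * wi for wi in W]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

On this toy model, `predict(fgsm(x, 1, 0.5))` yields a lower confidence for the true class than `predict(x)`, which is exactly the susceptibility to small targeted perturbations that FGSM testing is meant to expose; in a real setting the gradient would come from the framework's autodiff rather than a hand-derived formula.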
[This abstract has been rewritten with the help of AI based on the project's original abstract]
Keywords
AI ; Machine Learning ; 5G ; 6G ; Air Interface ; AI Testing ; Monte Carlo Dropout ; FGSM ; Explainability
