A master's thesis from Aalborg University


Federated Interference Management for Industrial 6G Subnetworks

Translated title

Federeret Interferensstyring for Industrielle 6G Subnetværk


Term

4th term

Publication year

2023


Pages

100

Abstract

6G "in‑X" subnetworks are very small, low‑power cells for short ranges, built to deliver extremely high data rates, very low delay (latency), and high reliability. In dense deployments, their signals can interfere with each other and limit performance. Recent work tackles this by letting cells learn to share radio resources using multi‑agent reinforcement learning—a trial‑and‑error method where several agents learn to make decisions over time. Existing approaches train either centrally or in a fully distributed way. Centralized training can learn faster by using data from all subnetworks, but it requires sharing measurements with a central server, raising privacy and security concerns. Distributed training keeps data local, but each agent sees only its own measurements, which can make learning unstable and slow to converge. To balance these trade‑offs, we propose a client‑server, horizontal federated reinforcement learning framework in which each subnetwork trains a local model and shares only model updates (weights) with a server that aggregates them—so knowledge is shared without exposing raw data. Simulations in an industrial setting using 3GPP propagation models indicate fast convergence, small performance gains, and robustness when the environment changes over time (non‑stationarity).
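The aggregation step described above, where each subnetwork trains locally and the server combines only the model weights, follows the federated averaging pattern. A minimal sketch of that loop is shown below; the function names, model shape, and simulated gradients are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def local_update(weights, gradient, lr=0.1):
    """One local training step on a subnetwork's private measurements (simulated here)."""
    return weights - lr * gradient

def fedavg(client_weights):
    """Server-side aggregation: element-wise average of the clients' model weights."""
    return np.mean(client_weights, axis=0)

# Three hypothetical subnetworks start from a shared global model of 4 weights.
global_w = np.zeros(4)
rng = np.random.default_rng(0)
for _ in range(5):  # communication rounds
    # Each subnetwork trains on its own data (gradients are simulated here),
    # then sends only its updated weights -- never raw measurements -- to the server.
    updates = [local_update(global_w, rng.normal(size=4)) for _ in range(3)]
    global_w = fedavg(updates)
```

In a reinforcement-learning setting the local step would be a policy or Q-network update rather than a plain gradient step, but the privacy property is the same: only weights cross the network, raw channel measurements stay on the subnetwork.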


[This abstract has been rewritten with the help of AI based on the project's original abstract]