Autonomous Navigation through Reinforcement Learning in a Non-simulated Environment
Author
Müller, Maximilian
Term
4th term
Publication year
2016
Submitted on
2016-11-01
Abstract
This project examines how reinforcement learning can be used for autonomous, camera-based navigation in real (non-simulated) environments, aiming for a general, platform-agnostic solution. It compares key methods, in particular Q-learning and NeuroEvolution of Augmenting Topologies (NEAT), through a review of related work and a practical prototype: a mobile robot that navigates autonomously in a physical setting. The work outlines how vision-derived state representations can drive decisions and discusses the exploration-exploitation trade-off, continuous state and action spaces, and parameter tuning. In the prototype, simulated annealing is used to identify effective learning parameters, and the importance of an exploration function for robust behavior is demonstrated. The study includes a non-simulated testing procedure and parameter tests; while detailed quantitative results are not provided in this excerpt, the prototype indicates that RL-based, vision-driven navigation can be achieved outside simulation.
[This summary has been generated with the help of AI directly from the project (PDF)]
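The abstract refers to Q-learning combined with an exploration function. Purely as an illustration of those terms, a minimal tabular sketch could look as follows; the state encoding, action set, parameter values, and helper names are assumptions made for this example and are not taken from the project.

```python
import random
from collections import defaultdict

# Minimal sketch of tabular Q-learning with an exploration bonus, assuming a
# discretised, vision-derived state and a small discrete action set.
# All names and parameter values below are illustrative assumptions.

ACTIONS = ["forward", "turn_left", "turn_right"]

ALPHA = 0.1   # learning rate (the project tunes such parameters via simulated annealing)
GAMMA = 0.9   # discount factor
KAPPA = 0.5   # weight of the exploration bonus

q_table = defaultdict(float)    # (state, action) -> estimated return
visit_count = defaultdict(int)  # (state, action) -> number of visits


def exploration_value(state, action):
    """Action value plus a bonus that decays with the visit count,
    encouraging the agent to try rarely taken actions in each state."""
    bonus = KAPPA / (1 + visit_count[(state, action)])
    return q_table[(state, action)] + bonus


def choose_action(state):
    """Greedy choice with respect to the exploration-augmented value."""
    return max(ACTIONS, key=lambda a: exploration_value(state, a))


def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    visit_count[(state, action)] += 1
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])


# Hypothetical control loop (observe/act are placeholders for the camera
# pipeline and motor interface and are not defined in this sketch):
# state = observe()
# action = choose_action(state)
# reward, next_state = act(action)
# update(state, action, reward, next_state)
```

The simulated-annealing step mentioned in the abstract would then search over settings such as ALPHA, GAMMA, and the exploration weight, occasionally accepting a worse candidate with a probability that decreases over time.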