Evaluating the performance of a neuroevolution algorithm against a reinforcement learning algorithm on a self-driving car
Author
Kovalsky, Kristián
Term
4. term
Publication year
2020
Abstract
Many machine learning problems are solved with gradient-based methods that adjust model parameters step by step, as in reinforcement learning (RL), which learns from trial and error. This project compares such gradient-based RL with a non-gradient approach, neuroevolution, which evolves neural networks using evolutionary search. The task is to train a car to drive itself around circuits that vary in complexity. Quantitative evaluation data show that neuroevolution can produce workable solutions very quickly compared with RL. However, when RL is trained for long enough, it learns models that eventually outperform those produced by neuroevolution. Further statistical analysis is needed to determine whether these performance differences are significant.
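To make the contrast concrete, the evolutionary search the abstract refers to can be sketched in a few lines: maintain a population of weight vectors ("genomes"), score each one, keep the best, and refill the population with mutated copies. The fitness function below is a toy stand-in (an assumption for illustration; the thesis scores laps on driving circuits instead), and all names such as `evolve` and `sigma` are hypothetical, not taken from the project's code.

```python
import random

def fitness(weights):
    # Toy stand-in for driving performance (assumption): reward weight
    # vectors close to a hidden target. The real task would evaluate a
    # neural-network controller on a circuit.
    target = [0.5, -0.2, 0.8]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve(pop_size=20, generations=50, sigma=0.1, seed=0):
    rng = random.Random(seed)
    # Random initial population of weight vectors ("genomes").
    population = [[rng.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top quarter (truncation selection).
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]
        # Refill with mutated copies of the surviving parents.
        population = parents + [
            [w + rng.gauss(0, sigma) for w in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(population, key=fitness)

best = evolve()
```

Because each generation only needs a fitness score, not a gradient, such a loop often finds a workable (if rough) policy quickly, which matches the abstract's observation that neuroevolution produces usable solutions faster than RL early in training.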
[This abstract was generated with the help of AI]