Deriving Subgoals Using Network Distillation
Term
4th term
Education
Publication year
2021
Submitted on
2021-06-09
Pages
12
Abstract
Sparsely rewarded environments can be challenging for deep reinforcement learning agents to understand and even harder to master. Hierarchical reinforcement learning offers promising ways of constructing subgoals that are more understandable to the agent. Because autonomous subgoal construction is slow, we propose a new method for finding and constructing subgoals, together with a more time-efficient comparison method for subgoal creation. We also propose a novel distributed training framework to increase the agent's throughput. The framework shows increased data gathering but decreased learning compared to a non-distributed counterpart.
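The abstract does not spell out how network distillation is used to find subgoals; the following is only a minimal sketch of the general random network distillation idea (a fixed random target network and a trained predictor, with prediction error as a novelty signal), under the assumption that high-novelty states serve as subgoal candidates. All names, sizes, and the subgoal-selection criterion here are illustrative assumptions, not taken from the thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(obs_dim: int, out_dim: int) -> nn.Module:
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

obs_dim, embed_dim = 8, 16
target = make_net(obs_dim, embed_dim)      # fixed random network, never trained
predictor = make_net(obs_dim, embed_dim)   # trained to imitate the target
for p in target.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def novelty(states: torch.Tensor) -> torch.Tensor:
    # Per-state prediction error; rarely visited states score higher.
    with torch.no_grad():
        target_out = target(states)
    return F.mse_loss(predictor(states), target_out, reduction="none").mean(dim=1)

def update(states: torch.Tensor) -> None:
    # Train the predictor on visited states, shrinking their novelty over time.
    loss = F.mse_loss(predictor(states), target(states).detach())
    opt.zero_grad()
    loss.backward()
    opt.step()

# Hypothetical usage: states whose novelty stays high could be flagged as
# subgoal candidates for a hierarchical agent.
batch = torch.randn(32, obs_dim)
update(batch)
print(novelty(batch)[:5])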