Deep Reinforcement Learning for Robotic Grasping from Octrees: Learning Manipulation from Compact 3D Observations
Student thesis: Master's thesis and HD graduation project

- Andrej Orsula
4th semester, Robotics (cand.polyt.), Master's programme
This work investigates the applicability of deep reinforcement learning to vision-based robotic grasping of diverse objects from compact octree observations. A novel simulation environment with photorealistic rendering and domain randomisation is created and employed to train agents using model-free off-policy actor-critic algorithms. Inside this environment, the agent learns an end-to-end policy that directly maps 3D visual observations to continuous actions. A feature extractor in the form of a 3D convolutional neural network is trained alongside the actor-critic networks to extract abstract features from a set of stacked octrees. As a result, a policy trained with octree observations is able to achieve successful grasps in novel scenes with previously unseen objects, material textures and random camera poses. Experimental evaluation indicates that 3D data representations provide advantages over the traditionally used 2D RGB and 2.5D RGB-D image observations. Furthermore, sim-to-real transfer is successfully applied in order to evaluate an agent trained inside the simulation on a real robot without any retraining.
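The abstract does not spell out the network architecture, so the following is only a rough sketch of the described pipeline: a 3D convolutional feature extractor over stacked volumetric observations feeding a continuous-action actor head. It assumes a dense voxel grid in place of a true octree (a real implementation would typically use sparse octree convolutions, e.g. in the style of O-CNN), and the channel counts, feature dimension and 7-dimensional action are hypothetical placeholders.

```python
import torch
import torch.nn as nn


class VoxelFeatureExtractor(nn.Module):
    """3D CNN mapping stacked volumetric observations to a flat feature
    vector. A dense stand-in for the octree-based extractor; all sizes
    below are illustrative assumptions, not the thesis architecture."""

    def __init__(self, in_channels: int = 4, stack_size: int = 2,
                 feature_dim: int = 256):
        super().__init__()
        # Stacked observations are concatenated along the channel axis.
        channels = in_channels * stack_size
        self.conv = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 128, 1, 1, 1)
        )
        self.fc = nn.Linear(128, feature_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, stack_size * in_channels, depth, height, width)
        h = self.conv(x).flatten(start_dim=1)
        return torch.relu(self.fc(h))


class Actor(nn.Module):
    """Maps extracted features to continuous actions in [-1, 1]."""

    def __init__(self, feature_dim: int = 256, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)


if __name__ == "__main__":
    extractor = VoxelFeatureExtractor()
    actor = Actor()
    obs = torch.randn(1, 8, 32, 32, 32)  # one stacked volumetric observation
    action = actor(extractor(obs))
    print(action.shape)  # torch.Size([1, 7])
```

In an off-policy actor-critic setup such as the one described, a critic head of the same shape would consume the shared features together with the action, and gradients from both heads would update the extractor end-to-end.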
Language | English |
---|---|
Publication date | 3 Jun 2021 |
Number of pages | 69 |
Images

Examples of grasps achieved by sim-to-real transfer of a policy trained inside simulation.