A master's thesis from Aalborg University


Leveraging Human Intuition for Learning Optimal Grasps for Objects: Human-Robot Collaboration

Translated title

Leveraging Human Intuition for Learning Optimal Grasps for Objects

Term

4th semester

Publication year

2023

Pages

47

Abstract


Collaborative robots can reduce physical strain in workshops and boost productivity, but improvements are still needed before more workers will want to use them. In simple pick-and-place tasks, robots are less effective than humans at choosing the right grasp for different objects, something people often do intuitively. This thesis presents a method that leverages a human worker's intuition to teach the robot which grasp to use for different objects. A Convolutional Neural Network (CNN), a vision-based deep learning model, was trained to classify six grasp types with an accuracy of 87.22%. Objects are recognized using ORB features, a technique for detecting distinctive image patterns, with an accuracy of 56.61%. The method was tested with 9 participants using a UR3 robot. Participants rated their comfort and trust in the system at 5.89 on a 7-point Likert scale, indicating interest in collaborating with a robot in this way.
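The object-recognition step described above relies on ORB features, which are binary keypoint descriptors compared by Hamming distance. The following is a minimal illustrative sketch of that matching idea, not the thesis implementation: the function names (`hamming`, `match_object`), the toy 8-bit descriptors, and the distance threshold are all hypothetical (real ORB descriptors are 256 bits and are typically matched with OpenCV's brute-force Hamming matcher).

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_object(query, references, max_dist=3):
    """Return the label whose reference descriptors best match the query.

    For each query descriptor, find the nearest reference descriptor of
    each candidate object; count a match when that distance is below
    max_dist, and return the label with the most matches.
    """
    best_label, best_score = "unknown", 0
    for label, descs in references.items():
        score = sum(
            1 for q in query
            if min(hamming(q, d) for d in descs) < max_dist
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy 8-bit "descriptors" for two hypothetical workshop objects.
refs = {
    "wrench": [0b10110010, 0b01101100],
    "bolt":   [0b00001111, 0b11110000],
}
print(match_object([0b10110011, 0b01101000], refs))  # prints "wrench"
```

A nearest-neighbor vote like this is why descriptor-based recognition degrades on textureless or similar-looking objects, which is consistent with the moderate 56.61% recognition accuracy reported above.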

[This summary has been rewritten with the help of AI based on the project's original abstract]