A master's thesis from Aalborg University


Integration and development of a collaborative robot assisting in assembly of dummy phones

Author

Term

4th semester

Education

Publication year

2023

Submitted on

Abstract

This thesis investigates how a previously simulated, language-driven robot system can be deployed on a physical collaborative robot to assist with the assembly of dummy phones. The core question is how a robotic picking system can be integrated onto a physical robot and used to support assembly tasks. Building on a Named Entity Recognition approach that extracts task and object information from full-sentence instructions, the system combines visual perception, using background subtraction, blob cropping, and feature extraction with MobileNetV2 (trained first for classification and then via triplet learning), with a grounding database for handling known objects and learning new ones. A Franka robot and an Intel RealSense camera provide the physical platform, while color detection is used to assess assembly progress and suggest the next component so that the system can fetch relevant parts. The prototype demonstrates the ability to pick up items on instruction and to support assembly by identifying the next part from the state of the process. Detailed performance results are not provided in the excerpt, but test cases are outlined for task extraction, NER object extraction, error recovery, movement and grasping, and assembly progress recognition.
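The perception step described above, background subtraction followed by blob cropping, can be illustrated with a minimal sketch. The thesis excerpt does not give implementation details, so the function name, parameters, and the NumPy-only flood-fill approach below are assumptions for illustration; a real system would more likely use a vision library such as OpenCV:

```python
import numpy as np
from collections import deque

def crop_blobs(background, frame, thresh=40, min_area=25):
    """Background subtraction followed by blob cropping (illustrative sketch).

    background, frame: HxW uint8 grayscale arrays of the same shape.
    Returns a list of (crop, bbox) pairs, one per connected foreground
    region whose area is at least min_area pixels.
    """
    # Foreground mask: pixels that differ strongly from the background.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > thresh

    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # BFS flood fill over 4-connected foreground pixels.
                queue = deque([(sy, sx)])
                seen[sy, sx] = True
                pixels = []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    y0, y1 = min(ys), max(ys) + 1
                    x0, x1 = min(xs), max(xs) + 1
                    # Crop the bounding box from the original frame; each crop
                    # would then be passed to the feature extractor.
                    blobs.append((frame[y0:y1, x0:x1], (y0, x0, y1, x1)))
    return blobs
```

In the pipeline the abstract describes, each crop would then be fed to MobileNetV2 to produce an embedding that is matched against the grounding database.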


[This abstract has been generated with the help of AI directly from the project's full text]