Advanced Framework for Programming and Controlling Multi-Robot Systems in Self-Driving Labs: Integrating Behavior Trees and Reinforcement Learning for Automated Lab Procedures
Authors
Jad Masri; Ibrahim Khedr; Adshya Vasudavan Iyer
Term
4th semester
Education
Publication year
2023
Submitted on
2023-06-02
Pages
91
Abstract
Robots are increasingly used in factories and automated lab settings, creating a need for affordable programming methods that do not require expert skills. This project develops an easy‑to‑use system for defining and running robot tasks through intuitive, visual programming. The system combines Behavior Trees (a visual, tree‑like diagram that organizes actions and decisions into clear steps) and Reinforcement Learning (a trial‑and‑error method that improves control based on feedback). These are organized as a Skill‑Based System, where reusable skills are composed into tasks. The focus is on Material Acceleration Platforms (MAPs), especially Self‑Driving Labs (SDLs), which aim to operate laboratory processes autonomously. As a use case, the system was validated on a Matrix Production System (MPS) with shuttles and manipulators. The research identified barriers to lab automation, including task transfer between setups and constraints imposed by the lab layout. The prototype could create and execute Behavior Trees on the MPS. The Reinforcement Learning agent navigated correctly in obstacle‑free environments but struggled with multiple obstacles due to control and behavior tendencies. The system is a proof of concept and needs further improvements before practical deployment. Overall, the new combination of Behavior Trees and Reinforcement Learning for multi‑robot systems in MAPs shows promise for advancing lab automation and accelerating materials discovery.
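To make the skill-based idea concrete, the sketch below shows how reusable skills can be composed into a task with a Behavior Tree sequence node. This is a minimal illustration only, not the project's implementation: all class names (`Skill`, `Sequence`, `Status`) and skill names (`move_shuttle`, `pick_sample`, `place_sample`) are hypothetical.

```python
# Minimal Behavior Tree sketch (illustrative; names are hypothetical,
# not taken from the project's codebase).
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2


class Skill:
    """A reusable leaf action, e.g. moving a shuttle or picking a sample."""

    def __init__(self, name, action):
        self.name = name
        self.action = action  # callable returning True on success

    def tick(self):
        return Status.SUCCESS if self.action() else Status.FAILURE


class Sequence:
    """Composite node: runs children in order, fails on the first failure."""

    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS


# Compose reusable skills into a lab task, as a skill-based system would.
task = Sequence([
    Skill("move_shuttle", lambda: True),
    Skill("pick_sample", lambda: True),
    Skill("place_sample", lambda: True),
])
print(task.tick())  # prints Status.SUCCESS when every skill succeeds
```

In the architecture described above, a Reinforcement Learning policy could sit behind an individual skill (for example, the navigation action), while the tree keeps the overall task structure visual and editable.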
[This summary has been rewritten with the help of AI based on the project's original abstract]
