Robot Learning from Demonstration (LfD) of complex tasks is usually performed in the context of Hierarchical Task Networks (HTNs) or Markov Decision Processes (MDPs). While these approaches can be successful, they only allow the robot to reason in terms of sequential planning. The selected candidate will investigate a new method for LfD problems, which models human demonstrations using dynamic behavior trees. This new paradigm would allow a robot to reason not only in terms of task accomplishment, but also in terms of sensorimotor couplings that implement behaviors at runtime. The resulting reactivity would increase the robustness and liveliness of the robot.
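To give a flavor of the paradigm, the sketch below shows a minimal behavior tree in plain Python. It is purely illustrative and does not use the Playful API; the node names and the grasping scenario are hypothetical. The key point is that the tree is ticked repeatedly, so conditions are re-evaluated at runtime, which is the source of the reactivity mentioned above.

```python
# Minimal behavior-tree sketch (illustrative only; NOT the Playful API).
# Ticking the tree repeatedly re-evaluates conditions each cycle, which is
# what makes behavior trees reactive compared to a fixed sequential plan.

SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Action:
    """Leaf node wrapping a callable that returns a status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Succeeds only if all children succeed, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Tries children in order until one does not fail."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

# Hypothetical scenario: try to grasp an object; fall back to asking
# for help if the grasp sub-tree fails.
object_visible = Action(lambda: SUCCESS)   # stand-in perception check
grasp = Action(lambda: FAILURE)            # stand-in motor behavior
ask_for_help = Action(lambda: SUCCESS)     # stand-in recovery behavior

tree = Fallback(Sequence(object_visible, grasp), ask_for_help)
print(tree.tick())  # the fallback recovers from the failed grasp
```

In an LfD setting, the thesis would aim to learn the structure and the leaf behaviors of such a tree from human demonstrations rather than hand-coding them as above.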
The student will develop a framework extending the Playful scripting language for learning behavior trees. After an initial phase using synthetic demonstrations, the student will develop an approach for learning from human demonstrations recorded with the OptiTrack motion capture system and the Motive body tracker available in our lab in Stuttgart. The learned behavior will be tested on the Baxter robot, which already integrates the Playful framework. The selected candidate must have excellent working knowledge of machine learning and Python, as well as the motivation to test their work on a real working robot. A good knowledge of ROS is a plus.
This master thesis will be based at both the Machine Learning and Robotics Lab of the University of Stuttgart and the Max Planck Institute for Intelligent Systems in Tuebingen, and will be jointly supervised by Dr. Jim Mainprice and Dr. Vincent Berenz.
Machine Learning and Robotics Laboratory (University of Stuttgart)
Humans to Robots Motion Research Group (University of Stuttgart)
Max Planck Institute for Intelligent Systems
Dr. Jim Mainprice (Humans to Robots Motion Research Group)
Dr. Vincent Berenz (Max Planck Institute for Intelligent Systems)