October 11, 2019, by Hazel Sayers
I-CUBE is developing new methods to enable collaborative robots (co-bots) to learn in a more naturalistic manner, using sensors to interpret the actions, language and expressions of their human collaborators. Advanced algorithms for decision-making, combined with reinforcement learning techniques, will enable more effective, productivity-enhancing human-robot cooperation on shared tasks.
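To make the reinforcement learning idea concrete, here is a minimal sketch of tabular Q-learning applied to a toy sorting task. The states, actions, rewards and learning parameters below are invented for illustration and are not part of the I-CUBE system; a real co-bot would learn from far richer sensor input.

```python
import random

# Toy example only: tabular Q-learning for a laundry-sorting bandit.
# Each "episode" is one item: observe its colour (state), pick a basket
# (action), and receive a reward depending on the human's preference.

COLOURS = ["white", "dark", "red"]          # observed item colour (state)
BASKETS = ["whites", "darks", "colours"]    # basket to place it in (action)
CORRECT = {"white": "whites", "dark": "darks", "red": "colours"}  # assumed preference

ALPHA, EPSILON, EPISODES = 0.5, 0.1, 2000
q = {(s, a): 0.0 for s in COLOURS for a in BASKETS}

random.seed(0)
for _ in range(EPISODES):
    state = random.choice(COLOURS)
    # epsilon-greedy action selection: mostly exploit, occasionally explore
    if random.random() < EPSILON:
        action = random.choice(BASKETS)
    else:
        action = max(BASKETS, key=lambda a: q[(state, a)])
    reward = 1.0 if CORRECT[state] == action else -1.0
    # one-step Q update; each sort is treated as an independent episode
    q[(state, action)] += ALPHA * (reward - q[(state, action)])

# the greedy policy after learning
policy = {s: max(BASKETS, key=lambda a: q[(s, a)]) for s in COLOURS}
print(policy)
```

After enough episodes the greedy policy matches the human's preferred sorting, which is the essence of learning a collaborator's preferences from reward feedback.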
Our first demonstrator project will show how a small industrial co-bot (a Universal Robots UR5) can be directed to learn how to sort laundry in preparation for washing, according to the human collaborator’s preferences as expressed through natural language and gesture. Computer vision and machine learning techniques will be integrated within the demonstrator for gesture recognition, as well as for recognising the colour of the clothes and of the baskets in which the items of clothing are to be placed.
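One simple way to recognise a cloth's colour, sketched below, is to average the pixels in a detected cloth region and pick the nearest reference colour. The palette and the nearest-neighbour approach are assumptions for illustration, not the demonstrator's actual vision pipeline.

```python
# Illustrative sketch only: classify a cloth region by the nearest
# reference colour in RGB space. The reference palette is invented.

REFERENCE = {
    "white": (255, 255, 255),
    "dark":  (20, 20, 20),
    "red":   (200, 30, 30),
    "blue":  (30, 30, 200),
}

def mean_rgb(pixels):
    """Average an iterable of (r, g, b) pixels, e.g. a segmented cloth region."""
    pixels = list(pixels)
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def classify_colour(pixels):
    """Return the reference colour name closest to the region's mean RGB."""
    r, g, b = mean_rgb(pixels)

    def dist(name):
        rr, gg, bb = REFERENCE[name]
        return (r - rr) ** 2 + (g - gg) ** 2 + (b - bb) ** 2

    return min(REFERENCE, key=dist)

print(classify_colour([(210, 25, 20), (190, 40, 35)]))
```

A production system would more likely work in a perceptually uniform colour space and use a learned classifier, but the nearest-reference idea is the same.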
We are currently preparing for our first study, which aims to capture the language and gestures that humans use whilst directing a co-bot to sort laundry. To do this we will use a Wizard of Oz method, in which a human fulfils the role of the co-bot ‘brain’ whilst remaining hidden from the participant. This allows participants to express themselves naturally while the co-bot enacts their instructions, correctly or not. Errors in the co-bot’s responses are expected to elicit natural corrective reactions from the human. This natural language and these gestures will provide a corpus for the co-bot to use in its learning, as well as help to improve the co-bot’s sense of its environment, the objects in it, and their relevance to its goals.
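A corpus of this kind needs a consistent record per trial. The sketch below shows one plausible structure for a Wizard-of-Oz trial record; the field names and gesture codes are assumptions for illustration, not the study's actual annotation scheme.

```python
# Illustrative sketch only: a structured record for one Wizard-of-Oz trial,
# pairing what the participant said and gestured with what the co-bot did.

from dataclasses import dataclass, asdict

@dataclass
class TrialRecord:
    trial_id: int
    utterance: str        # transcribed natural-language instruction
    gesture: str          # coded gesture label, e.g. "point_left" (invented code)
    robot_action: str     # the action the hidden wizard had the co-bot perform
    correct: bool         # whether the action matched the participant's intent
    correction: str = ""  # any corrective utterance that followed an error

entry = TrialRecord(
    trial_id=1,
    utterance="put the red shirt in that basket",
    gesture="point_left",
    robot_action="place_in_right_basket",
    correct=False,
    correction="no, the other one",
)
print(asdict(entry)["correction"])
```

Keeping erroneous trials and their corrections in the same record is what lets the later learning stage exploit the natural repair behaviour the study is designed to elicit.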