Abstract
Human learning strongly depends on the ability to structure the actions of teachers in order to identify relevant parts. We propose that this is also true for learning in robots. Therefore, we apply a method for multimodal action segmentation called Acoustic Packaging to a corpus of pairs of users teaching object names to a robot. Going beyond previous use cases, we analyze how the structure of human actions changes depending on whether the robot is learning quickly or slowly. Our results reveal differences in action structuring between the conditions, such as longer utterances and more motion when the robot learns slowly. We also evaluate how the partners in each pair influence each other's action structuring. The results show a strong correlation between the participants in the pairs, even more so in the trials where the robot is learning slowly. We conclude that action structuring based on Acoustic Packaging allows robots to differentiate how well the interaction with multiple users is going and is, thus, a vehicle for feedback generation.
Original language | Undefined |
---|---|
Title of host publication | Proceedings of the 22nd IEEE International Symposium on Robot and Human Interactive Communication, IEEE RO-MAN 2013 |
Place of Publication | USA |
Publisher | IEEE Robotics and Automation Society |
Pages | 490-495 |
Number of pages | 6 |
ISSN (Print) | 1944-9445 |
DOIs | |
Publication status | Published - 26 Aug 2013 |
Event | 2013 22nd IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2013 - Gyeongju Hilton Hotel, Gyeongju, Republic of Korea |
Duration | 26 Aug 2013 → 29 Aug 2013 |
Conference number | 22 |
Publication series
Name | |
---|---|
Publisher | IEEE Robotics and Automation Society |
ISSN (Print) | 1944-9445 |
Conference
Conference | 2013 22nd IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2013 |
---|---|
Abbreviated title | RO-MAN |
Country/Territory | Korea, Republic of |
City | Gyeongju |
Period | 26/08/13 → 29/08/13 |
Keywords
- HMI-IA: Intelligent Agents
- HMI-MI: Multimodal Interactions