Abstract
The performance of human-robot collaboration tasks can be improved by incorporating predictions of the human collaborator's movement intentions. These predictions allow a collaborative robot to both provide appropriate assistance and plan its own motion so it does not interfere with the human. In the specific case of human reach intent prediction, prior work has divided the task into two pieces: recognition of human activities and prediction of reach intent. In this work, we propose a joint model for simultaneous recognition of human activities and prediction of reach intent based on skeletal pose. Since future reach intent is tightly linked to the action a person is performing at present, we hypothesize that this joint model will produce better performance on the recognition and prediction tasks than past approaches. In addition, our approach incorporates a simple human kinematic model which allows us to generate features that compactly capture the reachability of objects in the environment and the motion cost to reach those objects, which we anticipate will improve performance. Experiments using the CAD-120 benchmark dataset show that both the joint modeling approach and the human kinematic features give improved F1 scores versus the previous state of the art.
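The abstract does not spell out the exact form of the kinematic features, so the following is an illustrative sketch only: it assumes a simple arm model (reach radius estimated from shoulder, elbow, and wrist joint positions) and computes, for each object, a binary reachability flag and a distance-based motion cost from the current hand position. All names here (`kinematic_features`, `arm_length`, etc.) are hypothetical and are not taken from the paper.

```python
import numpy as np

def kinematic_features(shoulder, elbow, wrist, objects):
    """Hypothetical reach features from a skeletal pose.

    shoulder, elbow, wrist : (3,) arrays with joint positions of one arm
    objects                : (N, 3) array of object positions

    Returns an (N, 2) array of [reachable_flag, motion_cost] per object.
    This is a simplified stand-in for the paper's kinematic model, not
    the authors' actual feature definition.
    """
    objects = np.asarray(objects, dtype=float)

    # Approximate maximum reach as upper-arm length plus forearm length.
    arm_length = np.linalg.norm(elbow - shoulder) + np.linalg.norm(wrist - elbow)

    # An object is (roughly) reachable if it lies within the arm's reach
    # radius around the shoulder.
    dist_from_shoulder = np.linalg.norm(objects - shoulder, axis=1)
    reachable = (dist_from_shoulder <= arm_length).astype(float)

    # Use the distance the hand would have to travel as a simple motion
    # cost, normalized by arm length so the feature is scale-invariant.
    motion_cost = np.linalg.norm(objects - wrist, axis=1) / arm_length

    return np.stack([reachable, motion_cost], axis=1)


if __name__ == "__main__":
    shoulder = np.array([0.0, 0.0, 1.4])
    elbow = np.array([0.0, 0.3, 1.2])
    wrist = np.array([0.0, 0.5, 1.0])
    objects = np.array([[0.1, 0.6, 1.0],    # nearby object on a table
                        [1.5, 1.5, 0.9]])   # object well out of reach
    print(kinematic_features(shoulder, elbow, wrist, objects))
```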
| Original language | English |
| --- | --- |
| Number of pages | 7 |
| DOIs | |
| Publication status | Published - 1 Dec 2016 |
| Externally published | Yes |
| Event | 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016, Daejeon Convention Center, Daejeon, Korea, Republic of. Duration: 9 Oct 2016 → 14 Oct 2016. http://www.iros2016.org/ |
Conference
| Conference | 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016 |
| --- | --- |
| Abbreviated title | IROS |
| Country/Territory | Korea, Republic of |
| City | Daejeon |
| Period | 9/10/16 → 14/10/16 |
| Internet address | http://www.iros2016.org/ |
Keywords
- n/a