Abstract
Accurate interpretation of the environment is essential for autonomous robots and highly beneficial for teleoperated ones. Going beyond obstacle recognition, interpreting the semantics of the environment and the actions it affords enables robots to interact with environments made for humans in a human-like manner. This paper describes the collection of object-level affordance labels for multiple indoor datasets, to train computer vision algorithms for detecting object affordances in indoor spaces. It is a first step towards determining high-level “semantic” affordances, which allow reasoning about what to do with objects, rather than “functional” affordances, which allow reasoning about how to use them. A baseline model is provided, highlighting the value of the affordance labels in a variety of robotics applications.
| Original language | English |
|---|---|
| Number of pages | 6 |
| Publication status | Published - 25 Aug 2025 |
| Event | International Conference on Robot and Human Interactive Communication, RO-MAN 2025 - Eindhoven, Netherlands |
| Duration | 25 Aug 2025 → 29 Aug 2025 |
Conference
| Conference | International Conference on Robot and Human Interactive Communication, RO-MAN 2025 |
|---|---|
| Abbreviated title | RO-MAN 2025 |
| Country/Territory | Netherlands |
| City | Eindhoven |
| Period | 25/08/25 → 29/08/25 |