Collecting Object-level Affordance for RGBD Datasets

Research output: Contribution to conference › Paper › peer-review

Abstract

Accurate interpretation of the environment is essential for autonomous robots and highly beneficial for teleoperated ones. Going beyond obstacle recognition, interpreting the semantics of the environment and the actions it affords enables robots to interact with environments made for humans in a human-like manner. This paper describes the collection of affordance labels at the object level for multiple indoor datasets, to train computer vision algorithms for detecting object affordances in indoor spaces. It is a first step towards determining high-level “semantic” affordances, which allow reasoning about what to do with objects, rather than “functional” affordances, which allow reasoning about how to use them. A baseline model is provided, which highlights the value of the affordance labels in a variety of robotics applications.
Original language: English
Number of pages: 6
Publication status: Published - 25 Aug 2025
Event: International Conference on Robot and Human Interactive Communication, RO-MAN 2025 - Eindhoven, Netherlands
Duration: 25 Aug 2025 – 29 Aug 2025

Conference

Conference: International Conference on Robot and Human Interactive Communication, RO-MAN 2025
Abbreviated title: RO-MAN 2025
Country/Territory: Netherlands
City: Eindhoven
Period: 25/08/25 – 29/08/25
