Visual object detection for mobile road sign inventory

Lucas Paletta, Andreas Jeitler*, Evelyn Hödl, Jean-P. Andreu, Patrick Luley, Alexander Almer

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

10 Citations (Scopus)


For road sign inventory and maintenance, we propose a mobile system based on a handheld device, a GPS sensor, a camera, and standard mobile GIS software. Camera images are analysed by object recognition algorithms, which result in automated detection, i.e., localisation and classification of the signs. We present the localisation of points and regions of interest, the fitting of geometrical constraints to the extracted set of interest points, and the matching of content information from the visual information within the sign plate. From the preliminary operational state of the vision-based road sign detection system, we conclude that the selected methodology is efficient enough to achieve the requested high quality in object detection and classification.
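The abstract outlines a two-stage pipeline: localising regions of interest in the camera image, then fitting a geometric constraint to the candidate region before classification. A minimal sketch of that idea, assuming a colour-threshold localisation step and a circularity (fill-ratio) constraint check, is shown below; the thresholds, the `detect_sign` helper, and the π/4 fill-ratio test are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def detect_sign(image):
    """Locate a red-dominant region of interest and test a circular
    shape constraint. image: HxWx3 uint8 RGB array.
    Returns ((x0, y0, x1, y1), is_circular) or None if nothing found."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # Region-of-interest localisation: red-dominant pixels (assumed thresholds)
    mask = (r > 120) & (r > g + 40) & (r > b + 40)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    # Geometric constraint: a filled disc covers ~pi/4 of its bounding box
    fill = mask[y0:y1 + 1, x0:x1 + 1].mean()
    is_circular = bool(abs(fill - np.pi / 4) < 0.1)
    return (int(x0), int(y0), int(x1), int(y1)), is_circular

# Synthetic test frame: red disc (a circular sign plate) on a grey background
img = np.full((100, 100, 3), 128, dtype=np.uint8)
yy, xx = np.mgrid[0:100, 0:100]
disc = (yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2
img[disc] = (200, 30, 30)
bbox, circular = detect_sign(img)
```

In a full system the constraint fitting would use the extracted interest points (e.g. an ellipse or polygon fit) rather than a fill ratio, and the content inside the accepted region would then be matched against sign templates for classification.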

Original language: English
Title of host publication: Mobile Human-Computer Interaction
Subtitle of host publication: Mobile HCI 2004
Editors: Stephen Brewster, Mark Dunlop
Number of pages: 5
Publication status: Published - 1 Dec 2004
Externally published: Yes
Event: 6th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI 2004 - University of Strathclyde, Glasgow, United Kingdom
Duration: 13 Sept 2004 - 16 Sept 2004
Conference number: 6

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
Name: Lecture Notes in Artificial Intelligence
Name: Lecture Notes in Bioinformatics


Conference: 6th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI 2004
Abbreviated title: MobileHCI 2004
Country/Territory: United Kingdom