Abstract
Recent efforts to define ambient intelligence applications around user-centric concepts, advances in diverse sensing modalities, and growing interest in multimodal information fusion and in situation-aware, dynamic vision processing algorithms have created a shared motivation across research disciplines to use context as a key enabler in the design of application-oriented vision systems. Improved robustness, efficient use of sensing and computing resources, dynamic task assignment to operating modules, and adaptation to event and user-behavior models are among the benefits a vision processing system can gain from contextual information. The Workshop on Use of Context in Vision Processing (UCVP) addresses the opportunities of incorporating contextual information into algorithm design for single- and multi-camera vision systems, as well as for systems in which vision is complemented by other sensing modalities, such as audio, motion, proximity, and occupancy.
Original language | English |
---|---|
Title of host publication | Workshop on Use of Context in Vision Processing (UCVP 2009) |
Subtitle of host publication | in conjunction with Eleventh International Conference on Multimodal Interfaces and Workshop on Machine Learning for Multi-modal Interaction (ICMI-MLMI 2009) |
Place of Publication | New York, NY |
Publisher | Association for Computing Machinery |
Pages | 1-3 |
Number of pages | 3 |
ISBN (Print) | 978-1-60558-691-5 |
DOIs | |
Publication status | Published - 5 Nov 2009 |
Event | Workshop on Use of Context in Vision Processing, UCVP 2009 - Cambridge, United States. Duration: 5 Nov 2009 → 5 Nov 2009 |
Conference
Conference | Workshop on Use of Context in Vision Processing, UCVP 2009 |
---|---|
Abbreviated title | UCVP |
Country/Territory | United States |
City | Cambridge |
Period | 5/11/09 → 5/11/09 |
Keywords
- image/video content analysis
- driving context
- Machine Learning
- Contextual information
- visual gesture recognition
- statistical relational models
- smart homes
- human-human interaction
- context-driven event interpretation
- intelligent headlight control
- camera sensors
- Object recognition