Abstract
A fundamental goal of computer vision is to discover the semantic information within a given scene, commonly referred to as scene understanding. The aim is to find a mapping that derives semantic information from sensor data, which is an extremely challenging task, partly due to ambiguities in the appearance of the data. However, most scene understanding work to date has involved visual modalities only. In this book, we provide an overview of recent advances in algorithms and applications that draw on multiple sources of information for scene understanding. In this context, deep learning models are particularly well suited to combining multiple modalities, and indeed many contributions employ such architectures to exploit all available data streams and achieve optimal performance. We conclude this introduction with a concise description of the remaining chapters, which aim to convey the state of the art, open problems, and future directions of multimodal scene understanding as a scientific discipline.
Original language | English |
---|---|
Title of host publication | Multimodal Scene Understanding |
Subtitle of host publication | Algorithms, Applications and Deep Learning |
Editors | Michael Ying Yang, Bodo Rosenhahn, Vittorio Murino |
Publisher | Elsevier |
Chapter | 1 |
Pages | 1-7 |
Number of pages | 7 |
ISBN (Print) | 978-0-12-817358-9 |
Publication status | Published - 2 Aug 2019 |
Keywords
- Deep learning
- Multimodality
- Scene understanding
- Computer vision