Abstract
Recent advances in lifelogging, driven largely by the rapid development of wearable cameras, have made it possible to continuously capture moments of our lives from a first-person point of view. Extracting and re-experiencing moments illustrated by autobiographical images is of special interest for stimulating the episodic memory of patients with neurodegenerative diseases (Alzheimer's disease, mild cognitive impairment, etc.). A wearable camera can generate a huge number of images on a daily basis (around 2000 images per day in a 30 s time-lapse mode). Since not all captured images are valuable and semantically rich, efficient and scalable techniques are needed to separate the wheat from the chaff, that is, to extract egocentric images that are semantically rich and non-redundant, so that they can be used for memory stimulation. Using state-of-the-art retrieval systems based on convolutional neural network (CNN) features extracted from these rich, filtered egocentric images, we show how to meet these requirements and apply the filtered images within a memory stimulation program specially developed to improve the memory of patients with mild cognitive impairment.
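The redundancy filtering described in the abstract could, under one common formulation, be sketched as greedy deduplication over CNN feature vectors using cosine similarity. This is a minimal illustrative sketch, not the chapter's actual pipeline: the feature vectors are assumed to be precomputed (e.g., by any pretrained CNN), and the function name `filter_redundant` and the similarity threshold are hypothetical choices.

```python
import numpy as np

def filter_redundant(features, threshold=0.9):
    """Greedily keep images whose feature vectors have cosine similarity
    below `threshold` with every already-kept image.

    features: (n_images, dim) array of CNN descriptors (assumed given).
    Returns indices of the retained, non-redundant images.
    """
    # L2-normalize rows so the dot product equals cosine similarity.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    kept = []
    for i, f in enumerate(normed):
        if all(float(f @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Toy example: images 1 and 3 are near-duplicates of image 0.
feats = np.array([[1.0, 0.0],
                  [0.99, 0.01],
                  [0.0, 1.0],
                  [1.0, 0.02]])
print(filter_redundant(feats))  # → [0, 2]
```

A greedy pass like this is linear in the number of kept images per query, which matters at lifelogging scale (~2000 images/day); a production system might instead cluster temporally adjacent frames first.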
Original language | English |
---|---|
Title of host publication | Multimodal Behavior Analysis in the Wild |
Subtitle of host publication | Advances and Challenges |
Editors | Xavier Alameda-Pineda, Elisa Ricci, Nicu Sebe |
Publisher | Elsevier |
Chapter | 7 |
Pages | 135-158 |
Number of pages | 24 |
ISBN (Print) | 978-0-12-814601-9 |
DOIs | |
Publication status | Published - 2019 |
Externally published | Yes |
Keywords
- Lifelogging
- Egocentric vision
- Content-based image retrieval
- CNN
- Mild cognitive impairment