Visual summary of egocentric photostreams by representative keyframes

M. Bolanos, Ricard Mestre, E. Talavera, Xavier Giró-i-Nieto, Petia Radeva

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-reviewed

29 Citations (Scopus)

Abstract

Building a visual summary from an egocentric photostream captured by a lifelogging wearable camera is of high interest for different applications (e.g. memory reinforcement). In this paper, we propose a new summarization method based on keyframe selection that uses visual features extracted by a convolutional neural network. Our method applies unsupervised clustering to divide the photostream into events, then extracts the most relevant keyframe for each event. We assessed the results through a blind taste test in which 20 people rated the quality of the summaries.
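The pipeline described in the abstract (cluster CNN frame features into events, then pick one representative keyframe per event) can be sketched as follows. This is a minimal illustration, not the paper's actual method: plain k-means stands in for the unsupervised clustering, random vectors stand in for CNN features, and "most relevant keyframe" is approximated as the frame closest to its event centroid.

```python
import numpy as np

def select_keyframes(features, n_events, n_iters=50, seed=0):
    """Sketch: cluster frame features into events with k-means
    (a stand-in for the paper's unsupervised clustering) and pick,
    per event, the frame nearest the event centroid as keyframe."""
    rng = np.random.default_rng(seed)
    # initialise centroids from randomly chosen frames
    centroids = features[rng.choice(len(features), n_events, replace=False)]
    for _ in range(n_iters):
        # distance of every frame to every centroid
        d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)          # assign frames to events
        for k in range(n_events):          # recompute event centroids
            if (labels == k).any():
                centroids[k] = features[labels == k].mean(axis=0)
    # keyframe = frame nearest to its own event centroid
    keyframes = []
    for k in range(n_events):
        idx = np.where(labels == k)[0]
        if idx.size:
            keyframes.append(int(idx[d[idx, k].argmin()]))
    return sorted(keyframes)

# toy photostream: 100 frames with 64-d feature vectors
feats = np.random.default_rng(1).normal(size=(100, 64))
print(select_keyframes(feats, n_events=5))
```

In practice the features would come from a pretrained CNN, and temporal contiguity of events would be enforced by the segmentation step rather than by plain k-means.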
Original language: English
Title of host publication: 2015 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2015
Subtitle of host publication: Turin, Italy, June 29-July 3, 2015
Publisher: IEEE
Number of pages: 6
ISBN (Print): 978-1-4799-7079-7
DOIs
Publication status: Published - 2015
Externally published: Yes
