Learning skeleton representations for human action recognition

Alessia Saggese*, Nicola Strisciuglio, Mario Vento, Nicolai Petkov

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

15 Citations (Scopus)
1 Download (Pure)

Abstract

Automatic interpretation of human actions has gained strong interest among researchers in pattern recognition and computer vision because of its wide range of applications, such as social and home robotics, health care for the elderly, and surveillance. In this paper, we propose a method for the recognition of human actions based on the analysis of skeleton poses. The proposed method relies on novel trainable feature extractors, which learn representations of prototype skeleton examples and can be employed to recognize skeleton poses of interest. We combine the proposed feature extractors with an approach for the classification of pose sequences based on string kernels. We carried out experiments on three benchmark data sets (MIVIA-S, MSRSDA and MHAD), and the results that we achieved are comparable with or higher than those obtained by other existing methods. A further important contribution of this work is the MIVIA-S dataset, which we collected and made publicly available.
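The abstract does not detail the string-kernel classifier, so as an illustration only, the following is a minimal sketch of the general idea behind string kernels applied to action sequences: each action is encoded as a sequence of discretized pose labels, and two sequences are compared by counting the contiguous p-grams they share (a p-spectrum kernel). The pose alphabet and the `spectrum_kernel` function are hypothetical, not the paper's implementation.

```python
from collections import Counter

def spectrum_kernel(s, t, p=2):
    """p-spectrum string kernel: inner product of p-gram count vectors.

    s and t are sequences of pose labels (here, strings over a
    hypothetical pose alphabet); the kernel counts shared contiguous
    subsequences of length p.
    """
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    return sum(cs[g] * ct[g] for g in cs)

# Toy example: actions as sequences of discretized pose symbols.
a = "AABBC"
b = "AABBD"   # shares the bigrams AA, AB, BB with a
c = "XYZXY"   # shares no bigram with a

print(spectrum_kernel(a, a))  # 4: a sequence is most similar to itself
print(spectrum_kernel(a, b))  # 3: three shared bigrams
print(spectrum_kernel(a, c))  # 0: no overlap
```

Such a kernel can be plugged into any kernel-based classifier (e.g. an SVM) to classify whole pose sequences without aligning them frame by frame.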

Original language: English
Pages (from-to): 23-31
Number of pages: 9
Journal: Pattern Recognition Letters
Volume: 118
Early online date: 6 Mar 2018
DOIs
Publication status: Published - Feb 2019
Externally published: Yes

Keywords

  • 41A05
  • 41A10
  • 65D05
  • 65D17

