Video Event Recognition and Anomaly Detection by Combining Gaussian Process and Hierarchical Dirichlet Process Models

Michael Ying Yang, Wentong Liao, Yanpeng Cao (Corresponding Author), Bodo Rosenhahn

Research output: Contribution to journal › Article › Academic › peer-review

2 Citations (Scopus)
8 Downloads (Pure)

Abstract

In this paper, we present an unsupervised learning framework for analyzing activities and interactions in surveillance videos. In our framework, three levels of video events are connected by a Hierarchical Dirichlet Process (HDP) model: low-level visual features, simple atomic activities, and multi-agent interactions. Atomic activities are represented as distributions over low-level features, while complicated interactions are represented as distributions over atomic activities. This learning process is unsupervised. Given a training video sequence, low-level visual features are extracted based on optical flow and then clustered into different atomic activities, and video clips are clustered into different interactions. The HDP model automatically decides the number of clusters, i.e., the categories of atomic activities and interactions. Based on the learned atomic activities and interactions, a training dataset is generated to train the Gaussian Process (GP) classifier. The trained GP models then operate on newly captured video to classify interactions and detect abnormal events in real time. Furthermore, the temporal dependencies between video events, learned by an HDP-Hidden Markov Model (HDP-HMM), are effectively integrated into the GP classifier to enhance classification accuracy on newly captured videos. Our framework couples the benefits of the generative model (HDP) with those of the discriminative model (GP). We provide detailed experiments showing that our framework achieves favorable real-time video event classification performance in a crowded traffic scene.
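The two-stage pipeline the abstract describes — unsupervised clustering of motion features into activity/interaction categories, followed by a supervised GP classifier trained on those discovered categories — can be sketched roughly with off-the-shelf tools. This is an illustrative approximation, not the authors' implementation: the HDP stage is stood in for by scikit-learn's `BayesianGaussianMixture` with a Dirichlet-process prior (which likewise infers the effective number of clusters), the toy feature vectors are synthetic stand-ins for optical-flow histograms, and the HDP-HMM temporal component is omitted.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-in for low-level motion features (e.g. quantized optical-flow
# histograms) from short video clips: 200 clips, 8-D features, two patterns.
X = np.vstack([
    rng.normal(0.0, 0.5, size=(100, 8)),  # one interaction pattern
    rng.normal(3.0, 0.5, size=(100, 8)),  # another interaction pattern
])

# Stage 1 (unsupervised): a Dirichlet-process mixture plays the role of the
# HDP, inferring how many activity/interaction categories the data supports.
dpmm = BayesianGaussianMixture(
    n_components=10,  # upper bound; the DP prior prunes unused components
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
labels = dpmm.predict(X)  # pseudo-labels = discovered categories

# Stage 2 (supervised): train a GP classifier on the discovered categories
# so newly captured clips can be classified online.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), random_state=0).fit(X, labels)

new_clip = rng.normal(3.0, 0.5, size=(1, 8))
pred = gpc.predict(new_clip)
proba = gpc.predict_proba(new_clip).max()  # a low max-probability on a new
                                           # clip can be used to flag anomalies
```

In this sketch, anomaly detection reduces to thresholding the GP's maximum class probability: a clip that fits none of the learned interaction categories well receives a flat predictive distribution and can be flagged as abnormal.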

Original language: English
Pages (from-to): 203-214
Number of pages: 12
Journal: Photogrammetric engineering and remote sensing : PE&RS
Volume: 84
Issue number: 4
DOI: 10.14358/PERS.84.4.203
Publication status: Published - 1 Apr 2018

Keywords

  • ITC-ISI-JOURNAL-ARTICLE

Cite this

@article{54a95c77a02d41ee88a38c23908829e4,
title = "Video Event Recognition and Anomaly Detection by Combining Gaussian Process and Hierarchical Dirichlet Process Models",
abstract = "In this paper, we present an unsupervised learning framework for analyzing activities and interactions in surveillance videos. In our framework, three levels of video events are connected by Hierarchical Dirichlet Process (HDP) model: low-level visual features, simple atomic activities, and multi-agent interactions. Atomic activities are represented as distribution of low-level features, while complicated interactions are represented as distribution of atomic activities. This learning process is unsupervised. Given a training video sequence, low-level visual features are extracted based on optic flow and then clustered into different atomic activities and video clips are clustered into different interactions. The HDP model automatically decides the number of clusters, i.e., the categories of atomic activities and interactions. Based on the learned atomic activities and interactions, a training dataset is generated to train the Gaussian Process (GP) classifier. Then, the trained GP models work in newly captured video to classify interactions and detect abnormal events in real time. Furthermore, the temporal dependencies between video events learned by HDP-Hidden Markov Models (HMM) are effectively integrated into GP classifier to enhance the accuracy of the classification in newly captured videos. Our framework couples the benefits of the generative model (HDP) with the discriminant model (GP). We provide detailed experiments showing that our framework enjoys favorable performance in video event classification in real-time in a crowded traffic scene.",
keywords = "ITC-ISI-JOURNAL-ARTICLE",
author = "Yang, {Michael Ying} and Wentong Liao and Yanpeng Cao and Bodo Rosenhahn",
year = "2018",
month = "4",
day = "1",
doi = "10.14358/PERS.84.4.203",
language = "English",
volume = "84",
pages = "203--214",
journal = "Photogrammetric engineering and remote sensing : PE&RS",
issn = "0099-1112",
publisher = "American Society for Photogrammetry and Remote Sensing",
number = "4",

}

Video Event Recognition and Anomaly Detection by Combining Gaussian Process and Hierarchical Dirichlet Process Models. / Yang, Michael Ying; Liao, Wentong; Cao, Yanpeng (Corresponding Author); Rosenhahn, Bodo.

In: Photogrammetric engineering and remote sensing : PE&RS, Vol. 84, No. 4, 01.04.2018, p. 203-214.

TY - JOUR

T1 - Video Event Recognition and Anomaly Detection by Combining Gaussian Process and Hierarchical Dirichlet Process Models

AU - Yang, Michael Ying

AU - Liao, Wentong

AU - Cao, Yanpeng

AU - Rosenhahn, Bodo

PY - 2018/4/1

Y1 - 2018/4/1

N2 - In this paper, we present an unsupervised learning framework for analyzing activities and interactions in surveillance videos. In our framework, three levels of video events are connected by Hierarchical Dirichlet Process (HDP) model: low-level visual features, simple atomic activities, and multi-agent interactions. Atomic activities are represented as distribution of low-level features, while complicated interactions are represented as distribution of atomic activities. This learning process is unsupervised. Given a training video sequence, low-level visual features are extracted based on optic flow and then clustered into different atomic activities and video clips are clustered into different interactions. The HDP model automatically decides the number of clusters, i.e., the categories of atomic activities and interactions. Based on the learned atomic activities and interactions, a training dataset is generated to train the Gaussian Process (GP) classifier. Then, the trained GP models work in newly captured video to classify interactions and detect abnormal events in real time. Furthermore, the temporal dependencies between video events learned by HDP-Hidden Markov Models (HMM) are effectively integrated into GP classifier to enhance the accuracy of the classification in newly captured videos. Our framework couples the benefits of the generative model (HDP) with the discriminant model (GP). We provide detailed experiments showing that our framework enjoys favorable performance in video event classification in real-time in a crowded traffic scene.

KW - ITC-ISI-JOURNAL-ARTICLE

UR - https://ezproxy2.utwente.nl/login?url=https://webapps.itc.utwente.nl/library/2018/isi/yang_vid.pdf

U2 - 10.14358/PERS.84.4.203

DO - 10.14358/PERS.84.4.203

M3 - Article

VL - 84

SP - 203

EP - 214

JO - Photogrammetric engineering and remote sensing : PE&RS

JF - Photogrammetric engineering and remote sensing : PE&RS

SN - 0099-1112

IS - 4

ER -