In this paper we address the problem of activity detection in unsegmented image sequences. Our main contribution is an implicit representation of the spatiotemporal shape of an activity, which relies on the spatiotemporal localization of characteristic ensembles of feature descriptors. Evidence for the spatiotemporal localization of an activity is accumulated in a probabilistic spatiotemporal voting scheme. We use boosting in order to select characteristic ensembles per class. This leads to a set of class-specific codebooks where each codeword is an ensemble of features. During training, we store the spatial positions of the codeword ensembles with respect to a set of reference points, and their temporal positions with respect to the start and end of the action instance. During testing, each activated codeword casts votes concerning the spatiotemporal position and extent of the action, using the information stored during training. Mean Shift mode estimation in the voting space provides the most probable hypotheses concerning the localization of the subjects at each frame, as well as the extent of the activities depicted in the image sequences. We present experimental results on a number of publicly available datasets that demonstrate the effectiveness of the proposed method in localizing and classifying human activities.
- Action Detection
- HMI-MI: MULTIMODAL INTERACTIONS
- Space-time Voting
- EC Grant Agreement nr.: FP7/231287
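The mode-estimation step of the voting scheme described above can be sketched with a minimal weighted Mean Shift over accumulated votes. This is an illustrative reconstruction, not the authors' implementation: the function name `mean_shift_modes`, the Gaussian kernel choice, and the bandwidth-based mode-merging threshold are all assumptions made for the sketch.

```python
import numpy as np

def mean_shift_modes(votes, weights, bandwidth=1.0, iters=50, tol=1e-4):
    """Find modes of a weighted vote distribution by Mean Shift.

    votes   : (N, D) array of spatiotemporal vote positions (assumed layout).
    weights : (N,) array of vote confidences.
    Returns an (M, D) array of distinct mode locations.
    """
    points = votes.astype(float).copy()
    for _ in range(iters):
        shifted = np.empty_like(points)
        for i, p in enumerate(points):
            # Gaussian kernel weight of every vote relative to the current point.
            d2 = np.sum((votes - p) ** 2, axis=1)
            k = weights * np.exp(-d2 / (2.0 * bandwidth ** 2))
            # Shift the point to the kernel-weighted mean of the votes.
            shifted[i] = (k[:, None] * votes).sum(axis=0) / k.sum()
        converged = np.max(np.abs(shifted - points)) < tol
        points = shifted
        if converged:
            break
    # Merge converged points that fall within half a bandwidth into one mode.
    modes = []
    for p in points:
        if not any(np.linalg.norm(p - m) < bandwidth / 2.0 for m in modes):
            modes.append(p)
    return np.array(modes)
```

In practice each mode found this way would correspond to one hypothesis about a subject's location and the temporal extent of an action; the bandwidth controls how aggressively nearby votes are pooled into a single hypothesis.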