The AXES project participated in the interactive instance search task (INS), the semantic indexing task (SIN), the multimedia event recounting task (MER), and the multimedia event detection task (MED) for TRECVid 2013. This year, our interactive INS work focused on using classifiers trained at query time with positive examples collected from external search engines. Our INS experiments were carried out by students and researchers at Dublin City University. Our best INS runs performed on par with the top-ranked INS runs in terms of P@10 and P@30, and around the median in terms of mAP.
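The query-time training described above can be sketched as follows. This is a minimal illustration, not the AXES implementation: the function name, the ridge-regression formulation, and the fixed negative pool are all assumptions made for the example; in practice the positives would be feature vectors extracted from images returned by an external search engine.

```python
import numpy as np

def train_query_time_classifier(pos, neg_pool, lam=1e-2):
    """Train a linear scorer at query time (hypothetical sketch).

    pos      : (P, D) features of positive examples, e.g. extracted from
               images fetched from an external search engine.
    neg_pool : (M, D) fixed pool of negative features.
    Returns a weight vector w; a video/frame x is ranked by w @ x.
    """
    X = np.vstack([pos, neg_pool])
    y = np.concatenate([np.ones(len(pos)), -np.ones(len(neg_pool))])
    D = X.shape[1]
    # Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y.
    # Cheap enough to run per query, which is the point of this setup.
    w = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)
    return w
```

Because the solve is over a D x D system, training cost is independent of the size of the video collection, so a fresh classifier can be fitted for every query.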
For SIN, MED, and MER, we use systems based on state-of-the-art local low-level descriptors for motion, image, and sound, as well as high-level features that capture speech and text from the audio and visual streams, respectively. The low-level descriptors were aggregated by means of Fisher vectors into high-dimensional video-level signatures; the high-level features were aggregated into bag-of-words histograms. Using these features we train linear classifiers, and use early and late fusion to combine the different features. Our MED system achieved the best score of all submitted runs in the main track, as well as in the ad-hoc track.
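The Fisher-vector aggregation step can be illustrated with a simplified first-order variant. This is a sketch under assumptions, not the descriptors or GMM used in the actual systems: it keeps only the gradient with respect to the GMM means (full Fisher vectors also include second-order terms), and the function names are illustrative.

```python
import numpy as np

def fisher_vector_first_order(descriptors, means, covs, priors):
    """First-order Fisher vector for a diagonal-covariance GMM (sketch).

    descriptors : (N, D) local low-level descriptors from one video
    means, covs : (K, D) GMM component means / diagonal variances
    priors      : (K,)   mixture weights
    Returns a (K*D,) video-level signature.
    """
    N, D = descriptors.shape
    K = means.shape[0]
    # Posterior (soft assignment) of each descriptor to each component.
    log_p = np.empty((N, K))
    for k in range(K):
        diff = descriptors - means[k]
        log_p[:, k] = (np.log(priors[k])
                       - 0.5 * np.sum(np.log(2 * np.pi * covs[k]))
                       - 0.5 * np.sum(diff**2 / covs[k], axis=1))
    log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
    post = np.exp(log_p)
    post /= post.sum(axis=1, keepdims=True)
    # Posterior-weighted gradient w.r.t. the component means.
    fv = np.empty((K, D))
    for k in range(K):
        diff = (descriptors - means[k]) / np.sqrt(covs[k])
        fv[k] = post[:, k] @ diff / (N * np.sqrt(priors[k]))
    return fv.ravel()

def normalise(fv):
    """Standard power- and L2-normalisation of a Fisher vector."""
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)
```

The resulting signatures feed directly into linear classifiers; late fusion then amounts to averaging the per-feature classifier scores for each video, while early fusion concatenates the signatures before training.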
This paper describes in detail our INS, MER, and MED systems and the results and findings of our experiments.
|Title of host publication||TREC Video Retrieval Evaluation Online Proceedings (TRECVid 2013)|
|Place of Publication||Gaithersburg, MD, USA|
|Number of pages||12|
|Publication status||Published - Dec 2013|
|Event||TREC Video Retrieval Evaluation, TRECVID 2013 - Gaithersburg, United States|
Duration: 19 Nov 2013 → 22 Nov 2013
|Name||TREC Video Retrieval Evaluation: TRECVID|