Every Shot Counts: Using Exemplars for Repetition Counting in Videos

Saptarshi Sinha, Alexandros Stergiou, Dima Damen

Research output: Working paper › Preprint (Academic)


Abstract

Video repetition counting infers the number of repetitions of recurring actions or motion within a video. We propose an exemplar-based approach that discovers visual correspondence of video exemplars across repetitions within target videos. Our proposed Every Shot Counts (ESCounts) model is an attention-based encoder-decoder that encodes videos of varying lengths alongside exemplars from the same and different videos. In training, ESCounts regresses locations of high correspondence to the exemplars within the video. In tandem, our method learns a latent that encodes representations of general repetitive motions, which we use for exemplar-free, zero-shot inference. Extensive experiments over commonly used datasets (RepCount, Countix, and UCFRep) showcase ESCounts obtaining state-of-the-art performance across all three datasets. Detailed ablations further demonstrate the effectiveness of our method.
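The abstract describes an attention-based encoder-decoder in which video features attend to exemplar features and locations of high correspondence are regressed. The following is a minimal sketch of that idea, not the authors' implementation: the use of torch.nn.MultiheadAttention, the tensor shapes, and the density-regression head (whose sum gives the count) are illustrative assumptions.

```python
# Minimal sketch: exemplar-video cross-attention for repetition counting.
# Assumptions (not from the paper): a per-token density head summed over time
# to produce the count, and torch.nn.MultiheadAttention as the attention primitive.
import torch
import torch.nn as nn


class ExemplarCrossAttentionCounter(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Video tokens act as queries; exemplar tokens act as keys/values.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Per-token density head; summing densities over time yields the predicted count.
        self.density_head = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 1)
        )

    def forward(self, video_tokens: torch.Tensor, exemplar_tokens: torch.Tensor):
        # video_tokens:    (B, T, dim) encoded video features
        # exemplar_tokens: (B, E, dim) encoded exemplars (same or different videos)
        attended, _ = self.cross_attn(video_tokens, exemplar_tokens, exemplar_tokens)
        tokens = self.norm(video_tokens + attended)
        density = self.density_head(tokens).squeeze(-1)  # (B, T) correspondence density
        count = density.sum(dim=-1)                      # (B,)  predicted repetition count
        return density, count


if __name__ == "__main__":
    model = ExemplarCrossAttentionCounter()
    video = torch.randn(2, 64, 256)     # 64 video tokens per clip
    exemplars = torch.randn(2, 4, 256)  # 4 exemplar tokens
    density, count = model(video, exemplars)
    print(density.shape, count.shape)   # torch.Size([2, 64]) torch.Size([2])
```

For exemplar-free, zero-shot inference as mentioned in the abstract, a learned latent could stand in for the exemplar tokens; how that latent is learned is specific to the paper and not reproduced here.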
Original language: English
Publisher: ArXiv.org
Publication status: Accepted/In press - 13 Oct 2024

