Automating the construction of scene classifiers for content-based video retrieval

Menno Israël, Egon van den Broek, Peter van der Putten. Editors: L. Khan, V.A. Petrushin

    Research output: Contribution to conference › Paper › Academic › peer-review


    Abstract

    This paper introduces a real-time automatic scene classifier for content-based video retrieval. In our envisioned approach, end users such as documentalists, not image processing experts, build classifiers interactively by simply indicating positive examples of a scene. Classification is a two-stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification (e.g., city, portraits, or countryside). The first-stage classifiers can be seen as a set of highly specialized, learned feature detectors, an alternative to letting an image processing expert determine features a priori. We present results for experiments on a variety of patch and image classes. The scene classifier has been used successfully within television archives and for Internet porn filtering.
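    The two-stage procedure described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's method: the patch classes, feature vectors, and scene prototypes are made up, and nearest-centroid matching stands in for the classifiers that the paper learns from user-supplied positive examples.

    ```python
    from collections import Counter

    # Hypothetical patch classes, centroids, and scene prototypes
    # (illustrative values only, not from the paper).
    PATCH_LABELS = ["sky", "grass", "building"]
    CENTROIDS = {
        "sky": (0.2, 0.6, 1.0),
        "grass": (0.2, 0.8, 0.2),
        "building": (0.5, 0.5, 0.5),
    }
    SCENE_PROTOTYPES = {
        "countryside": [0.4, 0.5, 0.1],
        "city": [0.2, 0.1, 0.7],
    }

    def _sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def classify_patch(features, centroids=CENTROIDS):
        # Stage 1: assign the patch to the nearest patch-class centroid.
        return min(centroids, key=lambda lbl: _sqdist(features, centroids[lbl]))

    def scene_frequency_vector(patches, centroids=CENTROIDS):
        # Count patch labels over the whole frame, normalize to frequencies.
        counts = Counter(classify_patch(p, centroids) for p in patches)
        return [counts[lbl] / len(patches) for lbl in PATCH_LABELS]

    def classify_scene(freq_vector, prototypes=SCENE_PROTOTYPES):
        # Stage 2: match the frequency vector against per-scene prototypes.
        return min(prototypes, key=lambda s: _sqdist(freq_vector, prototypes[s]))

    # Four toy patch feature vectors from one hypothetical frame.
    PATCHES = [(0.2, 0.6, 0.95), (0.25, 0.75, 0.2),
               (0.2, 0.62, 1.0), (0.5, 0.5, 0.55)]
    ```

    Here two patches fall near the "sky" centroid, one near "grass", one near "building", so the frequency vector is `[0.5, 0.25, 0.25]`, which lies closer to the "countryside" prototype than to "city".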
    Original language: Undefined
    Pages: 38-47
    Number of pages: 10
    Publication status: Published - 22 Aug 2004
    Event: Fifth International Workshop on Multimedia Data Mining (MDM/KDD'04) - Seattle, WA, USA
    Duration: 22 Aug 2004 - 22 Aug 2004

    Workshop

    Workshop: Fifth International Workshop on Multimedia Data Mining (MDM/KDD'04)
    Period: 22/08/04 - 22/08/04
    Other: 22 Aug 2004

    Keywords

    • HMI-MR: MULTIMEDIA RETRIEVAL
    • scene
    • visual alphabets
    • EWI-21118
    • HMI-CI: Computational Intelligence
    • Visual perception
    • IR-79285
    • Automation
    • Classification
    • Content-based video retrieval
    • HMI-HF: Human Factors
