Fast multi-class distance transforms for video surveillance

Theo E. Schouten, Egon van den Broek

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

2 Citations (Scopus)
16 Downloads (Pure)

Abstract

A distance transformation (DT) takes a binary image as input and generates a distance map image in which the value of each pixel is its distance to a given set of object pixels in the binary image. In this research, DTs for multi-class data (MCDTs) are developed which generate both a distance map and a class map containing, for each pixel, the class of the closest object. Results indicate that the MCDT based on the Fast Exact Euclidean Distance (FEED) method is a factor 2 to 4 faster than MCDTs based on exact or semi-exact Euclidean distance (ED) transformations, and is only a factor 2 to 4 slower than the MCDT based on the crude city-block approximation of the ED. In the second part of this research, the MCDTs were adapted such that they can be used for the fast generation of distance and class maps for video sequences. The frames of the sequences contain a number of fixed objects and a moving object, where each object has a separate label. Results show that the FEED-based version is a factor 2 to 3.5 faster than the fastest of all the other video-MCDTs, which is based on the chamfer 3,4 distance measure. FEED is even a factor 3.5 to 10 faster than another fast exact ED transformation. With video multi-class FEED, it will be possible to measure distances from a moving object to various identified stationary objects at nearly the frame rate of a webcam. This will be very useful when the risk exists that objects move outside surveillance limits.
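The paper's pairing of a distance map with a class map can be sketched with SciPy's exact Euclidean DT. This is not FEED itself (the paper's own algorithm), only an illustration of the output the abstract describes: `distance_transform_edt` with `return_indices=True` gives, for every pixel, the coordinates of its nearest object pixel, and indexing the label image at those coordinates yields the class map. The tiny label image below is illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Labeled image: 0 = background, 1..K = object classes.
labels = np.zeros((8, 8), dtype=int)
labels[1, 1] = 1   # a class-1 object pixel
labels[6, 6] = 2   # a class-2 object pixel

# The EDT measures the distance of each nonzero pixel to the nearest
# zero pixel, so feed it a mask that is zero exactly at object pixels.
dist, inds = distance_transform_edt(labels == 0, return_indices=True)

# Class map: the class of the nearest object pixel, for every pixel.
class_map = labels[inds[0], inds[1]]
```

`dist` is the Euclidean distance map and `class_map` assigns every pixel the label of its closest object, which is exactly the pair of outputs an MCDT produces in a single pass.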
Original language: Undefined
Title of host publication: Proceedings of SPIE (Real-Time Image Processing)
Editors: Nasser Kehtarnavaz, Matthias F. Carlsohn
Place of Publication: Bellingham, WA, USA
Publisher: SPIE - The International Society for Optical Engineering
Pages: 681107
Number of pages: 11
ISBN (Print): 9780819469830
DOIs: 10.1117/12.766408
Publication status: Published - 28 Jan 2008

Publication series

Name: Proceedings of SPIE
Publisher: SPIE - The International Society for Optical Engineering
Volume: 6811
ISSN (Print): 0277-786X

Keywords

  • METIS-252708
  • Distance maps
  • Classification
  • IR-58740
  • multi class data
  • HMI-VRG: Virtual Reality and Graphics
  • HMI-CI: Computational Intelligence
  • EWI-21107
  • video surveillance
  • Fast Exact Euclidean Distance (FEED)

Cite this

Schouten, T. E., & van den Broek, E. (2008). Fast multi-class distance transforms for video surveillance. In N. Kehtarnavaz, & M. F. Carlsohn (Eds.), Proceedings of SPIE (Real-Time Image Processing) (pp. 681107). (Proceedings of SPIE; Vol. 6811). Bellingham, WA, USA: SPIE - The International Society for Optical Engineering. https://doi.org/10.1117/12.766408