On the Behaviour of Information Measures for Test Selection

D. Sent, L.C. van der Gaag

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

tests that are expected to yield the largest decrease in the uncertainty about a patient’s diagnosis. For capturing diagnostic uncertainty, often an information measure is used. In this paper, we study the Shannon entropy, the Gini index, and the misclassification error for this purpose. We argue that for a large range of values, the first derivative of the Gini index can be regarded as an approximation of the first derivative of the Shannon entropy. We also argue that the differences between the derivative functions outside this range can explain different test sequences in practice. We further argue that the misclassification error is less suited for test-selection purposes as it is likely to show a tendency to select tests arbitrarily. Experimental results from using the measures with a real-life probabilistic network in oncology support our observations.
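The three measures compared in the abstract have simple closed forms for a discrete diagnosis distribution. The sketch below (our own illustration, not code from the paper; function names are hypothetical) computes each measure and shows the behaviour the authors criticise: the misclassification error depends only on the most likely diagnosis, so it assigns identical scores to distributions that entropy and the Gini index distinguish — which is why test selection with it can become arbitrary.

```python
import math

def shannon_entropy(dist):
    """Shannon entropy (in bits): -sum p * log2(p) over nonzero probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def gini_index(dist):
    """Gini index: 1 - sum of squared probabilities."""
    return 1 - sum(p * p for p in dist)

def misclassification_error(dist):
    """Misclassification error: 1 - probability of the most likely diagnosis."""
    return 1 - max(dist)

# Two hypothetical diagnosis distributions with the same most-likely probability:
d1 = (0.7, 0.2, 0.1)
d2 = (0.7, 0.15, 0.15)

# The misclassification error cannot tell them apart ...
print(misclassification_error(d1) == misclassification_error(d2))  # True
# ... while the Shannon entropy and the Gini index can.
print(shannon_entropy(d1) != shannon_entropy(d2))  # True
print(gini_index(d1) != gini_index(d2))            # True
```

For a binary distribution (p, 1-p), the Gini index reduces to 2p(1-p); its derivative 2-4p is smooth and, as the paper argues, approximates the derivative of the entropy over a large range of p, whereas the piecewise-linear misclassification error produces frequent ties.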
Original language: Undefined
Title of host publication: Proceedings of Artificial Intelligence in Medicine Europe (AIME)
Editors: R. Bellazzi, A. Abu-Hanna, J. Hunter
Place of publication: Berlin
Publisher: Springer
Pages: 316-326
Number of pages: 11
DOI: 10.1007/978-3-540-73599-1_42
Publication status: Published - 2007
Event: 11th Conference on Artificial Intelligence in Medicine, AIME 2007 - Amsterdam, Netherlands
Duration: 7 Jul 2007 - 11 Jul 2007
Conference number: 11

Publication series

Name: Lecture Notes in Artificial Intelligence
Publisher: Springer Verlag
Number: LNCS4549
Volume: 4594
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 11th Conference on Artificial Intelligence in Medicine, AIME 2007
Abbreviated title: AIME
Country: Netherlands
City: Amsterdam
Period: 7/07/07 - 11/07/07

Keywords

  • EWI-10747
  • METIS-241770
  • IR-61840
  • misclassification error
  • Shannon entropy
  • Test selection
  • Gini index

Cite this

Sent, D., & van der Gaag, L. C. (2007). On the Behaviour of Information Measures for Test Selection. In R. Bellazzi, A. Abu-Hanna, & J. Hunter (Eds.), Proceedings of Artificial Intelligence in Medicine Europe (AIME) (pp. 316-326). (Lecture Notes in Artificial Intelligence; Vol. 4594, No. LNCS4549). Berlin: Springer. https://doi.org/10.1007/978-3-540-73599-1_42