Query-based extracting: how to support the answer?

W.E. Bosma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic


Abstract

Human-authored query-based summaries commonly contain information that was not explicitly asked for: they answer the user's query, but also provide supporting information. In order to find this supporting information in the source text, a graph is used to model the strength and type of relations between sentences of the query and the document cluster, based on various features. The resulting extracts ranked second in overall readability in the DUC 2006 evaluation. Employing better question answering methods is key to improving the content-based evaluation results as well.
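The idea of ranking document sentences by the strength of their relation to the query can be illustrated with a minimal sketch. Note that the overlap feature, the threshold, and the function names below are hypothetical stand-ins, not the paper's actual feature-based relation model.

```python
# Illustrative sketch: rank candidate sentences for an extract by the
# strength of their relation to a query sentence. The word-overlap
# feature is a toy substitute for the paper's richer relation features.

def word_overlap(a, b):
    """Toy relation strength: Jaccard overlap of the two word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def build_graph(query, sentences, threshold=0.1):
    """Weighted edges from the query to each sufficiently related sentence."""
    graph = {}
    for i, sentence in enumerate(sentences):
        weight = word_overlap(query, sentence)
        if weight >= threshold:
            graph[i] = weight
    return graph

def extract(query, sentences, k=2):
    """Select the k sentences most strongly related to the query."""
    graph = build_graph(query, sentences)
    ranked = sorted(graph, key=graph.get, reverse=True)
    return [sentences[i] for i in ranked[:k]]
```

In the actual system, edges would also carry a relation type and connect document sentences to each other, so that supporting sentences can be reached through the graph rather than only by direct similarity to the query.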
Original language: Undefined
Title of host publication: Document Understanding Conference 2006
Place of Publication: Gaithersburg, MD, USA
Publisher: National Institute of Standards and Technology
Pages: 202-208
Number of pages: 7
ISBN (Print): not assigned
Publication status: Published - 8 Jun 2006

Publication series

Publisher: National Institute of Standards and Technology (NIST)
Number: Paper PMD-

Keywords

  • EWI-9290
  • METIS-248200
  • IR-66954

Cite this

Bosma, W. E. (2006). Query-based extracting: how to support the answer? In Document Understanding Conference 2006 (pp. 202-208). Gaithersburg, MD, USA: National Institute of Standards and Technology.