Human-written query-based summaries commonly contain information that was not explicitly asked for: they answer the user query, but also provide supporting information. To find this supporting information in the source text, a graph is used to model the strength and type of the relations between sentences of the query and of the document cluster, based on various features. The resulting extracts rank second in overall readability in the DUC 2006 evaluation. Employing better question answering methods is the key to improving the content-based evaluation results as well.
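The abstract's graph-based ranking could be sketched roughly as follows. This is a minimal illustration, not the paper's method: it uses a single relation feature (bag-of-words cosine similarity between query and document sentences, with a hypothetical `threshold` cutoff), whereas the paper combines several relation types and strengths.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercased word tokens, punctuation stripped.
    return re.findall(r"\w+", text.lower())

def cosine(a, b):
    # Cosine similarity between bag-of-words vectors of two sentences.
    ca, cb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_sentences(query_sents, doc_sents, threshold=0.1):
    # Build a bipartite graph between query and document sentences;
    # each edge weight is one similarity feature. A document sentence's
    # score is the sum of its edge weights to the query sentences.
    scored = []
    for ds in doc_sents:
        weight = sum(cosine(qs, ds) for qs in query_sents)
        scored.append((weight, ds))
    # Keep sentences whose total relation strength exceeds the cutoff,
    # strongest first; these form the extract.
    return [s for w, s in sorted(scored, reverse=True) if w > threshold]
```

For example, given the query "What causes rain?" and a small document cluster, a sentence about rain formation would outrank an unrelated one, and sentences with no relation to the query would be dropped entirely.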
|Title of host publication||Document Understanding Conference 2006|
|Place of Publication||Gaithersburg, MD, USA|
|Publisher||National Institute of Standards and Technology|
|Number of pages||7|
|ISBN (Print)||not assigned|
|Publication status||Published - 8 Jun 2006|