Human-made query-based summaries commonly contain information that was not explicitly asked for: they answer the user query, but also provide supporting information. To find this supporting information in the source text, a graph is used to model the strength and type of relations between the sentences of the query and those of the document cluster, based on various features. The resulting extracts rank second in overall readability in the DUC 2006 evaluation. Employing better question answering methods is key to improving the content-based evaluation results as well.
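The graph-based approach described above can be sketched in miniature. The paper's actual features and relation types are not given here, so this hedged sketch substitutes a single illustrative feature (word-overlap similarity) for edge weights: each document sentence is connected to each query sentence, and sentences are extracted in order of their total connection strength.

```python
import re

def tokenize(sentence):
    """Lowercase word set of a sentence (illustrative preprocessing)."""
    return set(re.findall(r"[a-z]+", sentence.lower()))

def similarity(a, b):
    """Jaccard word overlap between two sentences; stands in for the
    paper's richer relation features (an assumption, not the real model)."""
    ta, tb = tokenize(a), tokenize(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def rank_sentences(query_sentences, doc_sentences, top_k=2):
    """Score each document sentence by its total edge weight to the
    query sentences, then return the top_k sentences as the extract."""
    scored = [
        (sum(similarity(s, q) for q in query_sentences), s)
        for s in doc_sentences
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for _, s in scored[:top_k]]

query = ["What causes coral reef bleaching?"]
docs = [
    "Rising sea temperatures are the main cause of coral reef bleaching.",
    "The stock market closed higher on Friday.",
    "Bleaching occurs when corals expel the algae living in their tissues.",
]
extract = rank_sentences(query, docs)
```

In this toy run the off-topic stock market sentence receives zero weight and is excluded from the extract, while the two bleaching sentences are kept.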
Publisher: National Institute of Standards and Technology (NIST)
Conference: Document Understanding Conference 2006, New York, USA
City: Gaithersburg, MD, USA
Period: 8/06/06 → …