Duplicate Detection in Probabilistic Data

Fabian Panse, Maurice van Keulen, Ander de Keijzer, Norbert Ritter

Research output: Book/Report › Report › Professional


Abstract

Collected data often contains uncertainties, and probabilistic databases have been proposed to manage such uncertain data. Combining data from multiple autonomous probabilistic databases requires an integration of probabilistic data. Until now, however, data integration approaches have focused on certain source data (relational or XML); there is no work yet on integrating uncertain, especially probabilistic, source data. In this paper, we present a first step towards a concise consolidation of probabilistic data. We focus on duplicate detection as a representative and essential step in an integration process and present techniques for identifying multiple probabilistic representations of the same real-world entities. Furthermore, to increase the efficiency of the duplicate detection process, we introduce search space reduction methods adapted to probabilistic data.
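The report itself details these techniques; the following is only a minimal sketch of the general idea, under the assumption that each uncertain attribute is given as a discrete probability distribution over candidate values. The helper names (expected_similarity, most_probable_blocking_key) and the example values are hypothetical and not taken from the paper.

# Illustrative sketch only -- not the authors' algorithm. Assumes each
# uncertain attribute is a dict mapping candidate value -> probability.

from itertools import product


def expected_similarity(dist_a, dist_b):
    # Probability that two independently distributed uncertain attribute
    # values agree, i.e. the expected value of an exact-match comparison.
    return sum(p_a * p_b
               for (v_a, p_a), (v_b, p_b) in product(dist_a.items(), dist_b.items())
               if v_a == v_b)


def most_probable_blocking_key(tuple_dists, key_attr):
    # Search-space reduction: block candidate pairs on the most probable
    # value of a chosen attribute instead of comparing all tuple pairs.
    dist = tuple_dists[key_attr]
    return max(dist, key=dist.get)


# Two probabilistic representations that may describe the same person.
t1 = {"name": {"John Smith": 0.7, "Jon Smith": 0.3}, "city": {"Enschede": 1.0}}
t2 = {"name": {"John Smith": 0.6, "J. Smith": 0.4}, "city": {"Enschede": 0.8, "Hengelo": 0.2}}

name_sim = expected_similarity(t1["name"], t2["name"])   # 0.7 * 0.6 = 0.42
city_sim = expected_similarity(t1["city"], t2["city"])   # 1.0 * 0.8 = 0.80

# The pair is only compared in detail if both fall into the same block.
same_block = most_probable_blocking_key(t1, "city") == most_probable_blocking_key(t2, "city")
print(name_sim, city_sim, same_block)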
Original language: English
Place of Publication: Enschede
Publisher: Centre for Telematics and Information Technology (CTIT)
Number of pages: 8
Publication status: Published - Dec 2009

Publication series

Name: CTIT Technical Report Series
Publisher: Centre for Telematics and Information Technology, University of Twente
No.: TR-CTIT-09-44
ISSN (Print): 1381-3625

Keywords

  • DB-SDI: SCHEMA AND DATA INTEGRATION
