Probabilistic Data Integration

Maurice van Keulen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic

Abstract

In data integration efforts such as portal development, much development time is devoted to entity resolution. Advanced similarity measurement techniques are often used to remove semantic duplicates or resolve other semantic conflicts. It proves impossible, however, to automatically get rid of all semantic problems. An often-used rule of thumb states that about 90% of the development effort is devoted to semi-automatically resolving the remaining 10% of hard cases. In an attempt to significantly decrease human effort at data integration time, we have proposed an approach that strives for a 'good enough' initial integration which stores any remaining semantic uncertainty and conflicts in a probabilistic XML database. The remaining cases are to be resolved during use with user feedback. We conducted extensive experiments on the effects and sensitivity of rule definition, threshold tuning, and user feedback on the integration quality. We claim that our approach indeed reduces development effort - and does not merely shift it - by showing that setting rough safe thresholds and defining only a few rules suffices to produce a 'good enough' integration that can be meaningfully used, and that user feedback is effective in gradually improving the integration quality.
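
To make the thresholding idea in the abstract concrete, here is a minimal, hypothetical Python sketch: pairs whose similarity exceeds an upper threshold are merged automatically, pairs below a lower threshold are kept apart, and the uncertain middle band is stored as probability-weighted alternatives to be collapsed later by user feedback. The threshold values, the similarity measure, the function names, and the two-outcome data model are all assumptions for illustration; the paper's actual approach stores alternatives in a probabilistic XML database.

    # Illustrative sketch only; thresholds and similarity measure are
    # assumed, not taken from the paper.
    from difflib import SequenceMatcher

    T_MATCH, T_NONMATCH = 0.9, 0.5  # rough "safe" thresholds (assumed values)

    def similarity(a: str, b: str) -> float:
        """Character-based similarity; the paper's matching rules may differ."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def integrate(entity_a: str, entity_b: str):
        """Decide certain cases directly; keep uncertain cases as
        probability-weighted alternatives instead of forcing a choice."""
        s = similarity(entity_a, entity_b)
        if s >= T_MATCH:
            return [("same", 1.0)]
        if s <= T_NONMATCH:
            return [("distinct", 1.0)]
        # Uncertain band: store both possible worlds, weighted by the score.
        return [("same", s), ("distinct", 1.0 - s)]

    def apply_feedback(alternatives, user_says_same: bool):
        """User feedback collapses the stored uncertainty to a single world."""
        choice = "same" if user_says_same else "distinct"
        return [(world, 1.0) for world, _ in alternatives if world == choice]

    alts = integrate("Maurice van Keulen", "M. van Keulen")
    print(alts)                        # e.g. two weighted alternatives
    print(apply_feedback(alts, True))  # feedback resolves the conflict

The design point the sketch tries to capture is that only the middle band ever reaches a human, which is why rough safe thresholds and a few rules can suffice for a 'good enough' initial integration.
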
Original language: English
Title of host publication: 08421 Abstracts Collection - Uncertainty Management in Information Systems
Editors: Christoph Koch, Birgitta König-Ries, Volker Markl, Maurice van Keulen
Place of publication: Dagstuhl, Germany
Publisher: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik
Pages: 8-8
Number of pages: 1
Publication status: Published - Mar 2009
Event: Uncertainty Management in Information Systems: Dagstuhl Seminar 08421 - Dagstuhl, Germany
Duration: 12 Oct 2008 - 17 Oct 2008

Publication series

Name: Dagstuhl Seminar Proceedings
Publisher: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik
Number: 08421
ISSN (Print): 1862-4405

Workshop

Workshop: Uncertainty Management in Information Systems
Country: Germany
City: Dagstuhl
Period: 12/10/08 - 17/10/08
Other: 12 - 17 Oct 2008

Keywords

  • probabilistic databases
  • EWI-15237
  • Uncertainty management
  • METIS-265199
  • data quality
  • IR-65438
  • Data Integration
  • entity resolution

Cite this

van Keulen, M. (2009). Probabilistic Data Integration. In C. Koch, B. König-Ries, V. Markl, & M. van Keulen (Eds.), 08421 Abstracts Collection - Uncertainty Management in Information Systems (pp. 8-8). (Dagstuhl Seminar Proceedings; No. 08421). Dagstuhl, Germany: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik.
http://eprints.ewi.utwente.nl/15237