ACRyLIQ: Leveraging DBpedia for Adaptive Crowdsourcing in Linked Data Quality Assessment

Research output: Chapter in Book or Conference Publication/Proceeding › Conference Publication › peer-review

Abstract

Crowdsourcing has emerged as a powerful paradigm for processing data with the help of a large number of people. For instance, crowdsourcing has been successfully employed for quality assessment and improvement of Linked Data. A major challenge of Linked Data quality assessment with crowdsourcing is the cold-start problem: how to estimate the reliability of crowd workers and assign the most reliable workers to tasks? We address this challenge by proposing a novel approach for generating test questions from DBpedia, a general knowledge base, based on topics that define the domain of the tasks. We then use these test questions to approximate the reliability of the workers. Subsequently, the tasks are dynamically assigned to reliable workers to help improve the accuracy of collected responses. Our proposed approach, ACRyLIQ, is evaluated using workers hired from Amazon Mechanical Turk, on two real-world datasets with tasks for Linked Data quality assessment. We validate our proposed approach in terms of accuracy and compare it against the baseline approach of reliability approximation using gold-standard tasks. The results demonstrate that our proposed approach achieves high accuracy without the need for gold-standard tasks.
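The core idea described above — scoring each worker on automatically generated test questions and then routing tasks to the highest-scoring workers — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the accuracy-based reliability score, and the top-k assignment policy are simplifying assumptions for illustration only.

```python
# Illustrative sketch (assumptions, not the ACRyLIQ implementation):
# reliability is approximated as a worker's accuracy on test questions
# whose answers are known from the knowledge base, and each task is then
# assigned to the k most reliable workers.

def reliability(answers, gold):
    """Fraction of test questions the worker answered correctly."""
    if not gold:
        return 0.0
    correct = sum(1 for q, a in answers.items() if gold.get(q) == a)
    return correct / len(gold)

def assign_tasks(tasks, worker_reliability, per_task=3):
    """Assign each task to the `per_task` most reliable workers."""
    ranked = sorted(worker_reliability, key=worker_reliability.get, reverse=True)
    return {t: ranked[:per_task] for t in tasks}

# Hypothetical test questions with answers known from the knowledge base.
gold = {"q1": "A", "q2": "B", "q3": "C"}
workers = {
    "w1": reliability({"q1": "A", "q2": "B", "q3": "C"}, gold),  # all correct
    "w2": reliability({"q1": "A", "q2": "X", "q3": "C"}, gold),  # 2 of 3
    "w3": reliability({"q1": "X", "q2": "X", "q3": "C"}, gold),  # 1 of 3
}
assignment = assign_tasks(["t1", "t2"], workers, per_task=2)
```

In the paper's setting, the gold answers come from DBpedia facts within the task domain rather than from manually curated gold-standard tasks, which is what removes the cold-start bottleneck.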
Original language: English (Ireland)
Title of host publication: 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2016)
Publication status: Published - 1 Jan 2016

Authors

  • ul Hassan, Umair
  • Zaveri, Amrapali
  • Marx, Edgard
  • Curry, Edward
  • Lehmann, Jens
