Efficient Online Crowdsourcing with Complex Annotations

Reshef Meir, Viet An Nguyen, Xu Chen, Jagdish Ramakrishnan, Udi Weinsberg

Research output: Contribution to journal › Conference article › peer-review

Abstract

Crowdsourcing platforms use various truth discovery algorithms to aggregate annotations from multiple labelers. In an online setting, however, the main challenge is deciding, for each item, whether to request more annotations, so as to efficiently trade off cost (i.e., the number of annotations) against the quality of the aggregated annotations. In this paper, we propose a novel approach for general complex annotations (such as bounding boxes and taxonomy paths) that works in an online crowdsourcing setting. We prove that the expected average similarity of a labeler is linear in their accuracy conditional on the reported label. This enables us to infer the accuracy of reported labels in a broad range of scenarios. We conduct extensive evaluations on real-world crowdsourcing data from Meta and show the effectiveness of our proposed online algorithms in improving the cost-quality trade-off.
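The abstract's core idea (using the average similarity among collected annotations as a proxy for accuracy when deciding whether to ask for more) can be sketched in a few lines. The following is a minimal, hypothetical Python illustration, not the paper's actual algorithm: the functions `average_similarity`, `should_request_more`, and `iou`, and the threshold and budget values, are all illustrative assumptions.

```python
from typing import Callable, List, Tuple

def average_similarity(label, others: List, sim: Callable) -> float:
    """Mean similarity of one annotation to all other collected annotations."""
    return sum(sim(label, other) for other in others) / len(others)

def should_request_more(annotations: List,
                        sim: Callable,
                        quality_threshold: float,
                        max_annotations: int) -> bool:
    """Decide whether to ask another labeler for this item.

    Treats the best annotation's average similarity to its peers as a
    proxy for its accuracy (the paper proves expected average similarity
    is linear in accuracy), and stops once that proxy clears the quality
    threshold or the annotation budget is exhausted.
    """
    if len(annotations) < 2:
        return True   # need at least two annotations to compare
    if len(annotations) >= max_annotations:
        return False  # budget exhausted
    best_proxy = max(
        average_similarity(a, annotations[:i] + annotations[i + 1:], sim)
        for i, a in enumerate(annotations)
    )
    return best_proxy < quality_threshold

# Toy example: 1-D "bounding boxes" as (lo, hi) intervals, with
# intersection-over-union (IoU) as the similarity function.
def iou(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

boxes = [(0.0, 1.0), (0.1, 1.05), (0.0, 0.9)]
print(should_request_more(boxes, iou, quality_threshold=0.8, max_annotations=5))
# -> False: the annotations already agree closely, so stop collecting.
```

Because `should_request_more` depends only on a similarity function, the same sketch applies to any complex annotation type (e.g., taxonomy paths with a path-overlap similarity) without changing the stopping logic.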

Original language: English
Pages (from-to): 10119-10127
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue number: 9
DOIs
State: Published - 25 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 – 27 Feb 2024

ASJC Scopus subject areas

  • Artificial Intelligence
