Cognition and statistical-based crowd evaluation framework for ER-in-house crowdsourcing system: Inbound contact center

Morteza Saberi, Omar Khadeer Hussain, Naeem Khalid Janjua, Elizabeth Chang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Citations (Scopus)

Abstract

Entity identification and resolution has been a hot topic in computer science over the last three decades. The ever-increasing amount of data and data-quality issues such as duplicate records pose great challenges to organizations in efficiently and effectively performing business operations such as customer relationship management, marketing, and contact-center management. Recently, crowdsourcing techniques have been used to improve the accuracy of entity resolution, making use of human intelligence to label data and make it ready for further processing by entity resolution (ER) algorithms. However, labelling of data by humans is an error-prone process that affects entity resolution and, eventually, the overall performance of the crowd. Controlling the quality of the labelling task is therefore essential for crowdsourcing systems, and it becomes more challenging when ground-truth data are unavailable. In this paper, we address the above challenge by designing and developing a framework for evaluating the performance of an ER-in-house crowdsourcing system using cognition- and statistics-based techniques. Our methodology is divided into two phases, namely before-hand evaluation and in-process evaluation. In before-hand evaluation, a cognitive approach is used to filter out workers with an inappropriate cognitive style for the ER-labelling task. To this end, the analytic hierarchy process (AHP) is used to classify each of the four primary cognitive styles discussed in the literature as either suitable or not suitable for the labelling task under consideration. To control the quality of work by crowd-workers, we extend and use the statistical approach proposed by Joglekar et al. during the second phase, i.e. in-process evaluation. To illustrate the effectiveness of our approach, we consider the domain of inbound contact centers and use Customer Service Representatives' (CSRs') knowledge for the ER-labelling task.
In the proposed ER-in-house crowdsourcing system, CSRs are considered as crowd-workers. A synthetic dataset is used to demonstrate the applicability of the proposed cognition- and statistics-based CSR evaluation approaches.
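The AHP step mentioned in the abstract can be illustrated with a minimal sketch of deriving priority weights from a pairwise-comparison matrix via the common geometric-mean approximation. This is not the authors' implementation: the style labels and all matrix values below are invented for illustration only.

```python
import math

# Four hypothetical cognitive styles (placeholder labels, not from the paper).
styles = ["style A", "style B", "style C", "style D"]

# Hypothetical pairwise-comparison matrix on Saaty's 1-9 scale:
# A[i][j] expresses how much more suitable style i is than style j
# for the labelling task. Values are illustrative only.
A = [
    [1,     3,   1/2, 1/4],
    [1/3,   1,   1/3, 1/5],
    [2,     3,   1,   1/2],
    [4,     5,   2,   1  ],
]

# Geometric-mean approximation of the AHP priority vector:
# take the geometric mean of each row, then normalize to sum to 1.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
total = sum(gm)
weights = [g / total for g in gm]

for style, w in zip(styles, weights):
    print(f"{style}: {w:.3f}")
```

Styles whose priority weight falls below a chosen threshold would then be classed as not suitable, filtering out the corresponding workers in the before-hand evaluation phase.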

Original language: English
Title of host publication: Databases Theory and Applications
Subtitle of host publication: 26th Australasian Database Conference, ADC 2015, Proceedings
Editors: Mohamed A. Sharaf, Muhammad A. Cheema, Jianzhong Qi
Place of Publication: Switzerland
Publisher: Springer, Cham
Pages: 207-219
Number of pages: 13
ISBN (Electronic): 978-3-319-19548-3
ISBN (Print): 978-3-319-19547-6
DOIs
Publication status: Published - 27 May 2015
Externally published: Yes
Event: 26th Australasian Database Conference - Melbourne, Australia
Duration: 4 Jun 2015 - 7 Jun 2015

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9093
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 26th Australasian Database Conference
Abbreviated title: ADC 2015
Country/Territory: Australia
City: Melbourne
Period: 4/06/15 - 7/06/15

Keywords

  • Cognitive styles
  • Contact centers
  • Crowd evaluation
  • Entity resolution
