Human observer confidence in image quality assessment

Ulrich Engelke, Anthony Maeder, Hans-Jürgen Zepernick

    Research output: Contribution to journal (Article)

    7 Citations (Scopus)

    Abstract

    Mean opinion scores obtained from subjective quality assessment are widely used as ground truth for the development of predictive quality models. The variance between observer ratings is typically quantified using confidence intervals, which provide no direct insight into the underlying causes of disagreement. To better understand human visual quality perception and to develop more accurate models, it is important to identify the factors behind the variation in quality ratings. This work considers one such factor: observer confidence. This consideration is motivated by the view that quality assessment is a difficult task and that quality ratings are therefore provided with varying levels of confidence. The first goal of this paper is to analyse the results of an experiment designed to determine the association between observer confidence and image quality judgement. Secondly, models are developed that aim to predict mean observer confidence as a complementary measure to the widely used mean opinion scores. It is shown that there is indeed a strong interrelation between quality perception and confidence, resulting in predictive models of high accuracy.
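    The abstract contrasts observer confidence with the standard practice of reporting a mean opinion score (MOS) together with a confidence interval. As an illustrative sketch only (the ratings below are hypothetical and not from the paper's experiment), the conventional MOS-plus-interval computation looks like this:

    ```python
    import math
    from statistics import mean, stdev

    def mos_with_ci(ratings, z=1.96):
        """Mean opinion score with an approximate 95% confidence interval.

        Uses the normal approximation (z = 1.96); note the abstract's point
        that such an interval quantifies rating variance without explaining
        its causes, such as varying observer confidence.
        """
        m = mean(ratings)
        half_width = z * stdev(ratings) / math.sqrt(len(ratings))
        return m, (m - half_width, m + half_width)

    # Hypothetical ratings on a 1-5 opinion scale from ten observers
    ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]
    mos, ci = mos_with_ci(ratings)
    ```

    Two images with identical MOS and interval width can still differ in how confidently observers rated them, which is the gap the paper's complementary confidence measure addresses.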

    Original language: English
    Pages (from-to): 935-947
    Number of pages: 13
    Journal: Signal Processing: Image Communication
    Volume: 27
    Issue number: 9
    DOIs
    Publication status: Published - 2012
