Predicting Personal Preferences in Subjective Video Quality Assessment

Jari Korhonen*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

1 Citation (Scopus)

Abstract

In this paper, we study the problem of predicting the visual quality of a specific test sample (e.g. a video clip) as experienced by a specific user, based on the ratings given by other users for the same sample and by the same user for other samples. A simple linear model and an accompanying algorithm are presented, in which the characteristics of each test sample are represented by a set of parameters, and individual preferences are represented by weights on those parameters. According to a validation experiment performed on public visual quality databases annotated with raw individual scores, the proposed model predicts individual scores more accurately than the average score for the respective sample computed from the scores given by other individuals. In many cases, the proposed algorithm also outperforms the more generic Parametric Probabilistic Matrix Factorization (PPMF) technique developed for collaborative filtering in recommendation systems.
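To give a concrete picture of the kind of model the abstract describes, the sketch below fits per-item parameter vectors and per-user weight vectors to a partially observed rating matrix by alternating regularized least squares, so that a user's score for a clip is predicted as the dot product of the two. This is a minimal illustration consistent with the abstract's description, not the paper's actual algorithm; the function name, hyperparameters, and toy ratings are all assumed for the example.

```python
import numpy as np

def fit_linear_preference_model(R, mask, k=3, n_iters=50, reg=0.1, seed=0):
    """Alternating least-squares fit of a bilinear model R[u, i] ~ W[u] @ X[i].

    R    : (n_users, n_items) rating matrix (values ignored where unobserved)
    mask : boolean matrix, True where a rating is observed
    k    : number of per-item parameters / per-user weights (assumed)
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    W = rng.standard_normal((n_users, k)) * 0.1   # per-user preference weights
    X = rng.standard_normal((n_items, k)) * 0.1   # per-item characteristic parameters
    I = reg * np.eye(k)                           # ridge term for stability
    for _ in range(n_iters):
        # Update each user's weights from the items that user rated.
        for u in range(n_users):
            obs = mask[u]
            if obs.any():
                A = X[obs]
                W[u] = np.linalg.solve(A.T @ A + I, A.T @ R[u, obs])
        # Update each item's parameters from the users who rated it.
        for i in range(n_items):
            obs = mask[:, i]
            if obs.any():
                A = W[obs]
                X[i] = np.linalg.solve(A.T @ A + I, A.T @ R[obs, i])
    return W, X

# Toy usage (hypothetical data): 5 users, 4 clips, one missing score to predict.
R = np.array([[4.0, 2.0, 5.0, 3.0],
              [3.5, 2.5, 4.5, 3.0],
              [2.0, 4.0, 3.0, 5.0],
              [4.5, 1.5, 5.0, 2.5],
              [3.0, 3.0, 4.0, 0.0]])   # last entry unobserved
mask = np.ones_like(R, dtype=bool)
mask[4, 3] = False
W, X = fit_linear_preference_model(R, mask)
print("predicted score for user 4, clip 3:", W[4] @ X[3])
```

In this formulation, predicting user u's score for clip i is simply W[u] @ X[i], and the regularization term keeps the per-user solves well conditioned when a user has rated only a few clips.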

Original language: English
Title of host publication: 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX)
Publisher: IEEE Xplore
Number of pages: 6
ISBN (Electronic): 9781538640241
DOIs
Publication status: Published - 30 Jun 2017
Event: 9th International Conference on Quality of Multimedia Experience, QoMEX 2017 - Erfurt, Germany
Duration: 29 May 2017 – 2 Jun 2017

Conference

Conference: 9th International Conference on Quality of Multimedia Experience, QoMEX 2017
Country/Territory: Germany
City: Erfurt
Period: 29/05/17 – 2/06/17

Bibliographical note

Publisher Copyright: © 2017 IEEE.

Keywords

  • subjective quality assessment
  • individual characteristics
  • collaborative filtering
