Abstract
Due to the wide range of natural temporal and spatial distortions appearing in user-generated video content, blind assessment of natural video quality is a challenging research problem. In this study, we combine the hand-crafted statistical temporal features used in a state-of-the-art video quality model with spatial features obtained from a convolutional neural network trained for image quality assessment via transfer learning. Experimental results on two recently published natural video quality databases show that the proposed model predicts subjective video quality more accurately than the publicly available video quality models representing the state of the art. The proposed model is also competitive in terms of computational complexity.
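The abstract describes a feature-fusion design: hand-crafted temporal statistics and CNN-based spatial features are combined and mapped to a subjective quality score. A minimal sketch of that idea is given below; the feature extractors, their dimensionalities, and the SVR regressor are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def extract_temporal_features(video_frames):
    # Stand-in for hand-crafted temporal statistics (e.g. frame-difference
    # based motion features) computed over the whole video.
    diffs = np.diff(video_frames.astype(np.float32), axis=0)
    return np.array([diffs.mean(), diffs.std(), np.abs(diffs).mean()])

def extract_spatial_cnn_features(video_frames, n_features=128):
    # Stand-in for pooled activations of a CNN pre-trained for image quality
    # assessment, applied to selected frames; random values only mimic shape.
    rng = np.random.default_rng(int(video_frames.sum()) % (2**32))
    return rng.standard_normal(n_features)

def video_features(video_frames):
    # Fusion step: concatenate temporal and spatial feature vectors.
    return np.concatenate([extract_temporal_features(video_frames),
                           extract_spatial_cnn_features(video_frames)])

# Train a regressor mapping fused features to subjective quality scores (MOS).
rng = np.random.default_rng(1)
videos = [rng.integers(0, 256, size=(30, 64, 64)) for _ in range(20)]  # dummy videos
mos = rng.uniform(1.0, 5.0, size=20)                                   # dummy MOS labels
X = np.stack([video_features(v) for v in videos])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(X, mos)
print("Predicted quality:", model.predict(X[:3]))
```

In practice, the stub extractors would be replaced by the actual temporal feature computation and the transfer-learned CNN, and the regressor would be trained on the subjective scores of a video quality database.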
Original language | English |
---|---|
Title of host publication | MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia |
Publisher | Association for Computing Machinery, Inc |
Pages | 3311-3319 |
Number of pages | 9 |
ISBN (Electronic) | 9781450379885 |
DOIs | |
Publication status | Published - 12 Oct 2020 |
Event | 28th ACM International Conference on Multimedia, MM 2020 - Virtual, Online, United States; Duration: 12 Oct 2020 → 16 Oct 2020 |
Conference
Conference | 28th ACM International Conference on Multimedia, MM 2020 |
---|---|
Country/Territory | United States |
City | Virtual, Online |
Period | 12/10/20 → 16/10/20 |
Bibliographical note
Funding Information: This work was supported in part by the Natural Science Foundation of China under grant 61772348.
Publisher Copyright: © 2020 ACM.
Keywords
- convolutional neural network
- human visual system
- machine learning
- video quality assessment