Abstract
Blind natural video quality assessment (BVQA), also known as no-reference video quality assessment, is a highly active research topic. In our recent contribution, "Blind Natural Video Quality Prediction via Statistical Temporal Features and Deep Spatial Features", published in ACM Multimedia 2020, we proposed a two-level video quality model that combines statistical temporal features with spatial features extracted by a deep convolutional neural network (CNN). At the time of publication, the proposed model (CNN-TLVQM) achieved state-of-the-art results in BVQA. In this paper, we describe the process of reproducing the published results with CNN-TLVQM on two publicly available natural video quality datasets.
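To illustrate the two-level idea summarized in the abstract, the following Python sketch fuses hand-crafted temporal statistics with pooled deep spatial features and regresses a quality score. It is not the authors' CNN-TLVQM implementation: the feature extractors, the synthetic data, and the SVR regressor are placeholder assumptions used only to show how the two feature levels could be combined.

```python
# Minimal sketch (not the authors' code): concatenate temporal statistics
# with pooled "CNN" spatial features and regress a quality score.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline


def temporal_statistics(frames: np.ndarray) -> np.ndarray:
    """Placeholder for low-complexity temporal features,
    e.g. statistics of frame-difference magnitudes."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return np.array([diffs.mean(), diffs.std(), np.percentile(diffs, 95)])


def cnn_spatial_features(frames: np.ndarray) -> np.ndarray:
    """Placeholder for per-frame CNN activations, average-pooled over time."""
    rng = np.random.default_rng(0)
    per_frame = rng.normal(size=(frames.shape[0], 128))  # stand-in activations
    return per_frame.mean(axis=0)


def video_features(frames: np.ndarray) -> np.ndarray:
    # Two-level idea: one feature vector per video from both feature types.
    return np.concatenate([temporal_statistics(frames), cnn_spatial_features(frames)])


# Synthetic "dataset": random grayscale videos with random MOS labels,
# purely illustrative of the training interface.
rng = np.random.default_rng(1)
videos = [rng.integers(0, 256, size=(30, 64, 64)) for _ in range(20)]
mos = rng.uniform(1.0, 5.0, size=20)

X = np.stack([video_features(v) for v in videos])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(X, mos)
print("Predicted MOS:", model.predict(X[:3]))
```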
Original language | English |
---|---|
Title of host publication | MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia |
Publisher | Association for Computing Machinery, Inc |
Pages | 3622-3626 |
Number of pages | 5 |
ISBN (Electronic) | 9781450386517 |
DOIs | |
Publication status | Published - 17 Oct 2021 |
Event | 29th ACM International Conference on Multimedia, MM 2021 - Virtual, Online, China. Duration: 20 Oct 2021 → 24 Oct 2021 |
Publication series
Name | MM: International Multimedia Conference |
---|---|
Conference
Conference | 29th ACM International Conference on Multimedia, MM 2021 |
---|---|
Country/Territory | China |
City | Virtual, Online |
Period | 20/10/21 → 24/10/21 |
Bibliographical note
Funding Information: This work was supported in part by the Natural Science Foundation of China under Grant 61772348, and in part by the Guangdong "Pearl River Talent Recruitment Program" under Grant 2019ZT08X603.
Keywords
- convolutional neural network
- human visual system
- machine learning
- video quality assessment