Blind Natural Image Quality Prediction Using Convolutional Neural Networks and Weighted Spatial Pooling

Yicheng Su, Jari Korhonen*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

10 Citations (Scopus)

Abstract

Typically, some regions of an image are more relevant to its perceived quality than others. On the other hand, subjective image quality is also affected by low-level characteristics, such as sensor noise and sharpness. This is why image rescaling, as often used in object recognition, is not a feasible approach for producing input images for convolutional neural networks (CNNs) used for blind image quality prediction. Generally, a convolutional layer can accept images of arbitrary resolution as input, whereas a fully connected (FC) layer can only accept a fixed-length feature vector. To solve this problem, we propose weighted spatial pooling (WSP), which aggregates spatial information with a weight map of any size and can be used to replace global average pooling (GAP). In this paper, we present a blind image quality assessment (BIQA) method based on CNN and WSP. Our experimental results show that the prediction accuracy of the proposed method is competitive against state-of-the-art image quality assessment methods.
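
To make the pooling idea concrete, the following is a minimal sketch of what a weighted spatial pooling layer could look like, not the paper's exact formulation: it assumes (hypothetically) that a single-channel spatial weight map is predicted by a 1x1 convolution and normalized with a softmax, after which a weighted sum collapses a feature map of arbitrary height and width into a fixed-length vector suitable for an FC layer, in place of global average pooling.

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class WeightedSpatialPooling(nn.Module):
      """Illustrative weighted spatial pooling (sketch, not the authors' code):
      collapses an (N, C, H, W) feature map of arbitrary H and W into a
      fixed-length (N, C) vector using a learned spatial weight map."""

      def __init__(self, in_channels):
          super().__init__()
          # Hypothetical choice: a 1x1 convolution predicts one weight per spatial position
          self.weight_head = nn.Conv2d(in_channels, 1, kernel_size=1)

      def forward(self, features):
          # features: (N, C, H, W); H and W may differ between input images
          weights = self.weight_head(features)                 # (N, 1, H, W)
          weights = F.softmax(weights.flatten(2), dim=-1)      # normalize over all spatial positions
          weights = weights.view(features.size(0), 1, *features.shape[2:])
          # Weighted sum over the spatial dimensions -> fixed-length descriptor per image
          return (features * weights).sum(dim=(2, 3))          # (N, C)

  # Usage sketch: replace GAP before the FC quality-regression head of a CNN backbone
  # feats = backbone(image)                       # e.g. (1, 512, H, W) for arbitrary H, W
  # score = fc(WeightedSpatialPooling(512)(feats))

Because the weighted sum always yields one value per channel regardless of the input resolution, the FC head receives a fixed-length vector without rescaling the image, which is the property the abstract relies on.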

Original language: English
Title of host publication: 2020 IEEE International Conference on Image Processing (ICIP)
Publisher: IEEE Xplore
Pages: 191-195
Number of pages: 5
ISBN (Print): 978-1-7281-6395-6
DOIs
Publication status: Published - 2020
Externally published: Yes
Event: IEEE International Conference on Image Processing (ICIP)
Duration: 25 Sept 2020 - 28 Sept 2020

Conference

Conference: IEEE International Conference on Image Processing (ICIP)
Period: 25/09/20 - 28/09/20

Keywords

  • Image quality assessment
  • Convolutional neural network
  • Visual perception
