Practical Video Quality Assessment of User Generated Content

Jari Korhonen, Xuanzheng Wen, Jun Cheng, Xu Wang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

Abstract

During the past few years, video quality assessment (VQA) of user generated content (UGC) has attracted considerable attention in the research community. In this paper, we propose a practical architecture for a versatile video quality model, designed in particular for assessing user generated videos. The proposed architecture is based on our earlier two-level video quality model with a convolutional neural network (CNN-TLVQM), with various improvements and re-designed elements. We have built a fast implementation of the proposed model in C++, demonstrating that the model is practical for real-life applications. The implementation of the model has been submitted for evaluation in the ICME UGCVQA Challenge in 2021.
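The two-level idea behind the model family named in the abstract can be illustrated with a minimal sketch: a first level computes per-frame features (in CNN-TLVQM these come from a convolutional network; here simple statistics stand in for them), and a second level pools the per-frame scores temporally into one clip-level prediction. All function names, the toy features, and the pooling weights below are illustrative assumptions, not the paper's actual design.

```python
import statistics

def frame_features(frame):
    """Level 1 (illustrative): cheap per-frame features.

    In CNN-TLVQM this stage uses CNN activations; here the mean
    and spread of pixel values act as toy stand-ins.
    """
    return statistics.fmean(frame), statistics.pstdev(frame)

def temporal_pool(per_frame_scores):
    """Level 2 (illustrative): aggregate frame scores into a clip score.

    Blends the average with the worst 10% of frames, a common
    temporal-pooling heuristic in VQA (weights are assumptions).
    """
    scores = sorted(per_frame_scores)
    mean = statistics.fmean(scores)
    worst = statistics.fmean(scores[: max(1, len(scores) // 10)])
    return 0.5 * mean + 0.5 * worst

def predict_quality(frames):
    """Toy end-to-end pass: frame features -> frame scores -> clip score."""
    per_frame = [sum(frame_features(f)) for f in frames]  # trivial "regressor"
    return temporal_pool(per_frame)
```

The split matters for speed: the cheap first level runs on every frame, while any expensive computation (the CNN in the real model) can be restricted to representative frames, which is what makes a fast C++ implementation feasible.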

Original language: English
Title of host publication: 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 9781665449892
DOIs
Publication status: Published - 21 Jun 2021
Externally published: Yes
Event: 2021 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2021 - Shenzhen, China
Duration: 5 Jul 2021 – 9 Jul 2021

Publication series

Name: IEEE International Conference on Multimedia and Expo Workshops
Publisher: IEEE
ISSN (Print): 2330-7927

Conference

Conference: 2021 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2021
Country/Territory: China
City: Shenzhen
Period: 5/07/21 – 9/07/21

Keywords

  • Convolutional neural network
  • Recurrent neural network
  • User generated content
  • Video quality assessment

