TU Berlin

Quality and Usability Lab: Reviewed Journal Papers

NDNetGaming: development of a no-reference deep CNN for gaming video quality prediction
Cite key utke2020a
Author Utke, Markus and Zadtootaghaj, Saman and Schmidt, Steven and Bosse, Sebastian and Möller, Sebastian
Pages 1–23
Year 2020
ISSN 1573-7721
DOI 10.1007/s11042-020-09144-6
Journal Multimedia Tools and Applications
Month July
Publisher Springer
How published Full paper
Abstract Gaming video streaming services are growing rapidly due to new services such as passive video streaming of gaming content, e.g. Twitch.tv, as well as cloud gaming, e.g. Nvidia GeForce NOW and Google Stadia. In contrast to traditional video content, gaming content has special characteristics such as extremely high and special motion patterns, synthetic content and repetitive content, which pose new opportunities for the design of machine learning-based models to outperform state-of-the-art video and image quality approaches for this special computer-generated content. In this paper, we train a Convolutional Neural Network (CNN) using an objective quality model, VMAF, as ground truth, and fine-tune it based on subjective image quality ratings. In addition, we propose a new temporal pooling method to predict gaming video quality based on frame-level predictions. Finally, the paper also describes how an appropriate CNN architecture can be chosen and how well the model performs on different contents. Our results show that, among the four popular network architectures we investigated, DenseNet performs best for image quality assessment on the training dataset. By training the last 57 convolutional layers of DenseNet on VMAF values, we obtained a high-performance model that predicts the VMAF of distorted frames of video games with a Spearman rank correlation (SRCC) of 0.945 and a Root Mean Squared Error (RMSE) of 7.07 at the image level, while achieving higher performance at the video level, with an SRCC of 0.967 and an RMSE of 5.47 for the KUGVD dataset. Furthermore, we fine-tuned the model based on subjective quality ratings of images from gaming content, which resulted in an SRCC of 0.93 and an RMSE of 0.46 using leave-one-out cross-validation. Finally, at the video level, using the proposed pooling method, the model achieves very good performance, indicated by an SRCC of 0.968 and an RMSE of 0.30 for the gaming video dataset used.
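The evaluation pipeline described in the abstract, pooling frame-level quality predictions into a video-level score and comparing predictions to ground truth via SRCC and RMSE, can be illustrated with a short sketch. Note that the paper's own temporal pooling method is not reproduced here; a sliding-window mean is used as an illustrative stand-in, and the rank correlation is a simple tie-free Spearman implementation:

```python
import numpy as np

def pool_frame_scores(frame_scores, window=3):
    """Aggregate frame-level quality predictions into one video-level score.

    Illustrative stand-in only: local temporal smoothing by a sliding-window
    mean, then averaging. The pooling method proposed in the paper differs.
    """
    s = np.asarray(frame_scores, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(s, kernel, mode="valid")  # window-mean over time
    return float(smoothed.mean())

def srcc_rmse(pred, truth):
    """Spearman rank correlation (SRCC) and RMSE of predictions vs. ground truth."""
    p = np.asarray(pred, dtype=float)
    t = np.asarray(truth, dtype=float)
    # Spearman correlation = Pearson correlation of the rank-transformed
    # data (this simple rank transform assumes no ties).
    rp = np.argsort(np.argsort(p))
    rt = np.argsort(np.argsort(t))
    srcc = float(np.corrcoef(rp, rt)[0, 1])
    rmse = float(np.sqrt(np.mean((p - t) ** 2)))
    return srcc, rmse
```

In this setup, per-frame quality estimates from the CNN would feed `pool_frame_scores`, and the resulting video-level scores would be compared against subjective (or VMAF) ground truth with `srcc_rmse`.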
