Saman Zadtootaghaj
Quality and Usability Lab, TU Berlin


Research Field
- Assessing and predicting QoE of gaming applications

Research Topics

- Video quality assessment of computer-generated content

- Cloud gaming Quality of Experience

- Deep learning-based quality assessment of image/video content

- Image/video quality enhancement

Current Project:

Adaptive Edge/Cloud Compute and Network Continuum over a Heterogeneous Sparse Edge Infrastructure to Support Nextgen Applications (ACCORDION)

Past Project:

Methods and Models for assessing and predicting the QoE linked to Mobile Gaming (QoE-NET/MSCA-ITN Network)


Saman Zadtootaghaj is a researcher at the Quality and Usability Lab at Technische Universität Berlin, working on modeling gaming Quality of Experience under the supervision of Prof. Dr.-Ing. Sebastian Möller. His main interest is the subjective and objective quality assessment of computer-generated content. He received his bachelor's degree from IASBS and his master's degree in information technology from the University of Tehran.

From 2016 to 2018, he worked as a researcher at the Telekom Innovation Laboratories of Deutsche Telekom AG as part of the European project QoE-Net. He currently chairs the Computer-Generated Imagery (CGI) group at the Video Quality Experts Group (VQEG).


Chair of the Computer-Generated Imagery (CGI) group at VQEG

Local coordinator of the HCID track of the EIT master program

Visiting Researcher:

MMSPG lab, EPFL (2017)

LST group, DFKI (2019)

Teaching experience:

Advanced Projects at the Quality and Usability Lab (Deep Learning for Video Quality Assessment and Enhancement), SS 2020

Usability Engineering exercise, SS 2017 / SS 2018 / SS 2019 / SS 2020

Quality and Usability Seminar (Applied Statistics), WS 2019/2020

Quality and Usability Seminar (Gamification), SS 2018

Teaching assistant: Multiagent (University of Tehran, 2014), Computer Networks (IASBS, 2011).


VQEG Meetings:

VQEG meeting at Nokia, Madrid, Spain, March 2018

VQEG meeting at Google (remote), USA, November 2018

VQEG meeting at Deutsche Telekom, Germany, March 2019

VQEG meeting at Tencent, China, October 2019

VQEG meeting, online, March 2020

Involvement in Standardization Activities: 

Active in the following work items:

ITU-T P.BBQCG: Parametric bitstream-based Quality Assessment of Cloud Gaming Services

ITU-T G.CMVTQS: Computational model used as a QoE/QoS monitor to assess videotelephony services

ITU-T G.OMMOG: Opinion Model for Mobile Online Gaming applications

Contributed to the following recommendations:

ITU-T G.1032: Influence factors on gaming quality of experience  

ITU-T P.809: Subjective evaluation methods for gaming quality  

ITU-T G.1072: Opinion model predicting gaming quality of experience for cloud gaming services  

Reviewed papers for TCSVT, the Quality and User Experience journal, the Journal of Electronic Imaging, QoMEX 2017-2019, ICC 2019, ICME 2020, and the PQS workshop 2016.


Tools for Quality Prediction of Gaming Content:

NDNetGaming: Deep Learning-based Quality Metric for Gaming Content

GamingPara: Parametric Gaming Video Quality Models

Implementation of ITU-T Recommendation G.1072
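The tools above include parametric (planning) quality models, which estimate a MOS from encoding parameters rather than from decoded pixels. The following is a minimal toy sketch of that general structure, a maximum MOS reduced by impairment factors; all function names and coefficients here are made up for illustration and are NOT the actual values from G.1072 or GamingPara.

```python
# Toy sketch of a parametric video-quality planning model: encoding
# parameters in, MOS estimate out. The impairment-factor structure mirrors
# common parametric models, but every coefficient below is invented for
# illustration and does not reproduce ITU-T G.1072.
import math

def predict_mos(bitrate_kbps: float, framerate_fps: float, resolution_p: int) -> float:
    """Estimate a MOS on the 1..5 scale from encoding parameters (toy model)."""
    mos_max = 4.5
    # Compression impairment: shrinks exponentially as bitrate grows (assumed).
    i_compression = 3.5 * math.exp(-bitrate_kbps / 2000.0)
    # Framerate impairment: linear penalty below 60 fps (assumed).
    i_framerate = 0.02 * max(0.0, 60.0 - framerate_fps)
    # Resolution impairment: linear penalty below 1080p (assumed).
    i_resolution = 0.001 * max(0, 1080 - resolution_p)
    mos = mos_max - i_compression - i_framerate - i_resolution
    # Clamp to the valid MOS range.
    return max(1.0, min(5.0, mos))
```

In real parametric models such as G.1072, the impairment functions and their coefficients are fitted to subjective test data per codec and use case; this sketch only shows the overall shape of such a model.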



Datasets:

GamingVideoSET: https://kingston.box.com/v/GamingVideoSET

Cloud Gaming Video Dataset: https://github.com/stootaghaj/CGVDS 

Image Gaming Quality Dataset: https://github.com/stootaghaj/GISET 


Find me on ResearchGate, LinkedIn, Google Scholar, and GitHub.

Quality and Usability Lab
Deutsche Telekom Laboratories
TU Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin, Germany

Tel:  +49 30 8353 58394




NDNetGaming - development of a no-reference deep CNN for gaming video quality prediction
Citation key: utke2020a
Authors: Utke, Markus; Zadtootaghaj, Saman; Schmidt, Steven; Bosse, Sebastian; Möller, Sebastian
Journal: Multimedia Tools and Applications
Publisher: Springer
Pages: 1-23
Month: July
Year: 2020
ISSN: 1573-7721
DOI: 10.1007/s11042-020-09144-6
Published as: Full paper
Abstract: Gaming video streaming services are growing rapidly due to new services such as passive video streaming of gaming content, e.g. Twitch.tv, as well as cloud gaming, e.g. Nvidia GeForce NOW and Google Stadia. In contrast to traditional video content, gaming content has special characteristics such as extremely high and special motion patterns, synthetic content and repetitive content, which pose new opportunities for the design of machine learning-based models to outperform the state-of-the-art video and image quality approaches for this special computer-generated content. In this paper, we train a Convolutional Neural Network (CNN) based on an objective quality model, VMAF, as ground truth, and fine-tune it based on subjective image quality ratings. In addition, we propose a new temporal pooling method to predict gaming video quality based on frame-level predictions. Finally, the paper also describes how an appropriate CNN architecture can be chosen and how well the model performs on different contents. Our results show that among the four popular network architectures we investigated, DenseNet performs best for image quality assessment on the training dataset. By training the last 57 convolutional layers of DenseNet on VMAF values, we obtained a high-performance model that predicts the VMAF of distorted frames of video games with a Spearman's rank correlation (SRCC) of 0.945 and a Root Mean Squared Error (RMSE) of 7.07 on the image level, while achieving a higher performance on the video level, leading to an SRCC of 0.967 and an RMSE of 5.47 for the KUGVD dataset. Furthermore, we fine-tuned the model based on subjective quality ratings of images from gaming content, which resulted in an SRCC of 0.93 and an RMSE of 0.46 using leave-one-out cross-validation. Finally, on the video level, using the proposed pooling method, the model achieves a very good performance, indicated by an SRCC of 0.968 and an RMSE of 0.30 for the used gaming video dataset.
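The frame-to-video step described in the abstract, pooling per-frame CNN predictions into one video-level score, can be sketched as follows. This simple weighted mean that emphasizes poor frames is only an illustration under the common assumption that viewers judge a video closer to its worst moments; the pooling method actually proposed in the paper is more elaborate.

```python
# Illustrative temporal pooling: collapse per-frame quality predictions into
# a single video-level score. The exponential weighting scheme and the alpha
# parameter are assumptions for this sketch, not the NDNetGaming method.
import numpy as np

def pool_frame_scores(frame_scores, alpha: float = 2.0) -> float:
    """Weighted mean of frame-level scores that gives low-quality frames
    exponentially larger weights (assumed scheme)."""
    s = np.asarray(frame_scores, dtype=float)
    # Normalize each score's distance from the worst frame to [0, 1];
    # the epsilon avoids division by zero when all scores are equal.
    spread = np.ptp(s) + 1e-8
    weights = np.exp(-alpha * (s - s.min()) / spread)
    return float(np.sum(weights * s) / np.sum(weights))
```

With this weighting, a video whose frames are all rated 3.0 pools to exactly 3.0, while a video mixing a few bad frames into mostly good ones pools below its plain average, reflecting the intended worst-case emphasis.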


