Dr.-Ing. Babak Naderi
- Subjective Quality Assessment
- Speech Quality Assessment in Crowdsourcing
- Motivation, Workload, and Performance in Crowdsourcing
- Statistical Modeling, Field Data, and Applied Statistics
- Speech Enhancement
- Text Complexity and Simplification
Babak Naderi obtained his Dr.-Ing. degree (PhD) in September 2017 with a thesis entitled "Motivation of Workers on Microtask Crowdsourcing Platforms". He holds a Master's degree in Geodesy and Geoinformation Science from Technische Universität Berlin, with a thesis on "Monte Carlo Localization for Pedestrian Indoor Navigation Using a Map Aided Movement Model", as well as a Bachelor's degree in Software Engineering.
Since August 2012, Babak Naderi has been working as a research scientist at the Quality and Usability Lab of TU Berlin.
From 2013 to 2015, Babak took part in Software Campus, a BMBF-funded education program for future IT and development leadership involving Bosch, Datev, Deutsche Telekom AG, Holtzbrinck, SAP, Scheer Group, Siemens, and Software AG, alongside highly ranked academic institutions. He participated by leading the CrowdMAQA project.
In his dissertation, Babak studied the motivation of crowdworkers in detail. He developed the Crowdwork Motivation Scale for measuring general motivation based on the Self-Determination Theory of Motivation; the scale has been validated in several studies. In addition, he studied factors influencing motivation and the influence of different motivation types on the quality of outcomes. He also developed models for predicting workers' task selection strategies, including models that automatically predict the expected workload of a task from its design, as well as task acceptance and performance.
Besides other research activities, Babak works actively on the standardization of methods for speech quality assessment in crowdsourcing environments within the P.CROWD work program of Study Group 12 of the ITU-T Standardization Sector.
Reviewed for WWW, CHI, ICASSP, CSCW, MMSys, PQS, HCOMP, ICWE, QoMEX, the International Journal of Human-Computer Studies, Computer Networks, Behaviour & Information Technology, and Quality and User Experience.
- "Motivation of Crowd Workers, does it matter?",Schloss Dagstuhl, Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments, November 2015.
- "Motivation and Quality Assessment in Online Paid Crowdsourcing Micro-task Platforms",Schloss Dagstuhl, Crowdsourcing: From Theory to Practice and Long-Term Perspectives, September 2013.
Office Hours: By appointment
Quality and Usability Lab
Technische Universität Berlin
Tel.: +49 (30) 8353-54221
Fax: +49 (30) 8353-58409
Author: Zequeira Jiménez, Rafael; Llagostera, Anna; Naderi, Babak; Möller, Sebastian; Berger, Jens
Title of Book: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX)
Abstract: Crowdsourcing has become a convenient instrument for addressing subjective user studies to a large number of users. Data from crowdsourcing can be corrupted by users' neglect, and different mechanisms have been proposed to address users' reliability and to ensure valid experimental results. Users who are consistent in their answers, i.e., who show a high intra-rater reliability score, are desired for subjective studies. This work investigates the relationship between intra-rater reliability and user performance in the context of a speech quality assessment task. To this end, a crowdsourcing study was conducted in which users were asked to rate speech stimuli with respect to their overall quality. Ratings were collected on a 5-point scale in accordance with ITU-T Rec. P.808. The speech stimuli were taken from the database of ITU-T Rec. P.501 Annex D, and the results are contrasted with ratings collected in a laboratory experiment. Furthermore, a model has been built to predict listener performance as a function of intra-rater reliability, the root-mean-squared deviation between the listeners' ratings, and age. Such a model is intended to provide a measure of how valid the crowdsourcing results are when there are no laboratory results to compare against.
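As a loose illustration of the two per-listener quantities the abstract relies on (not the paper's actual implementation, which is not specified here), the sketch below computes an intra-rater reliability score as the Pearson correlation between repeated ratings of the same stimuli, and the root-mean-squared deviation of one listener's ratings from reference laboratory MOS values. All function names and data are hypothetical.

```python
import numpy as np

def intra_rater_reliability(first_pass, second_pass):
    """One plausible intra-rater score: Pearson correlation between a
    listener's two ratings of the same stimuli (higher = more consistent)."""
    return float(np.corrcoef(first_pass, second_pass)[0, 1])

def rmsd_to_reference(listener_ratings, reference_mos):
    """Root-mean-squared deviation between one listener's ratings and
    per-stimulus reference MOS values (e.g., from a laboratory test)."""
    diff = np.asarray(listener_ratings, float) - np.asarray(reference_mos, float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical 5-point ACR ratings for six stimuli, each rated twice,
# plus made-up laboratory MOS values for the same stimuli.
first   = [4, 3, 5, 2, 4, 3]
second  = [4, 3, 4, 2, 5, 3]
lab_mos = [4.2, 2.9, 4.6, 1.8, 4.1, 3.3]

print("intra-rater r:", intra_rater_reliability(first, second))
print("RMSD vs. lab MOS:", rmsd_to_reference(first, lab_mos))
```

Scores like these could then serve as features in a regression model of listener performance, along the lines the abstract describes.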