
Dr.-Ing. Babak Naderi


Research Interests:

  • Subjective quality assessment
  • Speech quality assessment in crowdsourcing
  • Motivation, workload, and performance in crowdsourcing
  • Statistical modeling, field data, and applied statistics
  • Speech enhancement
  • Text complexity and simplification

Biography:

Babak Naderi obtained his Dr.-Ing. degree (PhD) in September 2017 with a thesis entitled "Motivation of Workers on Microtask Crowdsourcing Platforms". He holds a Master's degree in Geodesy and Geoinformation Science from the Technical University of Berlin, with a thesis on "Monte Carlo Localization for Pedestrian Indoor Navigation Using a Map Aided Movement Model", and a Bachelor's degree in Software Engineering.

Since August 2012, Babak Naderi has been working as a research scientist at the Quality and Usability Lab of TU Berlin.

From 2013 to 2015, Babak was awarded a place in Software Campus, a BMBF-funded education program for future IT and development leaders involving Bosch, Datev, Deutsche Telekom AG, Holtzbrinck, SAP, Scheer Group, Siemens, and Software AG, alongside highly ranked academic institutions. He took part by leading the CrowdMAQA project.

In his dissertation, Babak studied the motivation of crowdworkers in detail. He developed the Crowdwork Motivation Scale for measuring general motivation based on the Self-Determination Theory of Motivation; the scale has been validated in several studies. In addition, he studied factors influencing motivation and the influence of different motivation types on the quality of outcomes. He also developed models for predicting workers' task selection strategies, including models for automatically predicting the expected workload associated with a task from its design, as well as task acceptance and performance.

Besides other research activities, Babak is actively working on the standardization of methods for speech quality assessment in crowdsourcing environments within the P.CROWD work program of Study Group 12 of the ITU-T Standardization Sector.

He has reviewed for WWW, CHI, ICASSP, CSCW, MMSys, PQS, HCOMP, ICWE, QoMEX, the International Journal of Human-Computer Studies, Computer Networks, Behaviour & Information Technology, and Quality and User Experience.

 

Selected talks:

  • "Motivation of Crowd Workers, does it matter?", Schloss Dagstuhl, Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments, November 2015.
  • "Motivation and Quality Assessment in Online Paid Crowdsourcing Micro-task Platforms", Schloss Dagstuhl, Crowdsourcing: From Theory to Practice and Long-Term Perspectives, September 2013.

 

Office hours: by appointment

 

Address:

Quality and Usability Lab

Technische Universität Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin

Tel.: +49 (30) 8353-54221
Fax: +49 (30) 8353-58409

babak.naderi[at]tu-berlin.de

Publications

Towards speech quality assessment using a crowdsourcing approach: evaluation of standardized methods
Citation key: naderi2020e
Authors: Naderi, Babak; Jiménez, Rafael Zequeira; Hirth, Matthias; Möller, Sebastian; Metzger, Florian; Hoßfeld, Tobias
Journal: Quality and User Experience
Volume: 6
Number: 1
Pages: 2
Month: November
Year: 2020
Publisher: Springer Nature
ISSN: 2366-0139
DOI: 10.1007/s41233-020-00042-1
Note: online, print
Type: Full paper
Abstract: Subjective speech quality assessment has traditionally been carried out in laboratory environments under controlled conditions. With the advent of crowdsourcing platforms, tasks which need human intelligence can be resolved by crowd workers over the Internet. Crowdsourcing also offers a new paradigm for speech quality assessment, promising higher ecological validity of the quality judgments at the expense of potentially lower reliability. This paper compares laboratory-based and crowdsourcing-based speech quality assessments in terms of comparability of results and efficiency. For this purpose, three pairs of listening-only tests have been carried out using three different crowdsourcing platforms and following the ITU-T Recommendation P.808. In each test, listeners judge the overall quality of the speech sample following the Absolute Category Rating procedure. We compare the results of the crowdsourcing approach with the results of standard laboratory tests performed according to the ITU-T Recommendation P.800. Results show that in most cases, both paradigms lead to comparable results. Notable differences are discussed with respect to their sources, and conclusions are drawn that establish practical guidelines for crowdsourcing-based speech quality assessment.
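In the Absolute Category Rating procedure referenced in the abstract, each listener rates a speech sample on a five-point scale (1 = Bad to 5 = Excellent), and the ratings are aggregated into a Mean Opinion Score (MOS). A minimal sketch of how lab and crowdsourcing results might be compared, using invented example ratings (the data here is purely illustrative, not from the paper):

```python
import statistics

# Hypothetical ACR ratings (5-point scale: 1=Bad ... 5=Excellent) for one
# speech sample, as might be collected from lab and crowdsourcing listeners.
lab_ratings = [4, 4, 5, 3, 4, 4, 5, 4]
crowd_ratings = [4, 3, 5, 4, 4, 2, 5, 4, 4, 3]

def mos_with_ci(ratings, t_value=2.0):
    """Mean Opinion Score with an approximate 95% confidence interval.

    t_value=2.0 is a rough stand-in for the Student's t quantile at
    moderate sample sizes; a real analysis would look it up per n.
    """
    mos = statistics.mean(ratings)
    sem = statistics.stdev(ratings) / len(ratings) ** 0.5
    return mos, t_value * sem

for name, ratings in [("lab", lab_ratings), ("crowd", crowd_ratings)]:
    mos, ci = mos_with_ci(ratings)
    print(f"{name}: MOS = {mos:.2f} ± {ci:.2f}")
```

Overlapping confidence intervals are a first (informal) indication that the two paradigms yield comparable scores; the paper itself applies proper statistical comparisons per condition.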
