
Dr.-Ing. Babak Naderi


Research Interests:

  • Subjective quality assessment
  • Speech quality assessment in crowdsourcing
  • Motivation, workload, and performance in crowdsourcing
  • Statistical modeling, field data, and applied statistics
  • Speech enhancement
  • Text complexity and simplification

Biography:

Babak Naderi obtained his Dr.-Ing. degree (PhD) in September 2017 with a thesis entitled "Motivation of Workers on Microtask Crowdsourcing Platforms". He holds a Master's degree in Geodesy and Geoinformation Science from the Technical University Berlin, with a thesis on "Monte Carlo Localization for Pedestrian Indoor Navigation Using a Map Aided Movement Model", and a Bachelor's degree in Software Engineering.

Since August 2012, Babak Naderi has been working as a research scientist at the Quality and Usability Lab of TU Berlin.

From 2013 to 2015, Babak took part in Softwarecampus, a BMBF-funded education program for future IT and development leadership involving Bosch, Datev, Deutsche Telekom AG, Holtzbrinck, SAP, Scheer Group, Siemens, and Software AG alongside highly ranked academic institutions. He participated by leading the CrowdMAQA project.

In his dissertation, Babak studied the motivation of crowdworkers in detail. He developed the Crowdwork Motivation Scale for measuring general motivation, based on the Self-Determination Theory of Motivation, and validated it in several studies. In addition, he studied factors influencing motivation and the influence of different motivation types on the quality of outcomes. He also developed models for predicting workers' task selection strategies, including models that automatically predict the expected workload associated with a task from its design, as well as task acceptance and performance.

Besides other research activities, Babak is actively working on the standardization of methods for speech quality assessment in crowdsourcing environments within the P.CROWD work program of Study Group 12 in the ITU-T Standardization Sector.

He has reviewed for WWW, CHI, ICASSP, CSCW, MMSys, PQS, HCOMP, ICWE, QoMEX, the International Journal of Human-Computer Studies, Computer Networks, Behaviour & Information Technology, and Quality and User Experience.

 

Selected talks:

  • "Motivation of Crowd Workers, does it matter?",Schloss Dagstuhl, Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments, November 2015.
  • "Motivation and Quality Assessment in Online Paid Crowdsourcing Micro-task Platforms",Schloss Dagstuhl, Crowdsourcing: From Theory to Practice and Long-Term Perspectives, September 2013.

 

Office hours: by appointment

 

Address:

Quality and Usability Lab

Technische Universität Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin

Tel.: +49 (30) 8353-54221
Fax: +49 (30) 8353-58409

babak.naderi[at]tu-berlin.de

Publications

Best Practices and Recommendations for Crowdsourced QoE - Lessons learned from the Qualinet Task Force Crowdsourcing
Citation key hossfeld2014a
Author Hoßfeld, Tobias and Hirth, Matthias and Redi, Judith and Mazza, Filippo and Korshunov, Pavel and Naderi, Babak and Seufert, Michael and Gardlo, Bruno and Egger, Sebastian and Keimel, Christian
Year 2014
Number 1.0
Month October
Note HAL Id: hal-01078761
How published full
Institution European Network on Quality of Experience in Multimedia Systems and Services (COST Action IC 1003 Qualinet)
Abstract Crowdsourcing is a popular approach that outsources tasks via the Internet to a large number of users. Commercial crowdsourcing platforms provide a global pool of users employed for performing short and simple online tasks. For quality assessment of multimedia services and applications, crowdsourcing enables new possibilities by moving the subjective test into the crowd, resulting in a larger diversity of the test subjects, faster turnover of test campaigns, and reduced costs due to low reimbursement costs of the participants. Further, crowdsourcing allows easily addressing additional features like real-life environments. Crowdsourced quality assessment, however, is not a straightforward implementation of existing subjective testing methodologies in an Internet-based environment. Additional challenges and differences to lab studies occur in conceptual, technical, and motivational areas [9, 25, 26]. For example, the test contents need to be transmitted to the user over the Internet; test users may have low-resolution screens influencing the user experience; also, users may not understand the test or may not execute the test carefully, resulting in unreliable data. This white paper summarizes the recommendations and best practices for crowdsourced quality assessment of multimedia applications from the Qualinet Task Force on "Crowdsourcing". The European Network on Quality of Experience in Multimedia Systems and Services Qualinet (COST Action IC 1003, see www.qualinet.eu) established this task force in 2012. Since then it has grown to more than 30 members. The recommendation paper resulted from the experience in designing, implementing, and conducting crowdsourcing experiments as well as the analysis of the crowdsourced user ratings and context data. To understand the impact of the crowdsourcing environment on QoE assessment and to derive a methodology and setup for crowdsourced QoE assessment, data from traditional lab experiments were compared with results from crowdsourcing experiments. Within the crowdsourcing task force, several different application domains and scientific questions were considered, among others:
  • video and image quality in general,
  • QoE for HTTP streaming [31, 32] and HTTP adaptive streaming [19, 30],
  • selfie portrait image perception in a recruitment context [10],
  • privacy in HDR images and video [39, 20, 36],
  • compression of HDR images [37, 38],
  • evaluation of 3D video [38],
  • image recognizability and aesthetic appeal [12, 13],
  • multidimensional modeling of web QoE [14],
  • QoE factors of cloud storage services [21],
  • enabling eye tracking experiments using web technologies [41].
From a crowdsourcing perspective, the following mechanisms and approaches were investigated, which are relevant to understand for crowdsourced quality assessment.
