
Rafael Zequeira Jiménez


Research Topics

  • Speech Quality Assessment in Crowdsourcing  


Research Group

Next Generation Crowdsourcing



Rafael Zequeira Jiménez received a degree in Telecommunication Engineering (equivalent to a Master of Science) from the University of Granada, Spain, in 2014.

From 2013 to 2014 he studied at Technische Universität Berlin within the Erasmus programme. During this time, he worked on his Master's thesis, entitled “Secure multi protocol system based on a Resource Model for the IoT and M2M services”. In December 2013, Rafael joined the SNET department of the Deutsche Telekom Innovation Laboratories (T-Labs), where he worked for 10 months as a student research assistant in the TRESOR project, focusing on designing and implementing REST APIs to connect the project's components.

In June 2015, Rafael joined the Quality and Usability Lab, led by Prof. Dr.-Ing. Sebastian Möller, as a Research Assistant in the “Next Generation Crowdsourcing” group, specifically in the Crowdee project. Since 2016, he has been working towards his PhD on the topic “Analysis of Crowdsourcing Micro-Tasks for Speech Quality Assessment”.




Twitter: @zequeiraj





Quality and Usability Lab
TU Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin, Germany

Tel: +49 30 8353 58336


Environmental Noise Recording as a Quality Control for Crowdsourcing Speech Quality Assessments
Citation key zequeirajimenez2018a
Author Zequeira Jiménez, Rafael and Fernández Gallardo, Laura and Möller, Sebastian
Book title 44. Deutsche Jahrestagung für Akustik (DAGA)
Pages 303–306
Year 2018
ISBN 978-3-939296-13-3
Month March
Publisher Deutsche Gesellschaft für Akustik DEGA e.V.
How published Full paper
Abstract The Crowdsourcing (CS) paradigm offers small tasks to anonymous users on the Internet. Human-centered speech quality assessment studies have traditionally been conducted under controlled laboratory conditions. Nowadays, CS provides an exceptional opportunity to transfer such experiments to the Internet and reach a wider and more diverse audience. However, data from CS can be corrupted due to users' neglect, and hence quality control mechanisms are required to ensure reliable outcomes. While previous works have used trapping questions or majority voting to ensure good results, this work introduces user-environmental noise recording to discard unreliable users located in noisy places. To this end, a speech quality assessment study was conducted on the clickworker CS platform. The speech stimuli are taken from database 501 of ITU-T Rec. P.863, and the results are contrasted with the existing lab ratings. This work analyzes whether environmental noise recording can be used to identify unreliable workers. Furthermore, the effects of discarding users deemed untrustworthy on the correlation between the CS and the lab results are studied. Our outcomes highlight the importance of controlling for users' background noise to ensure reliable results in speech quality assessments conducted via CS.
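The quality-control idea in the abstract, screening out crowd workers whose environmental recordings reveal a noisy location, then comparing the remaining crowd ratings with the lab ratings, can be sketched as follows. This is an illustrative sketch only, not the authors' pipeline: the per-worker data, the MOS values, and the noise threshold are hypothetical, and the reliability check is reduced to a simple level cut-off on the worker's background-noise recording.

```python
import math

# Hypothetical per-worker data: the level (in dBFS) measured from each worker's
# environmental noise recording, plus that worker's MOS ratings per stimulus.
workers = {
    "w1": {"noise_dbfs": -52.0, "mos": [4.1, 3.2, 2.0, 4.5]},
    "w2": {"noise_dbfs": -18.0, "mos": [3.0, 3.1, 3.2, 3.0]},  # noisy location
    "w3": {"noise_dbfs": -47.5, "mos": [4.3, 3.0, 1.8, 4.4]},
}
lab_mos = [4.2, 3.1, 1.9, 4.6]  # hypothetical reference ratings from the lab

NOISE_THRESHOLD_DBFS = -40.0  # assumed cut-off: louder environments are rejected

def pearson(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Discard workers whose environmental recording exceeds the noise threshold.
reliable = {w: d for w, d in workers.items()
            if d["noise_dbfs"] < NOISE_THRESHOLD_DBFS}

# Average the remaining crowd ratings per stimulus and compare with the lab.
crowd_mos = [sum(d["mos"][i] for d in reliable.values()) / len(reliable)
             for i in range(len(lab_mos))]
print(f"CS-lab correlation after filtering: {pearson(crowd_mos, lab_mos):.3f}")
```

In this toy example the worker in the loud environment is dropped before the per-stimulus crowd means are computed, which is the effect on the CS-lab correlation that the paper studies.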
