Rafael Zequeira Jiménez


Research Topics

  • Speech Quality Assessment in Crowdsourcing  


Research Group

Next Generation Crowdsourcing


Biography

Rafael Zequeira Jiménez received a degree in Telecommunication Engineering (equivalent to a Master of Science) from the University of Granada, Spain, in 2014.

From 2013 to 2014 he studied at Technische Universität Berlin within the Erasmus program. During this time, he worked on his Master's thesis, entitled “Secure multi protocol system based on a Resource Model for the IoT and M2M services”. In December 2013, Rafael joined the SNET department of the Deutsche Telekom Innovation Laboratories (T-Labs), where he worked for 10 months as a student research assistant on the TRESOR project, focusing on designing and implementing REST APIs to connect the project's different components.

In June 2015, Rafael joined the Quality and Usability Lab, led by Prof. Dr.-Ing. Sebastian Möller, to work as a Research Assistant in the “Next Generation Crowdsourcing” group, specifically on the Crowdee project. Since 2016, he has been working towards his PhD on the topic “Analysis of Crowdsourcing Micro-Tasks for Speech Quality Assessment”.


Contact

Twitter: @zequeiraj

e-mail:


Address

Quality and Usability Lab
TU Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin, Germany

Tel: +4930835358336

Publications

Modeling Worker Performance Based on Intra-rater Reliability in Crowdsourcing: A Case Study of Speech Quality Assessment
Citation key zequeirajimenez2019a
Author Zequeira Jiménez, Rafael and Llagostera, Anna and Naderi, Babak and Möller, Sebastian and Berger, Jens
Book title 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX)
Pages 1–6
Year 2019
ISSN 2372-7179
DOI 10.1109/QoMEX.2019.8743148
Month June
Publisher IEEE
Series QoMEX 2019
How published Full paper
Abstract Crowdsourcing has become a convenient instrument for addressing subjective user studies to a large number of users. Data from crowdsourcing can be corrupted by users' neglect, and different mechanisms have been proposed to assess users' reliability and to ensure valid experiment results. Users who are consistent in their answers, i.e. who present a high intra-rater reliability score, are desirable for subjective studies. This work investigates the relationship between intra-rater reliability and user performance in the context of a speech quality assessment task. To this end, a crowdsourcing study was conducted in which users were requested to rate speech stimuli with respect to their overall quality. Ratings were collected on a 5-point scale in accordance with ITU-T Rec. P.808. The speech stimuli were taken from the database of ITU-T Rec. P.501 Annex D, and the results are contrasted with ratings collected in a laboratory experiment. Furthermore, a model as a function of intra-rater reliability, the root-mean-squared deviation between the listeners' ratings, and age was built to predict listener performance. Such a model is intended to provide a measure of how valid crowdsourcing results are when there are no laboratory results to compare against.
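To illustrate the two rating-based features named in the abstract, the sketch below computes an intra-rater reliability score and the root-mean-squared deviation from reference scores for a single listener. It is a minimal illustration only: the helper names, the use of Pearson correlation over repeated stimuli, and the sample ratings are assumptions for demonstration, not the paper's actual model or data.

```python
import numpy as np

def intra_rater_reliability(first_pass, second_pass):
    # Pearson correlation between a listener's two ratings of the same
    # repeated stimuli -- one common way to quantify self-consistency.
    return np.corrcoef(first_pass, second_pass)[0, 1]

def rmsd(listener_ratings, reference_scores):
    # Root-mean-squared deviation between a listener's ratings and
    # reference scores (e.g. laboratory MOS) for the same stimuli.
    diff = np.asarray(listener_ratings, dtype=float) - np.asarray(reference_scores, dtype=float)
    return np.sqrt(np.mean(diff ** 2))

# Hypothetical 5-point ACR ratings for six stimuli, each rated twice:
first = [4, 3, 5, 2, 4, 3]
second = [4, 3, 4, 2, 5, 3]
print(intra_rater_reliability(first, second))       # ~0.82: fairly self-consistent
print(rmsd(first, [4.2, 2.8, 4.6, 1.9, 4.1, 3.3]))  # ~0.24 on the 5-point scale
```

In the paper, such per-listener features (together with age) feed a model of listener performance; the snippet above only shows how the inputs to such a model could be computed.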
