TU Berlin

Quality and Usability Lab

Rafael Zequeira Jiménez

Research Topics

  • Speech Quality Assessment in Crowdsourcing  

 

Research Group

Next Generation Crowdsourcing

 

Biography

Rafael Zequeira Jiménez received a degree in Telecommunication Engineering (equivalent to a Master of Science) from the University of Granada, Spain, in 2014.

From 2013 to 2014 he studied at Technische Universität Berlin within the Erasmus program. During this time, he worked on his Master's thesis, entitled “Secure multi protocol system based on a Resource Model for the IoT and M2M services”. In December 2013, Rafael joined the SNET department of the Deutsche Telekom Innovation Laboratories (T-Labs), where he worked for 10 months as a student research assistant on the TRESOR project, focusing on designing and implementing REST APIs to connect different components.

In June 2015, Rafael joined the Quality and Usability Lab, led by Prof. Dr.-Ing. Sebastian Möller, as a Research Assistant in the “Next Generation Crowdsourcing” group, working specifically on the Crowdee project. Since 2016, he has been working towards his PhD on the topic “Analysis of Crowdsourcing Micro-Tasks for Speech Quality Assessment”.

 

 

Contact

Twitter: @zequeiraj

e-mail:

 

 

Address

Quality and Usability Lab
TU Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin, Germany

Tel: +49 30 8353 58336

Publications

Outliers Detection vs. Control Questions to Ensure Reliable Results in Crowdsourcing. A Speech Quality Assessment Case Study
Citation key zequeirajimenez2018b
Author Zequeira Jiménez, Rafael and Fernández Gallardo, Laura and Möller, Sebastian
Book title Companion Proceedings of the The Web Conference 2018
Pages 1127–1130
Year 2018
ISBN 978-1-4503-5640-4
Address Republic and Canton of Geneva, Switzerland
Month April
Publisher International World Wide Web Conferences Steering Committee
Series WWW '18
How published Full paper
Abstract Crowdsourcing provides an exceptional opportunity for the rapid collection of human input for data acquisition and labelling. This approach has been adopted in multiple domains, and researchers are now able to reach a demographically diverse audience at low cost. However, the question remains whether the results are still valid and reliable. Previous work has introduced different mechanisms to ensure data reliability in crowdsourcing. This work examines to what extent "trapping questions" or "outlier detection" ensure reliable results, at the cost of either overloading the task content with stimuli that are not of interest to the researcher or discarding data points that might reflect a worker's true opinion. To this end, a speech quality assessment study was conducted on a web crowdsourcing platform, following ITU-T Rec. P.800. Workers assessed the speech stimuli of database 501 from ITU-T Rec. P.863. We examine the validity of the results in terms of correlations to ratings previously collected in the laboratory. Our outcomes show that neither of the techniques under investigation improves result accuracy by itself, but a combination of both does. Our goal is to provide empirical guidance for designing crowdsourcing experiments while ensuring data reliability.
Link to original publication Download BibTeX entry
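As a rough illustration of the two reliability mechanisms compared in the abstract above, the following sketch filters crowdsourced speech quality ratings either by a trapping (control) question or by z-score outlier detection, and then correlates the resulting MOS with laboratory ratings. The data, variable names, and the outlier threshold are hypothetical; this is not the study's actual code.

```python
# Minimal sketch (hypothetical data): compare two reliability filters for
# crowdsourced speech quality ratings and check validity against lab MOS.
import numpy as np
from scipy.stats import pearsonr, zscore

# Hypothetical ratings: rows = workers, columns = stimuli (5-point ACR scale, ITU-T P.800)
ratings = np.array([
    [4, 5, 3, 4, 2],
    [3, 4, 3, 5, 2],
    [1, 1, 5, 1, 5],   # inconsistent worker
    [4, 4, 3, 4, 1],
])
passed_trapping = np.array([True, True, False, True])  # answered the control question correctly
lab_mos = np.array([3.8, 4.2, 3.1, 4.0, 1.9])          # reference ratings from the laboratory

def mos(r):
    """Mean opinion score per stimulus, averaged over the retained workers."""
    return r.mean(axis=0)

# Filter 1: keep only workers who answered the trapping (control) question correctly.
mos_trapping = mos(ratings[passed_trapping])

# Filter 2: outlier detection -- drop workers whose mean rating deviates strongly
# from the crowd (|z| > 1.5 here; the threshold is an assumption for illustration).
z = zscore(ratings.mean(axis=1))
mos_outlier = mos(ratings[np.abs(z) <= 1.5])

# Validity check: Pearson correlation of crowd-derived MOS with laboratory MOS.
for name, m in [("trapping", mos_trapping), ("outlier", mos_outlier)]:
    r, _ = pearsonr(m, lab_mos)
    print(f"{name:8s}  r = {r:.3f}")
```

In practice, as the abstract notes, such filters can also be combined, e.g. by applying the outlier check only to the workers who already passed the control question.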
