Rafael Zequeira Jiménez
Research Topics

  • Speech Quality Assessment in Crowdsourcing

Research Group

Next Generation Crowdsourcing

Biography

Rafael Zequeira Jiménez received a degree in Telecommunication Engineering (equivalent to a Master of Science) from the University of Granada, Spain, in 2014.

From 2013 to 2014 he studied at Technische Universität Berlin within the Erasmus program. During this time, he worked on his Master's thesis, entitled “Secure multi protocol system based on a Resource Model for the IoT and M2M services”. In December 2013, Rafael joined the SNET department of the Deutsche Telekom Innovation Laboratories (T-Labs), where he worked for ten months as a student research assistant on the TRESOR project, focusing on designing and implementing REST APIs to connect the project's different components.

In June 2015, Rafael joined the Quality and Usability Lab, led by Prof. Dr.-Ing. Sebastian Möller, as a Research Assistant in the “Next Generation Crowdsourcing” group, specifically in the Crowdee project. Since 2016, he has been working towards his PhD on the topic “Analysis of Crowdsourcing Micro-Tasks for Speech Quality Assessment”.

Contact

Twitter: @zequeiraj

e-mail:

Address

Quality and Usability Lab
TU Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin, Germany

Tel: +49 30 8353 58336

Publications

Influence of Number of Stimuli for Subjective Speech Quality Assessment in Crowdsourcing
Citation key zequeirajimenez2018c
Author Zequeira Jiménez, Rafael and Fernández Gallardo, Laura and Möller, Sebastian
Booktitle 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX)
Pages 1–6
Year 2018
ISSN 2472-7814
DOI 10.1109/QoMEX.2018.8463298
Month May
Publisher IEEE
How published Full paper
Abstract Nowadays, crowdsourcing provides an exceptional opportunity for conducting subjective user tests on the Internet with a demographically diverse audience. Previous work has pointed out that the offered tasks should be kept short in time; therefore, participants evaluate only a portion of the dataset at once. Aspects like users' workload and fatigue are important as they relate to a main question: how to optimize study design without compromising result quality by tiring the test participants? This work investigates the influence of the number of presented speech stimuli on the reliability of listeners' ratings in the context of subjective speech quality assessment. A crowdsourcing study has been conducted with 209 listeners who were asked to rate speech stimuli with respect to their overall quality. Participants were randomly assigned to one of three user groups, each of which was confronted with tasks consisting of a different number of stimuli: 10, 20, or 40. The results from the three groups are highly correlated with existing laboratory ratings, with the group with the largest number of samples offering the highest correlation. However, participant retention decreased while the study completion time increased. Thus, it might be desirable to offer tasks with fewer speech stimuli, sacrificing rating accuracy to some extent.
Download BibTeX entry
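The analysis described in the abstract, comparing crowdsourced ratings against laboratory references, essentially boils down to aggregating each group's ratings into per-stimulus mean opinion scores (MOS) and correlating them with the laboratory MOS. The Python sketch below illustrates that computation; all stimulus IDs, ratings, and lab scores are made-up placeholders, not data from the paper.

# Minimal sketch of the correlation analysis described in the abstract:
# aggregate crowd ratings into per-stimulus MOS per group, then compute
# the Pearson correlation against laboratory MOS. All data are hypothetical.
from collections import defaultdict
from statistics import mean

from scipy.stats import pearsonr

# Hypothetical crowd ratings: (stimuli per task, stimulus id, 1-5 ACR rating)
crowd_ratings = [
    (10, "s1", 4), (10, "s1", 5), (10, "s2", 2), (10, "s2", 3),
    (10, "s3", 3), (10, "s3", 4), (10, "s4", 1), (10, "s4", 2),
    (40, "s1", 4), (40, "s1", 4), (40, "s2", 2), (40, "s2", 2),
    (40, "s3", 3), (40, "s3", 3), (40, "s4", 1), (40, "s4", 1),
]

# Hypothetical laboratory reference MOS per stimulus
lab_mos = {"s1": 4.3, "s2": 2.1, "s3": 3.4, "s4": 1.5}

# Collect ratings per group and per stimulus
per_group = defaultdict(lambda: defaultdict(list))
for group, stimulus, rating in crowd_ratings:
    per_group[group][stimulus].append(rating)

# For each group size, correlate crowd MOS with laboratory MOS
for group, ratings_by_stimulus in sorted(per_group.items()):
    stimuli = sorted(ratings_by_stimulus)
    crowd_mos = [mean(ratings_by_stimulus[s]) for s in stimuli]
    lab = [lab_mos[s] for s in stimuli]
    r, p = pearsonr(crowd_mos, lab)
    print(f"group with {group} stimuli per task: r = {r:.3f} (p = {p:.3f})")

In the actual study, the same comparison would be run over the full stimulus set for each of the three group sizes (10, 20, 40), which is how the abstract's finding that the largest group offers the highest correlation would be obtained.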
