Quality and Usability

Crowdsourcing and Open Data

Micro-task crowdsourcing provides a remarkable opportunity for the academic and industry sectors by offering a large-scale, on-demand, and low-cost pool of geographically distributed workers for completing complex tasks that can be divided into sets of short and simple online tasks, such as annotation, data collection, or participation in a subjective test.

Our focus is to investigate “quality” in every aspect of the crowdsourcing process, from the design of workflows and the integration of AI systems to application domains.

Among other topics, we work on building state-of-the-art methods for conducting valid, reliable, and reproducible subjective tests using crowdsourcing for different media: speech, video, gaming, and text. These methods can be used for evaluating the output of AI models (such as speech enhancement, denoising, translation, or summarization), for evaluating codecs, or for studying the trade-offs between influencing factors and perceived quality. Our group actively participates in the standardization activities of ITU-T Study Group 12, leading and contributing to several work items, including P.Crowd (speech and crowdsourcing), P.CrowdV (video and crowdsourcing), P.CrowdG (gaming and crowdsourcing), and P.CrowdCon (conversation tests in crowdsourcing).
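
To make the subjective-testing workflow concrete, below is a minimal Python sketch of how crowdsourced Absolute Category Rating (ACR) scores might be aggregated into a mean opinion score (MOS) with a 95% confidence interval per test condition. The data layout, function names, and condition labels are illustrative assumptions, not the group's actual tooling.

    import math
    from collections import defaultdict
    from statistics import mean, stdev

    def mos_per_condition(ratings):
        """Aggregate crowdsourced ACR scores (1-5 scale) into a mean
        opinion score (MOS) with a 95% confidence interval per test
        condition. `ratings` is an iterable of (condition_id, score)
        pairs; this layout is a hypothetical example.
        """
        by_condition = defaultdict(list)
        for condition, score in ratings:
            by_condition[condition].append(score)

        results = {}
        for condition, scores in by_condition.items():
            m = mean(scores)
            # Normal-approximation CI; adequate for the large numbers
            # of ratings typically collected in crowdsourcing studies.
            ci = (1.96 * stdev(scores) / math.sqrt(len(scores))
                  if len(scores) > 1 else 0.0)
            results[condition] = (m, ci)
        return results

    # Toy usage: two hypothetical codec conditions rated by six workers.
    demo = [("codec_A", 4), ("codec_A", 5), ("codec_A", 4),
            ("codec_B", 2), ("codec_B", 3), ("codec_B", 2)]
    for cond, (mos, ci) in sorted(mos_per_condition(demo).items()):
        print(f"{cond}: MOS = {mos:.2f} +/- {ci:.2f}")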

Our research can be categorized as follows:

  • Speech, Video, Gaming, and Text Quality Assessment using Crowdsourcing
  • Quality Control Mechanisms (Data Reliability, Agreement); see the sketch after this list
  • Crowd and User Biases, Subjective Normalization
  • High Quality Crowd-Workflow Design
  • Combination of Human Computation and AI
  • Crowd- and AI-based NLP Workflows (Translation, Summarization, Knowledge Graphs, Chatbots, and Dialog Flows)
  • Crowd assessments: Usability, UX, QoE
  • Open Data and Open Science (open-science.berlin)
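
As an illustration of the quality-control and bias-normalization items above, the following Python sketch shows two widely used mechanisms: filtering out workers who fail gold-standard (trapping) questions, and z-score normalization of each rater's scores to compensate for individual scale-use biases. All names, data layouts, and thresholds are hypothetical, not the group's actual pipeline.

    from statistics import mean, stdev

    def filter_unreliable(workers, gold_answers, min_accuracy=0.8):
        """Keep only workers whose accuracy on gold-standard
        (trapping) questions reaches `min_accuracy`. `workers` maps
        a worker id to {question_id: answer}; layout is hypothetical.
        """
        reliable = {}
        for worker_id, answers in workers.items():
            checks = [answers[q] == truth
                      for q, truth in gold_answers.items() if q in answers]
            if checks and sum(checks) / len(checks) >= min_accuracy:
                reliable[worker_id] = answers
        return reliable

    def normalize_rater(scores):
        """Z-score normalization of one rater's scores: removes offset
        and scale-use differences between raters before aggregation.
        """
        m = mean(scores)
        s = stdev(scores) if len(scores) > 1 else 0.0
        return [(x - m) / s if s else 0.0 for x in scores]

In a real study, both steps would run before MOS aggregation; standardized procedures such as ITU-T Rec. P.808 specify further checks (e.g., listening-environment and attention tests).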

Start-Up:

  • Crowdee: High-Quality, Large-Scale Crowdsourcing for Studies and AI-related Data Acquisition (a start-up from the Quality and Usability Lab, TU Berlin)

Projects:

Past Projects:

  • CrowdMAQA (Motivation and Quality Control in Crowdsourcing)
  • AUNUMAP (Automated User Segmentation from Speech and Text for Market Research Applications)
  • Perc Qual AVS (Predicting the Perceived Quality of Audiovisual Speech)