Topics:
Among other topics, we work on building state-of-the-art methods for conducting valid, reliable, and reproducible subjective tests using crowdsourcing for different media: speech, video, gaming, and text. Our group actively participates in the standardization activities of ITU-T Study Group 12, leading and contributing to several Work Items, including P.Crowd (speech and crowdsourcing), P.CrowdV (video and crowdsourcing), and P.CrowdG (gaming and crowdsourcing).
Our research can be categorized as follows:
Platform
- Fundamental Aspects of Crowdsourcing Platforms
- High Quality Crowd-Workflow Design
- Combination of Human Computation and AI
- Open Data, and Open Science (open-science.berlin)
Workers
- Motivation of Workers
- Gamification in Volunteer Crowdsourcing
- Personalized/Adaptive Design to Increase Performance
- Quality Control Mechanisms (Data Reliability, Agreement)
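One common quality-control heuristic behind the agreement and reliability work listed above can be sketched as follows: flag workers whose ratings correlate poorly with the crowd consensus. This is a minimal illustration only; the data, function names, and threshold are hypothetical, not a description of our actual pipeline.

```python
# Minimal reliability sketch: flag crowd workers whose ratings deviate
# from the crowd consensus. Data and threshold are illustrative only.
from statistics import mean


def correlation(xs, ys):
    """Pearson correlation of two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0


def unreliable_workers(ratings, threshold=0.5):
    """ratings: {worker: [rating per item]}; flag low-agreement workers."""
    consensus = [mean(item) for item in zip(*ratings.values())]
    return [w for w, r in ratings.items()
            if correlation(r, consensus) < threshold]


ratings = {  # hypothetical 5-point ratings of four items
    "w1": [5, 4, 2, 1],
    "w2": [4, 5, 2, 2],
    "w3": [1, 2, 5, 5],  # rates against the consensus
}
print(unreliable_workers(ratings))  # ['w3']
```

In practice such checks are combined with trap questions, response-time analysis, and other signals rather than used alone.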
Application of Crowdsourcing
- Speech Quality Assessment using Crowdsourcing
- Video Quality Assessment using Crowdsourcing
- Gaming QoE Assessment using Crowdsourcing
- Text Quality and Complexity Assessment using Crowdsourcing
- Crowd- and AI-based Hybrid Workflows (error correction and performance boost by adding crowd-services to support AI)
- Crowd- and AI-based Translation Workflows
- Crowd- and AI-based Text Summarization Workflows
- Crowd- and AI-based Knowledge Graph and Chatbot Supporting Workflows
- Crowd- and AI-based Chatbot/Dialog Flow Supervision and In-time Correction
- Crowd- and AI-based Information Learning (autonomous knowledge base updates)
- Internal Crowdsourcing (employee sourcing)
- Text Simplification and Text Complexity: einfaches-wiki.de
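To illustrate the crowdsourced quality assessment applications above, a typical listening-test aggregation (in the spirit of ITU-T P.808, though this sketch is not the standard's full procedure) discards submissions that fail a known-answer trap question and then averages the remaining ratings per stimulus into a mean opinion score. All identifiers and data below are hypothetical.

```python
# Illustrative MOS aggregation for a crowdsourced listening test:
# drop submissions that fail a trap question, then average the
# remaining 5-point ACR ratings per stimulus. Data are hypothetical.
from statistics import mean


def mos_per_stimulus(submissions, trap_id, trap_answer):
    """submissions: list of dicts mapping stimulus id -> rating (1-5)."""
    valid = [s for s in submissions if s.get(trap_id) == trap_answer]
    scores = {}
    for s in valid:
        for stim, rating in s.items():
            if stim != trap_id:
                scores.setdefault(stim, []).append(rating)
    return {stim: round(mean(r), 2) for stim, r in scores.items()}


subs = [
    {"s1": 4, "s2": 2, "trap": 1},
    {"s1": 5, "s2": 3, "trap": 1},
    {"s1": 1, "s2": 5, "trap": 4},  # failed the trap question
]
print(mos_per_stimulus(subs, "trap", 1))  # {'s1': 4.5, 's2': 2.5}
```

Real crowdsourced tests add further steps (environment checks, rating-count balancing, confidence intervals) beyond this toy filter-and-average.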
Our research extends to the following areas:
Building on Crowdsourcing
- Mobile crowdsourcing (in the field)
- Crowd assessments: Usability, UX, QoE
- Privacy and confidentiality in crowdsourcing
- Mobile street application, urban mobility, city guarding
- Data collection in the field: crowd as (continuous) sensors
- Data management (clean, index, verify, tag, label, translate, etc.)
Improving Crowdsourcing
- Real-time interaction, Human Computation as a Service (HuaaS)
- Privacy and security in crowdsourcing
- Motivation in crowdsourcing, gamification
- Quality control (pattern recognition, cheater detection, anomaly)
- Automatic user segmentation (clustering)
- Training, E-learning and building expert-crowds
- Task complexity modeling
- Crowd and user biases, subjective normalization
- Scalable Crowdsourcing, Robustness, Reliability in Engineering
- Quality in Crowdsourcing (quality of opinion, audio/video, reliability)
Start-Up:
- Crowdee: High-Quality, Large-Scale Crowdsourcing for Studies and AI-related Data Acquisition (start-up from QU, TU Berlin)
Projects:
- SMESS - Towards a Standardized Methodology for Evaluating the Quality of Speech Services using Crowdsourcing
- RUBYDemenz - Robot mit Begleitung (BMBF)
- Automated Chatlog Analysis for Self-Learning NLU and Dialog Update in Customer Support Domain (DFKI)
- BOP - Berlin Open Science Platform for the Curation of Research Data (TU Berlin)
- DEKA - Design and Development of a Collaborative Digital Work Platform for the Digitization of Innovation Processes (BMBF)
Past Projects:
- CrowdMAQA (Motivation and Quality Control in Crowdsourcing)
- AUNUMAP (Automated User Segmentation from Speech and Text for Market Research Applications)
- Vocalytics & SWYM (Fully Automated User Characterization and Personality Estimation)
- Speaker Recognition and Speaker Characterization through different Communication Channels
- Affect-based Indexing
- Anomaly Detection and Early Warning Systems
- Predicting the Perceived Quality of Audiovisual Speech (Perc Qual AVS)