TU Berlin

Quality and Usability Lab


Dr. Tim Polzehl

Crowdsourcing Technology

  • High-quality data collection via crowdsourcing
  • Data management and data services via crowdsourcing (clean, index, verify, tag, label, translate, summarize, join, etc.)
  • Data synthesis and data generation via crowdsourcing
  • Subjective influences and bias normalization in crowdsourcing
  • Crowd-creation, crowd-voting, crowd-storming, and crowd-testing applications
  • Crowdsourcing services for machine learning and business intelligence (BI)
  • Crowdsourcing business and business logic
  • Complex automated workflows: combining human and artificial intelligence
  • Crowdsourcing with mobile devices
  • Real-time crowdsourcing
  • Skill-based crowdsourcing and verification of crowd experts


Speech Technology

  • Automatic user classification
  • Automatic speaker characterization (age, gender, emotion, personality)
  • Automatic speech recognition (ASR)
  • Prosody and voice gesture recognition
  • Prosodic voice print analysis, phonetic science
  • App development with speech functionalities (Android, iOS)


Text Classification and Natural Language Processing (NLP)

  • Sentiment analysis
  • Affective analysis and emotion detection
  • Personality and lifestyle detection from social networks (Twitter, FB, G+, etc.)


Machine Learning and Artificial Intelligence  

  • Automated user modelling
  • Classification and prediction systems using linear and non-linear algorithms
  • Feature selection and reduction
  • Evaluation and verification methods


Running and Past Projects



Project Biography 

Tim Polzehl studied Science of Communication at Technische Universität Berlin. Combining linguistic knowledge with signal-processing skills, he focused on speech interpretation and the automatic extraction of data and metadata. He gained experience in machine learning by recognizing human speech utterances and classifying emotional expressions subliminally conveyed in speech; the latter became the topic of his M.A. thesis.

In 2008, Tim Polzehl started as a PhD candidate at Telekom Innovation Laboratories (T-Labs) and the Quality and Usability Lab. He worked on both industrial and academic projects focusing on speech technology, app development, machine learning, and crowdsourcing solutions.

From 2011 to 2013, Tim led an R&D project for Telekom Innovation Laboratories with applications in intelligent customer-care systems and speech apps.

From 2012 to 2014, Tim took part in a BMBF-funded education program for future IT and development leadership (Software Campus), involving SAP, Software AG, Scheer Group, Siemens, Holtzbrinck, Bosch, Datev, and Deutsche Telekom AG, alongside highly ranked academic institutions.

In 2014, Tim was awarded his PhD for his work on the automatic prediction of personality attributes from speech.

Since 2014, Tim has been working as a postdoc at the Quality and Usability Lab of TU Berlin. At the same time, he is driving the start-up Crowdee, which applies his earlier work on crowdsourcing solutions.



Quality and Usability Lab

Technische Universität Berlin

Ernst-Reuter-Platz 7

D-10587 Berlin

Tel.: +49 (30) 8353-58227
Fax: +49 (30) 8353-58409

Openings / Supervision




Barz, Michael and Büyükdemircioglu, Neslihan and Prasad Surya, Rikhu and Polzehl, Tim and Sonntag, Daniel (2018). Device-Type Influence in Crowd-based Natural Language Translation Tasks. Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement (SAD) in Crowdsourcing 2018, and the 1st Workshop CrowdBias'18: Disentangling the Relation Between Crowdsourcing and Bias Management, 93–97.


Barz, Michael and Polzehl, Tim and Sonntag, Daniel (2018). Towards Hybrid Human-Machine Translation Services. EasyChair Preprint no. 333.


Black, Alan and Bunnell, H. Timothy and Dou, Ying and Muthukumar, Prasanna Kumar and Perry, Daniel and Polzehl, Tim and Prahallad, Kishore and Vaughn, Callie and Steidl, S. (2012). Articulatory Features for Expressive Speech Synthesis. In Proc. ICASSP 2012. IEEE.


Burkhardt, Felix and Ballegooy, Markus van and Engelbrecht, Klaus-Peter and Polzehl, Tim and Stegmann, Joachim (2009). Emotion Detection in Dialog Systems: Applications, Strategies and Challenges. Proc. of International Conference on Affective Computing and Intelligent Interaction (ACII 2009). IEEE.


Burkhardt, Felix and Polzehl, Tim and Stegmann, Joachim and Metze, Florian and Huber, Richard (2009). Detecting Real Life Anger. Proc. of International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2009). IEEE, 4761–4764.


Dimitrov, Todor and Kramps, Oliver and Naroska, Edwin and Bolten, Tobias and Demmer, Julia and Ressel, Christian and Könen, Stefan and Polzehl, Tim and Voigt-Antons, Jan-Niklas and Matthies, Olaf and Habibi, Amir and Heutelbeck, Dominic and Mertens, Jana and Matip, Eva-Maria (2018). „OurPuppet“ – Entwicklung einer Mensch-Technik-Interaktion für die Unterstützung informell Pflegender [Developing a Human–Technology Interaction to Support Informal Caregivers]. Zukunft der Pflege: Tagungsband der 1. Clusterkonferenz 2018. BIS, 78–84.



Hinterleitner, Florian and Möller, Sebastian and Polzehl, Tim and Falk, Tiago H. (2010). Comparison of Approaches for Instrumentally Predicting the Quality of Text-to-Speech Systems: Data from Blizzard Challenges 2008 and 2009. Proceedings of the Blizzard Challenge Workshop. International Speech Communication Association (ISCA), 1–7.



Iskender, Neslihan and Gabryszak, Aleksandra and Polzehl, Tim and Hennig, Leonhard and Möller, Sebastian (2019). A Crowdsourcing Approach to Evaluate the Quality of Query-based Extractive Text Summaries. 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 1–3.


Iskender, Neslihan and Polzehl, Tim and Möller, Sebastian (2020). Crowdsourcing versus the laboratory: towards crowd-based linguistic text quality assessment of query-based extractive summarization. Proceedings of the Conference on Digital Curation Technologies (Qurator 2020). CEUR, 1–16.


Iskender, Neslihan and Polzehl, Tim and Möller, Sebastian (2020). Towards a Reliable and Robust Methodology for Crowd-Based Subjective Quality Assessment of Query-Based Extractive Text Summarization. Proceedings of The 12th Language Resources and Evaluation Conference. European Language Resources Association (ELRA), 245–253.


Iskender, Neslihan and Polzehl, Tim and Möller, Sebastian (2020). Best Practices for Crowd-based Evaluation of German Summarization: Comparing Crowd, Expert and Automatic Evaluation. Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems. Association for Computational Linguistics (ACL), 164–175.


Iskender, Neslihan and Polzehl, Tim (2021). An Empirical Analysis of an Internal Crowdsourcing Platform: IT Implications for Improving Employee Participation. Internal Crowdsourcing in Companies: Theoretical Foundations and Practical Applications. Springer International Publishing, 103–134.


Iskender, Neslihan and Polzehl, Tim and Möller, Sebastian (2021). Reliability of Human Evaluation for Text Summarization: Lessons Learned and Challenges Ahead. Proceedings of the Workshop on Human Evaluation of NLP Systems. Association for Computational Linguistics, 86–96.


