
Dr. Tim Polzehl

Crowdsourcing Technology

  • High-quality data collection via crowdsourcing
  • Data management and data services via crowdsourcing (clean, index, verify, tag, label, translate, summarize, join, etc.)
  • Data synthesis and data generation via crowdsourcing
  • Subjective influences and bias normalization in crowdsourcing
  • Crowd-creation, crowd-voting, crowd-storming, crowd-testing applications
  • Crowdsourcing services for machine learning and BI
  • Crowdsourcing business and business logic
  • Complex automated workflows: combining human and artificial intelligence (see the sketch after this list)
  • Crowdsourcing with mobile devices
  • Real-time crowdsourcing
  • Skill-based crowdsourcing and verification of crowd-experts
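
To illustrate the workflow item above (combining human and artificial intelligence), here is a minimal sketch: an automatic classifier labels incoming items, and only low-confidence predictions are routed to crowd workers, whose votes are aggregated by majority. The confidence threshold, the label set, and the toy model and crowd backends are illustrative assumptions, not a description of a specific Crowdee or T-Labs pipeline.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Item:
    item_id: str
    text: str


def hybrid_workflow(
    items: List[Item],
    model: Callable[[str], Tuple[str, float]],  # returns (label, confidence)
    ask_crowd: Callable[[Item], List[str]],     # returns one label per worker
    confidence_threshold: float = 0.8,          # illustrative assumption
) -> Dict[str, str]:
    """Label items automatically; route uncertain ones to the crowd."""
    results = {}
    for item in items:
        label, confidence = model(item.text)
        if confidence >= confidence_threshold:
            # The model is confident enough: keep the automatic label.
            results[item.item_id] = label
        else:
            # Low confidence: collect crowd judgments and take a majority vote.
            votes = ask_crowd(item)
            results[item.item_id] = Counter(votes).most_common(1)[0][0]
    return results


if __name__ == "__main__":
    # Toy stand-ins for a real classifier and a real crowdsourcing backend.
    def toy_model(text: str) -> Tuple[str, float]:
        return ("positive", 0.95) if "good" in text else ("negative", 0.55)

    def toy_crowd(item: Item) -> List[str]:
        return ["negative", "negative", "positive"]  # three simulated workers

    items = [Item("1", "good service"), Item("2", "unclear experience")]
    print(hybrid_workflow(items, toy_model, toy_crowd))
```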

 

Speech Technology

  • Automatic user classification
  • Automatic speaker characterization (age, gender, emotion, personality; see the feature-extraction sketch after this list)
  • Automatic speech recognition (ASR)
  • Prosody and voice gesture recognition
  • Prosodic voice print analysis, phonetic science
  • App development with speech functionalities (Android, iOS)
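
As a companion to the speaker characterization and prosodic analysis items above, here is a minimal sketch of the acoustic front end such work typically starts from: librosa extracts MFCCs and a fundamental-frequency contour, which are pooled into one utterance-level feature vector. The file name and the simple mean/standard-deviation pooling are illustrative assumptions; real systems use many more prosodic and spectral descriptors.

```python
import numpy as np
import librosa

# Placeholder path; any mono speech recording works.
AUDIO_FILE = "speech_sample.wav"

# Load the waveform (librosa resamples to 22050 Hz by default).
y, sr = librosa.load(AUDIO_FILE)

# Spectral shape: 13 MFCCs per frame.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Prosody: frame-wise fundamental frequency via the YIN estimator.
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                 fmax=librosa.note_to_hz("C7"), sr=sr)

# Pool frame-level features into one fixed-length vector per utterance,
# e.g. as input to an age / gender / emotion / personality classifier.
features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    [np.nanmean(f0), np.nanstd(f0)],
])
print(features.shape)  # (28,)
```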

 

Text Classification, Natural Language Processing (NLP)  

  • Sentiment Analysis (see the sketch after this list)
  • Affective Analysis and Emotion Detection
  • Personality and Lifestyle Detection from Social Networks (Twitter, FB, G+, etc.)
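
A minimal sketch of a sentiment-analysis baseline for short social-media posts: a TF-IDF bag-of-words representation feeding a linear classifier in scikit-learn. The four inline training posts are purely illustrative stand-ins for a real labeled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real system trains on thousands of labeled posts.
posts = [
    "love the new update, works great",
    "this app keeps crashing, terrible",
    "best customer service ever",
    "worst experience, never again",
]
labels = ["positive", "negative", "positive", "negative"]

# Word unigrams and bigrams as a simple representation of noisy social-media text.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(posts, labels)

print(classifier.predict(["the update is great", "this is terrible"]))
```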

 

Machine Learning and Artificial Intelligence  

  • Automated user modelling
  • Classification and prediction systems using linear and non-linear algorithms (see the sketch after this list)
  • Feature selection and reduction
  • Evaluation and verification methods
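
A minimal sketch of the combination referenced above: univariate feature selection followed by a linear and a non-linear classifier, compared with cross-validation. The synthetic data set, the number of selected features, and the two model choices are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic data standing in for user or speech features:
# 500 samples, 40 features, only 10 of which are informative.
X, y = make_classification(n_samples=500, n_features=40,
                           n_informative=10, random_state=0)

models = {
    "linear (logistic regression)": LogisticRegression(max_iter=1000),
    "non-linear (random forest)": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    # Univariate feature selection keeps the 10 highest-scoring features,
    # fitted inside each cross-validation fold to avoid information leakage.
    pipeline = make_pipeline(SelectKBest(f_classif, k=10), model)
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```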

 

Current and Past Projects:

please click here.

 

 


Project Biography 

Tim Polzehl studied Science of Communication at Technische Universität Berlin. Combining linguistic knowledge with signal processing skills, he focused on speech interpretation and the automatic extraction of data and metadata. He gained experience in machine learning while recognizing human speech utterances and classifying emotional expression embedded in speech, the latter of which became the topic of his M.A. thesis.

In 2008, Tim Polzehl started as a PhD candidate at Telekom Innovation Laboratories (T-Labs) and the Quality and Usability Lab. He worked on both industrial and academic projects with a focus on speech technology, app development, machine learning, and crowdsourcing solutions.

From 2011 to 2013, Tim led an R&D project for Telekom Innovation Laboratories with applications in the field of intelligent customer-care systems and speech apps.

From 2012 to 2014, Tim was awarded a place in the BMBF-funded education program for future IT and development leadership (Softwarecampus), involving SAP, Software AG, Scheer Group, Siemens, Holtzbrinck, Bosch, Datev and Deutsche Telekom AG, alongside highly ranked academic institutions.

In 2014, Tim was awarded his PhD for his work on the automatic prediction of personality attributes from speech.

Since 2014, Tim has been working as a postdoc at the Quality and Usability chair of TU Berlin. At the same time, he is driving the start-up Crowdee, which applies his earlier development of crowdsourcing solutions.

 

Address:

Quality and Usability Lab

Technische Universität Berlin

Ernst-Reuter-Platz 7

D-10587 Berlin

Tel.: +49 (30) 8353-58227
Fax: +49 (30) 8353-58409




Openings / Supervision

please click here.

Publications

Best Practices for Crowd-based Evaluation of German Summarization: Comparing Crowd, Expert and Automatic Evaluation
Citation key: iskender2020c
Authors: Iskender, Neslihan and Polzehl, Tim and Möller, Sebastian
Title of Book: Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems
Pages: 164–175
Year: 2020
Location: online
Address: online
Month: November
Publisher: Association for Computational Linguistics (ACL)
Series: EMNLP | Eval4NLP
How Published: Full paper
Abstract: One of the main challenges in the development of summarization tools is summarization quality evaluation. On the one hand, the human assessment of summarization quality conducted by linguistic experts is slow, expensive, and still not a standardized procedure. On the other hand, the automatic assessment metrics are reported not to correlate high enough with human quality ratings. As a solution, we propose crowdsourcing as a fast, scalable, and cost-effective alternative to expert evaluations to assess the intrinsic and extrinsic quality of summarization by comparing crowd ratings with expert ratings and automatic metrics such as ROUGE, BLEU, or BertScore on a German summarization data set. Our results provide a basis for best practices for crowd-based summarization evaluation regarding major influential factors such as the best annotation aggregation method, the influence of readability and reading effort on summarization evaluation, and the optimal number of crowd workers to achieve comparable results to experts, especially when determining factors such as overall quality, grammaticality, referential clarity, focus, structure & coherence, summary usefulness, and summary informativeness.
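
The comparison described in the abstract hinges on how well automatic metric scores correlate with human (crowd or expert) ratings. Below is a minimal sketch of that correlation step, with made-up per-summary scores standing in for the paper's German summarization data.

```python
from scipy.stats import kendalltau, pearsonr, spearmanr

# Made-up scores; in the paper these would be aggregated crowd or expert
# ratings and ROUGE / BLEU / BERTScore values for the same summaries.
human_ratings = [4.2, 3.1, 2.5, 4.8, 3.9, 2.0]
metric_scores = [0.41, 0.28, 0.30, 0.52, 0.35, 0.22]

for name, corr in [("Pearson", pearsonr), ("Spearman", spearmanr), ("Kendall", kendalltau)]:
    statistic, p_value = corr(human_ratings, metric_scores)
    print(f"{name}: r={statistic:.2f}, p={p_value:.3f}")
```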
