Dr. Tim Polzehl
- High-quality data collection via crowdsourcing
- Data management and data services via crowdsourcing (clean, index, verify, tag, label, translate, summarize, join, etc.)
- Data synthesis and data generation via crowdsourcing
- Subjective influences and bias normalization in crowdsourcing
- Crowd-creation, crowd-voting, crowd-storming, crowd-testing applications
- Crowdsourcing services for machine learning and business intelligence (BI)
- Crowdsourcing business and business logic
- Complex automated workflows: combining human and artificial intelligence
- Crowdsourcing with mobile devices
- Real-time crowdsourcing
- Skill-based crowdsourcing and verification of crowd-experts
- Automatic user classification
- Automatic speaker characterization (age, gender, emotion, personality)
- Automatic speech recognition (ASR)
- Prosody and voice gesture recognition
- Prosodic voice print analysis, phonetic science
- App development with speech functionalities (Android, iOS)
Text Classification, Natural Language Processing (NLP)
- Sentiment Analysis
- Affective Analysis, Emotion
- Personality and Lifestyle Detection from Social Networks (Twitter, FB, G+, etc.)
Machine Learning and Artificial Intelligence
- Automated user modelling
- Classification and prediction systems using linear and non-linear algorithms
- Feature selection and reduction
- Evaluation and verification methods
Tim Polzehl studied Science of Communication at Technische Universität Berlin. Combining linguistic knowledge with signal-processing skills, he focused on speech interpretation and automatic data and metadata extraction. He gained experience in machine learning through recognizing human speech utterances and classifying emotional expression in speech, the latter of which became the topic of his M.A. thesis.
In 2008, Tim Polzehl started as a PhD candidate at Telekom Innovation Laboratories (T-Labs) and the Quality and Usability Lab. He worked on both industrial and academic projects focusing on speech technology, app development, machine learning, and crowdsourcing solutions.
From 2011 to 2013, Tim led an R&D project for Telekom Innovation Laboratories with applications in the field of intelligent customer-care systems and speech apps.
From 2012 to 2014, Tim took part in a BMBF-funded education program for future IT and development leadership (Softwarecampus), involving SAP, Software AG, Scheer Group, Siemens, Holtzbrinck, Bosch, Datev, and Deutsche Telekom AG, alongside highly ranked academic institutions.
In 2014, Tim was awarded his PhD for his work on the automatic prediction of personality attributes from speech.
Since 2014, Tim has been working as a postdoc at the Quality and Usability chair of TU Berlin. At the same time, he has been driving the start-up Crowdee, which applies his earlier work on crowdsourcing solutions.
Quality and Usability Lab
Technische Universität Berlin
Tel.: +49 (30) 8353-58227
Fax: +49 (30) 8353-58409
E-mail: firstname.lastname@example.org
Author: Iskender, Neslihan; Polzehl, Tim; Möller, Sebastian
Title of Book: Proceedings of The 12th Language Resources and Evaluation Conference
Publisher: European Language Resources Association (ELRA)
Abstract: Intrinsic and extrinsic quality evaluation is an essential part of summary evaluation methodology, usually conducted in a traditional controlled laboratory environment. However, processing large text corpora with these methods proves expensive from both an organizational and a financial perspective. For the first time, and as a fast, scalable, and cost-effective alternative, we propose micro-task crowdsourcing to evaluate both the intrinsic and extrinsic quality of query-based extractive text summaries. To investigate the appropriateness of crowdsourcing for this task, we conduct intensive comparative crowdsourcing and laboratory experiments, evaluating nine extrinsic and intrinsic quality measures on 5-point MOS scales. Correlating the crowd and laboratory ratings reveals high applicability of crowdsourcing for the factors overall quality, grammaticality, non-redundancy, referential clarity, focus, structure & coherence, summary usefulness, and summary informativeness. Further, we investigate the effect of the number of repetitions of assessments on the robustness of the mean opinion score of crowd ratings, measured against the increase of correlation coefficients between crowd and laboratory. Our results suggest that the optimal number of repetitions in crowdsourcing setups, beyond which additional repetitions no longer cause an adequate increase of overall correlation coefficients, lies between seven and nine for both intrinsic and extrinsic quality factors.
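The repetition analysis described in the abstract can be sketched as follows: for each repetition count k, average the first k crowd ratings per summary into a mean opinion score (MOS) and correlate the result with the laboratory MOS. This is a minimal illustration, not the paper's actual pipeline; all ratings below are synthetic and the function names are my own.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mos(ratings, k):
    """Mean opinion score over the first k repetitions of each item."""
    return [sum(r[:k]) / k for r in ratings]

# Synthetic 5-point ratings for four summaries, ten crowd repetitions
# each, plus an (invented) laboratory MOS per summary.
crowd = [
    [4, 5, 4, 3, 5, 4, 4, 5, 4, 4],
    [2, 1, 3, 2, 2, 1, 2, 2, 3, 2],
    [3, 4, 3, 3, 4, 3, 3, 3, 4, 3],
    [5, 4, 5, 5, 4, 5, 5, 4, 5, 5],
]
lab = [4.3, 1.9, 3.2, 4.8]

# Track how the crowd-lab correlation evolves as repetitions grow;
# the paper reports that gains flatten out around 7-9 repetitions.
for k in (1, 3, 5, 7, 9):
    print(k, round(pearson(mos(crowd, k), lab), 3))
```

The design choice mirrors the abstract: rather than fixing one repetition count up front, the correlation is re-computed for increasing k and the count is chosen where additional repetitions stop improving agreement with the laboratory ratings.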