
Dr. Tim Polzehl

Crowdsourcing Technology

  • High-quality data collection via crowdsourcing
  • Data management and data services via crowdsourcing (clean, index, verify, tag, label, translate, summarize, join, etc.)
  • Data synthesis and data generation via crowdsourcing
  • Subjective influences and bias normalization in crowdsourcing
  • Crowd-creation, crowd-voting, crowd-storming, crowd-testing applications
  • Crowdsourcing services for machine learning and business intelligence (BI)
  • Crowdsourcing business and business logic
  • Complex automated workflows: combining human and artificial intelligence
  • Crowdsourcing with mobile devices
  • Real-time crowdsourcing
  • Skill-based crowdsourcing and verification of crowd-experts

 

Speech Technology

  • Automatic user classification
  • Automatic speaker characterization (age, gender, emotion, personality) 
  • Automatic speech recognition (ASR)
  • Prosody and voice gesture recognition
  • Prosodic voice print analysis, phonetic science
  • App development with speech functionalities (Android, iOS)

 

Text Classification and Natural Language Processing (NLP)

  • Sentiment analysis
  • Affective analysis and emotion detection
  • Personality and lifestyle detection from social networks (Twitter, Facebook, Google+, etc.)

 

Machine Learning and Artificial Intelligence  

  • Automated user modelling
  • Classification and prediction systems using linear and non-linear algorithms
  • Feature selection and reduction
  • Evaluation and verification methods

 

Current and Past Projects:

please click here.


Project Biography 

Tim Polzehl studied Science of Communication at Technische Universität Berlin. Combining linguistic knowledge with signal processing skills, he focused on speech interpretation and automatic data and metadata extraction. He gained experience in machine learning through work on recognizing human speech utterances and classifying subliminal emotional expression in speech, the latter of which became the subject of his M.A. thesis.

In 2008, Tim Polzehl began his PhD candidacy at Telekom Innovation Laboratories (T-Labs) and the Quality and Usability Lab. He worked on both industrial and academic projects focusing on speech technology, app development, machine learning, and crowdsourcing solutions.

From 2011 to 2013, Tim led an R&D project for Telekom Innovation Laboratories with applications in the field of intelligent customer-care systems and speech apps.

From 2012 to 2014, Tim took part in a BMBF-funded education program for future IT and development leadership (Softwarecampus), involving SAP, Software AG, Scheer Group, Siemens, Holtzbrinck, Bosch, Datev, and Deutsche Telekom AG, alongside highly ranked academic institutions.

In 2014, Tim was awarded his PhD for his work on the automatic prediction of personality attributes from speech.

Since 2014, Tim has been working as a postdoc at the Quality and Usability Lab of TU Berlin. At the same time, he is driving the start-up Crowdee, which applies his earlier work on crowdsourcing solutions.

 

Address:

Quality and Usability Labs

Technische Universität Berlin

Ernst-Reuter-Platz 7

D-10587 Berlin

Tel.: +49 (30) 8353-58227

Fax: +49 (30) 8353-58409




Openings / Supervision

please see here.

Publications

Iskender, Neslihan; Polzehl, Tim; Möller, Sebastian: "Crowdsourcing versus the laboratory: towards crowd-based linguistic text quality assessment of query-based extractive summarization." In: Proceedings of the Conference on Digital Curation Technologies (Qurator 2020), QURATOR series, CEUR, Berlin, Germany, January 2020, pp. 1–16 (full paper, available online).

Abstract: Curating text manually in order to improve the quality of automatic natural language processing tools can become very time consuming and expensive. Especially, in the case of query-based extractive online forum summarization, curating complex information spread along multiple posts from multiple forum members to create a short meta-summary that answers a given query is a very challenging task. To overcome this challenge, we explore the applicability of microtask crowdsourcing as a fast and cheap alternative for query-based extractive text summarization of online forum discussions. We measure the linguistic quality of crowd-based forum summarizations, which is usually conducted in a traditional laboratory environment with the help of experts, via comparative crowdsourcing and laboratory experiments. To our knowledge, no other study considered query-based extractive text summarization and summary quality evaluation as an application area of microtask crowdsourcing. By conducting experiments both in crowdsourcing and laboratory environments, and comparing the results of linguistic quality judgments, we found that microtask crowdsourcing shows high applicability for determining the factors overall quality, grammaticality, non-redundancy, referential clarity, focus, and structure & coherence. Further, our comparison of these findings with a preliminary and initial set of expert annotations suggests that the crowd assessments can reach comparable results to experts, specifically when determining factors such as overall quality and structure & coherence mean values. Eventually, preliminary analyses reveal a high correlation between the crowd and expert ratings when assessing low-quality summaries.
