Dr. Tim Polzehl
Crowdsourcing Technology
- High-quality data collection via crowdsourcing
- Data management and data services via crowdsourcing (clean, index, verify, tag, label, translate, summarize, join, etc.)
- Data synthesis and data generation via crowdsourcing
- Subjective influences and bias normalization in crowdsourcing
- Crowd-creation, crowd-voting, crowd-storming, crowd-testing applications
- Crowdsourcing services for machine learning and BI
- Crowdsourcing business and business logic
- Complex automated workflows: combining human and artificial intelligence (see the sketch after this list)
- Crowdsourcing with mobile devices
- Real-time crowdsourcing
- Skill-based crowdsourcing and verification of crowd-experts
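The workflow topic above can be illustrated with a minimal human-in-the-loop sketch: an automatic classifier handles confident cases and escalates uncertain ones to crowd workers, whose votes are aggregated. All function names, thresholds, and simulated answers below are hypothetical placeholders, not an actual Crowdee or T-Labs interface.

```python
# Illustrative sketch only: an automatic classifier handles confident cases,
# low-confidence items are escalated to crowd workers and resolved by
# majority vote. All functions are hypothetical placeholders.

CONFIDENCE_THRESHOLD = 0.8  # below this, human judgments are requested


def classify(text: str) -> tuple:
    """Stand-in for any trained classifier returning (label, confidence)."""
    positive_cues = sum(w in text.lower() for w in ("good", "great", "love"))
    return ("positive", 0.9) if positive_cues else ("neutral", 0.5)


def post_crowd_task(text: str, n_workers: int = 3) -> list:
    """Stand-in for publishing a micro-task and collecting worker labels."""
    return ["neutral", "negative", "neutral"][:n_workers]  # simulated answers


def majority_vote(labels: list) -> str:
    return max(set(labels), key=labels.count)


def label_item(text: str) -> dict:
    label, confidence = classify(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"label": label, "source": "machine"}
    # Low machine confidence: combine human and artificial intelligence.
    votes = post_crowd_task(text)
    return {"label": majority_vote(votes), "source": "crowd"}


print(label_item("I love this service"))   # resolved automatically
print(label_item("Delivery was delayed"))  # escalated to the crowd
```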
Speech Technology
- Automatic user classification
- Automatic speaker characterization (age, gender, emotion, personality)
- Automatic speech recognition (ASR)
- Prosody and voice gesture recognition
- Prosodic voice print analysis, phonetic science
- App development with speech functionalities (Android, iOS)
Text Classification, Natural Language Processing (NLP)
- Sentiment Analysis
- Affective Analysis and Emotion Detection
- Personality and Lifestyle Detection from Social Networks (Twitter, FB, G+, etc.)
Machine Learning and Artificial Intelligence
- Automated user modelling
- Classification and prediction systems using linear and non-linear algorithms (see the sketch after this list)
- Feature selection and reduction
- Evaluation and verification methods
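As a toy illustration of these topics, the sketch below compares a linear and a non-linear classifier with simple feature selection and cross-validated evaluation using scikit-learn on a standard dataset; it is a generic example, not the lab's actual tooling.

```python
# Minimal sketch (not the lab's actual tooling): a linear and a non-linear
# classifier compared with feature selection and cross-validated evaluation.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

models = {
    "linear (logistic regression)": LogisticRegression(max_iter=1000),
    "non-linear (RBF-kernel SVM)": SVC(kernel="rbf"),
}

for name, clf in models.items():
    # Feature selection keeps the 2 most informative features (ANOVA F-score)
    # before scaling and classification.
    pipeline = make_pipeline(SelectKBest(f_classif, k=2), StandardScaler(), clf)
    scores = cross_val_score(pipeline, X, y, cv=5)  # 5-fold evaluation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```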
Project Biography
Tim Polzehl studied Science of Communication at Technische Universität Berlin. Combining linguistic knowledge with signal processing skills, he focused on speech interpretation and automatic data and metadata extraction. He gained experience in machine learning by recognizing human speech utterances and classifying emotional expression conveyed subliminally in speech, the latter of which became the topic of his M.A. thesis.
In 2008, Tim Polzehl started as a PhD candidate at Telekom Innovation Laboratories (T-Labs) and the Quality and Usability Lab. He worked on both industrial and academic projects with a focus on speech technology, app development, machine learning, and crowdsourcing solutions.
From 2011 to 2013, Tim led an R&D project for Telekom Innovation Laboratories with applications in the field of intelligent customer-care systems and speech apps.
From 2012 to 2014, Tim was awarded a place in the BMBF-funded Softwarecampus education program for future IT and development leadership, involving SAP, Software AG, Scheer Group, Siemens, Holtzbrinck, Bosch, Datev and Deutsche Telekom AG alongside highly ranked academic institutions.
In 2014, Tim was awarded his PhD for his work on the automatic prediction of personality attributes from speech.
Since 2014, Tim has been working as a postdoc at the Quality and Usability Lab of TU Berlin. At the same time, he is driving the start-up Crowdee, which applies his earlier work on crowdsourcing solutions.
Address:
Quality and Usability Labs
Technische Universität Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin
Tel.: +49 (30) 8353-58227
Fax: +49 (30) 8353-58409
E-mail: tim.polzehl@qu.tu-berlin.de
Publications
Citation key | dimitrov2018a |
---|---|
Author | Dimitrov, Todor and Kramps, Oliver and Naroska, Edwin and Bolten, Tobias and Demmer, Julia and Ressel, Christian and Könen, Stefan and Polzehl, Tim and Voigt-Antons, Jan-Niklas and Matthies, Olaf and Habibi, Amir and Heutelbeck, Dominic and Mertens, Jana and Matip, Eva-Maria |
Title of Book | Zukunft der Pflege Tagungsband der 1. Clusterkonferenz 2018 |
Pages | 78–84 |
Year | 2018 |
ISBN | 978-3-8142-2367-4 |
Location | Oldenburg, Germany |
Address | Oldenburg, Germany |
Note | |
Publisher | BIS |
Series | Zukunft der Pflege |
How Published | Fullpaper |
Abstract | This paper describes the technical implementation of an interactive doll intended for use with the very old and with people with dementia. The main goal is to relieve informal caregivers: the doll has a calming effect on the person being cared for, activates them, and offers orientation in the daily routine. The system consists of the robot doll, a central processing unit, a backend infrastructure, and a smartphone app for relatives. The doll can render speech and express emotions through facial mimicry. It is also able to follow people in the room with its gaze. Compute-intensive tasks such as speech and emotion recognition, context recognition and management, and action planning take place on the central processing unit. There, contexts are inferred from raw data (speech, facial images, and environmental sensor data) and provided to the action-planning component, which decides which predefined programs must be executed to control the doll (e.g., addressing the person, reminding them of appointments). The recognized situations and executed actions are stored in the backend and made available to informal caregivers via the app. |
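The abstract describes a pipeline in which inferred contexts are handed to an action-planning component that selects predefined doll programs. The sketch below illustrates that control flow only; all class names, rules, and program identifiers are invented and do not reproduce the paper's implementation.

```python
# Purely illustrative sketch of the control flow described in the abstract:
# contexts inferred from raw data (speech, face images, sensors) drive a
# rule-based planner that selects predefined doll programs. All names and
# rules are invented for illustration.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass
class Context:
    person_present: bool
    detected_emotion: str              # e.g. inferred from speech and facial images
    upcoming_appointment: Optional[str]


def plan_actions(ctx: Context) -> List[str]:
    """Decide which predefined doll programs to run for a given context."""
    actions = []
    if ctx.person_present:
        actions.append("greet_person")                 # address the person
        if ctx.detected_emotion in ("agitated", "anxious"):
            actions.append("play_calming_dialogue")    # calming intervention
        if ctx.upcoming_appointment:
            actions.append(f"remind:{ctx.upcoming_appointment}")
    return actions


def log_to_backend(ctx: Context, actions: List[str]) -> None:
    """Stand-in for storing situations and actions for the caregivers' app."""
    print(datetime.now().isoformat(), ctx, actions)


ctx = Context(person_present=True, detected_emotion="agitated",
              upcoming_appointment="doctor 15:00")
log_to_backend(ctx, plan_actions(ctx))
```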