
Publication


Featured research published by Patrick Lange.


Archive | 2017

Assembling the Jigsaw: How Multiple Open Standards Are Synergistically Combined in the HALEF Multimodal Dialog System

Vikram Ramanarayanan; David Suendermann-Oeft; Patrick Lange; Robert Mundkowsky; Alexei V. Ivanov; Zhou Yu; Yao Qian; Keelan Evanini

As dialog systems become increasingly multimodal and distributed in nature with advances in technology and computing power, they become that much more complicated to design and implement. However, open industry and W3C standards provide a silver lining here, allowing the distributed design of different components that are nonetheless compliant with each other. In this chapter we examine how an open-source, modular, multimodal dialog system—HALEF—can be seamlessly assembled, much like a jigsaw puzzle, by putting together multiple distributed components that are compliant with the W3C recommendations or other open industry standards. We highlight the specific standards that HALEF currently uses along with a perspective on other useful standards that could be included in the future. HALEF has an open codebase to encourage progressive community contribution and a common standard testbed for multimodal dialog system development and benchmarking.
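
A minimal sketch of the "jigsaw" idea described above: each component is identified by the open standard it speaks, so any standards-compliant implementation can slot into its place. The component names, standards, and endpoints below are illustrative placeholders, not HALEF's actual configuration.

    # Illustrative sketch (not HALEF's actual code): wiring distributed,
    # standards-compliant dialog components by the protocol each speaks.
    # All names, standards, and endpoints are hypothetical placeholders.

    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str       # role in the dialog stack
        standard: str   # open standard the component complies with
        endpoint: str   # network address (placeholder values)

    # A HALEF-like stack: each piece is replaceable as long as the
    # replacement speaks the same open standard.
    PIPELINE = [
        Component("telephony-server", "SIP/RTP", "sip:pbx.example.org"),
        Component("voice-browser", "VoiceXML 2.1", "http://vxml.example.org/app"),
        Component("speech-recognizer", "MRCPv2", "rtsp://asr.example.org:1544"),
        Component("speech-synthesizer", "MRCPv2", "rtsp://tts.example.org:1544"),
    ]

    def describe(pipeline):
        """Print the jigsaw: which standard glues each component in."""
        for c in pipeline:
            print(f"{c.name:20s} complies with {c.standard:12s} at {c.endpoint}")

    if __name__ == "__main__":
        describe(PIPELINE)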


IWSDS | 2017

Multimodal HALEF: An Open-Source Modular Web-Based Multimodal Dialog Framework

Zhou Yu; Vikram Ramanarayanan; Robert Mundkowsky; Patrick Lange; Alexei V. Ivanov; Alan W. Black; David Suendermann-Oeft

We present an open-source web-based multimodal dialog framework, "Multimodal HALEF", that integrates video conferencing and telephony capabilities into the existing HALEF cloud-based dialog framework via the FreeSWITCH video telephony server. Owing to its distributed, cloud-based architecture, Multimodal HALEF allows researchers to collect video and speech data from participants interacting with the dialog system outside of traditional lab settings, thereby greatly reducing the cost and labor of traditional audio-visual data collection. The framework is equipped with a set of tools, including a web-based user survey template; a speech transcription, annotation, and rating portal; a web-based visual processing server that performs head tracking; and a database that logs full-call audio and video recordings as well as other call-specific information. We present observations from an initial data collection based on a job interview application. Finally, we report on future plans for the development of the framework.
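
As one illustration of the visual processing component mentioned above, the sketch below approximates a head tracker using OpenCV's bundled Haar face detector. It is an assumed stand-in for the paper's server, not its actual code, and the video filename is a placeholder.

    # Minimal head-tracking sketch in the spirit of the framework's visual
    # processing server, using OpenCV's bundled Haar face detector.
    # This is an illustrative stand-in, not the framework's server code.

    import cv2

    def track_heads(video_path):
        """Yield (frame_index, bounding_boxes) for faces found in each frame."""
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        capture = cv2.VideoCapture(video_path)
        frame_index = 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            yield frame_index, boxes
            frame_index += 1
        capture.release()

    if __name__ == "__main__":
        # "interview.mp4" is a placeholder for a recorded full-call video.
        for i, boxes in track_heads("interview.mp4"):
            print(f"frame {i}: {len(boxes)} head(s) at {list(map(tuple, boxes))}")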


IWSDS | 2019

An Open-Source Dialog System with Real-Time Engagement Tracking for Job Interview Training Applications

Zhou Yu; Vikram Ramanarayanan; Patrick Lange; David Suendermann-Oeft

In complex conversation tasks, people react to their interlocutor's state, such as uncertainty and engagement, to improve conversation effectiveness (Forbes-Riley and Litman, Adapting to student uncertainty improves tutoring dialogues, pp. 33–40, 2009). If a conversational system reacts to a user's state, would that lead to a better conversation experience? To test this hypothesis, we designed and implemented a dialog system that tracks and reacts to a user's state, such as engagement, in real time. We then designed and implemented a conversational job interview task based on the proposed framework. The system acts as an interviewer and reacts to the user's disengagement in real time with positive feedback strategies designed to re-engage the user in the job interview process. Experiments suggest that users speak more while interacting with the engagement-coordinated version of the system than with a non-coordinated version. Users also reported the former system as being more engaging and as providing a better user experience.
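
The engagement-coordination loop can be pictured as in the minimal sketch below, which assumes engagement is already estimated per utterance as a score in [0, 1]; the smoothing factor, threshold, and feedback prompts are invented for illustration and are not the paper's actual strategies.

    # Sketch of the engagement-coordination idea: smooth a per-utterance
    # engagement score and trigger a re-engagement strategy when it drops
    # below a threshold. Scores, threshold, and prompts are illustrative.

    import random

    REENGAGEMENT_PROMPTS = [
        "You're doing great. Could you tell me a bit more about that?",
        "That's interesting. What happened next?",
    ]

    def coordinate(engagement_scores, threshold=0.4, alpha=0.3):
        """Exponentially smooth scores in [0, 1]; react when engagement dips."""
        smoothed = None
        for score in engagement_scores:
            smoothed = score if smoothed is None else alpha * score + (1 - alpha) * smoothed
            if smoothed < threshold:
                # Positive feedback intended to pull the user back in.
                print("system:", random.choice(REENGAGEMENT_PROMPTS))
            else:
                print("system: (continue interview script)")

    if __name__ == "__main__":
        # Simulated per-utterance engagement estimates from a real-time tracker.
        coordinate([0.8, 0.7, 0.5, 0.3, 0.2, 0.6, 0.9])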


International Conference on Multimodal Interfaces | 2017

A modular, multimodal open-source virtual interviewer dialog agent

Kirby Cofino; Vikram Ramanarayanan; Patrick Lange; David Pautler; David Suendermann-Oeft; Keelan Evanini

We present an open-source multimodal dialog system equipped with a virtual human avatar interlocutor. The agent, rigged in Blender and developed in Unity with WebGL support, interfaces with the HALEF open-source, cloud-based, standards-compliant dialog framework. To demonstrate the capabilities of the system, we designed and implemented a conversational job interview scenario in which the avatar plays the role of an interviewer and responds to user input in real time to provide an immersive user experience.
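
A rough sketch of how such an avatar client might receive dialog output: the paper does not publish its interface code, so the WebSocket relay below (using the Python websockets package) and its JSON message format are assumptions, not the system's actual protocol.

    # Illustrative relay (not the paper's interface code): pushes the dialog
    # system's text responses to a WebGL avatar client over a WebSocket,
    # assuming the client renders/speaks whatever JSON it receives.
    # Requires websockets >= 10 for the single-argument handler.

    import asyncio
    import json
    import websockets  # pip install websockets

    async def avatar_endpoint(websocket):
        """Send an interviewer turn whenever the client delivers user input."""
        async for message in websocket:
            user_input = json.loads(message)["text"]
            # Placeholder for a call into the dialog manager.
            reply = {"text": f"Thanks. You said: {user_input}. Next question..."}
            await websocket.send(json.dumps(reply))

    async def main():
        async with websockets.serve(avatar_endpoint, "localhost", 8765):
            await asyncio.Future()  # run until cancelled

    if __name__ == "__main__":
        asyncio.run(main())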


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2016

LVCSR System on a Hybrid GPU-CPU Embedded Platform for Real-Time Dialog Applications

Alexei V. Ivanov; Patrick Lange; David Suendermann-Oeft

We present the implementation of a large-vocabulary continuous speech recognition (LVCSR) system on NVIDIA's Tegra K1 hybrid GPU-CPU embedded platform. The system is trained on a standard 1,000-hour corpus, LibriSpeech, features a trigram WFST-based language model, and achieves state-of-the-art recognition accuracy. Because the system runs in real time and consumes less than 7.5 watts at peak, it is well suited to fast yet precise offline spoken dialog applications, such as in robotics, portable gaming devices, or in-car systems.
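
The two headline claims can be sanity-checked with simple arithmetic: a decoder runs in real time when its real-time factor (RTF) is at most 1.0, and the peak power figure bounds energy use. In the sketch below, only the 7.5 W figure comes from the abstract; the decode time is hypothetical.

    # Back-of-the-envelope check: RTF = decode time / audio duration
    # (<= 1.0 means real time), and peak power bounds energy per audio hour.
    # The decode time is a made-up example; 7.5 W is from the abstract.

    def real_time_factor(decode_seconds, audio_seconds):
        """RTF = processing time / audio duration; <= 1.0 means real time."""
        return decode_seconds / audio_seconds

    def energy_joules(watts, decode_seconds):
        """Upper bound on energy if the device runs at peak power throughout."""
        return watts * decode_seconds

    if __name__ == "__main__":
        audio = 3600.0   # one hour of speech
        decode = 3240.0  # hypothetical decode time (RTF 0.9)
        rtf = real_time_factor(decode, audio)
        verdict = "real time" if rtf <= 1.0 else "slower than real time"
        print(f"RTF = {rtf:.2f} ({verdict})")
        print(f"peak energy <= {energy_joules(7.5, decode) / 1000:.1f} kJ per audio hour")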


Archive | 2015

Evaluation of Freely Available Speech Synthesis Voices for Halef

Martin Mory; Patrick Lange; Tarek Mehrez; David Suendermann-Oeft

We recently equipped the open-source spoken dialog system (SDS) Halef with the speech synthesizer Festival, which supports both unit-selection and HMM-based voices. Inspired by the most recent Blizzard Challenge, the largest international speech synthesis competition, we sought to determine which of the freely available voices in Festival, and those of its strongest competitor, Mary, are promising candidates for operational use in Halef. After conducting a subjective evaluation involving 36 participants, we found that Festival was clearly outperformed by Mary and that unit-selection voices performed on par with, if not better than, HMM-based ones.
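
An analysis in the spirit of this evaluation might compute a mean opinion score (MOS) per voice and test for an engine-level difference. The sketch below uses fabricated ratings and a Wilcoxon rank-sum test; the voice labels are illustrative, not the study's actual systems or data.

    # Sketch of a MOS comparison across voices, with a rank-sum test
    # between the two engines. All ratings below are fabricated placeholders.

    from statistics import mean
    from scipy.stats import ranksums  # pip install scipy

    # One 1-5 listener rating per judgment (placeholder values).
    ratings = {
        "festival-unitsel": [3, 2, 3, 3, 2, 3],
        "festival-hmm":     [2, 2, 3, 2, 2, 2],
        "mary-unitsel":     [4, 4, 3, 4, 5, 4],
        "mary-hmm":         [3, 4, 3, 3, 4, 3],
    }

    for voice, scores in ratings.items():
        print(f"{voice:18s} MOS = {mean(scores):.2f}")

    festival = ratings["festival-unitsel"] + ratings["festival-hmm"]
    mary = ratings["mary-unitsel"] + ratings["mary-hmm"]
    stat, p = ranksums(festival, mary)
    print(f"Festival vs. Mary: rank-sum statistic {stat:.2f}, p = {p:.3f}")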


ETS Research Report Series | 2016

Bootstrapping Development of a Cloud-Based Spoken Dialog System in the Educational Domain From Scratch Using Crowdsourced Data

Vikram Ramanarayanan; David Suendermann-Oeft; Patrick Lange; Alexei V. Ivanov; Keelan Evanini; Zhou Yu; Eugene Tsuprun; Yao Qian


Conference of the International Speech Communication Association | 2017

Human and Automated Scoring of Fluency, Pronunciation and Intonation During Human-Machine Spoken Dialog Interactions.

Vikram Ramanarayanan; Patrick Lange; Keelan Evanini; Hillary Molloy; David Suendermann-Oeft


ETS Research Report Series | 2017

Using Vision and Speech Features for Automated Prediction of Performance Metrics in Multimodal Dialogs

Vikram Ramanarayanan; Patrick Lange; Keelan Evanini; Hillary Molloy; Eugene Tsuprun; Yao Qian; David Suendermann-Oeft


Conference of the International Speech Communication Association | 2018

Game-based Spoken Dialog Language Learning Applications for Young Students.

Keelan Evanini; Veronika Timpe-Laughlin; Eugene Tsuprun; Ian Blood; Jeremy Lee; James Bruno; Vikram Ramanarayanan; Patrick Lange; David Suendermann-Oeft

Collaboration


Dive into Patrick Lange's collaboration.

Top Co-Authors

Vikram Ramanarayanan
University of Southern California

Yao Qian
Princeton University

Zhou Yu
Carnegie Mellon University