Publication


Featured research published by Dennis Reidsma.


Computational Linguistics | 2008

Reliability measurement without limits

Dennis Reidsma; Jean Carletta

In computational linguistics, a reliability measurement of 0.8 on some statistic such as κ is widely thought to guarantee that hand-coded data is fit for purpose, with 0.67 to 0.8 tolerable, and lower values suspect. We demonstrate that the main use of such data, machine learning, can tolerate data with low reliability as long as any disagreement among human coders looks like random noise. When the disagreement introduces patterns, however, the machine learner can pick these up just as it picks up the real patterns in the data, making the performance figures look better than they really are. For the range of reliability measures that the field currently accepts, disagreement can appreciably inflate performance figures, and even a measure of 0.8 does not guarantee that what looks like good performance really is. Although this is a commonsense result, it has implications for how we work. At the very least, computational linguists should look for any patterns in the disagreement among coders and assess what impact they will have.
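The chance-corrected agreement coefficient the abstract alludes to can be illustrated with a small sketch. This is not code from the paper; it computes Cohen's κ for two coders as a minimal example of the kind of statistic to which the 0.8 threshold is conventionally applied.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two sequences of labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders label identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's label distribution.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(a, b))  # → 0.5
```

With an observed agreement of 0.75 and a chance agreement of 0.5, this pair of codings yields κ = 0.5, below even the 0.67 floor the abstract mentions — yet nothing in the number itself reveals whether the disagreements are random noise or patterned.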


The Visual Computer | 2006

Online and off-line visualization of meeting information and meeting support

Antinus Nijholt; Rutger Rienks; Jakob Zwiers; Dennis Reidsma

In current meeting research we see modest attempts to visualize the information that has been obtained by capturing and, probably more importantly, by interpreting the activities that take place during a meeting. The meetings being considered take place in smart meeting rooms, where cameras, microphones and other sensors capture meeting activities. Captured information can be stored and retrieved; it can also be manipulated and in turn displayed on different media. We survey our research in this area, look at issues concerning turn-taking and gaze behavior of meeting participants, influence and talkativeness, and virtual embodied representations of meeting participants. We stress that this information is interesting not only for real-time meeting support, but also for remote participants and for off-line consultation of meeting information.


Intelligent Virtual Agents | 2006

Towards a reactive virtual trainer

Zsófia Ruttkay; Job Zwiers; Herwin van Welbergen; Dennis Reidsma

A Reactive Virtual Trainer (RVT) is an Intelligent Virtual Agent (IVA) capable of presenting physical exercises that are to be performed by a human, monitoring the user and providing feedback at different levels. Depending on the motivation and the application context, the exercises may be general fitness exercises to improve the user's physical condition, special exercises to be performed from time to time during work to prevent, for example, RSI, or physiotherapy exercises with medical indications. In the paper we discuss the functional and technical requirements of a framework which can be used to author specific RVT applications. The focus is on the reactivity of the RVT, manifested in natural language comments on readjusting the tempo, pointing out mistakes or rescheduling the exercises. We outline the components we have implemented so far: our animation engine, the composition of exercises from basic motions and the module for analysis of tempo in acoustic input.
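As a hedged illustration of what a tempo-analysis module like the one mentioned above might compute, the sketch below estimates an exercise tempo from acoustic onset times (e.g., detected claps or footfalls). The function name and values are assumptions for illustration, not the authors' implementation.

```python
def estimate_tempo_bpm(onset_times):
    """Estimate tempo as beats per minute from a list of onset times
    (seconds), using the median inter-onset interval for robustness
    against a single missed or spurious onset."""
    intervals = sorted(b - a for a, b in zip(onset_times, onset_times[1:]))
    median_interval = intervals[len(intervals) // 2]
    return 60.0 / median_interval

# Onsets roughly every 0.5 s -> a tempo of about 120 BPM.
print(estimate_tempo_bpm([0.0, 0.5, 1.01, 1.5, 2.02]))
```

A reactive trainer could compare such an estimate against the target tempo of the current exercise and trigger a natural-language comment when the user drifts too far.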


Ai & Society | 2007

Virtual meeting rooms: from observation to simulation

Dennis Reidsma; Rieks op den Akker; Rutger Rienks; Ronald Walter Poppe; Anton Nijholt; Dirk Heylen; Job Zwiers

Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms, where various modalities such as speech, gaze, distance, gestures and facial expressions can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment to assess human observers’ accuracy for head orientation.
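The head-orientation experiment scores how accurately human observers estimate orientations in the VMR. A minimal sketch of such scoring, assuming angles in degrees and hypothetical data (not the paper's), is a mean absolute angular error that handles wrap-around at 360°:

```python
def angular_error(estimate_deg, truth_deg):
    """Smallest absolute difference between two angles, in degrees."""
    diff = abs(estimate_deg - truth_deg) % 360
    return min(diff, 360 - diff)

def mean_abs_error(estimates, truths):
    """Mean absolute angular error over paired estimates and ground truth."""
    errors = [angular_error(e, t) for e, t in zip(estimates, truths)]
    return sum(errors) / len(errors)

print(angular_error(350, 10))                        # wraps around: → 20
print(mean_abs_error([10, 80, 200], [0, 90, 190]))   # → 10.0
```

The wrap-around handling matters: a naive `abs(350 - 10)` would report 340° for two headings that are in fact only 20° apart.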


Archive | 2012

Advances in Computer Entertainment

Anton Nijholt; Teresa Romão; Dennis Reidsma

These are the proceedings of the 9th International Conference on Advances in Computer Entertainment (ACE 2012). ACE has become the leading scientific forum for dissemination of cutting-edge research results in the area of entertainment computing. Interactive entertainment is one of the most vibrant areas of interest in modern society and is amongst the fastest growing industries in the world. ACE 2012 will bring together leading researchers and practitioners from academia and industry to present their innovative work and discuss all aspects and challenges of interactive entertainment technology in an exciting, cultural, and stimulating environment. ACE is by nature a multidisciplinary conference, attracting people from across a wide spectrum of interests and disciplines, including computer science, design, arts, sociology, anthropology, psychology, and marketing. The main goal of ACE is to stimulate discussion on the development of new and compelling entertainment computing and interactive art concepts and applications. At ACE conferences, participants are encouraged to present work they believe will shape the future, going beyond established paradigms and focusing on all areas related to interactive entertainment.

This is the 9th ACE conference, and the first time such an entertainment computing conference is being held in the emerging world. The theme of ACE 2012 is “Entertaining the Whole World”, and Kathmandu, Nepal (“The Roof of the World”) has been chosen as the venue. In line with this theme, ACE 2012 will emphasize the use of easily available technology. Technology for entertainment design is becoming cheap, or even extremely cheap: designing interactive entertainment with commercial off-the-shelf technology (cheap sensors, Kinect, Arduino, etc.) is becoming regular business. How can we use this development to invent yet more new ways of harnessing the entertainment power of creating? Can we convert consumers of entertainment into creators of entertainment, where the process of creating is perhaps as important as the resulting product? Young people in emerging markets can become creators as well as consumers of digital entertainment. They can distribute their work through apps and the internet, and through media creativity benefit their country and economy. We wish to strike up discussions and initiate projects that will benefit the emerging world through digital entertainment.


Lecture Notes in Computer Science | 2008

Mutually Coordinated Anticipatory Multimodal Interaction

Anton Nijholt; Dennis Reidsma; Herwin van Welbergen; Rieks op den Akker; Zsófia Ruttkay

We introduce our research on anticipatory and coordinated interaction between a virtual human and a human partner. Rather than adhering to the turn-taking paradigm, we choose to investigate interaction where there is simultaneous expressive behavior by the human interlocutor and a humanoid. Various applications in which we can study and specify such behavior, in particular behavior that requires synchronization based on predictions from performance and perception, are presented. Some observations concerning the role of predictions in conversations are presented and architectural consequences for the design of virtual humans are drawn.


Journal on Multimodal User Interfaces | 2011

Continuous Interaction with a Virtual Human

Dennis Reidsma; Iwan de Kok; Daniel Neiberg; Sathish Pammi; Bart van Straalen; Khiet Phuong Truong; Herwin van Welbergen

This paper presents our progress in developing a Virtual Human capable of being an attentive speaker. Such a Virtual Human should be able to attend to its interaction partner while it is speaking—and modify its communicative behavior on-the-fly based on what it observes in the behavior of its partner. We report new developments concerning a number of aspects, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response eliciting behavior, and strategies for generating appropriate reactions to listener responses. On the basis of this progress, a task-based setup for a responsive Virtual Human was implemented to carry out two user studies, the results of which are presented and discussed in this paper.


Intelligent Virtual Agents | 2010

Backchannel strategies for artificial listeners

Ronald Walter Poppe; Khiet Phuong Truong; Dennis Reidsma; Dirk Heylen

We evaluate multimodal rule-based strategies for backchannel (BC) generation in face-to-face conversations. Such strategies can be used by artificial listeners to determine when to produce a BC in dialogs with human speakers. In this research, we consider features from the speaker's speech and gaze. We used six rule-based strategies to determine the placement of BCs. The BCs were performed by an intelligent virtual agent using nods and vocalizations. In a user perception experiment, participants were shown video fragments of a human speaker together with an artificial listener who produced BC behavior according to one of the strategies. Participants were asked to rate how likely they thought the BC behavior had been performed by a human listener. We found that the number, timing and type of BCs had a significant effect on how human-like the BC behavior was perceived.
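A rule-based BC strategy of the kind evaluated here can be sketched as a predicate over speech and gaze features. The feature names and thresholds below are illustrative assumptions, not the six strategies from the paper:

```python
from dataclasses import dataclass

@dataclass
class SpeakerState:
    time: float              # seconds since dialog start
    pause_len: float         # length of the current speech pause (s)
    pitch_falling: bool      # falling pitch contour at end of phrase
    gazing_at_listener: bool # speaker is looking at the listener

def should_backchannel(state, last_bc_time, min_gap=2.0):
    """Trigger a nod or vocalization when a pause follows a falling
    pitch contour while the speaker gazes at the listener,
    rate-limited so BCs are not produced in rapid succession."""
    if state.time - last_bc_time < min_gap:
        return False
    return (state.pause_len > 0.3
            and state.pitch_falling
            and state.gazing_at_listener)

s = SpeakerState(time=10.0, pause_len=0.5,
                 pitch_falling=True, gazing_at_listener=True)
print(should_backchannel(s, last_bc_time=5.0))  # → True
```

Varying the feature set, the thresholds, and the rate limit produces different strategies whose perceived naturalness can then be compared in a user study, as the abstract describes.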


Intelligent Virtual Agents | 2012

An incremental multimodal realizer for behavior co-articulation and coordination

Herwin van Welbergen; Dennis Reidsma; Stefan Kopp

Human conversations are highly dynamic, responsive interactions. To enter into flexible interactions with humans, a conversational agent must be capable of fluent incremental behavior generation. New utterance content must be integrated seamlessly with ongoing behavior, requiring dynamic application of co-articulation. The timing and shape of the agent's behavior must be adapted on-the-fly to the interlocutor, resulting in natural interpersonal coordination. We present AsapRealizer, a BML 1.0 behavior realizer that achieves these capabilities by building upon, and extending, two existing state-of-the-art realizers, as the result of a collaboration between two research groups.


IEEE Intelligent Systems | 2006

Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information

H. van Welbergen; J. Hendler; D. Goren-Bar; Antinus Nijholt; Dennis Reidsma; O. Mayora-Ibarra; Jakob Zwiers

Meeting and lecture room technology is a burgeoning field. Such technology can provide real-time support for physically present participants, for online remote participation, or for offline access to meetings or lectures. Capturing relevant information from meetings or lectures is necessary to provide this kind of support. Multimedia presentation of this captured information requires a lot of attention. Our previous research has looked at including in these multimedia presentations a regeneration of meeting events and interactions in virtual reality. We developed technology that translates captured meeting activities into a virtual-reality version that lets us add and manipulate information.1 In that research, our starting point was the human presenter or meeting participant. Here, it’s a semiautonomous virtual presenter that performs in a virtual-reality environment (see figure 1). The presenter’s audience might consist of humans, humans represented by embodied virtual agents, and autonomous agents that are visiting the virtual lecture room or have roles in it. In this article, we focus on models and associated algorithms that steer the virtual presenter’s presentation animations. In our approach, we generate the presentations from a script describing the synchronization of speech, gestures, and movements. The script also has a channel devoted to presentation sheets (slides) and sheet changes, which we assume are an essential part of the presentation. This channel can also present material other than sheets, such as annotated paintings or movies.
