Ronald Walter Poppe
University of Twente
Publication
Featured research published by Ronald Walter Poppe.
Image and Vision Computing | 2010
Ronald Walter Poppe
Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human-computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. Moreover, we discuss limitations of the state of the art and outline promising directions of research.
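The survey's core framing, labeling image sequences by first computing an image representation and then classifying it, can be illustrated with a minimal sketch. The motion-energy histogram, tiny 4-pixel "frames", and nearest-neighbour classifier below are illustrative stand-ins, not methods from the survey.

```python
# Minimal sketch of action recognition as sequence labeling: reduce each
# clip to a fixed-length representation (a toy motion-energy histogram),
# then label it with the action of the nearest training example.
import math

def representation(frames):
    """Toy image representation: per-bin sums of absolute frame differences."""
    bins = [0.0] * 4
    for prev, curr in zip(frames, frames[1:]):
        for i, (p, c) in enumerate(zip(prev, curr)):
            bins[i % 4] += abs(c - p)
    total = sum(bins) or 1.0
    return [b / total for b in bins]  # normalise against recording differences

def classify(clip, labelled_clips):
    """Classification step: nearest neighbour in representation space."""
    query = representation(clip)
    def dist(example):
        return math.dist(query, representation(example[0]))
    return min(labelled_clips, key=dist)[1]

# Hypothetical training clips: lists of tiny 4-pixel frames with action labels.
train = [
    ([[0, 0, 0, 0], [9, 0, 0, 0], [0, 9, 0, 0]], "wave"),
    ([[0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]], "walk"),
]
test_clip = [[0, 0, 0, 0], [8, 0, 0, 0], [0, 8, 0, 0]]
print(classify(test_clip, train))  # → wave (same normalised motion pattern)
```

Keeping the representation and classification stages separate, as the survey does, means either can be swapped out independently.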
Computer Vision and Image Understanding | 2007
Ronald Walter Poppe
Markerless vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. The significant research effort in this domain has been motivated by the fact that many application areas, including surveillance, Human-Computer Interaction and automatic annotation, will benefit from a robust solution. In this paper, we discuss the characteristics of human motion analysis. We divide the analysis into a modeling and an estimation phase. Modeling is the construction of the likelihood function, estimation is concerned with finding the most likely pose given the likelihood surface. We discuss model-free approaches separately. This taxonomy allows us to highlight trends in the domain and to point out limitations of the current state of the art.
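The modeling/estimation split described above can be sketched in a few lines: modeling constructs a likelihood of an observation given a hypothesised pose, and estimation searches the pose space for the maximum of that surface. The Gaussian observation model, 1-D pose parameter, and grid search below are illustrative assumptions, not the paper's taxonomy entries.

```python
# Sketch of the modeling phase (likelihood construction) and the
# estimation phase (finding the most likely pose on that surface).
import math

def likelihood(observation, pose):
    """Modeling: Gaussian likelihood of the observed silhouette width
    around the width a toy forward model predicts for this pose."""
    predicted_width = 10 + 5 * math.sin(pose)  # toy forward model
    return math.exp(-0.5 * ((observation - predicted_width) / 2.0) ** 2)

def estimate(observation, num_samples=1000):
    """Estimation: grid search over the 1-D pose space for the maximum."""
    poses = [i * math.pi / num_samples for i in range(num_samples + 1)]
    return max(poses, key=lambda p: likelihood(observation, p))

best = estimate(observation=15.0)  # observed silhouette width of 15 px
print(round(best, 2))  # maximum lies where sin(pose) = 1, i.e. near pi/2
```

Real systems replace the grid search with particle filtering or gradient-based optimisation over a high-dimensional body model, but the division of labour is the same.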
International Joint Conference on Artificial Intelligence | 2007
Ronald Walter Poppe; Rutger Rienks; Betsy van Dijk
Current evaluation methods are inappropriate for emerging HCI applications. In this paper, we give three examples of these applications and show that traditional evaluation methods fail. We identify trends in HCI development and discuss the issues that arise with evaluation. We aim to raise awareness that evaluation, too, has to evolve in order to support the emerging trends in HCI systems.
AI & Society | 2007
Dennis Reidsma; Rieks op den Akker; Rutger Rienks; Ronald Walter Poppe; Anton Nijholt; Dirk Heylen; Job Zwiers
Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms, where various modalities such as speech, gaze, distance, gestures and facial expressions can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment to assess human observers’ accuracy for head orientation.
Intelligent Virtual Agents | 2010
Ronald Walter Poppe; Khiet Phuong Truong; Dennis Reidsma; Dirk Heylen
We evaluate multimodal rule-based strategies for backchannel (BC) generation in face-to-face conversations. Such strategies can be used by artificial listeners to determine when to produce a BC in dialogs with human speakers. In this research, we consider features from the speaker's speech and gaze. We used six rule-based strategies to determine the placement of BCs. The BCs were performed by an intelligent virtual agent using nods and vocalizations. In a user perception experiment, participants were shown video fragments of a human speaker together with an artificial listener who produced BC behavior according to one of the strategies. Participants were asked to rate how likely they thought the BC behavior had been performed by a human listener. We found that the number, timing and type of BC had a significant effect on how human-like the BC behavior was perceived.
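A rule-based BC placement strategy of the general kind evaluated above can be sketched as follows. The specific rule (a short pause after sufficient speech while the speaker gazes at the listener), the 100 ms frame rate, and the thresholds are illustrative assumptions; the paper's six strategies are not reproduced here.

```python
# Sketch of one rule-based backchannel strategy: trigger a nod or
# vocalization when the speaker pauses after enough speech while
# gazing at the listener.
def backchannel_opportunity(frames, min_speech=3, min_pause=2):
    """frames: list of (is_speaking, gazing_at_listener) per 100 ms frame.
    Returns frame indices where a BC could be produced."""
    opportunities = []
    speech_run = pause_run = 0
    for i, (speaking, gazing) in enumerate(frames):
        if speaking:
            speech_run += 1
            pause_run = 0
        else:
            pause_run += 1
            # Rule: enough preceding speech, a short pause, gaze at listener.
            if speech_run >= min_speech and pause_run == min_pause and gazing:
                opportunities.append(i)
                speech_run = 0
    return opportunities

frames = [(True, False)] * 4 + [(False, True)] * 3 + [(True, True)] * 2
print(backchannel_opportunity(frames))  # → [5]: mid-pause, with gaze
```

Varying which features the rule consults and how strictly it fires is exactly what produces different numbers, timings, and types of BCs to compare in a perception experiment.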
International Conference on Machine Learning | 2008
Boris Reuderink; Mannes Poel; Khiet Phuong Truong; Ronald Walter Poppe; Maja Pantic
Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed by fusing the results of separate audio and video classifiers on the decision level. This results in laughter detection with a significantly higher AUC-ROC than single-modality classification.
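Decision-level fusion as described above can be sketched as combining the per-segment posteriors of the two unimodal classifiers. The weighted-sum rule, the 0.6 audio weight, and the toy scores are illustrative assumptions; the paper's actual classifiers and fusion scheme are not reproduced.

```python
# Sketch of decision-level fusion: each modality's classifier emits a
# laughter probability per segment; the decisions are fused by a
# weighted sum of the scores and thresholded.
def fuse(audio_score, video_score, w_audio=0.6, threshold=0.5):
    """Fuse per-segment posteriors on the decision level."""
    combined = w_audio * audio_score + (1.0 - w_audio) * video_score
    return combined >= threshold  # True = segment labelled as laughter

# Hypothetical per-segment (audio, video) posteriors.
segments = [(0.9, 0.7), (0.2, 0.8), (0.1, 0.1)]
labels = [fuse(a, v) for a, v in segments]
print(labels)  # → [True, False, False]
```

Fusing at the decision level keeps the audio and video pipelines independent, so each can be trained and tuned on its own modality before their outputs are combined.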
Human Factors in Computing Systems | 2006
Dennis Reidsma; Anton Nijholt; Ronald Walter Poppe; Rutger Rienks; Hendri Hondorp
This paper presents a virtual rap dancer that is able to dance to the beat of music coming in from music recordings, beats obtained from music, voice or other input through a microphone, motion beats detected in the video stream of a human dancer, or motions detected from a dance mat. The rap dancer's moves are generated from a lexicon that was derived manually from the analysis of the video clips of rap songs performed by various rappers. The system allows for adaptation of the moves in the lexicon on the basis of style parameters. The rap dancer invites a user to dance along with the music.
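Beat-driven move selection from a lexicon, with a style parameter biasing the choice, might be sketched like this. The lexicon entries, the single "energy" style parameter, and the matching rule are illustrative assumptions, not the system's actual data or logic.

```python
# Sketch: schedule lexicon moves on detected beat times, preferring
# moves whose energy matches a style parameter; ties break randomly.
import random

LEXICON = [
    {"name": "arm_swing", "energy": 0.3},
    {"name": "bounce", "energy": 0.6},
    {"name": "jump_spin", "energy": 0.9},
]

def choreograph(beat_times, style_energy, seed=0):
    """Pick, for each beat time, the lexicon move closest in energy
    to the requested style setting."""
    rng = random.Random(seed)
    schedule = []
    for t in beat_times:
        best = min(LEXICON,
                   key=lambda m: (abs(m["energy"] - style_energy), rng.random()))
        schedule.append((t, best["name"]))
    return schedule

beats = [0.0, 0.5, 1.0]  # hypothetical detected beat times, in seconds
print(choreograph(beats, style_energy=0.8))  # high energy → "jump_spin" moves
```

Because the beat times come from whichever input modality detected them (recording, microphone, video, or dance mat), the selection step stays the same across all four input channels.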
IEEE Pervasive Computing | 2013
Alejandro Moreno; Robby van Delden; Ronald Walter Poppe; Dennis Reidsma
Interactive playgrounds are technology-enhanced installations that aim to provide rich game experiences for children by combining the benefits of traditional playgrounds with those of digital games. These game experiences could be attained by addressing three design considerations: context-awareness, adaptability, and personalization. The authors propose using social signal processing (SSP) to enhance current interactive playgrounds to meet these criteria. This article surveys how SSP techniques can help playgrounds automatically sense and interpret children's social interactions, adapt game mechanics to induce targeted social behavior, and learn from the sensed behavior to meet players' expectations and desires.
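One simple social-signal cue a playground could sense is sustained proximity between children as evidence of social interaction, which the game mechanics could then adapt to. The distance threshold, frame counts, and player tracks below are illustrative assumptions, not measures from the article.

```python
# Sketch of a proximity-based social cue: pairs of tracked players who
# stay within a distance threshold for enough frames are flagged as
# interacting, a signal the game logic could react to.
def interacting_pairs(tracks, max_dist=1.5, min_frames=3):
    """tracks: {player: [(x, y) per frame]}. Returns pairs that stay close."""
    pairs = []
    players = sorted(tracks)
    for i, a in enumerate(players):
        for b in players[i + 1:]:
            close = sum(
                1 for (ax, ay), (bx, by) in zip(tracks[a], tracks[b])
                if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= max_dist
            )
            if close >= min_frames:
                pairs.append((a, b))
    return pairs

tracks = {
    "p1": [(0, 0), (0.5, 0), (1, 0), (1.5, 0)],
    "p2": [(1, 0), (1.2, 0), (1.4, 0), (1.6, 0)],
    "p3": [(5, 5), (5, 5), (5, 5), (5, 5)],
}
print(interacting_pairs(tracks))  # → [('p1', 'p2')]: p3 plays alone
```

Richer SSP pipelines would add cues such as mutual orientation, synchrony, and vocal activity on top of proximity, but the sense-then-adapt loop has the same shape.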
Ambient Intelligence | 2009
Antinus Nijholt; Dennis Reidsma; Ronald Walter Poppe
In future ambient intelligence (AmI) environments we assume intelligence embedded in the environment and its objects (floors, furniture, mobile robots). These environments support their human inhabitants in their activities and interactions by perceiving them through sensors (proximity sensors, cameras, microphones). Health, recreation, sports, and games are among the needs of inhabitants. The environments can detect and interpret human activity, and can give multimedia feedback to invite, stimulate, guide, advise, and engage. The purpose of the activity can be improving physical and mental health (wellbeing) as well as improving capabilities related to a profession, recreation, or sports. Fun, just fun, to be achieved from interaction can be another aim of such environments and is the focus of this chapter. We present several examples that span the concept of entertainment in ambient intelligence environments, both within and beyond the (smart) home. In our survey, we identify some main dimensions of ambient entertainment. Next, we turn to the design of entertainment applications. We explain in depth which factors are important to consider when designing for entertainment rather than for work.
Tests and Proofs | 2010
Rutger Rienks; Ronald Walter Poppe; Dirk Heylen
An experiment was conducted to investigate whether human observers use knowledge of the differences in focus of attention in multiparty interaction to identify the speaker amongst the meeting participants. A virtual environment was used to ensure good stimulus control. Head orientations were displayed as the only cue for focus of attention. The orientations were derived from a corpus of tracked head movements. We present some properties of the relation between head orientations and speaker-listener status, as found in the corpus. With respect to the experiment, it appears that people use knowledge of the patterns in focus of attention to distinguish the speaker from the listeners. However, the human speaker identification results were rather low. Head orientations (or focus of attention) alone do not provide a sufficient cue for reliable identification of the speaker in a multiparty setting.
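The cue the experiment probes, that listeners tend to orient their heads toward the speaker, can be sketched as a simple voting scheme: the participant most others orient toward is a plausible speaker hypothesis. The seating geometry, tracked angles, and voting rule below are illustrative assumptions, not the experiment's stimuli or the corpus data.

```python
# Sketch: infer a likely speaker from head orientations alone, by letting
# each participant "vote" for whoever their head angle points at.
import math

def gaze_target(positions, observer, head_angle):
    """Return the participant whose direction best matches the head angle."""
    ox, oy = positions[observer]
    def angular_error(other):
        tx, ty = positions[other]
        direction = math.atan2(ty - oy, tx - ox)
        return abs(math.remainder(direction - head_angle, math.tau))
    others = [p for p in positions if p != observer]
    return min(others, key=angular_error)

def likely_speaker(positions, head_angles):
    """Vote: the participant most others orient toward."""
    votes = {}
    for person, angle in head_angles.items():
        target = gaze_target(positions, person, angle)
        votes[target] = votes.get(target, 0) + 1
    return max(votes, key=votes.get)

positions = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}
# B, C and D all orient roughly toward A (hypothetical tracked angles).
head_angles = {"B": math.pi, "C": -math.pi / 2, "D": math.atan2(-1, -1)}
print(likely_speaker(positions, head_angles))  # → A: gaze converges on A
```

The experiment's finding that human identification accuracy was rather low suggests that even this converging-gaze signal is too noisy on its own, which is why head orientation would be combined with other cues in practice.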