Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rutger Rienks is active.

Publication


Featured research published by Rutger Rienks.


International Conference on Multimodal Interfaces | 2006

Detection and application of influence rankings in small group meetings

Rutger Rienks; Dong Zhang; Daniel Gatica-Perez; Wilfried Post

We address the problem of automatically detecting participants' influence levels in meetings; the more influential a participant is, the more he or she influences the outcome of a meeting. The impact and social psychological background are discussed. Experiments on 40 meetings show that applying statistical models (both dynamic and static) to easily obtainable features yields a best prediction performance of 70.59% when using a static model, a balanced training set, and three discrete classes: high, normal, and low. Applications of the detected levels are shown in various ways, e.g. in a virtual meeting environment as well as in a meeting browser system.
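The "static model over simply obtainable features" setup described above can be sketched with a toy nearest-centroid classifier. The feature set and data below are illustrative assumptions, not the authors' actual features or corpus; only the three-class framing (high, normal, low) comes from the abstract.

```python
# Minimal sketch of a static influence classifier: per-participant
# meeting statistics are mapped to one of three discrete influence
# classes. Features (speaking time in seconds, number of turns) and
# the training data are hypothetical.
TRAIN = [
    ((620.0, 41), "high"),
    ((540.0, 35), "high"),
    ((300.0, 22), "normal"),
    ((280.0, 18), "normal"),
    ((90.0, 7), "low"),
    ((120.0, 9), "low"),
]

def centroids(examples):
    """Average the feature vectors per class (a nearest-centroid model)."""
    sums, counts = {}, {}
    for feats, label in examples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify(feats, model):
    """Assign the class whose centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda label: dist(feats, model[label]))

model = centroids(TRAIN)
print(classify((580.0, 38), model))  # a talkative participant
```

A balanced training set, as used in the paper, simply means each of the three classes contributes equally many examples, as in the toy data above.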


The Visual Computer | 2006

Online and off-line visualization of meeting information and meeting support

Antinus Nijholt; Rutger Rienks; Jakob Zwiers; Dennis Reidsma

In current meeting research we see modest attempts to visualize the information that has been obtained by capturing and, probably more importantly, by interpreting the activities that take place during a meeting. The meetings being considered take place in smart meeting rooms. Cameras, microphones and other sensors capture meeting activities. Captured information can be stored and retrieved. Captured information can also be manipulated and in turn displayed on different media. We survey our research in this area, look at issues that deal with turn-taking and gaze behavior of meeting participants, influence and talkativeness, and virtual embodied representations of meeting participants. We stress that this information is interesting not only for real-time meeting support, but also for remote participants and off-line consultation of meeting information.


Spoken Language Technology Workshop | 2006

Dialogue-act tagging using smart feature selection: results on multiple corpora

Daan Verbree; Rutger Rienks; Dirk Heylen

This paper presents an overview of our ongoing work on dialogue-act classification. Results are presented on the ICSI corpus, the Switchboard corpus, and a selection of the AMI corpus, setting a baseline for forthcoming research. For these corpora the best accuracy scores obtained are 89.27%, 65.68%, and 59.76%, respectively. We introduce a smart compression technique for feature selection and compare the performance on a subset of the AMI transcriptions with AMI-ASR output for the same subset.
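The abstract does not spell out the compression technique, so as an illustrative assumption only, here is one common baseline form of feature selection for dialogue-act tagging: keeping just the most frequent word bigrams as the feature vocabulary. All function names and the toy corpus are hypothetical.

```python
# Frequency-based n-gram feature selection sketch for dialogue-act
# tagging. This is NOT the paper's "smart compression" method; it only
# illustrates the general idea of shrinking a large n-gram feature
# space before classification.
from collections import Counter

def bigrams(utterance):
    """Word bigrams of a whitespace-tokenized, lowercased utterance."""
    words = utterance.lower().split()
    return list(zip(words, words[1:]))

def select_features(utterances, k):
    """Keep only the k most frequent bigrams as the feature vocabulary."""
    counts = Counter(bg for u in utterances for bg in bigrams(u))
    return [bg for bg, _ in counts.most_common(k)]

def featurize(utterance, vocab):
    """Binary feature vector over the selected bigram vocabulary."""
    present = set(bigrams(utterance))
    return [1 if bg in present else 0 for bg in vocab]

corpus = [
    "do you agree with that",
    "yes I do agree",
    "do you think so",
    "I do not think so",
]
vocab = select_features(corpus, k=3)
print(featurize("do you agree", vocab))
```

The resulting fixed-length binary vectors could then feed any standard classifier trained on dialogue-act labels.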


International Joint Conference on Artificial Intelligence | 2007

Evaluating the future of HCI: challenges for the evaluation of emerging applications

Ronald Walter Poppe; Rutger Rienks; Betsy van Dijk

Current evaluation methods are inappropriate for emerging HCI applications. In this paper, we give three examples of such applications and show where traditional evaluation methods fail. We identify trends in HCI development and discuss the issues that arise in evaluation. We aim to raise awareness that evaluation, too, has to evolve in order to support the emerging trends in HCI systems.


AI & Society | 2007

Virtual meeting rooms: from observation to simulation

Dennis Reidsma; Rieks op den Akker; Rutger Rienks; Ronald Walter Poppe; Anton Nijholt; Dirk Heylen; Job Zwiers

Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms, where various modalities such as speech, gaze, distance, gestures and facial expressions can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment to assess human observers’ accuracy for head orientation.


AI & Society | 2008

Pro-active meeting assistants: attention please!

Rutger Rienks; Anton Nijholt; Paulo Barthelmess

This paper gives an overview of pro-active meeting assistants, what they are and when they can be useful. We explain how to develop such assistants with respect to requirement definitions and elaborate on a set of Wizard of Oz experiments, aiming to find out in which form a meeting assistant should operate to be accepted by participants, and whether meeting effectiveness and efficiency can be improved by an assistant at all.


Human Factors in Computing Systems | 2006

Virtual rap dancer: invitation to dance

Dennis Reidsma; Anton Nijholt; Ronald Walter Poppe; Rutger Rienks; Hendri Hondorp

This paper presents a virtual rap dancer that is able to dance to the beat of music coming from music recordings, beats obtained from music, voice or other input through a microphone, motion beats detected in the video stream of a human dancer, or motions detected from a dance mat. The rap dancer's moves are generated from a lexicon that was derived manually from analysis of video clips of rap songs performed by various rappers. The system allows the moves in the lexicon to be adapted on the basis of style parameters. The rap dancer invites a user to dance along with the music.


Tests and Proofs | 2010

Differences in head orientation behavior for speakers and listeners: An experiment in a virtual environment

Rutger Rienks; Ronald Walter Poppe; Dirk Heylen

An experiment was conducted to investigate whether human observers use knowledge of the differences in focus of attention in multiparty interaction to identify the speaker amongst the meeting participants. A virtual environment was used to allow good stimulus control. Head orientations were displayed as the only cue for focus of attention. The orientations were derived from a corpus of tracked head movements. We present some properties of the relation between head orientations and speaker-listener status, as found in the corpus. With respect to the experiment, it appears that people use knowledge of the patterns in focus of attention to distinguish the speaker from the listeners. However, the human speaker identification results were rather low. Head orientations (or focus of attention) alone do not provide a sufficient cue for reliable identification of the speaker in a multiparty setting.


Perception | 2007

Accuracy of Head Orientation Perception in Triadic Situations: Experiment in a Virtual Environment

Ronald Walter Poppe; Rutger Rienks; Dirk Heylen

Research has revealed high accuracy in the perception of gaze in dyadic (sender–receiver) situations. Triadic situations differ from these in that an observer has to report where a sender is looking, not relative to himself. This is more difficult owing to the less favourable position of the observer. The effect of the position of the observer on the accuracy of identifying the sender's looking direction is relatively unexplored. Here, we investigate this, focusing exclusively on head orientation. We used a virtual environment to ensure good stimulus control. We found a mean angular error close to 5°. A higher observer viewpoint results in more accurate identification. Similarly, a viewpoint with a smaller angle to the sender's midsagittal plane leads to an improvement in identification performance. Also, we found an underestimation effect in the horizontal direction, similar to findings for dyadic situations.


International Conference on Machine Learning | 2006

Audio-Visual processing in meetings: seven questions and current AMI answers

Marc Al-Hames; Thomas Hain; Jan Cernocky; Sascha Schreiber; Mannes Poel; Ronald Müller; Sébastien Marcel; David A. van Leeuwen; Jean-Marc Odobez; Sileye Ba; Hervé Bourlard; Fabien Cardinaux; Daniel Gatica-Perez; Adam Janin; Petr Motlicek; Stephan Reiter; Steve Renals; Jeroen van Rest; Rutger Rienks; Gerhard Rigoll; Kevin Smith; Andrew Thean; Pavel Zemcik

The project Augmented Multi-party Interaction (AMI) is concerned with the development of meeting browsers and remote meeting assistants for instrumented meeting rooms, and with the required component technologies. Its R&D themes are group dynamics; audio, visual, and multimodal processing; content abstraction; and human-computer interaction. The audio-visual processing work package within AMI addresses automatic recognition from audio, video, and combined audio-video streams recorded during meetings. In this article we describe the progress made in the first two years of the project. We show how the large problem of audio-visual processing in meetings can be split into seven questions, such as "Who is acting during the meeting?". We then show which algorithms and methods have been developed and evaluated to automatically answer these questions.

Collaboration


Dive into Rutger Rienks's collaborations.

Top Co-Authors

Dirk Heylen

Information Technology University

Steve Renals

University of Edinburgh

Thomas Hain

University of Sheffield

Jan Cernocky

Brno University of Technology