Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chao Qu is active.

Publication


Featured research published by Chao Qu.


Presence: Teleoperators & Virtual Environments | 2012

Effects of stereoscopic viewing on presence, anxiety, and cybersickness in a virtual reality environment for public speaking

Yun Ling; Willem-Paul Brinkman; Harold T. Nefs; Chao Qu; Ingrid Heynderickx

In this study, we addressed the effect of stereoscopy on presence, anxiety, and cybersickness in a virtual public speaking world, and investigated the relationships between these three variables. Our results question the practical relevance of applying stereoscopy in head-mounted displays (HMDs) for virtual reality exposure therapy (VRET) in a virtual public speaking world. In VRET, feelings of presence improve treatment efficacy (B. K. Wiederhold & M. D. Wiederhold, 2005). There are reports of a relatively large group of dropouts during VRET among patients experiencing low levels of presence in the virtual environment (Krijn, Emmelkamp, Olafsson, & Biemond, 2004). Therefore, generating an adequate level of presence is essential for the success of VRET. In this study, 86 participants were recruited and immersed in the virtual public speaking world twice: once with stereoscopic rendering and once without. The results showed that spatial presence was significantly improved by adding stereoscopy, but no difference was found for reported involvement or realism. The heart rate measurements also showed no difference between stereoscopic and nonstereoscopic viewing. Participants reported similar feelings of anxiety about their talk and similar levels of cybersickness in both viewing modes. Even though spatial presence was significantly improved, the size of the statistical effect was relatively small. Our results therefore suggest that stereoscopic rendering may not be of practical importance for VRET in public speaking settings.


Computers in Human Behavior | 2014

Conversations with a virtual human: Synthetic emotions and human responses

Chao Qu; Willem-Paul Brinkman; Yun Ling; Pascal Wiggers; Ingrid Heynderickx

To test whether synthetic emotions expressed by a virtual human elicit positive or negative emotions in a human conversation partner and affect satisfaction towards the conversation, an experiment was conducted where the emotions of a virtual human were manipulated during both the listening and speaking phase of the dialogue. Twenty-four participants were recruited and asked to have a real conversation with the virtual human on six different topics. For each topic, the virtual human’s emotions in the listening and speaking phase were different, including positive, neutral and negative emotions. The results support our hypotheses that (1) negative compared to positive synthetic emotions expressed by a virtual human can elicit a more negative emotional state in a human conversation partner, (2) synthetic emotions expressed in the speaking phase have more impact on a human conversation partner than emotions expressed in the listening phase, (3) humans with less speaking confidence also experience a conversation with a virtual human as less positive, and (4) random positive or negative emotions of a virtual human have a negative effect on the satisfaction with the conversation. These findings have practical implications for the treatment of social anxiety as they allow therapists to control the anxiety-evoking stimuli, i.e., the expressed emotion of a virtual human in a virtual reality exposure environment of a simulated conversation. In addition, these findings may be useful to other virtual applications that include conversations with a virtual human.


Presence: Teleoperators & Virtual Environments | 2013

The effect of priming pictures and videos on a question-answer dialog scenario in a virtual environment

Chao Qu; Willem-Paul Brinkman; Pascal Wiggers; Ingrid Heynderickx

Having a free-speech conversation with avatars in a virtual environment can be desirable in virtual reality applications, such as virtual therapy and serious games. However, recognizing and processing free speech seems too ambitious to realize with current technology. As an alternative, pre-scripted conversations with keyword detection can handle a number of goal-oriented situations, as well as some scenarios in which the conversation content is of secondary importance. This is, for example, the case in virtual exposure therapy for the treatment of people with social phobia, where conversation serves exposure and anxiety arousal only. A drawback of pre-scripted dialog is the limited scope of the users' answers. The system cannot handle a user's response that does not match the pre-defined content, other than by providing a default reply. A new method, which uses priming material to constrain the range of the user's responses, is proposed in this paper to solve this problem. Two studies were conducted to investigate whether people can be guided to mention specific keywords with priming videos and/or pictures. Study 1 was a two-by-two experiment in which participants (n = 20) were asked to answer a number of open questions. Prior to the session, participants watched priming videos or unrelated videos. During the session, they could see priming pictures or unrelated pictures on a whiteboard behind the person who asked the questions. The results showed that participants tended to mention more keywords with both priming videos and priming pictures. Study 2 shared the same experimental setting but was carried out in virtual reality instead of in the real world. Participants (n = 20) were asked to answer questions of an avatar while they were exposed to priming material before and/or during the conversation session. The same results were found: the surrounding media content had a guidance effect. Furthermore, when priming pictures appeared in the environment, participants sometimes omitted content they would typically mention.


PLOS ONE | 2015

Virtual Bystanders in a Language Lesson: Examining the Effect of Social Evaluation, Vicarious Experience, Cognitive Consistency and Praising on Students' Beliefs, Self-Efficacy and Anxiety in a Virtual Reality Environment

Chao Qu; Yun Ling; Ingrid Heynderickx; Willem-Paul Brinkman

Bystanders in a real-world social setting have the ability to influence people’s beliefs and behavior. This study examines whether this effect can be recreated in a virtual environment, by exposing people to virtual bystanders in a classroom setting. Participants (n = 26) first witnessed virtual students answering questions from an English teacher, after which they were also asked to answer questions from the teacher as part of a simulated training for spoken English. During the experiment, the attitudes of the other virtual students in the classroom were manipulated; they could whisper either positive or negative remarks to each other when a virtual student was talking or when a participant was talking. The results show that the expressed attitude of virtual bystanders towards the participants affected their self-efficacy and their avoidance behavior. Furthermore, the experience of witnessing bystanders commenting negatively on the performance of other students raised the participants’ heart rate when it was their turn to speak. Two-way interaction effects were also found on self-reported anxiety and self-efficacy. After witnessing bystanders’ positive attitude towards peer students, participants’ self-efficacy when answering questions received a boost when bystanders were also positive towards them, and a blow when bystanders reversed their attitude by being negative towards them. Still, inconsistency, rather than consistency, between the bystanders’ attitudes towards virtual peers and the participants was not found to result in a larger change in the participants’ beliefs. Finally, the results also reveal that virtual flattering or destructive criticism affected the participants’ beliefs not only about the virtual bystanders, but also about the neutral teacher. Together, these findings show that virtual bystanders in a classroom can affect people’s beliefs, anxiety and behavior.


Virtual Reality | 2013

Human perception of a conversational virtual human: an empirical study on the effect of emotion and culture

Chao Qu; Willem-Paul Brinkman; Yun Ling; Pascal Wiggers; Ingrid Heynderickx

Virtual reality applications with virtual humans, such as virtual reality exposure therapy, health coaches and negotiation simulators, are developed for different contexts and usually for users from different countries. The emphasis on a virtual human’s emotional expression depends on the application; some virtual reality applications need an emotional expression of the virtual human during the speaking phase, some during the listening phase and some during both speaking and listening phases. Although studies have investigated how humans perceive a virtual human’s emotion during each phase separately, few studies carried out a parallel comparison between the two phases. This study aims to fill this gap and, in addition, includes an investigation of the cultural interpretation of the virtual human’s emotion, especially with respect to the emotion’s valence. The experiment was conducted with both Chinese and non-Chinese participants. These participants were asked to rate the valence of seven different emotional expressions (ranging from negative to neutral to positive during speaking and listening) of a Chinese virtual lady. The results showed that there was a high correlation in valence rating between both groups of participants, which indicated that the valence of the emotional expressions was as easily recognized by people from a cultural background different from that of the virtual human. In addition, participants tended to perceive the virtual human’s expressed valence as more intense in the speaking phase than in the listening phase. The additional vocal emotional expression in the speaking phase is put forward as a likely cause for this phenomenon.


PLOS ONE | 2013

The Effect of Perspective on Presence and Space Perception

Yun Ling; Harold T. Nefs; Willem-Paul Brinkman; Chao Qu; Ingrid Heynderickx

In this paper we report two experiments in which the effect of perspective projection on presence and space perception was investigated. In Experiment 1, participants were asked to score a presence questionnaire when looking at a virtual classroom. We manipulated the vantage point, the viewing mode (binocular versus monocular viewing), the display device/screen size (projector versus TV) and the center of projection. At the end of each session of Experiment 1, participants were asked to set their preferred center of projection such that the image seemed most natural to them. In Experiment 2, participants were asked to draw a floor plan of the virtual classroom. The results show that field of view, viewing mode, the center of projection and display all significantly affect presence and the perceived layout of the virtual environment. We found a significant linear relationship between presence and the perceived layout of the virtual classroom, and between the preferred center of projection and the perceived layout. The results indicate that the way in which virtual worlds are presented is critical for the level of experienced presence. The results also suggest that people ignore veridicality and experience a higher level of presence while viewing elongated virtual environments compared to viewing the originally intended shape.


European Conference on Cognitive Ergonomics | 2010

The role of display technology and individual differences on presence

Yun Ling; Harold T. Nefs; Willem-Paul Brinkman; Ingrid Heynderickx; Chao Qu

Originality/Value -- Having a better understanding of the relation between human factors and feelings of presence may facilitate the selection of people who are most likely to benefit from virtual reality applications such as virtual reality exposure therapy (e.g., Krijn et al., 2004). A better understanding of how presence can be optimized on different displays may also open the possibility of using less complex display types (as compared to HMDs or CAVEs) to create virtual reality consumer applications. It also opens the possibility of tailoring the virtual reality display to the individual, optimizing presence. Research approach -- First, we investigate the relationships between perceived presence and several human factors, including stereoscopic ability, depth impression, and personality. We describe this experiment here in some detail. Second, we focus on the potential maximum presence that can be obtained for specific devices, for example, by manipulating the size, perspective and viewing distance. Third, we will investigate how monocular depth cues can be used to maximize presence for different display types. Finally, we will look specifically at how presence can be maximized on small hand-held devices, for example by incorporating compensation for display movement. In all our experiments we focus on public speaking and person-to-avatar communication. Presence is measured in three different ways: (1) through questionnaires, (2) behaviourally, and (3) physiologically. Motivation -- Several factors, such as the kind of display technology and the level of user interaction, have been found to affect presence (e.g., IJsselsteijn et al., 2000). Generally, it has been concluded that more immersive types of display result in higher levels of presence. However, studies comparing the effect of display technology on presence are mostly based on rendering the same content across different displays. Previous studies have typically not attempted to optimize the content for each display type individually. Furthermore, it has not been considered before that some viewers may not benefit as much as others from higher levels of technology.


European Conference on Cognitive Ergonomics | 2010

Visual priming to improve keyword detection in free speech dialogue

Chao Qu; Willem-Paul Brinkman; Pascal Wiggers; Ingrid Heynderickx

Motivation -- Talking out loud with synthetic characters in a virtual world is currently being considered as a treatment for social phobic patients. The use of keyword detection, instead of full speech recognition, will make the system more robust. It is therefore important to increase the chance that users use specific keywords during their conversation. Research approach -- A two-by-two experiment in which participants (n = 20) were asked to answer a number of open questions. Prior to the session, participants watched priming videos or unrelated videos. Furthermore, during the session they could see priming pictures or unrelated pictures on a whiteboard behind the person who asked the questions. Findings/Design -- Initial results suggest that participants more often mention specific keywords in their answers when they see priming pictures or videos instead of unrelated pictures or videos. Research limitations/Implications -- If visual priming in the background can increase the chance that people use specific keywords in their discussion with a dialogue partner, it might be possible to create dialogues in a virtual environment which users perceive as natural. Take away message -- Visual priming might be able to steer people's answers in a dialogue.


Computers in Human Behavior | 2013

The relationship between individual characteristics and experienced presence

Yun Ling; Harold T. Nefs; Willem-Paul Brinkman; Chao Qu; Ingrid Heynderickx


Archive | 2010

Developing a Dialogue Editor to Script Interaction between Virtual Characters and Social Phobic Patient

Niels ter Heijden; Chao Qu; Pascal Wiggers; Willem-Paul Brinkman

Collaboration


Dive into Chao Qu's collaborations.

Top Co-Authors

Willem-Paul Brinkman
Delft University of Technology

Ingrid Heynderickx
Eindhoven University of Technology

Yun Ling
Delft University of Technology

Harold T. Nefs
Delft University of Technology

Pascal Wiggers
Delft University of Technology

Joost Broekens
Delft University of Technology