Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Qiufeng Lin is active.

Publication


Featured research published by Qiufeng Lin.


Applied Perception in Graphics and Visualization | 2010

A system for exploring large virtual environments that combines scaled translational gain and interventions

Xianshi Xie; Qiufeng Lin; Haojie Wu; Gayathri Narasimham; Timothy P. McNamara; John J. Rieser; Bobby Bodenheimer

This paper evaluates the combination of two methods for adapting bipedal locomotion to explore virtual environments displayed on head-mounted displays (HMDs) within the confines of limited tracking spaces. We combine a method of changing the optic flow of locomotion, effectively scaling the translational gain, with a method of intervening and manipulating a user's location in physical space while preserving their spatial awareness of the virtual space. This latter technique is called resetting. In two experiments, we evaluate both scaling the translational gain and resetting while a subject locomotes along a path and then turns to face a remembered object. We find that the two techniques can be effectively combined, although there is a cognitive cost to resetting.
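As context for the technique, scaled translational gain amounts to multiplying tracked displacement by a constant before applying it to the virtual viewpoint, with a reset triggered near the boundary of the tracked space. A minimal Python sketch; the gain value, radius, and function names are illustrative, not taken from the paper's system.

    # Illustrative sketch of scaled translational gain with a reset trigger;
    # names and constants are hypothetical, not from the paper's system.
    import numpy as np

    TRANSLATION_GAIN = 2.0   # virtual meters traveled per physical meter
    TRACKING_RADIUS = 4.0    # usable radius of the tracked space, in meters

    def update_virtual_position(virtual_pos, physical_delta):
        # Scale each tracked step so the user covers more virtual ground.
        return np.asarray(virtual_pos) + TRANSLATION_GAIN * np.asarray(physical_delta)

    def needs_reset(physical_pos):
        # Trigger an intervention ("reset") near the tracking boundary so the
        # user can be reoriented in physical space.
        return float(np.linalg.norm(np.asarray(physical_pos)[:2])) > 0.9 * TRACKING_RADIUS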


Applied Perception in Graphics and Visualization | 2011

Egocentric distance perception in real and HMD-based virtual environments: the effect of limited scanning method

Qiufeng Lin; Xianshi Xie; Aysu Erdemir; Gayathri Narasimham; Timothy P. McNamara; John J. Rieser; Bobby Bodenheimer

We conducted four experiments on egocentric depth perception using blind walking with a restricted scanning method in both a real and a virtual environment. The viewing condition in all experiments was monocular. We varied the field of view (real), scan direction (real), blind walking method (real and virtual), and self-representation (virtual) over distances of 4 to 7 meters. The field of view varied between 21.1° and 13.6°. The scan direction varied between near-to-far and far-to-near scanning. The blind walking method varied between direct blind walking and an indirect method of blind walking that matched the geometry of our laboratory. Self-representation varied among a self-avatar (a fully tracked, animated, first-person representation of the user), a static avatar (a mannequin avatar that did not move), and no avatar (a disembodied camera view of the virtual environment). In the real environment, we find an effect of field of view: participants performed more accurately with the larger field of view. In both real and virtual environments, we find an effect of blind walking method: participants performed more accurately with direct blind walking. We do not find distance underestimation in any environment, nor do we find an effect of self-representation.
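For readers unfamiliar with the paradigm, blind-walking accuracy is commonly summarized as the ratio of walked distance to target distance, with values below 1.0 indicating underestimation; the abstract does not specify the exact analysis, and the trial values below are placeholders.

    # Common summary measure for blind walking; these trial values are
    # made-up placeholders, not data from the experiments above.
    trials = [(3.8, 4.0), (5.1, 5.0), (6.6, 7.0)]   # (walked, target) in meters
    ratios = [walked / target for walked, target in trials]
    mean_ratio = sum(ratios) / len(ratios)          # 1.0 means accurate walking
    print(f"mean walked/target ratio: {mean_ratio:.2f}")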


ACM Symposium on Applied Perception | 2013

Stepping off a ledge in an HMD-based immersive virtual environment

Qiufeng Lin; John J. Rieser; Bobby Bodenheimer

We explore whether a gender-matched, calibrated self-avatar affects the perception of the affordance of stepping off of a ledge, or visual cliff, in an immersive virtual environment. Visual cliffs are common demonstrations in immersive virtual environments because they are compelling, and understanding how self-avatars contribute to making such environments compelling is an important problem. We conducted an experiment to find the threshold height at which subjects standing on a ledge in an immersive virtual environment would report that they could step gracefully off of it without losing their balance, and compared how that threshold changed with and without a self-avatar. The results show that without a self-avatar, people unrealistically report being able to step off a ledge approximately 50% of their eye height, whereas with a self-avatar the threshold is a more realistic 25% of their eye height.
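The abstract does not describe the psychophysical procedure used to locate the threshold; one standard option is an adaptive staircase that averages the ledge heights at response reversals. A hypothetical Python sketch under that assumption:

    # Hypothetical one-up/one-down staircase; the paper's actual procedure
    # may differ.
    def staircase_threshold(respond, start_m=0.20, step_m=0.05, n_reversals=6):
        height, direction, reversals = start_m, +1, []
        while len(reversals) < n_reversals:
            new_dir = +1 if respond(height) else -1   # raise the ledge after "yes"
            if new_dir != direction:
                reversals.append(height)              # record the reversal height
            direction = new_dir
            height = max(0.0, height + direction * step_m)
        return sum(reversals) / len(reversals)        # mean reversal height

    # Example: a simulated observer who reports "yes" below 0.40 m.
    print(staircase_threshold(lambda h: h < 0.40))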


ACM Symposium on Applied Perception | 2012

Stepping over and ducking under: the influence of an avatar on locomotion in an HMD-based immersive virtual environment

Qiufeng Lin; John J. Rieser; Bobby Bodenheimer

The purpose of this study was to learn whether self-avatars influence people's perception and action in virtual environments. People viewed two situations in a virtual environment through a head-mounted display and were asked to decide how they would act. In one situation, the task was to imagine walking across a room divided by a horizontal bar; the bar's height was varied, and people said whether they would need to step over the bar or duck under it. In the other situation, the task was to imagine walking through a doorway; the doorway's height was varied, and people said whether they could walk straight through or would need to duck to pass through. Half of the participants viewed the situations with a self-avatar and the others viewed them without one. The avatar's height was varied so that it either equaled the participant's height or was 15% taller. The results showed statistically significant effects of the avatar in the horizontal-bar situation: step-over or duck-under decision points were about 12% higher when the self-avatar was rendered taller. In the doorway situation, the effect of the avatar was statistically non-significant. The step-over/duck-under task is a promising method for studying perception and action in virtual environments.
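A back-of-envelope way to read the result: if judgments tracked the avatar's height exactly, a 15% taller avatar would shift decision points by 15%; the observed shift was about 12%. The baseline height below is a hypothetical example, not a value from the paper.

    # Back-of-envelope comparison; the 1.00 m baseline is hypothetical.
    baseline_m = 1.00                  # decision height with an own-size avatar
    predicted_m = baseline_m * 1.15    # if judgments scaled fully with avatar height
    observed_m = baseline_m * 1.12     # shift reported in the abstract above
    print(predicted_m, observed_m)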


ACM Transactions on Applied Perception | 2015

Affordance Judgments in HMD-Based Virtual Environments: Stepping over a Pole and Stepping off a Ledge

Qiufeng Lin; John J. Rieser; Bobby Bodenheimer

People judge what they can and cannot do all the time when acting in the physical world. Can I step over that fence, or do I need to duck under it? Can I step off of that ledge, or do I need to climb off of it? These perceived qualities of the environment that allow people to act are called affordances. This article compares people's judgments of affordances on two tasks in both the real world and in virtual environments presented through head-mounted displays. The two tasks were stepping over or ducking under a pole, and stepping straight off of a ledge. Comparisons between the real world and virtual environments are important for two reasons: they allow us to evaluate the fidelity of virtual environments, and virtual environment technologies enable precise control of the myriad perceptual cues at work in the physical world, deepening our understanding of how people use vision to decide how to act. In the experiments presented here, the presence or absence of a self-avatar, an animated graphical representation of a person embedded in the virtual environment, was a central factor. Another important factor was the presence or absence of action, that is, whether people performed the task or merely reported whether they could perform it. The results show that animated self-avatars provide critical information for people deciding what they can and cannot do in virtual environments, and that action plays a significant role in people's affordance judgments.


Proceedings of SPIE | 2013

Immersive Virtual Reality for Visualization of Abdominal CT

Qiufeng Lin; Zhoubing Xu; Bo Li; Rebeccah B. Baucom; Benjamin K. Poulose; Bennett A. Landman; Robert E. Bodenheimer

Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography (CT) imaging data. Nearly half a million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and its potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than by a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allow users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.
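The "true scale" property comes down to simple geometry: a CT volume's physical extent is its voxel count times its voxel spacing, and the renderer displays that extent at 1:1. A sketch with typical abdominal CT parameters (assumed values, not the study's data):

    # Physical extent of a CT volume for 1:1 rendering; dimensions and
    # spacing are typical values, not taken from the study.
    import numpy as np

    voxels = np.array([512, 512, 160])        # volume size in voxels (x, y, z)
    spacing_mm = np.array([0.78, 0.78, 3.0])  # voxel spacing in millimeters
    extent_m = voxels * spacing_mm / 1000.0   # physical extent in meters
    print(extent_m)                           # roughly [0.40, 0.40, 0.48]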


Human-Robot Interaction | 2012

Immersion with robots in large virtual environments

Xianshi Xie; Qiufeng Lin; Haojie Wu; Julie A. Adams; Bobby Bodenheimer

This paper presents a mixed reality system for combining real robots, humans, and virtual robots. The system tracks and controls physical robots in local physical space, and inserts them into a virtual environment (VE). The system allows a human to locomote in a VE larger than the physically tracked space of the laboratory through a form of redirected walking. An evaluation assessed the conditions under which subjects found the system to be the most immersive.
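The abstract names only "a form of redirected walking" without specifying the variant. One common family applies a gain to the user's physical rotations so that the walkable virtual area exceeds the tracked space; a hypothetical sketch:

    # Rotation-gain redirected walking, shown purely as an illustration;
    # the paper's system may use a different variant.
    def redirected_heading(virtual_heading_rad, physical_turn_rad, gain=1.3):
        # Amplify each physical turn so users follow curved physical paths
        # while perceiving straight virtual ones.
        return virtual_heading_rad + gain * physical_turn_rad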


Applied Perception in Graphics and Visualization | 2011

Egocentric distance perception in HMD-based virtual environments

Qiufeng Lin; Xianshi Xie; Aysu Erdemir; Gayathri Narasimham; Timothy P. McNamara; John J. Rieser; Bobby Bodenheimer

We conducted a follow-up experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011], except that the viewing condition was binocular instead of monocular. In that work there was no distance underestimation, despite underestimation being widely reported elsewhere, and this experiment was motivated by the question of whether stereoscopic effects in head-mounted displays (HMDs) account for that finding.


Collaboration


Dive into Qiufeng Lin's collaborations.

Top Co-Authors

Benjamin K. Poulose (Vanderbilt University Medical Center)

Bo Li (University of California)

Rebeccah B. Baucom (Vanderbilt University Medical Center)