Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Karen B. Chen is active.

Publication


Featured research published by Karen B. Chen.


Human Factors | 2012

Effect of Touch Screen Button Size and Spacing on Touch Characteristics of Users With and Without Disabilities

Mary E. Sesto; Curtis B. Irwin; Karen B. Chen; Amrish O. Chourasia; Douglas A. Wiegmann

Objective: The aim of this study was to investigate the effect of button size and spacing on touch characteristics (forces, impulses, and dwell times) during a digit entry touch screen task. A secondary objective was to investigate the effect of disability on touch characteristics. Background: Touch screens are common in public settings and workplaces. Although research has examined the effect of button size and spacing on performance, the effect on touch characteristics is unknown. Method: A total of 52 participants (n = 23, fine motor control disability; n = 14, gross motor control disability; n = 15, no disability) completed a digit entry task. Button sizes varied from 10 mm to 30 mm, and button spacing was 1 mm or 3 mm. Results: Touch characteristics were significantly affected by button size. The exerted peak forces increased 17% between the largest and the smallest buttons, whereas impulses decreased 28%. Compared with the fine motor and nondisabled groups, the gross motor group had greater impulses (98% and 167%, respectively) and dwell times (60% and 129%, respectively). Peak forces were similar for all groups. Conclusion: Button size but not spacing influenced touch characteristics during a digit entry task. The gross motor group had significantly greater dwell times and impulses than did the fine motor and nondisabled groups. Application: Research on touch characteristics, in conjunction with that on user performance, can be used to guide human computer interface design strategies to improve accessibility of touch screen interfaces. Further research is needed to evaluate the effect of the exerted peak forces and impulses on user performance and fatigue.
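
As a worked illustration of the three touch characteristics this abstract measures, the sketch below computes peak force, impulse, and dwell time from a force-time trace of a single button press. The sampling rate and contact threshold are hypothetical choices, and this is not the authors' analysis code.

```python
import numpy as np

def touch_characteristics(force_n, fs_hz=1000.0, contact_threshold_n=0.1):
    """Peak force (N), impulse (N*s), and dwell time (s) for one button press.

    force_n: 1-D array of force samples in newtons for a single touch
    fs_hz: force sensor sampling rate (hypothetical value)
    contact_threshold_n: force level treated as finger-screen contact
    """
    force_n = np.asarray(force_n, dtype=float)
    in_contact = force_n > contact_threshold_n
    if not in_contact.any():
        return {"peak_force_n": 0.0, "impulse_ns": 0.0, "dwell_time_s": 0.0}
    dt = 1.0 / fs_hz
    contact = force_n[in_contact]
    return {
        "peak_force_n": float(contact.max()),          # largest exerted force
        "impulse_ns": float(contact.sum() * dt),       # force integrated over contact
        "dwell_time_s": float(in_contact.sum() * dt),  # time the finger stays down
    }
```

Under these definitions, higher peak force with lower impulse on the smallest buttons implies shorter, sharper presses, which is consistent with the dwell-time differences the study reports.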


Human Factors | 2013

Effect of sitting or standing on touch screen performance and touch characteristics.

Amrish O. Chourasia; Douglas A. Wiegmann; Karen B. Chen; Curtis B. Irwin; Mary E. Sesto

Objective: The aim of this study was to evaluate the effect of sitting and standing on performance and touch characteristics during a digit entry touch screen task in individuals with and without motor-control disabilities. Background: Previously, researchers of touch screen design have not considered the effect of posture (sitting vs. standing) on touch screen performance (accuracy and timing) and touch characteristics (force and impulse). Method: Participants with motor-control disabilities (n = 15) and without (n = 15) completed a four-digit touch screen number entry task in both sitting and standing postures. Button sizes varied from 10 mm to 30 mm (5-mm increments), and button gap was 3 mm or 5 mm. Results: Participants had more misses and took longer to complete the task while standing for smaller button sizes (<20 mm). At larger button sizes, performance was similar for both sitting and standing. In general, misses, time to complete the task, and touch characteristics increased when standing. Although disability affected performance (misses and timing), similar trends were observed for both groups across posture and button size. Conclusion: Standing affects performance at smaller button sizes (<20 mm). For participants with and without motor-control disabilities, standing led to greater exerted force and impulse. Application: Along with interface design considerations, environmental conditions should also be considered to improve touch screen accessibility and usability.


Journal of Biomechanics | 2015

The accuracy of the Oculus Rift virtual reality head-mounted display during cervical spine mobility measurement

Xu Xu; Karen B. Chen; Jia-Hua Lin; Robert G. Radwin

An inertial sensor-embedded virtual reality (VR) head-mounted display, the Oculus Rift (the Rift), monitors head movement so the content displayed can be updated accordingly. While the Rift may have potential use in cervical spine biomechanics studies, its accuracy in cervical spine mobility measurement has not yet been validated. In the current study, a VR environment was designed to guide participants through prescribed neck movements. Cervical spine kinematics was measured by both the Rift and a reference motion tracking system. Comparison of the kinematics data between the Rift and the tracking system indicated that the Rift can provide good estimates of full range of motion (from one side to the other) during the performed task. Because of inertial sensor drift, the unilateral range of motion (from one side to neutral posture) derived from the Rift was less accurate. The root-mean-square errors over a 1-min task were within 10° for each rotation axis. The error analysis further indicated that the inertial sensor drifted approximately 6° at the beginning of a trial, during initialization. This needs to be addressed when using the Rift in order to measure cervical spine kinematics more accurately. It is suggested that the front cover of the Rift be aligned against a vertical plane during initialization.
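
For readers reproducing this kind of validation, a minimal sketch of the two reported error measures follows: per-axis root-mean-square error between headset and reference angles, plus a crude estimate of the initialization drift as the mean early disagreement. The array layout and window length are assumptions, not the paper's code.

```python
import numpy as np

def per_axis_rmse(headset_deg, reference_deg):
    """RMSE between headset-reported and reference rotation angles.

    Both inputs: (n_samples, 3) arrays in degrees, one column per rotation
    axis (e.g., flexion/extension, lateral bending, axial rotation).
    """
    err = np.asarray(headset_deg, float) - np.asarray(reference_deg, float)
    return np.sqrt(np.mean(err ** 2, axis=0))

def initialization_offset(headset_deg, reference_deg, n_initial=120):
    """Mean disagreement over the first n_initial samples: a rough estimate
    of the constant offset left by inertial-sensor drift during startup
    (the window length is a hypothetical choice)."""
    err = np.asarray(headset_deg, float) - np.asarray(reference_deg, float)
    return np.mean(err[:n_initial], axis=0)
```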


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Influence of altered visual feedback on neck movement for a virtual reality rehabilitative system

Karen B. Chen; Kevin Ponto; Mary E. Sesto; Robert G. Radwin

This paper investigates altering visual feedback during neck movement through control-display (C-D) gain for a head-mounted display, with the purpose of determining the just noticeable difference (JND) for encouraging individuals with kinesiophobia (i.e., fear-avoidance of movement due to chronic pain) to effectively perform therapeutic neck exercises. The JND was defined as a .25 probability of detecting a difference from unity C-D gain (gain = 1). A target-aiming task with two consecutive neck moves per trial was presented; one neck move had varying C-D gain and the other had unity gain. The VR system was able to influence neck moves without changing the locations of the targets. Participants indicated whether the two neck movements were the same or different. Logistic regression revealed that the JND gains were 0.903 (lower bound) and 1.159 (upper bound): participants could not discriminate a 55° turn ranging from 49.7° to 63.7°. This preliminary study shows that immersive VR with altered visual feedback influenced movement. The feasibility for rehabilitation of individuals with kinesiophobia will be assessed next.
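
A sketch of how JND gains can be recovered from same/different responses with logistic regression follows. The trial data here are invented for illustration; only the .25 detection criterion comes from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trial outcomes (not the study's data): tested C-D gains and
# whether the participant reported the two neck moves as different (1) or
# the same (0).
gains    = np.array([0.70, 0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10, 1.15, 1.20, 1.30])
detected = np.array([1,    1,    0,    1,    0,    0,    0,    0,    1,    1,    1])

# Detection should grow with distance from unity gain, so model
# P(detect) as a logistic function of |gain - 1|.
d = np.abs(gains - 1.0).reshape(-1, 1)
model = LogisticRegression(C=1e6, max_iter=1000).fit(d, detected)  # ~unregularized
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# Solve P(detect) = .25 for the gain offset:  b0 + b1*d = log(.25/.75)
d_jnd = (np.log(0.25 / 0.75) - b0) / b1
print(f"JND gains: {1 - d_jnd:.3f} (lower bound), {1 + d_jnd:.3f} (upper bound)")
```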


Human Factors | 2014

Manually locating physical and virtual reality objects.

Karen B. Chen; Ryan A. Kimmel; Aaron Bartholomew; Kevin Ponto; Michael Gleicher; Robert G. Radwin

Objective: In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a Cave Automatic Virtual Environment (CAVE) using the hand, to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Background: Virtual reality (VR) offers the promise of flexibly simulating arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Method: Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object location. Results: Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Conclusion: Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting the possible influence of cues other than binocular vision. Application: Physical interactions with objects in VR for simulation, training, and prototyping, involving reaching for and manually handling virtual objects in a CAVE, are more accurate than predicted when locating farther objects.
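
The abstract does not spell out its geometric model, but the flavor of such a model can be sketched: render a stereo pair for an assumed interpupillary distance (IPD), then triangulate where a viewer with a different IPD perceives the point. This simplified, hypothetical stand-in assumes binocular disparity is the only depth cue and puts everything on the viewer's midline.

```python
def perceived_depth(intended_depth_m, screen_dist_m, render_ipd_m, viewer_ipd_m):
    """Depth at which a viewer perceives a stereo-rendered point.

    Simplifying assumptions (not the paper's full model): the point lies on
    the viewer's midline, depths are measured from the eyes toward the
    projector screen, and disparity is the only depth cue.
    """
    # On-screen separation of the left/right images of a point rendered at
    # intended_depth_m for the assumed (render) IPD, by similar triangles.
    disparity_m = render_ipd_m * (1.0 - screen_dist_m / intended_depth_m)
    # Depth at which the viewer's actual eyes triangulate that disparity.
    return screen_dist_m / (1.0 - disparity_m / viewer_ipd_m)

# Example: a box corner rendered 1.2 m away on a screen 1.5 m away, drawn
# for a 6.5 cm IPD but viewed with a 6.0 cm IPD, is perceived ~2 cm closer:
error_m = perceived_depth(1.2, 1.5, 0.065, 0.060) - 1.2
```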


Experimental Gerontology | 2015

Evaluation of older driver head functional range of motion using portable immersive virtual reality

Karen B. Chen; Xu Xu; Jia-Hua Lin; Robert G. Radwin

BACKGROUND The number of drivers over 65 years of age continues to increase. Although neck rotation range has been identified as a factor associated with self-reported crash history in older drivers, it has not been consistently reported as an indicator of older driver performance or crashes across previous studies. Drivers likely use both neck and trunk rotation when driving, and therefore the functional range of motion (ROM) (i.e., the overall rotation used during a task) of older drivers should be further examined. OBJECTIVE Evaluate older driver performance in an immersive virtual reality, simulated, dynamic driving blind spot target detection task. METHODS A cross-sectional laboratory study recruited twenty-six licensed drivers (14 young, between 18 and 35 years, and 12 older, between 65 and 75 years) from the local community. Participants were asked to detect targets by performing blind spot check movements while neck and trunk rotation were tracked. Functional ROM, target detection success, and time to detection were analyzed. RESULTS In addition to neck rotation, older and younger drivers on average rotated their trunks 9.96° and 18.04°, respectively. The younger drivers generally demonstrated 15.6° greater functional ROM (p<.001), were nearly twice as successful in target detection depending on target location (p=.008), and had 0.46 s shorter target detection times (p=.016) than the older drivers. CONCLUSION Assessing older driver functional ROM may provide a more comprehensive assessment of driving ability than neck ROM alone. Target detection success and time to detection may also reflect the aging process, as these measures differed between driver groups.
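
Functional ROM as defined here (the overall rotation used during the task, neck plus trunk) is straightforward to compute from tracked angles; a minimal sketch with an assumed data layout, not the study's processing code:

```python
import numpy as np

def functional_rom_deg(neck_yaw_deg, trunk_yaw_deg):
    """Peak combined axial rotation used during a blind spot check.

    neck_yaw_deg: head rotation relative to the trunk, in degrees
    trunk_yaw_deg: trunk rotation relative to the seat, in degrees
    Both are time series over one task trial.
    """
    total = np.asarray(neck_yaw_deg, float) + np.asarray(trunk_yaw_deg, float)
    return float(np.abs(total).max())
```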


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2017

Use of Virtual Reality Feedback for Patients with Chronic Neck Pain and Kinesiophobia

Karen B. Chen; Mary E. Sesto; Kevin Ponto; James W. Leonard; Andrea H. Mason; Gregg C. Vanderheiden; Justin C. Williams; Robert G. Radwin

This study examined how individuals with and without neck pain performed exercises under the influence of altered visual feedback in virtual reality. Chronic neck pain (n = 9) and asymptomatic (n = 10) individuals were recruited for this cross-sectional study. Participants performed head rotations while receiving programmatically manipulated visual feedback from a head-mounted virtual reality display. The main outcome measure was the control-display gain (ratio between actual head rotation angle and visual rotation angle displayed) recorded at the just-noticeable difference. Actual head rotation angles were measured for different gains. Detection of the manipulated visual feedback was affected by gain. The just-noticeable gain for asymptomatic individuals, below and above unity gain, was 0.903 and 1.159, respectively. Head rotation angle decreased or increased 5.45° for every 0.1 increase or decrease in gain, respectively. The just-noticeable gain for chronic pain individuals, below unity gain, was 0.950. The head rotation angle increased 4.29° for every 0.1 decrease in gain. On average, chronic pain individuals reported that neck rotation was feasible for 84% of the unity gain trials, 66% of the individual just-noticeable difference trials, and 50% of the “nudged” just-noticeable difference trials. This research demonstrated that virtual reality may be useful for promoting the desired outcome of increased range of motion in neck rehabilitation exercises by altering visual feedback.
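
A sketch of how such a gain manipulation can enter the render loop follows. The abstract's ratio can be read in two directions, so the convention used here is an assumption: displayed rotation = gain times actual rotation, meaning a gain below 1 makes the scene turn less than the head.

```python
def displayed_yaw_deg(actual_yaw_deg, cd_gain):
    """Apply a control-display (C-D) gain to tracked head rotation before
    rendering. Assumed convention (one of two readings of the abstract):
    displayed = cd_gain * actual, so with cd_gain < 1 the scene turns less
    than the head and the user must rotate further to bring a visual target
    to the same on-screen position.
    """
    return cd_gain * actual_yaw_deg

# To land on a visual target at V degrees the user must turn V / cd_gain
# degrees of actual rotation; e.g. V = 55: 55/0.9 ~ 61.1 deg vs. 55/1.0 =
# 55 deg, the same order as the ~5 deg per 0.1 gain step reported above.
```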


Human Factors | 2015

Virtual exertions: evoking the sense of exerting forces in virtual reality using gestures and muscle activity.

Karen B. Chen; Kevin Ponto; Ross Tredinnick; Robert G. Radwin

Objective: This study was a proof of concept for virtual exertions, a novel method that involves the use of body tracking and electromyography for grasping and moving projections of objects in virtual reality (VR). The user views objects in his or her hands during rehearsed co-contractions of the same agonist-antagonist muscles normally used for the desired activities to suggest exerting forces. Background: Unlike physical objects, virtual objects are images and lack mass. There is currently no practical physically demanding way to interact with virtual objects to simulate strenuous activities. Method: Eleven participants grasped and lifted similar physical and virtual objects of various weights in an immersive 3-D Cave Automatic Virtual Environment. Muscle activity, localized muscle fatigue, ratings of perceived exertions, and NASA Task Load Index were measured. Additionally, the relationship between levels of immersion (2-D vs. 3-D) was studied. Results: Although the overall magnitude of biceps activity and workload were greater in VR, muscle activity trends and fatigue patterns for varying weights within VR and physical conditions were the same. Perceived exertions for varying weights were not significantly different between VR and physical conditions. Conclusions: Perceived exertion levels and muscle activity patterns corresponded to the assigned virtual loads, which supported the hypothesis that the method evoked the perception of physical exertions and showed that the method was promising. Application: Ultimately this approach may offer opportunities for research and training individuals to perform strenuous activities under potentially safer conditions that mimic situations while seeing their own body and hands relative to the scene.
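
The grasp logic behind virtual exertions can be sketched as an EMG-gated trigger. Everything below (normalization, threshold, scaling constant) is a hypothetical reconstruction of the idea, not the study's implementation.

```python
def grasp_engaged(biceps_rms, triceps_rms, biceps_mvc, triceps_mvc,
                  virtual_load_kg, k_pct_per_kg=2.0):
    """Decide whether a tracked hand 'holds' a virtual object.

    Minimal sketch of the virtual-exertions idea: the grasp engages only
    while both agonist and antagonist EMG, normalized to maximum voluntary
    contraction (MVC), exceed a co-contraction threshold that grows with
    the assigned virtual load. The scaling constant is hypothetical.
    """
    threshold_pct = k_pct_per_kg * virtual_load_kg
    biceps_pct = 100.0 * biceps_rms / biceps_mvc
    triceps_pct = 100.0 * triceps_rms / triceps_mvc
    return min(biceps_pct, triceps_pct) >= threshold_pct
```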


Applied Ergonomics | 2017

Using the Microsoft Kinect™ to assess 3-D shoulder kinematics during computer use

Xu Xu; Michelle M. Robertson; Karen B. Chen; Jia-Hua Lin; Raymond W. McGorry

Shoulder joint kinematics has been used as a representative indicator for investigating musculoskeletal symptoms among computer users in office ergonomics studies. Traditional measurement of shoulder kinematics normally requires a laboratory-based motion tracking system, which limits field studies. In the current study, a portable, low-cost, marker-less Microsoft Kinect™ sensor was examined for its feasibility for shoulder kinematics measurement during computer tasks. Eleven healthy participants performed a standardized computer task, and their shoulder kinematics were measured by a Kinect sensor and a motion tracking system concurrently. The results indicated that placing the Kinect sensor in front of the participants yielded more accurate shoulder kinematics measurements than placing the Kinect sensor 15° or 30° to one side. The results also showed that the Kinect sensor provided a better estimate of shoulder flexion/extension than of shoulder adduction/abduction and shoulder axial rotation. The RMSE of the front-placed Kinect sensor for shoulder flexion/extension was less than 10° for both the right and the left shoulder. The measurement error of the front-placed Kinect sensor for shoulder adduction/abduction was approximately 10° to 15°, and the magnitude of the error was proportional to the magnitude of that joint angle. After calibration, the RMSE for shoulder adduction/abduction was less than 10° based on an independent dataset of 5 additional participants. For shoulder axial rotation, the RMSE of the front-placed Kinect sensor ranged from approximately 15° to 30°. The results of the study suggest that the Kinect sensor can provide some insight into shoulder kinematics for improving office ergonomics.
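
As a rough illustration of extracting a shoulder angle from Kinect skeleton joints (a simplified sketch, not the paper's method or its calibration step):

```python
import numpy as np

def shoulder_flexion_deg(shoulder_xyz, elbow_xyz):
    """Approximate shoulder flexion/extension from two skeleton joints.

    Assumptions: a front-placed sensor facing the user squarely, camera
    coordinates with y up and z growing from the camera toward the user
    (so the user's 'toward the camera' direction is -z). The angle is the
    upper-arm deviation from vertical in the sagittal plane; positive =
    flexion (arm forward), negative = extension.
    """
    v = np.asarray(elbow_xyz, float) - np.asarray(shoulder_xyz, float)
    forward, downward = -v[2], -v[1]   # toward the camera, toward the floor
    return float(np.degrees(np.arctan2(forward, downward)))
```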


Applied Ergonomics | 2018

Immersion of virtual reality for rehabilitation - Review

Tyler Rose; Chang S. Nam; Karen B. Chen

Virtual reality (VR) shows promise in healthcare applications because it presents patients with an immersive, often entertaining approach to accomplishing the goal of improved performance. Eighteen studies were reviewed to understand human performance and health outcomes after utilizing VR rehabilitation systems. We aimed to understand: (1) the influence of immersion on VR performance and health outcomes; (2) the relationship between enjoyment and potential patient adherence to VR rehabilitation routines; and (3) the influence of haptic feedback on performance in VR. Performance measures including postural stability, navigation task performance, and joint mobility showed varying relations to immersion. Limited data did not allow a solid conclusion about the relationship between enjoyment and adherence, but patient enjoyment and willingness to participate were reported in care plans that incorporated VR. Finally, different haptic devices such as gloves and controllers showed both strengths and weaknesses in areas such as movement velocity, movement accuracy, and path efficiency.

Collaboration


Dive into Karen B. Chen's collaboration.

Top Co-Authors

Robert G. Radwin
University of Wisconsin-Madison

Kevin Ponto
University of Wisconsin-Madison

Mary E. Sesto
University of Wisconsin-Madison

Jia-Hua Lin
United States Department of State

Xu Xu
North Carolina State University

Amrish O. Chourasia
University of Wisconsin-Madison

Douglas A. Wiegmann
University of Wisconsin-Madison

Ross Tredinnick
University of Wisconsin-Madison

Tyler Rose
North Carolina State University

Curtis B. Irwin
University of Wisconsin-Madison