Rachael Brady
Duke University
Publications
Featured research published by Rachael Brady.
IEEE Transactions on Visualization and Computer Graphics | 2012
Ryan P. McMahan; Doug A. Bowman; David J. Zielinski; Rachael Brady
In recent years, consumers have witnessed a technological revolution that has delivered more-realistic experiences in their own homes through high-definition, stereoscopic televisions and natural, gesture-based video game consoles. Although these experiences are more realistic, offering higher levels of fidelity, it is not clear how the increased display and interaction aspects of fidelity impact the user experience. Since immersive virtual reality (VR) allows us to achieve very high levels of fidelity, we designed and conducted a study that used a six-sided CAVE to evaluate display fidelity and interaction fidelity independently, at extremely high and low levels, for a VR first-person shooter (FPS) game. Our goal was to gain a better understanding of the effects of fidelity on the user in a complex, performance-intensive context. The results of our study indicate that both display and interaction fidelity significantly affect strategy and performance, as well as subjective judgments of presence, engagement, and usability. In particular, performance results were strongly in favor of two conditions: low-display, low-interaction fidelity (representative of traditional FPS games) and high-display, high-interaction fidelity (similar to the real world).
Computer Methods and Programs in Biomedicine | 2014
Kwanguk Kim; M. Zachary Rosenthal; David J. Zielinski; Rachael Brady
The goal of the current study was to investigate the effects of different virtual environment (VE) technologies (i.e., desktop, head mounted display, or fully immersive platforms) on emotional arousal and task performance. Fifty-three participants were recruited from a college population. Reactivity to stressful VEs was examined in three VE systems ranging from desktop to high-end fully immersive systems. The experiment was a 3 (desktop system, head mounted display, and six wall system)×2 (high- and low-stressful VE) within-subjects design, with self-reported emotional arousal and valence, skin conductance, task performance, presence, and simulator sickness examined as dependent variables. Replicating previous studies, the fully immersive system induced the highest sense of presence and the head mounted display system elicited the highest amount of simulator sickness. Extending previous studies, the results demonstrated that VE platforms were associated with different patterns in emotional responses and task performance. Our findings suggest that different VE systems may be appropriate for different scientific purposes when studying stress reactivity using emotionally evocative tasks.
Frontiers in Behavioral Neuroscience | 2011
Nicole C. Huff; Jose Hernandez; Matthew E. Fecteau; David J. Zielinski; Rachael Brady; Kevin S. LaBar
The extinction of conditioned fear is known to be context-specific and is often considered more contextually bound than the fear memory itself (Bouton, 2004). Yet, recent findings in rodents have challenged the notion that contextual fear retention is initially generalized. The context-specificity of a cued fear memory to the learning context has not been addressed in the human literature largely due to limitations in methodology. Here we adapt a novel technology to test the context-specificity of cued fear conditioning using full immersion 3-D virtual reality (VR). During acquisition training, healthy participants navigated through virtual environments containing dynamic snake and spider conditioned stimuli (CSs), one of which was paired with electrical wrist stimulation. During a 24-h delayed retention test, one group returned to the same context as acquisition training whereas another group experienced the CSs in a novel context. Unconditioned stimulus expectancy ratings were assayed on-line during fear acquisition as an index of contingency awareness. Skin conductance responses time-locked to CS onset were the dependent measure of cued fear, and skin conductance levels during the interstimulus interval were an index of context fear. Findings indicate that early in acquisition training, participants express contingency awareness as well as differential contextual fear, whereas differential cued fear emerged later in acquisition. During the retention test, differential cued fear retention was enhanced in the group who returned to the same context as acquisition training relative to the context shift group. The results extend recent rodent work to illustrate differences in cued and context fear acquisition and the contextual specificity of recent fear memories. Findings support the use of full immersion VR as a novel tool in cognitive neuroscience to bridge rodent models of contextual phenomena underlying human clinical disorders.
NeuroImage | 2015
Fredrik Åhs; Philip A. Kragel; David J. Zielinski; Rachael Brady; Kevin S. LaBar
The maintenance of anxiety disorders is thought to depend, in part, on deficits in extinction memory, possibly due to reduced contextual control of extinction that leads to fear renewal. Animal studies suggest that the neural circuitry responsible for fear renewal includes the hippocampus, amygdala, and dorsomedial (dmPFC) and ventromedial (vmPFC) prefrontal cortex. However, the neural mechanisms of context-dependent fear renewal in humans remain poorly understood. We used functional magnetic resonance imaging (fMRI), combined with psychophysiology and immersive virtual reality, to elucidate how the hippocampus, amygdala, and dmPFC and vmPFC interact to drive the context-dependent renewal of extinguished fear. Healthy human participants encountered dynamic fear-relevant conditioned stimuli (CSs) while navigating through 3-D virtual reality environments in the MRI scanner. Conditioning and extinction were performed in two different virtual contexts. Twenty-four hours later, participants were exposed to the CSs without reinforcement while navigating through both contexts in the MRI scanner. Participants showed enhanced skin conductance responses (SCRs) to the previously-reinforced CS+ in the acquisition context on Day 2, consistent with fear renewal, and sustained responses in the dmPFC. In contrast, participants showed low SCRs to the CSs in the extinction context on Day 2, consistent with extinction recall, and enhanced vmPFC activation to the non-reinforced CS-. Structural equation modeling revealed that the dmPFC fully mediated the effect of the hippocampus on right amygdala activity during fear renewal, whereas the vmPFC partially mediated the effect of the hippocampus on right amygdala activity during extinction recall. These results indicate dissociable contextual influences of the hippocampus on prefrontal pathways, which, in turn, determine the level of reactivation of fear associations.
International Semantic Web Conference | 2008
Harry Halpin; David J. Zielinski; Rachael Brady; Glenda Kelly
We present Redgraph, the first generic virtual reality visualization program for Semantic Web data. Redgraph is capable of handling large data-sets, as we demonstrate on social network data from the U.S. Patent and Trademark Office. We develop a Semantic Web vocabulary of virtual reality terms compatible with GraphXML to map graph visualization into the Semantic Web itself. Our approach to visualizing Semantic Web data takes advantage of user-interaction in an immersive environment to bypass a number of difficult issues in 3-dimensional graph visualization layout by relying on users themselves to interactively extrude the nodes and links of a 2-dimensional graph into the third dimension. When users touch nodes in the virtual reality environment, they retrieve data formatted according to the data's schema or ontology. We applied Redgraph to social network data constructed from patents, inventors, and institutions from the United States Patent and Trademark Office in order to explore networks of innovation in computing. Using this data-set, results of a user study comparing extrusion (3-D) vs. no-extrusion (2-D) are presented. The study showed that subjects using the 3-D interface answered fine-grained questions about the data-set significantly better, but no significant difference was found for broad questions about the overall structure of the data. Furthermore, inference can be used to improve the visualization, as demonstrated with a data-set of biotechnology patents and researchers.
Tellus B | 2007
Gil Bohrer; Michael Wolosin; Rachael Brady; Roni Avissar
The structure of tree canopies affects turbulence in the atmospheric boundary layer, and light attenuation, reflection and emission from forested areas. Through these effects, canopy structure interacts with fluxes of heat, water, CO2, and volatile organic compounds, and affects patterns of soil moisture and ecosystem dynamics. The effects of canopy structure on the atmosphere are hard to measure and can be studied efficiently with large-eddy simulations. Remote sensing images that can be interpreted for biophysical properties are prone to errors due to effects of canopy structure, such as shading. However, the detailed 3-D canopy structure throughout a large spatial domain (up to several km2) is rarely available. We introduce a new method, namely the virtual canopy generator (V-CaGe), to construct finely detailed, 3-D, virtual forest canopies for use in remote sensing, and in atmospheric and other environmental models. These virtual canopies are based on commonly observed mean and variance of biophysical forest properties, and a map (or a remotely-sensed image) of leaf area, or canopy heights, of a canopy subdomain. The canopies are constructed by inverse 2-D Fourier transform of the observed spatial autocorrelation function and a random phase. The resulting field is expanded to 3-D by using empirical allometric profiles. We demonstrate that the V-CaGe can generate realistic simulation domains.
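The spectral construction the abstract describes can be sketched as follows. This is a hedged illustration, not the authors' V-CaGe code: a 2-D random field with a prescribed spatial autocorrelation is obtained by pairing the square root of the power spectrum (the FFT of the autocorrelation, via the Wiener-Khinchin relation) with random phases and inverse-transforming. The function name and parameters are hypothetical.

```python
import numpy as np

def generate_canopy_height_field(acf, mean_h, std_h, seed=0):
    """Hypothetical sketch: build a 2-D canopy-height field whose
    spatial autocorrelation matches `acf`, rescaled to the observed
    mean and standard deviation of canopy height."""
    rng = np.random.default_rng(seed)
    # Power spectrum from the observed autocorrelation function
    spectrum = np.abs(np.fft.fft2(acf))
    # Random phases; taking the real part of the inverse transform
    # yields a real-valued field
    phase = np.exp(2j * np.pi * rng.random(acf.shape))
    field = np.real(np.fft.ifft2(np.sqrt(spectrum) * phase))
    # Rescale to the observed mean and variance of canopy height
    field = (field - field.mean()) / field.std()
    return mean_h + std_h * field

# Example: exponential autocorrelation on a 64x64 periodic grid
n = 64
d = np.minimum(np.arange(n), n - np.arange(n))   # periodic distance
r = np.hypot(*np.meshgrid(d, d))
heights = generate_canopy_height_field(np.exp(-r / 8.0),
                                       mean_h=20.0, std_h=3.0)
```

The 2-D field would then be expanded to 3-D with empirical allometric profiles, as the abstract notes; that step is omitted here.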
IEEE Virtual Reality Conference | 2011
David J. Zielinski; Ryan P. McMahan; Rachael Brady
When viewed from below, a user's feet cast shadows onto the floor screen of an under-floor projection system, such as a six-sided CAVE. Tracking those shadows with a camera provides enough information for calculating a user's ground-plane location, foot orientation, and footstep events. We present Shadow Walking, an unencumbered locomotion technique that uses shadow tracking to sense a user's walking direction and step speed. Shadow Walking affords virtual locomotion by detecting if a user is walking in place. In addition, Shadow Walking supports a sidestep gesture, similar to the iPhone's pinch gesture. In this paper, we describe how we implemented Shadow Walking and present a preliminary assessment of our new locomotion technique. We have found Shadow Walking provides advantages of being unencumbered, inexpensive, and easy to implement compared to other walking-in-place approaches. It also has potential for extended gestures and multi-user locomotion.
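A minimal sketch of the shadow-tracking idea, assuming a grayscale under-floor camera frame. This is not the published implementation; the function names, thresholds, and step-detection heuristic are all hypothetical: dark pixels are treated as foot shadows, their centroid gives a ground-plane position, and footstep events are counted as oscillations in total shadow area.

```python
import numpy as np

def shadow_centroid(frame, thresh=50):
    """Locate the user on the floor screen: threshold dark shadow
    pixels in a grayscale frame and return their centroid (x, y)
    in pixel coordinates, or None if no shadow is visible."""
    ys, xs = np.nonzero(frame < thresh)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def detect_steps(shadow_areas, high=400, low=200):
    """Count footstep events from a time series of total shadow
    area: each plant/lift cycle crosses a high threshold (foot
    down, large shadow) then a low one (foot lifted)."""
    steps, above = 0, False
    for area in shadow_areas:
        if not above and area > high:
            above = True
        elif above and area < low:
            above, steps = False, steps + 1
    return steps

# Synthetic frame: a 20x20 dark "foot shadow" on a bright floor
frame = np.full((100, 100), 255, dtype=np.uint8)
frame[40:60, 20:40] = 0
cx, cy = shadow_centroid(frame)
```

A real system would additionally segment the two feet and estimate foot orientation from the shadow shapes, which this sketch omits.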
Source Code for Biology and Medicine | 2009
Jeremy N. Block; David J. Zielinski; Vincent B. Chen; Ian W. Davis; E Claire Vinson; Rachael Brady; Jane S. Richardson; David C. Richardson
Background: In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case.
Methods: The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR system. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer and it implements new forms of user interaction in the DiVE.
Results: In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs).
Conclusion: The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis.
International Symposium on Visual Computing | 2008
Gil Bohrer; Marcos Longo; David J. Zielinski; Rachael Brady
Scientific research has become increasingly interdisciplinary, and clear communication is fundamental when bringing together specialists from different areas of knowledge. This work aims at discussing the role of fully immersive virtual reality experience to facilitate interdisciplinary communication by utilising the Duke Immersive Virtual Environment (DiVE), a CAVE-like system, to explore the complex and high-resolution results from the Regional Atmospheric Modelling System-based Forest Large-Eddy Simulation (RAFLES) model coupled with the Ecosystem Demography model (ED2). VR exploration provided an intuitive environment to simultaneously analyse canopy structure and atmospheric turbulence and fluxes, attracting and engaging specialists from various backgrounds during the early stages of the data analysis. The VR environment facilitated exploration of large multivariate data with complex and not fully understood non-linear interactions in an intuitive and interactive way. This proved fundamental to formulate hypotheses about tree-scale atmosphere-canopy-structure interactions and define the most meaningful ways to display the results.
Visual Analytics Science and Technology | 2010
Eric Monson; Guangliang Chen; Rachael Brady; Mauro Maggioni
Geometric Wavelets is a new multi-scale data representation technique that is useful for a variety of applications such as data compression, interpretation, and anomaly detection. We have developed an interactive visualization with multiple linked views to help users quickly explore data sets and understand this novel construction. Currently the interface is being used by applied mathematicians to view results and gain new insights, speeding methods development.