Publication


Featured research published by Ruth West.


Journal of Biomedical Discovery and Collaboration | 2006

Collaborative development of the Arrowsmith two node search interface designed for laboratory investigators.

Neil R. Smalheiser; Vetle I. Torvik; Amanda Bischoff-Grethe; Lauren B. Burhans; Michael Gabriel; Ramin Homayouni; Alireza Kashef; Maryann E. Martone; Guy A. Perkins; Diana L. Price; Andrew Talk; Ruth West

Arrowsmith is a unique computer-assisted strategy that helps investigators detect biologically relevant connections between two disparate sets of articles in Medline. This paper describes how an inter-institutional consortium of neuroscientists used the UIC Arrowsmith web interface http://arrowsmith.psych.uic.edu in their daily work and guided the development, refinement and expansion of the system into a suite of tools intended for use by the wider scientific community.
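
The core of the two-node search is Swanson's A-B-C model: given two disjoint literatures A and C, surface "B-terms" that occur in both and may bridge them (the classic example linked fish oil and Raynaud's syndrome via blood viscosity). The sketch below is a minimal illustration that intersects crude title-word sets; the real system applies far richer term filtering and ranking:

```python
# Minimal sketch of Arrowsmith's two-node idea: find "B-terms" shared by
# two otherwise disjoint literatures A and C. Real Arrowsmith applies far
# richer filtering (MeSH semantics, stoplists, statistical ranking).
import re

STOPWORDS = {"the", "of", "and", "in", "a", "to", "for", "with", "on", "by"}

def terms(titles):
    """Extract a crude bag of lowercase word terms from article titles."""
    words = set()
    for title in titles:
        words.update(re.findall(r"[a-z]+", title.lower()))
    return words - STOPWORDS

def b_terms(literature_a, literature_c):
    """B-terms: words appearing in both literatures, candidate bridges."""
    return sorted(terms(literature_a) & terms(literature_c))

# Hypothetical toy input: titles from two disjoint Medline result sets.
lit_a = ["Fish oil reduces blood viscosity"]
lit_c = ["Raynaud disease and abnormal blood viscosity"]
print(b_terms(lit_a, lit_c))  # ['blood', 'viscosity']
```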


Future Generation Computer Systems | 2006

Real-time multi-scale brain data acquisition, assembly, and analysis using an end-to-end OptIPuter

Rajvikram Singh; Nicholas Schwarz; Nut Taesombut; David Lee; Byungil Jeong; Luc Renambot; Abel W. Lin; Ruth West; Hiromu Otsuka; Sei Naito; Steven T. Peltier; Maryann E. Martone; Kazunori Nozaki; Jason Leigh; Mark H. Ellisman

At iGrid 2005 we demonstrated the transparent operation of a biology experiment on a test-bed of globally distributed visualization, storage, computational, and network resources. These resources were bundled into a unified platform by utilizing dynamic lambda allocation, high-bandwidth protocols for optical networks, a Distributed Virtual Computer (DVC) [N. Taesombut, A. Chien, Distributed Virtual Computer (DVC): Simplifying the development of high performance grid applications, in: Proceedings of the Workshop on Grids and Advanced Networks, GAN 04, Chicago, IL, April 2004 (held in conjunction with the IEEE Cluster Computing and the Grid (CCGrid2004) Conference)], and applications running over the Scalable Adaptive Graphics Environment (SAGE) [L. Renambot, A. Rao, R. Singh, B. Jeong, N. Krishnaprasad, V. Vishwanath, V. Chandrasekhar, N. Schwarz, A. Spale, C. Zhang, G. Goldman, J. Leigh, A. Johnson, SAGE: The Scalable Adaptive Graphics Environment, in: Proceedings of WACE 2004, 23-24 September 2004, Nice, France, 2004]. Using these layered technologies we ran a multi-scale correlated microscopy experiment [M.E. Martone, T.J. Deerinck, N. Yamada, E. Bushong, M.H. Ellisman, Correlated 3D light and electron microscopy: Use of high voltage electron microscopy and electron tomography for imaging large biological structures, Journal of Histotechnology 23 (3) (2000) 261-270], where biologists imaged samples with scales ranging from 20X to 5000X in progressively increasing magnification. This allowed the scientists to zoom in from entire complex systems such as a rat cerebellum to individual spiny dendrites. The images used spanned multiple modalities of imaging and specimen preparation, thus providing context at every level and allowing the scientists to better understand the biological structures. This demonstration attempts to define an infrastructure based on OptIPuter components which would aid the development and design of collaborative scientific experiments, applications and test-beds and allow the biologists to effectively use the high-resolution real estate of tiled displays.
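
One ingredient of such progressive 20X-5000X zooming is choosing which level of a multi-resolution image pyramid to fetch for a requested magnification. The sketch below assumes a simple factor-of-two pyramid; it is illustrative only, not the OptIPuter/SAGE implementation:

```python
# Hedged sketch: picking the level of a multi-resolution image pyramid
# to stream for a requested magnification (the paper's 20X-5000X zoom).
# The factor-of-two pyramid and level count are assumptions.
import math

def pyramid_level(requested_mag, base_mag=20.0, levels=9):
    """Each pyramid level doubles magnification over the previous one."""
    level = round(math.log2(requested_mag / base_mag))
    return max(0, min(levels - 1, level))

for mag in (20, 80, 640, 5000):
    print(mag, "->", pyramid_level(mag))
```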


Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016

Where do experts look while doing 3D image segmentation

Anahita Sanandaji; Cindy Grimm; Ruth West; Max Parola

3D image segmentation is a fundamental process in many scientific and medical applications. Automatic algorithms do exist, but there are many use cases where these algorithms fail. The gold standard is still manual segmentation or review. Unfortunately, even for an expert this is laborious, time consuming, and prone to errors. Existing 3D segmentation tools do not currently take into account human mental models and low-level perception tasks. Our goal is to improve the quality and efficiency of manual segmentation and review by analyzing how experts perform segmentation. As a preliminary step we conducted a field study with 8 segmentation experts, recording video and eye tracking data. We developed a novel coding scheme to analyze this data and verified that it successfully covers and quantifies the low-level actions, tasks and behaviors of experts during 3D image segmentation.
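
A typical low-level step in analyzing such recordings is grouping raw gaze samples into fixations. The sketch below implements the standard dispersion-threshold (I-DT) algorithm with illustrative thresholds; the paper's coding scheme operates at a higher level than this:

```python
# Hedged sketch of fixation detection via dispersion threshold (I-DT).
# Thresholds are illustrative; this is not the paper's coding scheme.
def fixations(samples, max_dispersion=25.0, min_samples=6):
    """samples: list of (x, y) gaze points at a fixed sampling rate."""
    def dispersion(pts):
        xs, ys = zip(*pts)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    out, start = [], 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            pts = samples[start:end]
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            out.append((cx, cy, len(pts)))  # centroid + duration in samples
            start = end
        else:
            start += 1
    return out
```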


Proceedings of SPIE | 2013

Collaborative imaging of urban forest dynamics: augmenting re-photography to visualize changes over time

Ruth West; Abby Halley; Jarlath O’Neil-Dunne; Daniel Gordon; Robert Pless

The ecological sciences face the challenge of detecting subtle changes, sometimes over large areas, across varied temporal scales. The task is thus to measure patterns of slow, subtle change occurring across multiple spatial and temporal scales, and then to visualize those changes in a way that makes important variations visceral to the observer. Imaging plays an important role in ecological measurement, but existing techniques often rely on approaches that are limited with respect to their spatial resolution, view angle, and/or temporal resolution. Furthermore, integrating imagery acquired through different modalities is often difficult, if not impossible. This research envisions a community-based and participatory approach built around augmented rephotography of ecosystems. We show a case study for the purpose of monitoring the urban tree canopy. The goal is to explore, for a set of urban locations, the integration of ground-level rephotography with available LiDAR data, and to create a dynamic view of the urban forest and its changes across various spatial and temporal scales. This case study gives the opportunity to explore various augmentations to improve the ground-level image capture process, protocols to support 3D inference from the contributed photography, and both in-situ and web-based visualizations of change over time.
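
To make the LiDAR side of this concrete, canopy change can be estimated by differencing two LiDAR-derived canopy height models of the same area. The sketch below is a minimal illustration with assumed array inputs and an arbitrary 2 m significance threshold, not the study's pipeline:

```python
# Hedged sketch of the change-detection idea: differencing two
# LiDAR-derived canopy height models (CHMs) of the same urban block.
# Array inputs and the 2 m threshold are illustrative assumptions.
import numpy as np

def canopy_change(chm_t0, chm_t1, threshold_m=2.0):
    """Return boolean masks of significant canopy gain and loss."""
    diff = chm_t1 - chm_t0
    gain = diff > threshold_m
    loss = diff < -threshold_m
    return gain, loss

chm_2010 = np.array([[0.0, 8.0], [12.0, 3.0]])
chm_2013 = np.array([[5.0, 7.5], [2.0, 3.5]])
gain, loss = canopy_change(chm_2010, chm_2013)
print(gain.sum(), "cells gained;", loss.sum(), "cells lost")
```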


Visual Information Communication and Interaction | 2016

Eliciting Tacit Expertise in 3D Volume Segmentation

Ruth West; Meghan Kajihara; Max Parola; Kathryn Hays; Luke Hillard; Anne R. Carlew; Jeremey Deutsch; Brandon Lane; Michelle Holloway; Brendan John; Anahita Sanandaji; Cindy Grimm

The output of 3D volume segmentation is crucial to a wide range of endeavors. Producing accurate segmentations often proves to be both inefficient and challenging, in part due to poor imaging data quality (contrast and resolution), and because of ambiguity in the data that can only be resolved with higher-level knowledge of the structure and the context wherein it resides. Automatic and semi-automatic approaches are improving, but in many cases still fail or require substantial manual clean-up or intervention. Expert manual segmentation and review is therefore still the gold standard for many applications. Unfortunately, existing tools (both custom-made and commercial) are often designed around the underlying algorithm, not the best method for expressing higher-level intention. Our goal is to analyze manual (or semi-automatic) segmentation to gain a better understanding of both low-level (perceptual tasks and actions) and high-level decision making. This can be used to produce segmentation tools that are more accurate, efficient, and easier to use. Questioning or observation alone is insufficient to capture this information, so we utilize a hybrid capture protocol that blends observation, surveys, and eye tracking. We then developed and validated data coding schemes capable of discerning low-level actions and overall task structures.
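
Validating a coding scheme usually means showing that independent raters agree. As one hedged illustration (the abstract does not name its statistic), the sketch below computes Cohen's kappa over two raters' labels for the same video segments:

```python
# Hedged sketch of one way to validate a coding scheme: Cohen's kappa
# agreement between two raters coding the same segments. The labels
# are illustrative; the paper does not specify its statistic.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    pa, pb = Counter(coder_a), Counter(coder_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(pa[k] * pb[k] for k in pa) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["navigate", "mark", "mark", "review", "navigate", "mark"]
b = ["navigate", "mark", "review", "review", "navigate", "mark"]
print(round(cohens_kappa(a, b), 3))  # 0.75
```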


IEEE Virtual Reality Conference | 2009

Sensate abstraction: hybrid strategies for multi-dimensional data in expressive virtual reality contexts

Ruth West; Joachim Gossmann; Todd Margolis; Jürgen P. Schulze; J. P. Lewis; Ben Hackbarth; Iman Mostafavi

ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections - a practice that connects expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million-pixel barrier-strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation, and a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude-, delay-, and physical-modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment, developed in the context of this artwork. The desire to represent a combination of qualitative and quantitative multi-dimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.
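
The amplitude- and delay-based parts of such a hybrid renderer can be sketched simply: each speaker's gain falls off with its angular distance to the virtual source, and its delay encodes the path-length difference. The geometry and constants below are illustrative assumptions, and the physical-modeling component is omitted:

```python
# Hedged sketch of amplitude-plus-delay spatialization over a speaker
# ring. Geometry and constants are illustrative, not the installation's
# renderer (which also used physical modeling).
import math

SPEED_OF_SOUND = 343.0  # m/s

def render_params(source_angle, source_dist, n_speakers=10, ring_radius=3.0):
    """Return (gain, delay_seconds) per speaker for one virtual source."""
    params = []
    for i in range(n_speakers):
        spk_angle = 2 * math.pi * i / n_speakers
        # Angular proximity -> amplitude (cosine lobe, clipped at zero),
        # attenuated with source distance.
        gain = max(0.0, math.cos(source_angle - spk_angle)) / max(source_dist, 1.0)
        # Path-length difference from source to this speaker -> delay.
        dx = source_dist * math.cos(source_angle) - ring_radius * math.cos(spk_angle)
        dy = source_dist * math.sin(source_angle) - ring_radius * math.sin(spk_angle)
        delay = math.hypot(dx, dy) / SPEED_OF_SOUND
        params.append((gain, delay))
    return params

for gain, delay in render_params(math.pi / 4, 5.0)[:3]:
    print(f"gain={gain:.2f} delay={delay * 1000:.1f} ms")
```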


Leonardo | 2005

Both and Neither: in silico v1.0, Ecce Homology

Ruth West; Jeff Burke; Cheryl A. Kerfeld; Eitan Mendelowitz; Thomas Holton; J. P. Lewis; Ethan Drucker; Weihong Yan

Ecce Homology, a physically interactive new-media work, visualizes genetic data as calligraphic forms. A novel computer-vision user interface allows multiple participants, through their movement in the installation space, to select genes from the human genome for visualizing the Basic Local Alignment Search Tool (BLAST), a primary algorithm in comparative genomics. Ecce Homology was successfully installed in the UCLA Fowler Museum, 6 November 2003 - 4 January 2004.
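
BLAST itself is a fast heuristic, but the local-alignment problem it approximates can be stated exactly with the Smith-Waterman dynamic program. The sketch below computes the optimal local-alignment score with toy scoring parameters, purely as an illustration of what the installation visualizes:

```python
# Hedged illustration: BLAST heuristically approximates local alignment;
# Smith-Waterman computes it exactly. Scoring parameters are toy values.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))
```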


International Symposium on Visual Computing | 2015

Guided Structure-Aligned Segmentation of Volumetric Data

Michelle Holloway; Anahita Sanandaji; Deniece Yates; Amali Krigger; Ross T. Sowell; Ruth West; Cindy Grimm

Segmentation of volumetric images is considered a time- and resource-intensive bottleneck in scientific endeavors. Automatic methods are becoming more reliable, but many data sets still require manual intervention. Key difficulties include navigating the 3D image, determining where to place marks, and maintaining consistency between marks and segmentations. Clinical practice often requires segmenting many different instances of a specific structure. In this research we leverage the similarity of a repeated segmentation task to address these difficulties and reduce the cognitive load of segmenting on non-traditional planes. We propose the idea of guided contouring protocols that provide guidance in the form of an automatic navigation path to arbitrary cross sections, example marks from similar data sets, and text instructions. We present a user study that shows the usability of this system with non-expert users in terms of segmentation accuracy, consistency, and efficiency.
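
An "automatic navigation path to arbitrary cross sections" implies smoothly moving a cutting plane between oblique orientations. One reasonable construction (not necessarily the paper's) interpolates the plane point linearly and the unit normal spherically, as sketched below:

```python
# Hedged sketch of a navigation path between two cutting planes, each
# given as (point, unit normal). Slerp of normals is one reasonable
# choice, not necessarily the paper's method.
import math

def slerp(n0, n1, t):
    """Spherical interpolation between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(n0, n1))))
    theta = math.acos(dot)
    if theta < 1e-6:
        return n0
    w0 = math.sin((1 - t) * theta) / math.sin(theta)
    w1 = math.sin(t * theta) / math.sin(theta)
    return tuple(w0 * x + w1 * y for x, y in zip(n0, n1))

def plane_path(p0, n0, p1, n1, steps=5):
    """Yield (point, normal) cutting planes from (p0, n0) to (p1, n1)."""
    for k in range(steps + 1):
        t = k / steps
        point = tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
        yield point, slerp(n0, n1, t)

for point, normal in plane_path((0, 0, 0), (0, 0, 1), (0, 0, 10), (1, 0, 0)):
    print(point, tuple(round(c, 2) for c in normal))
```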


Electronic Imaging | 2015

Embodied information behavior, mixed reality and big data

Ruth West; Max Parola; Amelia R. Jaycen; Christopher Lueg

A renaissance in the development of virtual (VR), augmented (AR), and mixed reality (MR) technologies with a focus on consumer and industrial applications is underway. As data becomes ubiquitous in our lives, a need arises to revisit the role of our bodies, explicitly in relation to data or information. Our observation is that VR/AR/MR technology development is a vision of the future framed in terms of promissory narratives. These narratives develop alongside the underlying enabling technologies and create new use contexts for virtual experiences. It is a vision rooted in the combination of responsive, interactive, dynamic, sharable data streams, and augmentation of the physical senses for capabilities beyond those normally humanly possible. In parallel to the varied definitions of information and approaches to elucidating information behavior, a myriad of definitions and methods of measuring and understanding presence in virtual experiences exist. These and other ideas will be tested by designers, developers and technology adopters as the broader ecology of head-worn devices for virtual experiences evolves in order to reap the full potential and benefits of these emerging technologies.


International Conference on Computer Graphics and Interactive Techniques | 2013

Collaborative rephotography

Ruth West; Abby Halley; Daniel Gordon; Jarlath O'Neil-Dunne; Robert Pless

Rephotography is the process of capturing the same scene at a different time in order to capture changes. Previous work at SIGGRAPH [BAE2010] demonstrated the ability of smartphone apps to guide a user to the correct viewpoint; here we promote the use of such tools distributed widely over space and time by enabling collaborative projects that allow multiple users to rephotograph multiple sites over time. These sites may be architectural, social, urban scenes, or ecological. Current projects utilizing our mobile tools range from nation-scale rephotography of scenic overlooks to monitoring of urban street trees in NYC by local conservancy group volunteers. Rephotography directly connects pictures at one time to pictures at another time. It also connects a photographer at one time to a photographer at another time, by providing a mechanism to collaboratively record the story of how our world changes.
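
One building block of such viewpoint guidance is estimating how the current camera frame maps onto the reference photo. The sketch below does this with OpenCV ORB features and a RANSAC homography; it is illustrative, not the app's actual implementation:

```python
# Hedged sketch of one ingredient of viewpoint guidance: a homography
# aligning the current frame to the reference photo, from which an app
# could derive "move left / tilt up" hints. Illustrative only.
import cv2
import numpy as np

def align_to_reference(reference_gray, current_gray):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(reference_gray, None)
    kp2, des2 = orb.detectAndCompute(current_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # H maps current-frame points onto the reference photo.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```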

Collaboration


Dive into Ruth West's collaborations.

Top Co-Authors

Cindy Grimm

Oregon State University

Max Parola

University of North Texas

Todd Margolis

University of California

Jeff Burke

University of California

Iman Mostafavi

University of California