Publications


Featured research published by Robert Kincaid.


IEEE Transactions on Visualization and Computer Graphics | 2008

Cerebral: Visualizing Multiple Experimental Conditions on a Graph with Biological Context

Aaron Barsky; Tamara Munzner; Jennifer L. Gardy; Robert Kincaid

Systems biologists use interaction graphs to model the behavior of biological systems at the molecular level. In an iterative process, such biologists observe the reactions of living cells under various experimental conditions, view the results in the context of the interaction graph, and then propose changes to the graph model. These graphs serve as a form of dynamic knowledge representation of the biological system being studied and evolve as new insight is gained from the experimental data. While numerous graph layout and drawing packages are available, these tools did not fully meet the needs of our immunologist collaborators. In this paper, we describe the information display needs of these immunologists and translate them into design decisions. These decisions led us to create Cerebral, a system that uses a biologically guided graph layout and incorporates experimental data directly into the graph display. Small multiple views of different experimental conditions and a data-driven parallel coordinates view enable correlations between experimental conditions to be analyzed at the same time that the data is viewed in the graph context. This combination of coordinated views allows the biologist to view the data from many different perspectives simultaneously. To illustrate the typical analysis tasks performed, we analyze two datasets using Cerebral. Based on feedback from our collaborators we conclude that Cerebral is a valuable tool for analyzing experimental data in the context of an interaction graph model.
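
The core of Cerebral's small-multiples view is drawing one copy of the interaction graph per experimental condition, with a fixed shared layout and expression mapped to node color. Below is a minimal sketch of that idea, not the authors' implementation; the toy graph, gene names, and expression values are invented.

```python
# Minimal small-multiples sketch: same graph, same layout, one panel per
# condition, node colour encoding that condition's expression value.
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical toy interaction graph and per-condition log-ratio values.
graph = nx.Graph([("TLR4", "MYD88"), ("MYD88", "IRAK1"), ("IRAK1", "TRAF6")])
conditions = {
    "control": {"TLR4": 0.0, "MYD88": 0.1, "IRAK1": -0.2, "TRAF6": 0.0},
    "LPS 1h":  {"TLR4": 1.8, "MYD88": 0.9, "IRAK1": 1.2,  "TRAF6": 0.4},
    "LPS 4h":  {"TLR4": 0.6, "MYD88": 1.5, "IRAK1": 0.8,  "TRAF6": 1.1},
}

pos = nx.spring_layout(graph, seed=42)   # one fixed layout shared by all views
fig, axes = plt.subplots(1, len(conditions), figsize=(9, 3))
for ax, (name, expr) in zip(axes, conditions.items()):
    colors = [expr[n] for n in graph.nodes]
    nx.draw(graph, pos, ax=ax, node_color=colors, cmap="coolwarm",
            vmin=-2, vmax=2, with_labels=True, font_size=7)
    ax.set_title(name, fontsize=8)
plt.show()
```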


Circulation Research | 2006

Differences in Vascular Bed Disease Susceptibility Reflect Differences in Gene Expression Response to Atherogenic Stimuli

David Deng; Anya Tsalenko; Aditya Vailaya; Amir Ben-Dor; Ramendra K. Kundu; Ivette Estay; Raymond Tabibiazar; Robert Kincaid; Zohar Yakhini; Laurakay Bruhn; Thomas Quertermous

Atherosclerosis occurs predominantly in arteries and only rarely in veins. The goal of this study was to test whether differences in the molecular responses of venous and arterial endothelial cells (ECs) to atherosclerotic stimuli might contribute to vascular bed differences in susceptibility to atherosclerosis. We compared gene expression profiles of primary cultured ECs from human saphenous vein (SVEC) and coronary artery (CAEC) exposed to atherogenic stimuli. In addition to identifying differentially expressed genes, we applied statistical analysis of gene ontology and pathway annotation terms to identify signaling differences related to cell type and stimulus. Differential gene expression of untreated venous and arterial endothelial cells yielded 285 genes more highly expressed in untreated SVEC (P<0.005 and fold change >1.5). These genes represented various atherosclerosis-related pathways including responses to proliferation, oxidoreductase activity, antiinflammatory responses, cell growth, and hemostasis functions. Moreover, stimulation with oxidized LDL induced dramatically greater gene expression responses in CAEC compared with SVEC, relating to adhesion, proliferation, and apoptosis pathways. In contrast, interleukin 1β and tumor necrosis factor α activated similar gene expression responses in both CAEC and SVEC. The differences in functional response and gene expression were further validated by an in vitro proliferation assay and in vivo immunostaining of αB-crystallin protein. Our results strongly suggest that different inherent gene expression programs in arterial versus venous endothelial cells contribute to differences in atherosclerotic disease susceptibility.
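
The gene selection criterion quoted above (P<0.005 and fold change >1.5) is a standard two-condition filter. The sketch below shows the general recipe on simulated data; it is not the study's pipeline, and the replicate counts and expression values are invented.

```python
# Differential-expression filter: per-gene t-test plus fold-change cutoff.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical log2 expression: 1000 genes x 6 replicates per cell type.
svec = rng.normal(8.0, 1.0, size=(1000, 6))   # saphenous vein ECs
caec = rng.normal(8.0, 1.0, size=(1000, 6))   # coronary artery ECs

t, p = stats.ttest_ind(svec, caec, axis=1)
log2_fc = svec.mean(axis=1) - caec.mean(axis=1)     # log2 fold change
# "More highly expressed in SVEC": significant AND linear fold change > 1.5.
higher_in_svec = (p < 0.005) & (log2_fc > np.log2(1.5))
print(f"{higher_in_svec.sum()} genes more highly expressed in SVEC")
```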


Advanced Visual Interfaces | 2006

Line graph explorer: scalable display of line graphs using Focus+Context

Robert Kincaid; Heidi Lam

Scientific measurements are often depicted as line graphs. State-of-the-art high throughput systems in life sciences, telemetry and electronics measurement rapidly generate hundreds to thousands of such graphs. Despite the increasing volume and ubiquity of such data, few software systems provide efficient interactive management, navigation and exploratory analysis of large line graph collections. To address these issues, we have developed Line Graph Explorer (LGE). LGE is a novel and visually scalable line graph management system that supports facile navigation and interactive visual analysis. LGE provides a compact overview of the entire collection by encoding the y-dimension of individual line graphs with color instead of space, thus enabling the analyst to see major common features and alignments of the data. Using Focus+Context techniques, LGE provides interactions for viewing selected compressed graphs in detail as standard line graphs without losing a sense of the general pattern and major features of the collection. To further enhance visualization and pattern discovery, LGE provides sorting and clustering of line graphs based on similarity of selected graph features. Sequential sorting by associated line graph metadata is also supported. We illustrate the features and use of LGE with examples from meteorology and biology.
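
LGE's key compression step is re-encoding each line graph's y-values as color, so a whole collection stacks into one overview image, combined with similarity-based row sorting. A minimal sketch of that idea on an invented collection of noisy sinusoids (not the published system):

```python
# Encode each line graph as a row of colour; stack rows into one overview.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 500)
# Hypothetical collection: 200 noisy sinusoids with random phase.
graphs = np.array([np.sin(x + ph) + 0.2 * rng.normal(size=x.size)
                   for ph in rng.uniform(0, 2 * np.pi, 200)])

# Sort rows by similarity (correlation with the first graph as a crude proxy).
order = np.argsort([np.corrcoef(graphs[0], g)[0, 1] for g in graphs])
plt.imshow(graphs[order], aspect="auto", cmap="viridis")
plt.xlabel("x (sample index)")
plt.ylabel("line graph (sorted by similarity)")
plt.show()
```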


IEEE Transactions on Visualization and Computer Graphics | 2007

Overview Use in Multiple Visual Information Resolution Interfaces

Heidi Lam; Tamara Munzner; Robert Kincaid

In interfaces that provide multiple visual information resolutions (VIRs), low-VIR overviews typically sacrifice visual details for display capacity, with the assumption that users can select regions of interest to examine at higher VIRs. Designers can create low VIRs based on multi-level structure inherent in the data, but have little guidance with single-level data. To better guide design tradeoffs between display capacity and visual target perceivability, we looked at overview use in two multiple-VIR interfaces with high-VIR displays either embedded within, or separate from, the overviews. We studied two visual requirements for effective overviews and found that participants would reliably use the low-VIR overviews only when the visual targets were simple and had small visual spans. Otherwise, at least 20% chose to use the high-VIR view exclusively. Surprisingly, neither of the multiple-VIR interfaces provided performance benefits when compared to using the high-VIR view alone. However, we did observe benefits in providing side-by-side comparisons for target matching. We conjecture that the high cognitive load of multiple-VIR interface interactions, whether real or perceived, is a greater barrier to their effective use than previously assumed.


IEEE Transactions on Visualization and Computer Graphics | 2010

SignalLens: Focus+Context Applied to Electronic Time Series

Robert Kincaid

Electronic test and measurement systems are becoming increasingly sophisticated in order to match the increased complexity and ultra-high speed of the devices under test. A key feature in many such instruments is a vastly increased capacity for storage of digital signals. Storage of 10⁹ time points or more is now possible. At the same time, the typical screens on such measurement devices are relatively small. Therefore, these instruments can only render an extremely small fraction of the complete signal at any time. SignalLens uses a Focus+Context approach to provide a means of navigating to and inspecting low-level signal details in the context of the entire signal trace. This approach provides a compact visualization suitable for embedding into the small displays typically provided by electronic measurement instruments. We further augment this display with computed tracks which display time-aligned computed properties of the signal. By combining and filtering these computed tracks it is possible to easily and quickly find computationally detected features in the data which are often obscured by the visual compression required to render the large data sets on a small screen. Further, these tracks can be viewed in the context of the entire signal trace as well as visible high-level signal features. Several examples using real-world electronic measurement data are presented, which demonstrate typical use cases and the effectiveness of the design.
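
The essential Focus+Context trick is a nonlinear mapping from sample index to screen position: the focus window gets most of the horizontal pixels while the surrounding context is compressed into the margins. A minimal sketch of one such lens, with a simulated signal and invented parameters (not the SignalLens implementation):

```python
# Piecewise-linear lens: magnify a focus window, compress the context.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 1_000_000                        # long captured trace (10^9 in practice)
t = np.arange(n)
signal = np.sign(np.sin(2 * np.pi * t / 997)) + 0.05 * rng.normal(size=n)

focus_lo, focus_hi = 500_000, 500_500    # region of interest
frac_focus = 0.6                         # fraction of screen given to focus

def lens_x(idx):
    """Map a sample index to [0, 1] of screen width, magnifying the focus."""
    margin = 0.5 * (1 - frac_focus)
    if idx < focus_lo:
        return margin * idx / focus_lo
    if idx > focus_hi:
        return margin + frac_focus + margin * (idx - focus_hi) / (n - focus_hi)
    return margin + frac_focus * (idx - focus_lo) / (focus_hi - focus_lo)

# Subsample the context, keep every focus sample, plot in lens coordinates.
idx = np.unique(np.concatenate([np.linspace(0, n - 1, 2000, dtype=int),
                                np.arange(focus_lo, focus_hi)]))
plt.plot([lens_x(i) for i in idx], signal[idx], lw=0.5)
plt.xlabel("lens-distorted position (focus magnified)")
plt.show()
```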


Clinical Chemistry and Laboratory Medicine | 2008

PROCAM Study: risk prediction for myocardial infarction using microfluidic high-density lipoprotein (HDL) subfractionation is independent of HDL cholesterol.

Odilo Mueller; Elaine Chang; David Deng; Torsten Franz; Debra Jing; Robert Kincaid; Yves Konigshofer; Martin Kratzmeier; Michael McNulty; Hao Qian; Juergen Schneider; Helmut Schulte; Udo Seedorf; Xiaodan Tian; Mark Van Cleve; Dorothy Yang; Gerd Assmann

Background: High-density lipoprotein (HDL) subfractions are among the new emerging risk factors for atherosclerosis. In particular, HDL 2b has been shown to be linked to cardiovascular risk. This study uses a novel microfluidics-based method to establish the clinical utility of HDL 2b using samples from the Prospective Cardiovascular Muenster (PROCAM) Study. Methods: Method performance was established by measuring accuracy, precision, linearity and inter-site precision. Serum samples from 503 individuals collected in the context of the PROCAM study were analyzed by electrophoresis on a microfluidics system. Of these, 251 were male survivors of myocardial infarction (cases), while 252 individuals were matched healthy controls. HDL cholesterol, HDL 2b concentration and HDL 2b percentage were analyzed. Results: This novel method showed satisfactory assay performance with an inter-site coefficient of variation of <10% for HDL 2b percentage. Parallel patient testing of 52 samples between two sites resulted in a correlation coefficient of r=0.95. Significant differences were observed in the HDL 2b subfraction between cases and controls independent of other risk factors. Including HDL 2b percentage in logistic regression reduced the number of false positives from 64 to 39 and the number of false negatives from 48 to 45, in the context of this study. Conclusions: The novel method showed satisfactory assay performance in addition to drastically reduced analysis times and improved ease of use as compared to other methods. The clinical utility of HDL 2b was demonstrated, supporting the findings of previous studies. Clin Chem Lab Med 2008;46:490–8.
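
The logistic-regression comparison reported above (false positives dropping from 64 to 39 once HDL 2b percentage is added) can be mimicked in outline as follows. All data in this sketch are simulated, not PROCAM measurements, and the predictors and effect sizes are assumptions:

```python
# Compare in-sample confusion matrices for a risk model with and without
# HDL 2b percentage as a predictor. Simulated toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
n = 503
case = rng.integers(0, 2, n)                    # 1 = MI survivor (case)
hdl_c = rng.normal(50 - 5 * case, 10, n)        # HDL cholesterol, mg/dL
hdl2b_pct = rng.normal(30 - 6 * case, 8, n)     # HDL 2b percentage

for name, X in [("HDL-C only", np.c_[hdl_c]),
                ("HDL-C + HDL 2b %", np.c_[hdl_c, hdl2b_pct])]:
    model = LogisticRegression().fit(X, case)
    tn, fp, fn, tp = confusion_matrix(case, model.predict(X)).ravel()
    print(f"{name}: false positives={fp}, false negatives={fn}")
```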


Bioinformatics | 2008

VistaClara: an expression browser plug-in for Cytoscape

Robert Kincaid; Allan Kuchinsky; Michael L. Creech

Summary: VistaClara is a plug-in for Cytoscape which provides a more flexible means to visualize gene and protein expression within a network context. An extended attribute browser is provided in the form of a graphical and interactive permutation matrix that resembles the heat map displays popular in gene-expression analysis. This extended browser permits a variety of display options and interactions not currently available in Cytoscape.

Availability: http://chianti.ucsd.edu/cyto_web/plugins/index.php

Contact: [email protected]


ACM Symposium on Applied Computing | 2004

VistaClara: an interactive visualization for exploratory analysis of DNA microarrays

Robert Kincaid

We have created VistaClara to explore the effectiveness of applying an extended permutation matrix to the task of exploratory data analysis of multi-experiment microarray studies. The permutation matrix is a visualization technique for interactive exploratory analysis of tabular data that permits both row and column rearrangement, and fits well with the tabular forms of data characteristic of gene expression studies. However, this technique has been largely overlooked by current bioinformatics research. Our implementation supports direct incorporation of supplemental data and annotations into the matrix view. This enables visually searching for patterns in gene expression measurements that correlate with other types of relevant data (disease classes, clinical, histological, drug treatments, etc.). The heatmap visualization common in microarray analysis is extended to provide a novel alternative using size as well as color to graphically represent experimental values, thus allowing more effective quantitative comparisons. Methods to sort rows or columns by similarity extend the possible permutation operations, and allow more efficient searching for biologically relevant patterns in very large data sets. Based on overview+detail principles, a dynamic compressed heatmap view of the entire data set provides the user with overall context, including possible correlations not currently visible in the more detailed view. Combined, these techniques make it possible to perform highly interactive ad hoc visual explorations of microarray data.
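
The two ideas the abstract leans on, similarity-based row permutation and redundant size-plus-color encoding of cell values, are easy to prototype. A minimal sketch with invented toy data (not the VistaClara code):

```python
# Extended permutation matrix: rows reordered by similarity to a reference
# row; each cell drawn with size encoding magnitude and colour encoding value.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
data = rng.normal(0, 1, size=(20, 8))      # genes x experiments (log ratios)

# Permute rows by similarity (correlation) to row 0.
order = np.argsort([-np.corrcoef(data[0], row)[0, 1] for row in data])
data = data[order]

rows, cols = np.indices(data.shape)
plt.scatter(cols.ravel(), rows.ravel(),
            s=60 * np.abs(data).ravel(),    # size: magnitude
            c=data.ravel(), cmap="RdBu_r")  # colour: sign and magnitude
plt.gca().invert_yaxis()
plt.xlabel("experiment")
plt.ylabel("gene (sorted by similarity)")
plt.show()
```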


ACM Symposium on Applied Computing | 2004

An architecture for biological information extraction and representation

Aditya Vailaya; Peter Bluvas; Robert Kincaid; Allan Kuchinsky; Michael L. Creech; Annette Adler

Technological advances in biomedical research are generating a plethora of heterogeneous data at a high rate. There is a critical need for extraction, integration and management tools for information discovery and synthesis from these heterogeneous data. In this paper, we present a general architecture, called ALFA, for information extraction and representation from diverse biological data. The ALFA architecture consists of: (i) a networked, hierarchical object model for representing information from heterogeneous data sources in a standardized, structured format; and (ii) a suite of integrated, interactive software tools for information extraction and representation from diverse biological data sources. As part of our research efforts to explore this space, we have currently prototyped the ALFA object model and a set of interactive software tools for searching, filtering, and extracting information from scientific text. In particular, we describe BioFerret, a meta-search tool for searching and filtering relevant information from the web, and ALFA Text Viewer, an interactive tool for user-guided extraction, disambiguation, and representation of information from scientific text. We further demonstrate the potential of our tools in integrating the extracted information with experimental data and diagrammatic biological models via the common underlying ALFA representation.
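
A networked, hierarchical object model of the kind described can be sketched with plain dataclasses: parent-child containment gives the hierarchy, and arbitrary typed cross-links give the network. The class and field names below are hypothetical illustrations, not the published ALFA schema:

```python
# Sketch of a networked, hierarchical object model for biological entities.
from dataclasses import dataclass, field

@dataclass
class BioObject:
    """A node in the hierarchy, e.g. organism > pathway > protein."""
    name: str
    kind: str                                            # "protein", "pathway", ...
    children: list["BioObject"] = field(default_factory=list)
    links: list[tuple[str, "BioObject"]] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)     # provenance (PMIDs, URLs)

    def link(self, relation: str, other: "BioObject") -> None:
        """Cross-link two objects anywhere in the hierarchy (the 'network')."""
        self.links.append((relation, other))

# Usage: a pathway containing a protein, cross-linked to a text-mined compound.
tlr4 = BioObject("TLR4", "protein", sources=["PMID:hypothetical"])
innate = BioObject("innate immune signaling", "pathway", children=[tlr4])
tlr4.link("extracted-from-text", BioObject("LPS", "compound"))
```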


Microarrays: Optical Technologies and Informatics | 2001

Estimation of the confidence limits of oligonucleotide array-based measurements of differential expression

Glenda C. Delenstarr; Herb Cattell; Chao Chen; Andreas N. Dorsel; Robert Kincaid; Khanh Nguyen; Nicholas M. Sampas; Shad Schidel; Karen W. Shannon; Andrea Tu; Paul K. Wolber

Microarrays can be used to simultaneously measure the differential expression states of many mRNAs in two samples. Such measurements are limited by systematic and random errors. Systematic errors include labeling bias, imperfect feature morphologies, mismatched sample concentrations, and cross-hybridization. Random errors arise from chemical and scanning noise, particularly for low signals. We have used a combination of fluor-exchanged two-color labeling and improved normalization methods to minimize systematic errors from labeling bias, imperfect features, and mismatched sample concentrations. On-array specificity control probes and experimentally proven probe design algorithms were used to correct for cross-hybridization. Random errors were reduced via automated non-uniform feature flagging and an advanced scanner design. We have scored feature significance using established statistical tests. We then estimated the intrinsic random measurement error as a function of average probe signal via sample self-comparison experiments (human K-562 cell mRNA). Finally, we combined all of these tools in the analysis of differential expression measurements between K-562 cells and HeLa cells. The results establish the importance of the elimination of systematic errors and the objective assessment of the effects of random errors in producing reliable estimates of differential expression.
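
Fluor-exchanged (dye-swap) labeling cancels gene-specific dye bias because the bias enters the two replicate log-ratios with the same sign while the biological signal flips sign. A minimal worked sketch on simulated values (not the paper's data or software):

```python
# Dye-swap averaging: (forward - reverse) / 2 removes per-gene dye bias.
import numpy as np

rng = np.random.default_rng(5)
true_log_ratio = rng.normal(0, 1, 1000)    # true differential expression
dye_bias = rng.normal(0.3, 0.1, 1000)      # per-gene labelling bias

forward = true_log_ratio + dye_bias + rng.normal(0, 0.1, 1000)   # Cy5/Cy3
reverse = -true_log_ratio + dye_bias + rng.normal(0, 0.1, 1000)  # dyes swapped

corrected = (forward - reverse) / 2        # bias cancels, signal remains
print("residual bias:", np.mean(corrected - true_log_ratio).round(4))
```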

Collaboration


Dive into Robert Kincaid's collaboration.

Top Co-Authors

Zohar Yakhini

Technion – Israel Institute of Technology
