Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Carlos D. Correa is active.

Publication


Featured research published by Carlos D. Correa.


IEEE Transactions on Visualization and Computer Graphics | 2008

Size-based Transfer Functions: A New Volume Exploration Technique

Carlos D. Correa; Kwan-Liu Ma

The visualization of complex 3D images remains a challenge, a fact that is magnified by the difficulty of classifying or segmenting volume data. In this paper, we introduce size-based transfer functions, which map the local scale of features to color and opacity. Features in a data set with similar or identical scalar values can be classified based on their relative size. We achieve this with the use of scale fields, which are 3D fields that represent the relative size of the local feature at each voxel. We present a mechanism for obtaining these scale fields at interactive rates, through a continuous scale-space analysis and a set of detection filters. Through a number of examples, we show that size-based transfer functions can improve classification and enhance volume rendering techniques, such as maximum intensity projection. The ability to classify objects based on local size at interactive rates proves to be a powerful method for complex data exploration.
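
A minimal sketch of the mapping step, assuming the scale field has already been computed (the paper obtains it through a continuous scale-space analysis and detection filters); the function name and the `s_min`/`s_max` bounds are illustrative, not values from the paper:

```python
def size_based_opacity(scale, s_min=1.0, s_max=8.0):
    """Map the relative size of the local feature at a voxel to opacity,
    emphasizing small features. s_min/s_max are illustrative bounds on
    the scale field, not parameters from the paper."""
    t = (scale - s_min) / (s_max - s_min)
    t = min(max(t, 0.0), 1.0)   # clamp to [0, 1]
    return 1.0 - t              # small feature -> opaque, large -> transparent

# toy scale field: relative feature size at four voxels
scale_field = [1.0, 2.0, 4.0, 8.0]
opacities = [size_based_opacity(s) for s in scale_field]
```

Voxels belonging to small features receive high opacity even when their scalar values match those of large features, which is exactly what a scalar-value-only transfer function cannot express.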


IEEE Transactions on Visualization and Computer Graphics | 2011

Visibility Histograms and Visibility-Driven Transfer Functions

Carlos D. Correa; Kwan-Liu Ma

Direct volume rendering is an important tool for visualizing complex data sets. However, in the process of generating 2D images from 3D data, information is lost in the form of attenuation and occlusion. The lack of a feedback mechanism to quantify the loss of information in the rendering process makes the design of good transfer functions a difficult and time consuming task. In this paper, we present the general notion of visibility histograms, which are multidimensional graphical representations of the distribution of visibility in a volume-rendered image. In this paper, we explore the 1D and 2D transfer functions that result from intensity values and gradient magnitude. With the help of these histograms, users can manage a complex set of transfer function parameters that maximize the visibility of the intervals of interest and provide high quality images of volume data. We present a semiautomated method for generating transfer functions, which progressively explores the transfer function space toward the goal of maximizing visibility of important structures. Our methodology can be easily deployed in most visualization systems and can be used together with traditional 1D and 2D opacity transfer functions based on scalar values, as well as with other more sophisticated rendering algorithms.
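
The core accumulation can be sketched for a single ray; this 1-D version bins visibility by scalar value only (the paper also builds 2-D histograms over intensity and gradient magnitude, summed over all rays):

```python
def visibility_histogram(ray_samples, opacity_tf, n_bins=4):
    """Accumulate the visibility contributed by each scalar-value bin
    along one ray, compositing front to back: a sample's visibility is
    its opacity times the transmittance of everything in front of it."""
    hist = [0.0] * n_bins
    transmittance = 1.0
    for v in ray_samples:                      # scalar values in [0, 1)
        alpha = opacity_tf(v)
        hist[min(int(v * n_bins), n_bins - 1)] += alpha * transmittance
        transmittance *= 1.0 - alpha
    return hist

# constant 50%-opacity transfer function, one ray, front to back
hist = visibility_histogram([0.1, 0.6, 0.6, 0.9], lambda v: 0.5)
```

Note how samples deeper along the ray contribute less visibility even at equal opacity, which is the occlusion information the histogram feeds back to the transfer function designer.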


IEEE Transactions on Visualization and Computer Graphics | 2006

Feature Aligned Volume Manipulation for Illustration and Visualization

Carlos D. Correa; Deborah Silver; Min Chen

In this paper we describe a GPU-based technique for creating illustrative visualizations through interactive manipulation of volumetric models. It is partly inspired by medical illustrations, where it is common to depict cuts and deformation in order to provide a better understanding of anatomical and biological structures or surgical processes, and partly motivated by the need for a real-time solution that supports the specification and visualization of such illustrative manipulation. We propose two new feature aligned techniques, namely surface alignment and segment alignment, and compare them with the axis-aligned techniques which were reported in previous work on volume manipulation. We also present a mechanism for defining features using texture volumes, and methods for computing correct normals for the deformed volume with respect to different alignments. We describe a GPU-based implementation to achieve real-time performance of the techniques and a collection of manipulation operators including peelers, retractors, pliers and dilators, which are adaptations of the metaphors and tools used in surgical procedures and medical illustrations. Our approach is directly applicable in medical and biological illustration, and we demonstrate how it works as an interactive tool for focus+context visualization, as well as a generic technique for volume graphics.


IEEE Transactions on Visualization and Computer Graphics | 2009

The Occlusion Spectrum for Volume Classification and Visualization

Carlos D. Correa; Kwan-Liu Ma

Despite the ever-growing improvements in graphics processing units and computational power, classifying 3D volume data remains a challenge. In this paper, we present a new method for classifying volume data based on the ambient occlusion of voxels. This information stems from the observation that most volumes of a certain type, e.g., CT, MRI or flow simulation, contain occlusion patterns that reveal the spatial structure of their materials or features. Furthermore, these patterns appear to emerge consistently for different data sets of the same type. We call this collection of patterns the occlusion spectrum of a dataset. We show that using this occlusion spectrum leads to better two-dimensional transfer functions that can help classify complex data sets in terms of the spatial relationships among features. In general, the ambient occlusion of a voxel can be interpreted as a weighted average of the intensities in a spherical neighborhood around the voxel. Different weighting schemes determine the ability to separate structures of interest in the occlusion spectrum. We present a general methodology for finding such a weighting. We show results of our approach in 3D imaging for different applications, including brain and breast tumor detection and the visualization of turbulent flow.
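
The weighted-average interpretation can be sketched in one dimension, assuming a Gaussian weighting (the paper works on spherical neighborhoods and studies the weighting choice itself):

```python
import math

def ambient_occlusion(volume, radius=2, sigma=1.0):
    """Occlusion of each voxel as a weighted average of the intensities
    in its neighborhood. 1-D sketch with Gaussian weights; the paper uses
    a spherical 3-D neighborhood and compares weighting schemes."""
    n = len(volume)
    occ = []
    for i in range(n):
        wsum, total = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((j - i) ** 2) / (2 * sigma ** 2))
            wsum += w
            total += w * volume[j]
        occ.append(total / wsum)
    return occ

# interior voxels of a dense block are more occluded than boundary voxels
occ = ambient_occlusion([0.0, 1.0, 1.0, 1.0, 0.0])
```

Pairing each voxel's intensity with its occlusion value yields the 2-D (intensity, occlusion) domain over which the occlusion-spectrum transfer functions are defined.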


IEEE Symposium on Visual Analytics Science and Technology | 2009

A framework for uncertainty-aware visual analytics

Carlos D. Correa; Yu-Hsuan Chan; Kwan-Liu Ma

Visual analytics has become an important tool for gaining insight on large and complex collections of data. Numerous statistical tools and data transformations, such as projections, binning and clustering, have been coupled with visualization to help analysts understand data better and faster. However, data is inherently uncertain, due to error, noise or unreliable sources. When making decisions based on uncertain data, it is important to quantify and present to the analyst both the aggregated uncertainty of the results and the impact of the sources of that uncertainty. In this paper, we present a new framework to support uncertainty in the visual analytics process, through statistical methods such as uncertainty modeling, propagation and aggregation. We show that data transformations, such as regression, principal component analysis and k-means clustering, can be adapted to account for uncertainty. This framework leads to better visualizations that improve the decision-making process and help analysts gain insight on the analytic process itself.
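
One generic way to propagate and aggregate input uncertainty through a data transformation is Monte Carlo sampling; this is a sketch of that idea (the paper also derives analytic propagation for regression, PCA and k-means), with illustrative function and parameter names:

```python
import random
import statistics

def propagate(values, sigmas, transform, n_samples=2000, seed=0):
    """Monte Carlo uncertainty propagation: perturb each uncertain input
    with Gaussian noise, push every sample through the transformation,
    and report the mean and standard deviation of the output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        sample = [v + rng.gauss(0.0, s) for v, s in zip(values, sigmas)]
        outputs.append(transform(sample))
    return statistics.mean(outputs), statistics.stdev(outputs)

# aggregated uncertainty of the mean of four noisy measurements;
# the last measurement comes from the least reliable source
mu, sd = propagate([1.0, 2.0, 3.0, 4.0], [0.1, 0.1, 0.1, 0.5],
                   lambda s: sum(s) / len(s))
```

Rerunning with one input's sigma set to zero shows how much that single source contributes to the aggregated uncertainty, which is the per-source impact the framework surfaces to the analyst.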


International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) | 2010

Dynamic video narratives

Carlos D. Correa; Kwan-Liu Ma

This paper presents a system for generating dynamic narratives from videos. These narratives are characterized by being compact, coherent and interactive, as inspired by principles of sequential art. Narratives depict the motion of one or several actors over time. Creating compact narratives is challenging, as the video frames must be combined in a way that reuses redundant backgrounds and depicts the stages of a motion. In addition, previous approaches focus on the generation of static summaries and can afford expensive image composition techniques. A dynamic narrative, on the other hand, must be played and skimmed in real-time, which imposes certain cost limitations on the video processing. In this paper, we define a novel process to compose foreground and background regions of video frames in a single interactive image using a series of spatio-temporal masks. These masks are created to improve the output of automatic video processing techniques such as image stitching and foreground segmentation. Unlike hand-drawn narratives, often limited to static representations, the proposed system allows users to explore the narrative dynamically and produce different representations of motion. We have built an authoring system that incorporates these methods and demonstrated successful results on a number of video clips. The authoring system can be used to create interactive posters of video clips, browse video in a compact manner or highlight a motion sequence in a movie.
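
The mask-based composition step can be sketched as follows, assuming single-channel frames as lists of lists; the paper's masks additionally repair stitching and segmentation errors, which is not modeled here:

```python
def composite(background, frames, masks):
    """Compose the foreground regions of several video frames into one
    narrative image: where a frame's spatio-temporal mask is set, that
    frame's pixel wins (later frames are drawn on top)."""
    out = [row[:] for row in background]
    for frame, mask in zip(frames, masks):
        for y, mrow in enumerate(mask):
            for x, m in enumerate(mrow):
                if m:
                    out[y][x] = frame[y][x]
    return out

# one actor at two instants of a motion, composited over a shared background
background = [[0, 0, 0, 0]]
frame_a, mask_a = [[0, 9, 0, 0]], [[0, 1, 0, 0]]
frame_b, mask_b = [[0, 0, 0, 9]], [[0, 0, 0, 1]]
narrative = composite(background, [frame_a, frame_b], [mask_a, mask_b])
```

Because only the masks change over time, the same composition can be re-evaluated in real time as the user skims the narrative.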


IEEE Symposium on Visual Analytics Science and Technology | 2009

Proximity-based visualization of movement trace data

Tarik Crnovrsanin; Chris Muelder; Carlos D. Correa; Kwan-Liu Ma

The increasing availability of motion sensors and video cameras in living spaces has made possible the analysis of motion patterns and collective behavior in a number of situations. The visualization of this movement data, however, remains a challenge. Although maintaining the actual layout of the data space is often desirable, direct visualization of movement traces becomes cluttered and confusing as the spatial distribution of traces may be disparate and uneven. We present proximity-based visualization as a novel approach to the visualization of movement traces in an abstract space rather than the given spatial layout. This abstract space is obtained by considering proximity data, which is computed as the distance between entities and some number of important locations. These important locations can range from a single fixed point, to a moving point, several points, or even the proximities between the entities themselves. This creates a continuum of proximity spaces, ranging from the fixed absolute reference frame to completely relative reference frames. By combining these abstracted views with the concrete spatial views, we provide a way to mentally map the abstract spaces back to the real space. We demonstrate the effectiveness of this approach, and its applicability to visual analytics problems such as hazard prevention, migration patterns, and behavioral studies.
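
The basic transformation from spatial traces to the abstract proximity space can be sketched as computing, for each trace point, the distance to a reference location (a minimal sketch for one fixed landmark; the paper's continuum also covers moving landmarks and entity-to-entity proximities):

```python
import math

def proximity_series(trace, landmark):
    """Distance from each point of a movement trace to a reference
    location. Plotting time against these distances gives the abstract
    proximity view; the landmark may itself move, or be another entity,
    yielding a relative reference frame."""
    lx, ly = landmark
    return [math.hypot(x - lx, y - ly) for x, y in trace]

# an entity that approaches a point of interest at the origin, then leaves
trace = [(4.0, 0.0), (2.0, 0.0), (0.0, 1.0), (3.0, 0.0)]
prox = proximity_series(trace, (0.0, 0.0))
```

In the proximity view, "approach then retreat" becomes a simple dip in a curve, regardless of how tangled the spatial trace is.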


IEEE Transactions on Visualization and Computer Graphics | 2012

Visual Reasoning about Social Networks Using Centrality Sensitivity

Carlos D. Correa; Tarik Crnovrsanin; Kwan-Liu Ma

In this paper, we study the sensitivity of centrality, a key metric of social networks, to support visual reasoning. As centrality represents the prestige or importance of a node in a network, its sensitivity represents the importance of the relationship between this node and all other nodes in the network. We have derived an analytical solution that extracts the sensitivity as the derivative of centrality with respect to degree for two centrality metrics based on feedback and random walks. We show that these sensitivities are good indicators of the distribution of centrality in the network, and of how changes are expected to propagate if we modify the network. These metrics also help us simplify a complex network in a way that retains the main structural properties and that results in trustworthy, readable diagrams. Sensitivity is also a key concept for uncertainty analysis of social networks, and we show how our approach may help analysts gain insight on the robustness of key network metrics. Through a number of examples, we illustrate the need for measuring sensitivity, and the impact it has on the visualization of and interaction with social and other scale-free networks.
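
The quantity in question can be illustrated numerically: take a random-walk centrality (PageRank) and estimate, by finite differences, how every node's centrality responds to perturbing one edge. This is a numerical stand-in for the paper's analytical derivative, on a hypothetical toy network:

```python
def pagerank(w, d=0.85, iters=200):
    """Random-walk centrality (PageRank) by power iteration over a
    weighted adjacency matrix given as a list of lists."""
    n = len(w)
    rank = [1.0 / n] * n
    out = [sum(row) or 1.0 for row in w]   # guard dangling nodes
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for i in range(n):
            share = d * rank[i] / out[i]
            for j in range(n):
                if w[i][j]:
                    new[j] += share * w[i][j]
        rank = new
    return rank

def edge_sensitivity(w, i, j, eps=1e-4):
    """Finite-difference sensitivity of every node's centrality to the
    weight of edge (i, j)."""
    base = pagerank(w)
    pert = [row[:] for row in w]
    pert[i][j] += eps
    return [(p - b) / eps for p, b in zip(pagerank(pert), base)]

# toy directed network: 0 -> {1, 2}, 1 -> 2, 2 -> 0, 3 -> 2
w = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 0],
     [0, 0, 1, 0]]
sens = edge_sensitivity(w, 0, 3)   # effect of adding a 0 -> 3 edge
```

Strengthening the hypothetical 0 → 3 edge raises node 3's centrality while draining some of node 0's influence away from its existing neighbors, which is the kind of propagation pattern the sensitivity metric makes visible.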


IEEE Pacific Visualization Symposium | 2009

Visibility-driven transfer functions

Carlos D. Correa; Kwan-Liu Ma

Direct volume rendering is an important tool for visualizing complex data sets. However, in the process of generating 2D images from 3D data, information is lost in the form of attenuation and occlusion. The lack of a feedback mechanism to quantify the loss of information in the rendering process makes the design of good transfer functions a difficult and time consuming task. In this paper, we present the notion of visibility-driven transfer functions, which are transfer functions that provide a good visibility of features of interest from a given viewpoint. To achieve this, we introduce visibility histograms. These histograms provide graphical cues that intuitively inform the user about the contribution of particular scalar values to the final image. By carefully manipulating the parameters of the opacity transfer function, users can now maximize the visibility of the intervals of interest in a volume data set. Based on this observation, we also propose a semi-automated method for generating transfer functions, which progressively improves a transfer function defined by the user, according to a certain importance metric. The user no longer has to make tedious small adjustments to the transfer function parameters, but can instead rely on the system to perform these searches automatically. Our methodology can be easily deployed in most visualization systems and can be used together with traditional 1D opacity transfer functions based on scalar values, as well as with multidimensional transfer functions and other more sophisticated rendering algorithms.
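
The progressive-improvement idea can be sketched as a greedy hill climb over a deliberately tiny transfer function: one opacity outside the interval of interest and one inside. This two-parameter space and the step size are illustrative assumptions; the paper searches a much richer transfer function space:

```python
def visibility_of_interval(samples, ops, lo, hi):
    """Visibility of scalar values in [lo, hi) along one front-to-back
    ray, under a two-parameter opacity TF: ops = [outside, inside]."""
    vis, transmittance = 0.0, 1.0
    for v in samples:
        a = ops[1] if lo <= v < hi else ops[0]
        if lo <= v < hi:
            vis += a * transmittance
        transmittance *= 1.0 - a
    return vis

def improve_tf(samples, lo, hi, steps=30, lr=0.05):
    """Greedy sketch of the semi-automated search: repeatedly nudge each
    opacity parameter in whichever direction increases the visibility of
    the interval of interest."""
    ops = [0.5, 0.5]
    for _ in range(steps):
        best = visibility_of_interval(samples, ops, lo, hi)
        for k in (0, 1):
            for delta in (-lr, lr):
                trial = ops[:]
                trial[k] = min(max(trial[k] + delta, 0.0), 1.0)
                v = visibility_of_interval(samples, trial, lo, hi)
                if v > best:
                    best, ops = v, trial
    return ops, best

# occluding material (0.2) interleaved with the interval of interest (0.6)
ops, best = improve_tf([0.2, 0.6, 0.2, 0.6], lo=0.5, hi=1.0)
```

The search converges to making the occluding material transparent and the interval of interest fully opaque, which is the behavior the user would otherwise have to reach through tedious manual edits.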


PLOS ONE | 2010

In Vivo Mapping of Vascular Inflammation Using Multimodal Imaging

Benjamin R. Jarrett; Carlos D. Correa; Kwan-Liu Ma; Angelique Y. Louie

Background: Plaque vulnerability to rupture has emerged as a critical correlate to risk of adverse coronary events, but there is as yet no clinical method to assess plaque stability in vivo. In the search to identify biomarkers of vulnerable plaques, an association has been found between macrophages and plaque stability—the density and pattern of macrophage localization in lesions is indicative of probability to rupture. In very unstable plaques, macrophages are found in high densities and concentrated in the plaque shoulders. Therefore, the ability to map macrophages in plaques could allow noninvasive assessment of plaque stability. We use a multimodality imaging approach to noninvasively map the distribution of macrophages in vivo. The use of multiple modalities allows us to combine the complementary strengths of each modality to better visualize features of interest. Our combined use of Positron Emission Tomography and Magnetic Resonance Imaging (PET/MRI) allows high-sensitivity PET screening to identify putative lesions in a whole-body view, and high-resolution MRI for detailed mapping of biomarker expression in the lesions.

Methodology/Principal Findings: Macromolecular and nanoparticle contrast agents targeted to macrophages were developed and tested in three different mouse and rat models of atherosclerosis in which inflamed vascular plaques form spontaneously and/or are induced by injury. For multimodal detection, the probes were designed to contain gadolinium (T1 MRI) or iron oxide (T2 MRI), and Cu-64 (PET). PET imaging was utilized to identify regions of macrophage accumulation; these regions were further probed by MRI to visualize macrophage distribution at high resolution. In both PET and MR images the probes enhanced contrast at sites of vascular inflammation, but not in normal vessel walls. MRI was able to identify discrete sites of inflammation that were blurred together at the low resolution of PET. Macrophage content in the lesions was confirmed by histology.

Conclusions/Significance: The multimodal imaging approach allowed high-sensitivity and high-resolution mapping of biomarker distribution and may lead to a clinical method to predict plaque probability to rupture.

Collaboration


Dive into Carlos D. Correa's collaborations.

Top Co-Authors

Kwan-Liu Ma | University of California

Min Chen | Huazhong University of Science and Technology

Anna Tikhonova | University of California

Peter Lindstrom | Lawrence Livermore National Laboratory

Yu-Hsuan Chan | University of California

Peer-Timo Bremer | Lawrence Livermore National Laboratory