Direct Volume Rendering with Nonparametric Models of Uncertainty
Tushar Athawale, Bo Ma, Elham Sakhaee, Chris R. Johnson, Fellow, IEEE, and Alireza Entezari, Senior Member, IEEE
Fig. 1. Nonparametric models of uncertainty improve the quality of reconstruction and classification within an uncertainty-aware direct volume rendering framework. (a) Improvements in topology of an isosurface in the teardrop dataset (64 × 64 × 64) with uncertainty due to sampling and quantization. (b) Improvements in classification (i.e., bones in gray and kidneys in red) of the torso dataset with uncertainty due to downsampling.
Abstract—We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address the limitations of the existing DVR framework by extending it to nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art parametric rendering framework to 2D TFs for improved DVR classifications. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.
Index Terms—Volumes, uncertainty, nonparametric, 2D transfer function
1 INTRODUCTION
As visualization techniques continue to facilitate the exploration of scientific simulations and biomedical datasets, analysis of data uncertainties, inherent in all forms of acquisition, modeling, and representation, has emerged as an important research area. Uncertainties present in data, such as those intrinsic to acquisition or modeling (e.g., sampling, quantization), as well as those introduced within data processing (e.g., filtering/downsampling), adversely impact the reliability of visualizations. To facilitate reliable visualization in the presence of uncertainty, several studies have advocated redesigning visualization algorithms to treat data as probability distributions to account for various types of uncertainty [25, 57]. Quantifying the impact of uncertainty in the computational process and its propagation throughout the visualization pipeline poses several mathematical as well as algorithmic challenges.

• T. Athawale and C. R. Johnson are with the Scientific Computing & Imaging (SCI) Institute, University of Utah. E-mail: {tushar.athawale, crj}@sci.utah.edu
• B. Ma, E. Sakhaee, and A. Entezari are with the Department of CISE at the University of Florida, Gainesville, FL, 32611. E-mail: {bbo, esakhaee, entezari}@cise.ufl.edu

Visualization of uncertain data is an active field of research, including several advances in innovative ways for the visual depiction of uncertainty [4]. In contrast, analysis and propagation of uncertainty in the various stages of the rendering pipeline and quantifying their impact on transforming the uncertainty remain challenging tasks.
Recent studies [6, 14, 30] have considered sources of uncertainty within the visual analytics process and analyzed the contribution of each stage to the uncertainty associated with the volume data. In our work, we study the propagation of data uncertainty through the stages of the direct volume rendering (DVR) pipeline. DVR is a fundamental visualization technique for gaining insights into volumetric datasets. A transfer function (TF) plays a central role in DVR, as it translates scalar or multifield data to optical properties, such as color and opacity. The visual mappings produced by TFs help users understand interesting features or patterns in the dataset. Such a process of feature identification through TF space exploration is referred to as classification [17]. The classification task in DVR can be challenging when the data have uncertainty. The DVR of uncertain data by reusing the TF designed for the original function can lead to poor classification results [58]. A simple workaround would be to generate a new TF when data have uncertainty; but the design of TFs is known to be a time-consuming and laborious task, especially in the case of multidimensional TFs. Thus, developing new techniques that seek to improve the quality of visualizations of uncertain data while reusing the TF design for the original function is more desirable. An expressive rendering of uncertain data must preserve all visible features in the rendering of the ground truth data and also indicate the uncertainties engendered by the various stages of the rendering pipeline. A recently developed statistical framework by Sakhaee et al. [51] introduced a novel approach for DVR that addressed the issue of preserving TF designs of the original function for visualizations of uncertain data. In their approach, data uncertainty is integrated against a 1D TF in the reconstruction stage of the DVR pipeline.
Their framework opens up new directions for the exploration of uncertain data, because it allows for uncertainty propagation and aggregation within the reconstruction and the traditional classification stages. In the framework proposed by Sakhaee et al. [51], the data input into the DVR process are considered as a field of random variables described by parametric probability density functions (PDFs). Liu et al. [31] proposed a DVR framework for visualization of uncertain 3D data when PDFs are modeled using Gaussian mixture models (GMM). Their framework used an expensive Monte Carlo (MC) sampling approach for uncertainty estimation of interpolated samples of a DVR raycaster. Inspired by contributions on DVR for parametric- [51] and GMM- [31] based uncertainty, we propose a closed-form DVR framework for nonparametric density models. Recently, noise modeling using nonparametric distributions has been advocated over parametric distributions for taking into account the skewness or multimodality of distributions, and hence improving the precision of uncertainty visualizations [3, 47]. Although the extension to nonparametric distributions for DVR has been discussed in previous work [51], such an extension is challenging, especially from the computational cost point of view, and no recipe for implementation or empirical results has been provided. In our work, we present an efficient quantile interpolation technique for DVR of uncertain data, where uncertainty is characterized using nonparametric distributions. Read [49] first introduced the 1D quantile interpolation technique for the interpolation of histograms. Hollister and Pang [22] leveraged the quantile interpolation technique for bilinear interpolation of nonparametric distributions characterizing uncertain vector fields.
We present quantile interpolation for trilinear interpolation of nonparametric distributions, and we successfully integrate interpolated distributions with a DVR framework for the visualization of uncertain data. Although the state-of-the-art spline-based technique [51] explores 1D classification of uncertain scalar fields for DVR, the classification of uncertain scalar volumes with multidimensional transfer functions (TFs) and the visualization of multifield data remain challenging tasks. Specifically, the intensity-gradient magnitude (2D) TF has proved valuable due to its effectiveness in isolating complex boundaries with overlapping materials [27]. Unlike the previous work that dealt with uncertainties in the data and gradient field separately, we leverage the simultaneous estimation of uncertainty in both the scalar and gradient fields. Specifically, we apply a spline-based statistical framework to intensity-gradient magnitude 2D TFs and study its ramifications in visualizing bivariate datasets. We generalize the recently developed spline-based statistical framework [51] to nonparametric statistical models and 2D TFs for visualization of uncertain data. Specifically, we propose the following methods for the visualization of uncertain data:

• Given an uncertain scalar field, represented as a field of PDFs, we analytically derive the interpolation of the PDFs of the intensities for any arbitrary sample point along the viewing rays for DVR. Each grid point is modeled using a nonparametric PDF in contrast to a parametric one. A previous study [51] considered nonparametric models only as a possible venue for investigation within the framework for uncertainty visualization, but did not explore nonparametric models or their potential advantages. In our work, we fill this gap by proposing the use of the quantile interpolation technique for nonparametric statistics.
Specifically, we present an analytic formulation of the quantile interpolation technique for trilinear interpolation of nonparametric PDFs. Our closed-form formulation permits efficient integration of nonparametric statistics with a DVR framework. We demonstrate the effectiveness of our proposed nonparametric models through qualitative and quantitative comparisons with mean-field and parametric models.

• The quantile interpolation technique presents an example of order statistics, where quantiles are ordered using a cumulative density function for a random variable. We, thus, take advantage of order statistics to investigate uncertainty in ensemble data by devising a tool called the quartile view.

• Similar to intensities, we analytically derive a formulation for the interpolation of PDFs of the gradient magnitudes for samples along the viewing rays. The PDFs of intensities and gradient magnitudes are then integrated against a 2D TF (gradient magnitude vs. intensity). Improved classification of uncertain scalar fields using this approach signifies the importance of the simultaneous handling of uncertainty in data and its gradient field.

• We demonstrate an application of our proposed DVR framework for the visualization of ensemble and downsampled data. We also present an application of the reconstruction of PDFs for DVR of bivariate data.

The paper is organized as follows: In Sec. 2, we review the prior work on uncertainty visualization and multidimensional TFs. In Sec. 3.1, we briefly revisit the state-of-the-art theory [51] on the interpolation of uncertain scalar fields and linear interpolation of histograms in 1D using the quantile interpolation technique [49]. We then present an extension of the quantile interpolation technique for trilinear interpolation of nonparametric PDFs and its integration into a DVR framework in Sec. 3.2 and Sec. 3.3, respectively. In Sec. 3.4, we describe our quartile view technique. In Sec. 4.1 and Sec.
4.2, we describe a spline-based model for the interpolation of uncertain gradient fields and propose integration of interpolated intensity and gradient magnitude distributions against 2D TFs for visualizations. In Sec. 5, we demonstrate experimental results for the visualization of uncertain data using reconstructed uncertain scalar fields and uncertain gradient fields. Finally, we conclude our work and discuss possible future work in Sec. 6.
2 RELATED WORK
Uncertainty visualization has been recognized as one of the top challenges in the visualization community due to its significance in decision-making [24, 25, 57, 59]. Specifically, uncertainty visualization has been shown to be important in avoiding misleading interpretations regarding the underlying data. Whereas classical visualization approaches consider uncertainty associated with the volume data [41, 48], recent works account for aggregated uncertainty due to rendering algorithms [6, 14, 51]. Brodlie et al. [4] discuss the impact of propagating uncertainty in the data to uncertainty in the final image. They define the propagation problem as determining the PDF of the output entities from the PDF of the input entities, and discuss that often a MC sampling method is required to obtain the PDF of the output. In this paper, we derive PDFs analytically. Correa et al. [6] describe uncertainty propagation and aggregation for data transformations, such as regression, principal component analysis, and k-means clustering. Statistical uncertainty analysis of topological features of data, such as level sets and critical points, has drawn increasing attention in the study of data uncertainty. Contour [56], curve [37], and surface [15] boxplots extend the concept of functional-depth ranking [32] for deriving quantiles that represent the spatial variability of ensembles of isocontours, arbitrary curves, and 2D images, respectively. The level-crossing probability method of Pöthkow and Hege [46, 47] and the uncertainty-aware marching cubes algorithm proposed by Athawale et al. [1–3] demonstrated the benefits of nonparametric statistical noise modeling over parametric modeling for deriving positional uncertainty in level sets. Hixels [55] summarized information of a brick of volume as a histogram for visualizing fuzzy isosurfaces. Suter et al. [53] exploited shape similarities based on Hausdorff distance for extracting isosurfaces from 3D scalar fields. Günther et al.
[16] and Favelier et al. [12] devised statistical approaches for characterizing spatial variations in critical points of uncertain data. Otto et al. performed statistical gradient field analysis for visualizing topological variations of uncertain 2D [39] and 3D [40] vector fields. Recently, He et al. [18] devised a nonparametric method, known as surface density estimation (SDE), for analyzing spatial inconsistency in level sets. In the context of DVR, a considerable body of literature has analyzed uncertainty propagation in the DVR pipeline and the impact of errors on final visualizations. Fout and Ma [14] discussed the contribution of each stage of DVR (quantization, reconstruction/filtering, classification, shading, and integration) to the uncertainty associated with volume data. Kniss et al. [28] presented rendering based on probabilistic classification that allows the user to interactively explore the uncertainty and the information computed during fuzzy segmentation. Pfaffelmoser et al. [44] assumed Gaussian-distributed data uncertainty for visualizing geometric uncertainty in isosurfaces extracted using DVR. Etiene et al. [10, 11] proposed a novel strategy for verifying the correctness of DVR implementations by analyzing the correlation between discretization errors caused by sampling along viewing rays and the rendering quality. Kronander et al. [30] evaluated the effects of the propagation of numerical errors, caused by finite precision of data representation and processing, on the volume rendered images. Djurcilov et al. [8] employed features such as speckle, texture, or noise to represent uncertainty in the volume rendering process.
In medical volume rendering, probabilistic animation has been used to visualize uncertainty [33]. The focus of our work is on advocating nonparametric noise modeling over parametric noise modeling for preserving the TF design of the original function when performing DVR of uncertain scalar fields. For DVR of uncertain data, we adopt a probabilistic view of TFs since it allows incorporating uncertainty in both classification and visual parameter mapping. A probabilistic view is proposed by Drebin et al. [9], where the application of TFs involves two steps: (1) map each sample to a set of material probabilities, and assign each material an RGBA color, and (2) compute the color for each sample as a weighted average of material colors based on material probabilities. Please refer to Sec. 3.3 for more details regarding the probabilistic view.
Traditionally, a TF classifies voxels to optical properties, such as colors and opacity, according to a 1D function of the scalar values. The function can be designed either manually, which is a tedious task, or automatically based on attributes of the underlying volume data [45]. Since 1D TF classification has limited power in exploring and classifying the embedded features in the data, subsequent studies have considered TFs with multiple dimensions. Multidimensional TFs, as proposed by Kniss et al. [26], have been proven superior to traditional 1D transfer functions due to their ability to isolate complex materials with overlapping intensities. In particular, the gradient magnitude and second-order derivatives are commonly used as additional properties to expand the TF domain [19, 34, 54]. In this work, we demonstrate the benefits of incorporating gradient magnitude uncertainty, computed analytically within the reconstruction stage, into a 2D TF, where a 2D TF is characterized by intensity and gradient magnitude. Broadly speaking, we advocate the extension of a methodology involving the integration of 1D TFs against data uncertainty [51] to 2D TFs for the improved efficiency and reliability of DVR classifications.
3 INTERPOLATION OF UNCERTAIN SCALAR FIELD FOR DVR

3.1 Mathematical Model and the State of the Art
We state the mathematical model for our methods and briefly revisit the state of the art in interpolation of uncertain data when the uncertainty is modeled as probability distributions. Given 3D discrete uncertain scalar data f(v_i), the uncertainty can be modeled at each voxel v_i by a random variable X_i. We assume the random variables modeling uncertainty at voxels to be independent. The reconstruction of the random field at an arbitrary position v results in a random variable X, which is a linear combination of the random variables at the positions v_i: X = Σ_i w_i X_i, with weights w_i = φ(v − v_i), where φ: R³ → R is the basis function that determines the weights for the neighboring voxels contributing to the interpolated sample v. When the data are sampled on a Cartesian grid, φ is commonly chosen as a tensor-product trilinear B-spline. The goal is to analytically obtain the probability distribution of X at any arbitrary sampling point v. The probability distribution of a linear combination of independent random variables is the convolution of their individual distributions [20]. Therefore, the PDF of X can be derived from the convolution of the PDFs of the X_i's, i.e., pdf_X(x) = pdf_{w_1 X_1}(x) ∗ ··· ∗ pdf_{w_K X_K}(x), where pdf_X represents the uncertainty at the interpolated point, and pdf_{w_i X_i} denotes the scaled distribution of a random variable X_i: pdf_{w_i X_i}(x) = (1/w_i) pdf_{X_i}(x/w_i), for 1 ≤ i ≤ K. A common choice for modeling uncertainty is the Gaussian distribution. When the PDFs of the X_i's are represented as Gaussian distributions, the PDF of the interpolated point X is also a Gaussian distribution whose mean and variance are linearly transformed from the means and variances of the X_i's [13]. Liu et al. [31] modeled the PDFs of the X_i's using Gaussian mixture models (GMM).
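The convolution identity above can be illustrated with a short numerical sketch (ours, not the paper's implementation; the two vertex densities and the weights below are made up for illustration). The PDF of X = w₁X₁ + w₂X₂ is obtained by convolving the scaled vertex PDFs on a shared grid, and the result can be cross-checked against Monte Carlo sampling:

```python
import numpy as np

# Sketch: pdf of X = w1*X1 + w2*X2 for independent X1, X2 via convolution of
# the scaled densities pdf_{wi*Xi}(x) = pdf_{Xi}(x/wi)/wi (assumed example).
rng = np.random.default_rng(0)
w1, w2 = 0.3, 0.7
dx = 0.005
x = np.arange(-4, 4, dx)

def scaled_pdf(pdf, w):
    return pdf(x / w) / w                      # density of w*X when X ~ pdf

# Illustrative vertex densities: X1 ~ Uniform(0,1), X2 ~ N(0.5, 0.1^2)
pdf1 = lambda t: ((t >= 0) & (t < 1)).astype(float)
pdf2 = lambda t: np.exp(-0.5 * ((t - 0.5) / 0.1) ** 2) / (0.1 * np.sqrt(2 * np.pi))

pdf_X = np.convolve(scaled_pdf(pdf1, w1), scaled_pdf(pdf2, w2)) * dx
x_conv = np.arange(len(pdf_X)) * dx + 2 * x[0]  # support of the convolution

# Monte Carlo cross-check of the mean of X
samples = w1 * rng.uniform(0, 1, 200_000) + w2 * rng.normal(0.5, 0.1, 200_000)
mean_analytic = np.sum(x_conv * pdf_X) * dx
print(abs(mean_analytic - samples.mean()) < 0.01)   # True
```

For K contributing voxels the same convolution is applied K − 1 times, which is exactly the exponential blow-up discussed below that motivates the quantile approach.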
Their framework comprised fitting a GMM to the uncertain data f(v_i) using expectation maximization [7], and involved MC sampling of the estimated GMM for the PDF estimation at the interpolation point v. Sakhaee et al. [51] modeled the PDFs of the X_i's using compactly supported box splines, e.g., uniform distributions, and demonstrated the benefits of a box spline framework over a Gaussian assumption for DVR of uncertain data. Even though the box-spline method allows for interpolation of nonparametric distributions in a closed form [51] (described again in the supplementary material for this paper), it is computationally expensive because of its exponential nature. The exponential time complexity of the convolution-based nonparametric approach is a challenge to handle in the DVR reconstruction stage even when the system has high-performance hardware. In our contribution, we take advantage of the linear time complexity of the quantile interpolation technique for interpolation of nonparametric distributions. Read [49] proposed a quantile interpolation method for 1D interpolation of histograms. To summarize their approach, a histogram or nonparametric density characterizing the PDF at each vertex v_i is broken into a fixed number of quantiles, and the respective quantiles at each vertex are interpolated to compute a probability distribution at the interpolated point v. Thus, the computational complexity of computing the PDF at an interpolated position is linearly proportional to the number of quantiles. The use of quantile interpolation in the context of uncertainty visualization was first advocated by Hollister and Pang [21]. The quantile interpolation method was shown to have two desirable qualities in the context of uncertainty visualization. First, quantile interpolation preserves the modality of the probability distributions at the cell vertices v_i. Second, the variance of the interpolated data is bounded from below by the variances of the probability distributions at the grid vertices.
Moreover, quantile interpolation of histograms has been shown to better capture the shape of the interpolated distribution than the interpolation of parametric or GMM distributions. The quantile interpolation allows for a closed-form solution of the PDF at the interpolated position, and hence allows for efficient and accurate PDF computations.

Let pdf_{X_1}(x) and pdf_{X_2}(x) be the continuous probability distributions for random variables X_1 and X_2 at 1D cell vertices v_1 and v_2, respectively. The probability distributions pdf_{X_1}(x) and pdf_{X_2}(x) can be estimated from noise samples using histograms or kernel density estimation (KDE) [42, 50]. Suppose the distributions pdf_{X_1}(x) and pdf_{X_2}(x) are broken into q quantiles each, where qval denotes the quantile value. For example, setting qval = 0.5, qval = 0.25, and qval = 0.125 results in the median (q = 2), quartiles (q = 4), and octiles (q = 8), respectively.

Let Q_{i1}, Q_{i2}, ···, Q_{iq} denote the q quantiles ordered by the cumulative density function (CDF), with widths w_{i1}, w_{i2}, ···, w_{iq}, respectively, for a random variable X_i associated with grid vertex v_i when the quantile value is set to qval. The quantile representation for the PDF of each random variable X_i is a piecewise constant function in which each quantile piece Q_{ij} assumes a constant probability density, i.e., Pr(Q_{ij}) = qval/w_{ij}. We, therefore, represent pdf_{X_i}(x) as a tuple {Pr(Q_{i1}), ···, Pr(Q_{iq})} in its quantile representation, in which each entry of the tuple denotes the probability density over its respective quantile. Let α indicate the spatial distance parameter between the 1D cell vertices v_1 and v_2. Let {Q_1, ···, Q_q} denote the quantile representation for the linearly interpolated random variable X = αX_1 + (1 − α)X_2 when the quantile value is qval. Based on the previous work in [49] and [22], the probability density for the j'th quantile Pr(Q_j) of the interpolated random variable can be computed as follows:

Pr(Q_j) = Pr(Q_{1j}) Pr(Q_{2j}) / [(1 − α) Pr(Q_{1j}) + α Pr(Q_{2j})] = qval / (α w_{1j} + (1 − α) w_{2j})   (1)

where α ∈ [0, 1] and j ∈ {1, 2, ···, q}. As can be seen from Eq. 1, the quantile width for the j'th quantile (quantile value = qval) of the linearly interpolated random variable X is essentially a linear interpolation of the j'th quantile widths of the random variables at the grid vertices. Since the arithmetic operations in Eq. 1 are applied quantile-wise for each quantile j ∈ {1, 2, ···, q}, the computational complexity of the quantile interpolation is linearly proportional to the number of quantiles q. As the number of quantiles q approaches infinity, the PDF at an interpolated position converges to a closed-form continuous probability distribution.
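The quantile-wise arithmetic of Eq. 1 can be sketched in a few lines of Python (illustrative, not the paper's code; the two vertex densities are made-up examples, one of them skewed to show that no parametric assumption is needed). Both sides of Eq. 1 are evaluated and checked for agreement:

```python
import numpy as np

# Sketch of Eq. 1: each vertex PDF is reduced to its q quantile widths, and
# interpolation acts quantile-wise (assumed example distributions).
rng = np.random.default_rng(1)
q, qval = 8, 1.0 / 8          # octiles
alpha = 0.35

def quantile_widths(samples, q):
    edges = np.quantile(samples, np.linspace(0, 1, q + 1))
    return np.diff(edges)     # widths of quantiles Q_{i1} ... Q_{iq}

w1 = quantile_widths(rng.normal(0.0, 1.0, 50_000), q)    # vertex v1
w2 = quantile_widths(rng.exponential(1.0, 50_000), q)    # vertex v2 (skewed)

# Right-hand side of Eq. 1: widths interpolate linearly
w = alpha * w1 + (1 - alpha) * w2
pr = qval / w                 # Pr(Q_j) of X = alpha*X1 + (1-alpha)*X2

# Left-hand side of Eq. 1: harmonic combination of the vertex densities
pr1, pr2 = qval / w1, qval / w2
pr_lhs = pr1 * pr2 / ((1 - alpha) * pr1 + alpha * pr2)
print(np.allclose(pr, pr_lhs))   # True
```

The loop-free, per-quantile arithmetic is what makes the fragment-shader implementation linear in q.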
Recently, Hollister and Pang [22] presented a closed-form solution for a bilinear interpolation of histograms/nonparametric distributions on a 2D grid using the quantile interpolation technique. We extend the derivation for quantile interpolation of histograms or nonparametric distributions to the 3D case using a similar approach, as proposed in [22]. Let α, β, and γ denote the spatial distance parameters for the vertices v_i of a 3D grid cell along the three dimensions. Deriving the quantile interpolation in the interior of a 3D cell comprises three steps: first, an interpolated PDF along the cell edges can be computed by applying Eq. 1 to the PDFs at the grid vertices with parameter α. Second, the interpolated PDFs in the interior of cell faces can be computed by again applying Eq. 1 to the interpolated PDFs computed in step one with parameter β. Finally, the interpolated PDF in the interior of the 3D cell can be computed by applying Eq. 1 to the interpolated PDFs computed in step two with parameter γ. For brevity, we represent the probability Pr(Q_{ij}) as Pr_i for the j'th quantile of random variable X_i. Then the formula for the interpolated PDF of the j'th quantile, i.e., Pr(Q_j), in 3D is as follows:

Pr(Q_j) = (Pr_1 Pr_2 Pr_3 Pr_4 Pr_5 Pr_6 Pr_7 Pr_8) / (t_1 t_2 t_3 t_4 t_5 t_6 t_7)   (2)

where:
t_1 = (1 − α) Pr_1 + α Pr_2,
t_2 = (1 − α) Pr_3 + α Pr_4,
t_3 = (1 − α) Pr_5 + α Pr_6,
t_4 = (1 − α) Pr_7 + α Pr_8,
t_5 = (1 − β) Pr_1 Pr_2 / t_1 + β Pr_3 Pr_4 / t_2,
t_6 = (1 − β) Pr_5 Pr_6 / t_3 + β Pr_7 Pr_8 / t_4,
t_7 = (1 − γ) Pr_1 Pr_2 Pr_3 Pr_4 / (t_1 t_2 t_5) + γ Pr_5 Pr_6 Pr_7 Pr_8 / (t_3 t_4 t_6),
α ∈ [0, 1], β ∈ [0, 1], and γ ∈ [0, 1].

Similar to Eq. 1, Eq. 2 simplifies to Pr(Q_j) = qval/w_j, where w_j denotes the width of the j'th quantile of the interpolated random variable X. The width w_j is essentially a trilinear interpolation of the widths of the j'th quantiles for the PDFs of the random variables X_1, ···, X_8 at the 3D cell vertices, evaluated with interpolation parameters α, β, and γ. We compute the formula in Eq. 2 using the MATLAB Symbolic Math Toolbox. We validate Eq.
2 through an experiment on synthetic data. For our experiment, we define histograms representing continuous probability distributions at the eight vertices of a 3D cell. We randomly draw samples from each histogram and perform KDE on the samples to estimate continuous PDFs, pdf_{X_1}(x), ···, pdf_{X_8}(x), from the samples. A fixed number of quantiles is then computed for the estimated PDFs at each of the eight vertices; e.g., the quantile value qval = 0.001 results in q = 1000 quantiles. For a reduced number of noise samples, the quantile interpolation results can fluctuate significantly for a relatively high number of quantiles, e.g., Fig. 2(f), because of the poor KDE caused by the reduced sample size.

Fig. 2. A probability distribution at a 3D interpolated position computed using MC sampling (image (a)) vs. our analytic formula (image (f)) for quantile interpolation (Eq. 2). Images (b) and (c) visualize interpolated PDFs for GMM noise models. Images (d) and (e) visualize interpolated PDFs for coarse quantile representations. Images (g-i) show plots similar to images (d-f), respectively, for reduced sample size.

We describe a three-step approach for DVR of uncertain data when data uncertainty is characterized by histograms or nonparametric distributions. In the first step, we preprocess the uncertain data. At each grid position, we estimate a histogram or nonparametric density from noise samples. We then partition the continuous distribution pdf_{X_i}(x) at each grid position v_i into q quantiles based on a user-set quantile value qval and compute a quantile representation of pdf_{X_i}(x). We denote the quantile representation of pdf_{X_i}(x) as a tuple {Pr(Q_{i1}), ···, Pr(Q_{iq})} (see Sec. 3.1 for details). The quantile representations for the PDFs are provided as inputs to the DVR framework. In the second step, for each sample along a viewing ray of the DVR framework, we look up the quantile-based PDFs at the neighboring eight vertices in a fragment shader. We apply Eq. 2 to the neighboring vertex quantile densities and compute, in closed form, a quantile representation {Pr(Q_1), ···, Pr(Q_q)} for the continuous distribution pdf_X(x) of the interpolated random variable X.
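The closed form of Eq. 2 can also be sanity-checked numerically: for a single quantile j, three nested applications of the 1D rule (Eq. 1) along the edges, faces, and cell interior must agree with a direct trilinear interpolation of the quantile widths. A minimal Python sketch (ours, not the paper's MATLAB or shader code; the vertex widths below are made up):

```python
import numpy as np

# Sketch validating Eq. 2 for one quantile j: nested 1D interpolation equals
# trilinear interpolation of the quantile widths (assumed random widths).
rng = np.random.default_rng(2)
qval = 0.25
alpha, beta, gamma = 0.2, 0.6, 0.8

w = rng.uniform(0.5, 2.0, 8)          # widths of the j'th quantile at 8 vertices
p = qval / w                          # Pr_1 ... Pr_8

def interp(pa, pb, t):                # the 1D rule of Eq. 1
    return pa * pb / ((1 - t) * pa + t * pb)

# Step 1-3: edges -> faces -> cell interior
e = [interp(p[0], p[1], alpha), interp(p[2], p[3], alpha),
     interp(p[4], p[5], alpha), interp(p[6], p[7], alpha)]
f = [interp(e[0], e[1], beta), interp(e[2], e[3], beta)]
pr_nested = interp(f[0], f[1], gamma)

# Equivalent view: trilinear interpolation of the quantile widths themselves
wa = alpha * w[[0, 2, 4, 6]] + (1 - alpha) * w[[1, 3, 5, 7]]
wb = beta * wa[[0, 2]] + (1 - beta) * wa[[1, 3]]
w_j = gamma * wb[0] + (1 - gamma) * wb[1]
print(np.isclose(pr_nested, qval / w_j))   # True
```

Expanding the three nested `interp` calls symbolically yields exactly the eight-numerator/seven-denominator form of Eq. 2.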
In the third step, the opacity and color for the interpolated random variable X can be computed by applying the uncertainty integration framework proposed by Sakhaee and Entezari [51] as follows:

E(TF(X)) = ∫ TF(x) pdf_X(x) dx   (3)

where TF(x) is the color and opacity sampled at intensity x in the TF domain, and E(TF(X)) represents the expected value of the classified color and opacity. The piecewise constant quantile representation for the interpolated pdf_X(x), i.e., the tuple t = {Pr(Q_1) = qval/w_1, ···, Pr(Q_q) = qval/w_q}, may be integrated with a TF in different ways to produce meaningful DVR visualizations. We propose two integration schemes, namely quantile range and quantile mean.

Quantile range technique:
In this method, we integrate all intensities contained in a quantile with the TF. Suppose [a, b] denotes the domain of the interpolated random variable X. By substituting pdf_X(x) as a quantile-based piecewise constant density function represented by the tuple t in Eq. 3, we get the following formula for the expected fragment color:

E(TF(X)) = ∫_a^{a+w_1} TF(x) Pr(Q_1) dx + ··· + ∫_{b−w_q}^{b} TF(x) Pr(Q_q) dx
         = (qval/w_1) ∫_a^{a+w_1} TF(x) dx + ··· + (qval/w_q) ∫_{b−w_q}^{b} TF(x) dx   (4)

In summary, the expected fragment color can be computed by averaging the TF over each quantile and performing a weighted sum of the average colors computed for each quantile, where each weight is equal to qval.

Quantile mean technique:
In this method, we integrate the quantile mean intensities with the TF. Let m_1, ···, m_q denote the quantile means for the quantiles Q_1, ···, Q_q, respectively, of the interpolated random variable X. Then the discrete probability density for the mean of the j'th quantile is Pr(x = m_j) = p_j/p, in which p_j = qval/w_j, and p is a normalization constant, i.e., p = qval/w_1 + ··· + qval/w_q. The discretized version of Eq. 3 for the quantile mean technique results in the following formula for the expected fragment color:

E(TF(X)) = Σ_{j=1}^{q} TF(x = m_j) Pr(x = m_j)   (5)
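The two schemes can be sketched side by side for a single ray sample (a minimal illustration, not the paper's shader code; the opacity-ramp TF, the example distribution, and the use of quantile midpoints as a stand-in for the quantile means are all assumptions made for this sketch):

```python
import numpy as np

# Sketch of the two TF integration schemes (Eqs. 4 and 5) for one ray sample.
def expected_tf_quantile_range(tf, edges, qval, n=64):
    # Eq. 4: average the TF over each quantile [a, b], weight each by qval
    return sum(qval * np.mean(tf(np.linspace(a, b, n)))
               for a, b in zip(edges[:-1], edges[1:]))

def expected_tf_quantile_mean(tf, edges, qval):
    # Eq. 5: evaluate the TF at the quantile means m_j (midpoints used here
    # as a stand-in), weighted by the normalized densities p_j / p
    w = np.diff(edges)
    m = 0.5 * (edges[:-1] + edges[1:])
    p_j = qval / w
    return np.sum(tf(m) * p_j) / np.sum(p_j)

tf = lambda x: np.clip(x, 0.0, 1.0)                     # toy opacity ramp
samples = np.random.default_rng(4).normal(0.5, 0.2, 100_000)
edges = np.quantile(samples, np.linspace(0, 1, 5))      # quartiles, qval = 0.25
print(expected_tf_quantile_range(tf, edges, 0.25))
print(expected_tf_quantile_mean(tf, edges, 0.25))
```

The range scheme integrates the TF over every intensity in each quantile, whereas the mean scheme collapses each quantile to a single representative intensity, trading accuracy for fewer TF lookups.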
Memory and Time Complexity:
We comment on the memory and time complexity of our approach for DVR with nonparametric statistics. The preprocessing step is computationally expensive. It involves computing KDE at each grid vertex from noise samples [42, 50] and reducing the continuous distributions to their quantile representations. Let L × M × N denote the grid size, and q denote the number of quantiles at each grid vertex. The memory consumed by the input data for our nonparametric DVR framework is, therefore, L × M × N × (q + 1), in which q + 1 quantile endpoints per vertex encode q quantiles (Eq. 2). The computational complexity of the quantile interpolation is linearly proportional to the number of quantiles because of the quantile-wise arithmetic of probability distributions in Eq. 2. Hence, the computational complexity of our DVR framework increases linearly with the number of quantiles.

The quantile interpolation is an example of order statistics, where quantiles are ranked based on the CDF of a random variable. Specifically, the j'th quantile (ranked by CDF) of an input probability distribution pdf_{X_i}(x) corresponds to the j'th quantile of the distribution computed using quantile interpolation (see Eq. 2). We take advantage of the order statistics exhibited by quantile interpolation to derive a box-plot-like view of the input data, which we refer to as the quartile view. Specifically, we visualize the populations corresponding to three quantiles, i.e., the first 25% (lower quartile), the middle 50%, and the last 25% (upper quartile), in the quartile view. We compute the expected color for each of the three populations with our quantile range method (Sec. 3.3). The middle 50% population represents a visualization corresponding to the interquartile range (IQR) of the input data, which is considered a robust range in order statistics. Also, the quartile view can help users understand uncertainty in the input data by examining commonalities and differences among the visualizations for the lower, middle 50%, and upper quartiles. Our results in Fig. 5 and Fig.
7 illustrate the quartile views.

FOR CLASSIFICATION OF UNCERTAIN DATA
The reconstruction of the gradient field from the scalar field is an essential process in volume rendering [36]. The gradient information can be employed in both multidimensional TFs (for material classification) [9] and various local illumination techniques [17]. The gradient is mathematically the first-order derivative of the scalar field f(v), ∇f(v) = (∂f(v)/∂x, ∂f(v)/∂y, ∂f(v)/∂z)^T, and points in the direction of steepest ascent. The reconstruction of the gradient field at an arbitrary position v in an uncertain scalar field results in a trivariate random vector Y = (Y_x, Y_y, Y_z)^T, where Y_x, Y_y, and Y_z are random variables denoting the uncertainties of the partial derivatives along directions x, y, and z, respectively. The uncertain gradient Y can be obtained from linear combinations of the random variables at the neighboring voxels v_i of v:

(Y_x, Y_y, Y_z)^T = sum_i (α_i, β_i, γ_i)^T X_i, with weights (α_i, β_i, γ_i)^T = ρ(v − v_i),    (6)

where ρ : R^3 → R^3 is the basis function that determines the amount of the contribution of each neighboring random variable X_i to the gradient Y at v. The basis function ρ can be represented as a derivative reconstruction filter obtained by convolving a continuous interpolation filter with a digital derivative filter [38]. In the context of volume rendering, the most common choice is the combination of linear interpolation and central differences. Voxels that fall outside the support of ρ are assigned zero weights. As a result, only M neighbors of v are involved in the gradient estimation, i.e., 1 ≤ i ≤ M. As shown in Eq. 6, the uncertain gradient estimate at an arbitrary point v is also obtained by linear combinations of the random variables X_i of the uncertain scalar field.
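The common choice named above (linear interpolation combined with central differences) can be sketched by propagating each ensemble member through the same linear operator, so that every gradient sample is a fixed linear combination of that member's voxel values X_i, as in Eq. 6. This is an illustrative sketch with hypothetical function names, not the paper's GPU implementation.

```python
import numpy as np

def trilerp(vol, p):
    """Trilinear interpolation of a 3D array at continuous point p = (x, y, z)."""
    x0, y0, z0 = (int(np.floor(c)) for c in p)
    dx, dy, dz = p[0] - x0, p[1] - y0, p[2] - z0
    c = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = (1 - dx, dx)[i] * (1 - dy, dy)[j] * (1 - dz, dz)[k]
                c += w * vol[x0 + i, y0 + j, z0 + k]
    return c

def gradient_samples(ensemble, p, h=1.0):
    """Propagate ensemble uncertainty to the gradient at p (cf. Eq. 6):
    each member is interpolated trilinearly and differentiated with
    central differences, so every gradient sample is a fixed linear
    combination of that member's voxel values."""
    grads = []
    for vol in ensemble:  # one 3D array per ensemble member
        g = [(trilerp(vol, p + h * e) - trilerp(vol, p - h * e)) / (2 * h)
             for e in np.eye(3)]
        grads.append(g)
    return np.asarray(grads)  # shape (n_members, 3): samples of Y
```

The returned samples characterize the trivariate random vector Y; their mean gives the mean gradient μ_Y used below.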
Therefore, the distribution of the gradient at v can also be derived analytically using the spline framework. For example, when data uncertainties are modeled as (scaled) uniform distributions, the distribution of each partial derivative is also a box spline whose direction vectors are the interpolation weights. In this setup, the joint distribution of the random vector Y, i.e., the uncertain gradient, is analytically derived as a trivariate box spline whose directions are obtained from the weights in Eq. 6, i.e., the i'th direction vector is [α_i, β_i, γ_i]^T.

Unlike the gradient, it is challenging to analytically derive the PDF of the gradient magnitude, given that the random variables Y_x, Y_y, and Y_z are correlated, even when these random variables are modeled as parametric normal distributions [35]. Therefore, the directional derivative along the mean gradient direction has been commonly used as an approximation in uncertainty visualization [43]. Mathematically, the gradient magnitude ‖Y‖ ≈ (μ_Y / ‖μ_Y‖) · Y, where μ_Y is the mean gradient, which can be obtained by substituting the X_i with their mean values in Eq. 6.

Fig. 3. Nonparametric (third and fourth rows) vs. parametric statistics (first and second rows) for visualizations of the tangle function: The ground truth volume visualized in (a) is mixed with noise to generate an ensemble representing uncertain data. The DVR visualizations for the various statistical models in the first and third rows are rendered with the same TF (image (g)). The second and fourth rows quantify and visualize the differences with respect to the ground truth for the corresponding DVR images in the first and third rows, respectively. In images (r) and (u), the white arrows illustrate positions of high reconstruction accuracy, and the dotted pink boxes enclose positions that illustrate error bands.
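The mean-gradient approximation above admits a simple Monte Carlo sketch (illustrative only; the paper instead derives the joint density in closed form as a box spline): given samples of the uncertain gradient Y, the magnitude is replaced by the signed directional derivative along μ_Y / ‖μ_Y‖, and the expected color follows by averaging a 2D TF over the resulting (intensity, magnitude) sample pairs.

```python
import numpy as np

def expected_color_2d_mc(x_samples, grad_samples, tf2d):
    """Monte Carlo stand-in for integrating a 2D TF against the joint
    density of intensity and gradient magnitude.
    x_samples: (n,) intensity samples X at a viewing-ray position.
    grad_samples: (n, 3) gradient samples Y at the same position.
    The gradient magnitude is approximated by the directional derivative
    along the mean gradient direction, ||Y|| ~ (mu_Y / ||mu_Y||) . Y."""
    mu = grad_samples.mean(axis=0)               # mean gradient mu_Y
    direction = mu / np.linalg.norm(mu)          # unit mean gradient direction
    mag = grad_samples @ direction               # (n,) directional derivatives
    colors = np.stack([tf2d(x, m) for x, m in zip(x_samples, mag)])
    return colors.mean(axis=0)                   # expected RGBA color
```

This sample-based estimate is what a closed-form bivariate box-spline evaluation replaces: the analytical route avoids the sampling noise and per-frame integration cost entirely.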
Since the directional derivative is also linear with respect to the original scalar field, we can formulate the interpolation of the intensity X and the gradient magnitude ‖Y‖ as linear combinations of the M neighboring voxels of v, i.e., X = sum_{i=1}^{M} w_i X_i and ‖Y‖ = sum_{i=1}^{M} u_i X_i. The joint PDF of the bivariate random variable Z = [X, ‖Y‖]^T is exactly described by a bivariate box spline with parameters [[w_1, u_1]^T, ..., [w_M, u_M]^T]. Similar to Eq. 3, we compute the color at position v by integrating the 2D TF with the joint PDF of intensity and gradient magnitude Z, i.e., E(TF(Z)) = ∬ TF(x, y) pdf_Z(x, y) dx dy.

RESULTS AND DISCUSSIONS
We demonstrate the effectiveness of our nonparametric noise modeling (Sec. 3.3) for DVR on two kinds of synthetic and real datasets: ensembles of volumetric datasets (Fig. 1a, Fig. 3, and Fig. 6) and downsampled versions of high-resolution datasets (Fig. 1b, Fig. 8, and Fig. 9). Next, we demonstrate improved DVR classifications with 2D TFs (Fig. 10, Fig. 11), where the underlying uncertain data are represented as probability distributions (Sec. 4) as opposed to mean statistics. All volume renderings are performed on a machine with an Nvidia Quadro P6000 GPU with 24 GB of memory. We integrate the fragment shaders for our statistical frameworks into the Voreen volume rendering engine (http://voreen.uni-muenster.de) for DVR of uncertain data.
Ensemble Datasets:
In Fig. 3, we perform a qualitative and quantitative assessment of the DVR reconstruction accuracy of different noise models on the synthetic tangle [29] dataset. Fig. 3(a) visualizes the ground truth tangle function for a fixed TF design (Fig. 3(g)). The same TF is used for all visualizations in Fig. 3. The ground truth volume is mixed with noise to generate an ensemble of 50 members representing uncertain data. Specifically, we inject noise samples randomly drawn from a bimodal probability distribution, in which the mode with 80% probability concentration is centered around the ground truth, and the mode with 20% probability concentration (representing outliers) is centered far away from the ground truth. The injected noise thus has a shape similar to a one-tailed asymmetric distribution.

Fig. 3(b) and Fig. 3(c) visualize the results for the mean-field and uniform noise models, respectively. The presence of the outliers in the noise samples substantially shifts the sample mean at each grid vertex, hence breaking the regions connecting the blobs of the tangle function. The reconstruction with the uniform model still shows improved topological recovery compared to the mean-field. The visualizations using the Gaussian, GMM (MC), and GMM (ordered) models in Fig. 3(d-f), respectively, and our proposed nonparametric models in Fig. 3(k-p) show further reconstruction improvements.

In the second and fourth rows of Fig. 3, we perform a quantitative analysis of the reconstruction accuracy for the Gaussian, GMM (MC), GMM (ordered), and nonparametric statistical models. Specifically, at each pixel, we compute the absolute difference between the mean of the RGB values of a DVR image specific to a noise model and the DVR of the ground truth (Fig. 3(a)). We then visualize the computed differences using a blue-yellow diverging color map, in which yellow and blue indicate relatively high- and low-difference regions, respectively. In Fig. 3, we also report the root mean squared error (RMSE) for each noise model.
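The noise-injection procedure described above can be sketched as follows. The parameter values (noise scale, outlier shift, seed) are illustrative assumptions, not the ones used in the paper; only the 80%/20% bimodal structure comes from the text.

```python
import numpy as np

def make_ensemble(ground_truth, n_members=50, outlier_frac=0.2,
                  noise_scale=0.05, outlier_shift=0.5, seed=0):
    """Generate an ensemble by injecting bimodal noise: with 80%
    probability a sample stays near the ground truth, and with 20%
    probability it is shifted far away (the outlier mode), yielding a
    one-tailed asymmetric noise distribution."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        noise = rng.normal(0.0, noise_scale, ground_truth.shape)
        outliers = rng.random(ground_truth.shape) < outlier_frac
        noise[outliers] += outlier_shift   # far-away outlier mode
        members.append(ground_truth + noise)
    return members
```

With these (hypothetical) parameters, the per-voxel sample mean is pulled toward the outlier mode by roughly outlier_frac × outlier_shift, which is exactly the mean-shift artifact the mean-field renderings exhibit.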
Note that the difference images for the mean and uniform noise models are not shown in Fig. 3 because of their relatively high RMSE values. We also measure rendering frame rates (Fig. 4(b)) for the tangle dataset using Nvidia's FrameView tool. Our proposed quantile mean method achieves frame rates comparable to the parametric noise models for eight-quantile representations. Note that the frame rate is not shown for the GMM (MC) model in Fig. 4(b), as the GMM (MC) noise model [31] uses screen-space integration of MC samples to produce static images. Fig. 5 visualizes a quartile view (Sec. 3.4) of the tangle dataset. The magenta boxes in Fig. 5 highlight positions that exhibit reconstruction variability across the three quartile populations, hence indicating data uncertainty in those regions.

Fig. 5. The quartile view for the uncertain tangle dataset. The pink boxes mark positions that exhibit reconstruction variations across the three populations.

Fig. 1(a) visualizes the synthetic teardrop function [29] for an experiment similar to the tangle function. For the teardrop dataset, we again analyze an ensemble of 50 members. In Fig. 1(a), the reconstruction under the nonparametric density assumption is superior to those under the parametric density assumptions, with the ground truth as a reference.

We perform an experiment similar to the ones for the tangle and teardrop functions on a real dataset. We analyze the Red Sea eddy simulation ensemble comprising 20 members made available at the IEEE SciVis Contest 2020 (https://kaust-vislab.github.io/SciVis2020/). Each member of the ensemble dataset is generated based on the MIT ocean general circulation model (MITgcm) and the Data Assimilation Research Testbed (DART) [23] with varying initial conditions. The ensembles are sampled for 60 time steps on a grid with resolution 500 × 500 × 50 to represent a time-varying 3D flow [52].

Fig. 6 shows the DVR of the uncertain velocity magnitude field for an ensemble (time step = 40) over a portion of the domain using the mean-field, parametric, and our nonparametric statistical frameworks. Fig. 6(a) visualizes the velocity vector field using arrow glyphs colored by magnitude for a single ensemble member. High velocity magnitudes are generally observed on a vortex rim. Fig. 6(b) visualizes the result for the mean-field. We set a TF (Fig. 6(c)) for the mean-field visualization such that relatively high-, moderate-, and low-velocity magnitudes are assigned red, blue, and yellow, respectively. The opacities are set to recover eddy-like structures from the dataset. The same TF is then used for all visualizations in Fig. 6. The results for all noise models in Fig. 6 look significantly different from the mean-field visualization.

Fig. 6. Visualizations of an uncertain velocity magnitude field for the Red Sea eddy simulations: (a) arrow glyph visualization of a velocity vector field for a single member colored by magnitude, (b, d-j) DVR using various statistical models with the TF shown in image (c). The red, blue, and yellow in the TF indicate relatively high-, moderate-, and low-velocity magnitudes. QR(8) and QM(8) denote our proposed quantile range and mean techniques, respectively, with eight quantiles.

Fig. 7 visualizes a quartile view for the Red Sea dataset. The dotted black boxes in Fig. 7 highlight positions where the variability, or uncertainty of eddy presence, is prominent across the populations belonging to the lower, middle 50%, and upper quartiles. In contrast, the solid black boxes highlight positions where eddy presence is consistently observed across the three populations.

Fig. 7. The quartile view for the Red Sea eddy dataset. The solid and dotted boxes indicate positions with relatively high and low confidence, respectively, regarding eddy presence.
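The quartile views discussed above (Sec. 3.4) amount to rendering three populations of the interpolated quantile representation separately. A minimal sketch, with hypothetical names and a uniform-weight approximation within each population rather than the paper's quantile range integration:

```python
import numpy as np

def quartile_view_colors(Q, tf, n_sub=16):
    """Split an interpolated quantile representation into the lower 25%,
    middle 50% (IQR), and upper 25% populations and compute an expected
    TF color for each, assuming q (the number of quantiles) is divisible
    by 4 and weighting values uniformly within each population."""
    q = len(Q) - 1                       # number of quantiles
    cuts = [0, q // 4, 3 * q // 4, q]    # boundary indices of the three populations
    views = []
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        xs = np.linspace(Q[lo], Q[hi], n_sub)   # value samples in the population
        colors = np.stack([tf(x) for x in xs])
        views.append(colors.mean(axis=0))       # uniform-weight approximation
    return views  # [lower-quartile, IQR, upper-quartile] expected colors
```

Comparing the three returned colors per pixel mirrors the quartile view: agreement across populations indicates low uncertainty, disagreement (the boxed regions in Fig. 5 and Fig. 7) indicates high uncertainty.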
Downsampled Datasets:
We perform an experiment on a nested spheres dataset similar to the experiment described in Section 4.2 of the previous work on GMM-based DVR [31]. In Fig. 8, we compare the visualizations for various noise models. For our experiment, we sample the nested spheres function on a high-resolution 512 × 512 × 512 grid, reduce it to a distribution-based representation (Thompson et al. [55]), and apply our DVR framework for visualization of the distribution data. Fig. 8(d-i) visualize the results for the various noise models, among which our quantile mean technique exhibits the lowest RMSE. Our closed-form nonparametric models enable us to efficiently generate visualizations without needing to perform MC sampling or screen-space integration as in [31].

Fig. 8. Visualizations of the nested spheres function: All results are rendered with the same TF shown in image (b).

Fig. 9. Zoomed-in views for the Osirix OBELIX dataset visualizations in Fig. 1: All results are rendered with the same TF (image (b)). The ground truth in (a) has a resolution of 512 × ...

Parameter Sensitivity:
We briefly discuss the parameters of the statistical models that can potentially influence the quality of the visualizations, and hence the RMSE. In the case of our proposed nonparametric models, the reliability of the quantile interpolation results is, admittedly, sensitive to the sample size with respect to the number of quantiles (see Fig. 2 and Fig. 4(a)), which in turn depends on the complexity of the underlying data distributions. We ensure a sufficient sampling density for the dataset at hand, e.g., 50 for the tangle dataset and 20 for the Red Sea eddy simulations, with an empirical approach. Specifically, we keep track of how much the visualizations vary as the sample size increases. We stop when a further increase in sample size does not change or affect the quality of the visualizations (more details in the supplementary material). We follow the same empirical approach to ensure sufficient sampling density for all nonparametric statistical visualizations.

In the case of the parametric models, the RMSE is again sensitive to the choice of the parameter values. For example, for the uniform and Gaussian noise models, we estimate the mean and width/variance from the noise samples. Adjusting the Gaussian variance estimated from the noise samples can, however, significantly improve the classification result, as we have demonstrated for the Osirix OBELIX dataset in Fig. 9(c) and (d). In the case of our GMM visualizations, we use four Gaussians per mixture, since they consume memory comparable to quantile interpolation with eight quantiles. Still, the estimation of parameters, such as the number of Gaussians, the regularization value, the covariance matrix estimate, and the number of MC samples, can be further tuned to enhance the quality of the GMM results. Note that the GMMs demand memory equal to three times the number of Gaussians in a GMM (for storing the mean, variance, and weight per Gaussian).
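The empirical stopping rule described above (increase the sample size until the visualization stops changing) can be sketched as a simple convergence loop. The function `render` is a stand-in assumption for any routine that produces a DVR image from n ensemble members; the tolerance is illustrative.

```python
import numpy as np

def sufficient_sample_size(render, sizes, tol=1e-3):
    """Empirical stopping rule: grow the ensemble size and stop once the
    rendered image no longer changes appreciably between successive
    sizes. `render(n)` is assumed to return the DVR image (an array)
    computed from n ensemble members; `sizes` is an increasing list."""
    prev = render(sizes[0])
    for n in sizes[1:]:
        cur = render(n)
        rmse = np.sqrt(np.mean((cur - prev) ** 2))
        if rmse < tol:        # image stabilized: n members suffice
            return n
        prev = cur
    return sizes[-1]          # never stabilized within the tested sizes
```

In practice one would compare successive renderings visually as well; the RMSE threshold here only automates the "does not change or affect the quality" criterion from the text.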
In summary, studying the choice of noise model in the context of DVR, and analytically identifying optimal parameter values for data with arbitrarily complex noise distributions, is nontrivial; we plan to research these questions in the future.
We demonstrate our statistical rendering with 2D TFs using the classic tooth dataset in Fig. 10. Fig. 10(a) shows the ground truth rendering (top) with a 2D TF (bottom), where the tooth is classified into four materials: dentine (yellow), tooth-holding material (blue), enamel (red), and root and dentine boundaries (green). Fig. 10(b) shows the standard DVR of the bivariate field characterized by the mean intensities and the mean gradient magnitudes for data at reduced resolutions. Fig. 10(c) shows our statistical rendering using the interpolated distributions of uncertain intensities and gradient magnitudes at the same reduced resolutions as Fig. 10(b). When comparing Fig. 10(b) and Fig. 10(c), our method shows improved recovery of the features in the ground truth, whereas the mean-field visualizations result in poor classifications.

Fig. 10. Volume rendering of the tooth dataset with a 2D TF.

Thompson et al. [55] proposed a technique to visualize fuzzy isosurfaces of a scalar field in reduced data. The multifield equivalents of isosurfaces are fiber surfaces [5]. In the bivariate case, a fiber surface is a contour defined by a curve that is composed of a number of points (i.e., fibers) in the 2D range space. In the case of reduced bivariate data, our proposed uncertainty-aware 2D TF integration scheme can be applied for the visualization of fuzzy fiber surfaces. Specifically, we can model the uncertainties at the grid points of the reduced data as the tensor product of two univariate box splines defined by any two uncertain fields. Then, the 2D TF reconstruction scheme introduced in Sec. 4.2 can be directly applied to the spline-modeled bivariate uncertain field.

Fig. 11. Visualizations of fiber surfaces with 600 < Pressure ... 0.011 in the top row and 1500 < Pressure ... 0.028 in the bottom row.

We conducted experiments on a multifield dataset, Isabel, a simulation of hurricane Isabel over the West Atlantic region in 2003. The data dimensions are 500 × 500 × 100, with 48 time steps. Fig. 11(a) shows the volume rendering of a fiber surface defined in a bivariate field, Pressure and Water Vapor, of Isabel at time step 18. The fiber surface is identified by a line in the 2D range space with 600 < Pressure ... 0.011 (the top row of Fig. 11). When the data are represented as the mean statistics with a resolution of 100 × ...

CONCLUSION AND FUTURE WORK
We expand the spline-based parametric DVR framework [51] to more flexible nonparametric statistics for the visualization of an uncertain scalar field. We leverage the quantile interpolation technique [21, 49] for efficient integration of nonparametric PDFs into a DVR framework. We evaluate our proposed nonparametric statistical models by presenting qualitative and quantitative comparisons with respect to the mean-field and parametric statistical models. We show that the time and memory complexity of our nonparametric DVR framework increases linearly with the number of quantiles used in quantile interpolation. We demonstrate the application of a previous study [51] to 2D TFs for improved DVR classification compared to 2D TF classification with mean statistics.

Our approach has a few limitations that we plan to address in subsequent work. In this work, we employ quantile interpolation with even quantile values. We would like to study the efficient use of uneven quantile values for improving DVR classification accuracy, which we briefly discuss in the supplementary material. Next, we would like to investigate methods for the efficient implementation of the exponentially complex box-spline framework [51] (see also the supplementary material) and study their effectiveness in DVR for uncertain data. We plan to generalize our uncertainty-aware 2D TF framework (Sec. 4) to TFs with a variable number of dimensions using nonparametric statistics. Finally, we plan to study nonparametric models for dependent random fields for further improvements in DVR reconstruction accuracy.

ACKNOWLEDGMENTS
This work was supported in part by the NSF grant IIS-1617101; the NIH grants P41 GM103545-18 and R24 GM136986; the DOE grant DE-FE0031880; and the Intel Graphics and Visualization Institutes of XeLLENCE.
REFERENCES

[1] T. Athawale and A. Entezari. Uncertainty quantification in linear interpolation for isosurface extraction. IEEE Transactions on Visualization and Computer Graphics, 19(12):2723–2732, Oct. 2013. doi: 10.1109/TVCG.2013.208
[2] T. Athawale and C. R. Johnson. Probabilistic asymptotic decider for topological ambiguity resolution in level-set extraction for uncertain 2D data. IEEE Transactions on Visualization and Computer Graphics, 25(1):1163–1172, Jan. 2019. doi: 10.1109/TVCG.2018.2864505
[3] T. Athawale, E. Sakhaee, and A. Entezari. Isosurface visualization of data with nonparametric models for uncertainty. IEEE Transactions on Visualization and Computer Graphics, 22(1):777–786, Jan. 2016. doi: 10.1109/TVCG.2015.2467958
[4] K. Brodlie, R. Allendes Osorio, and A. Lopes. A review of uncertainty in data visualization. Expanding the Frontiers of Visual Analytics and Visualization, pp. 81–109, 2012. doi: 10.1007/978-1-4471-2804-5_6
[5] H. Carr, Z. Geng, J. Tierny, A. Chattopadhyay, and A. Knoll. Fiber surfaces: Generalizing isosurfaces to bivariate data. Computer Graphics Forum, 34(3):241–250, July 2015. doi: 10.1111/cgf.12636
[6] C. Correa, Y. H. Chan, and K. L. Ma. A framework for uncertainty-aware visual analytics. In Proceedings of the 2009 IEEE Symposium on Visual Analytics Science and Technology (VAST 2009), pp. 51–58. IEEE, Oct. 2009. doi: 10.1109/VAST.2009.5332611
[7] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[8] S. Djurcilov, K. Kim, P. Lermusiaux, and A. Pang. Visualizing scalar volumetric data with uncertainty. Computers and Graphics, 26(2):239–248, Apr. 2002. doi: 10.1016/S0097-8493(02)00055-9
[9] R. A. Drebin, L. Carpenter, and P. Hanrahan. Volume rendering. In Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '88, pp. 65–74. ACM, New York, NY, USA, June 1988. doi: 10.1145/54852.378484
[10] T. Etiene, D. Jönsson, T. Ropinski, C. Scheidegger, J. L. Comba, L. G. Nonato, R. M. Kirby, A. Ynnerman, and C. T. Silva. Verifying volume rendering using discretization error analysis. IEEE Transactions on Visualization and Computer Graphics, 20(1):140–154, Jan. 2014. doi: 10.1109/TVCG.2013.90
[11] T. Etiene, R. Kirby, and C. Silva. An Introduction to Verification of Visualization Techniques. Morgan & Claypool Publishers, 2015. doi: 10.2200/S00679ED1V01Y201511CGR022
[12] G. Favelier, N. Faraj, B. Summa, and J. Tierny. Persistence atlas for critical point variability in ensembles. IEEE Transactions on Visualization and Computer Graphics, 25(1):1152–1162, Sept. 2018. doi: 10.1109/TVCG.2018.2864432
[13] W. Feller. An Introduction to Probability Theory and Its Applications: Volume I, vol. 3. John Wiley & Sons, New York, Jan. 1968.
[14] N. Fout and K. L. Ma. Fuzzy volume rendering. IEEE Transactions on Visualization and Computer Graphics, 18(12):2335–2344, Oct. 2012. doi: 10.1109/TVCG.2012.227
[15] M. G. Genton, C. R. Johnson, K. Potter, G. Stenchikov, and Y. Sun. Surface boxplots. Stat Journal, 3(1):1–11, 2014. doi: 10.1002/sta4.39
[16] D. Günther, J. Salmon, and J. Tierny. Mandatory critical points of 2D uncertain scalar fields. Computer Graphics Forum, 33(3):31–40, July 2014. doi: 10.1111/cgf.12359
[17] M. Hadwiger, J. M. Kniss, C. Rezk-Salama, D. Weiskopf, and K. Engel. Real-time Volume Graphics. A. K. Peters, Ltd., Natick, MA, USA, 2006.
[18] W. He, H. Guo, H. W. Shen, and T. Peterka. eFESTA: Ensemble feature exploration with surface density estimates. IEEE Transactions on Visualization and Computer Graphics, 26(4):1716–1731, Nov. 2018. doi: 10.1109/TVCG.2018.2879866
[19] F. V. Higuera, N. Sauber, B. Tomandl, C. Nimsky, G. Greiner, and P. Hastreiter. Automatic adjustment of bidimensional transfer functions for direct volume visualization of intracranial aneurysms. In Proc. SPIE 5367, Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display, pp. 275–284. International Society for Optics and Photonics, May 2004. doi: 10.1117/12.535534
[20] R. V. Hogg, A. T. Craig, and J. W. McKean. Introduction to Mathematical Statistics, 6th ed. Pearson, 2004.
[21] B. Hollister and A. Pang. Interpolation of non-Gaussian probability distributions for ensemble visualization. Technical report, Jack Baskin School of Engineering, Oct. 2013.
[22] B. Hollister and A. Pang. Bivariate quantile interpolation for ensemble derived probability density estimates. International Journal for Uncertainty Quantification, 5(2):123–137, 2015. doi: 10.1615/Int.J.UncertaintyQuantification.2015011789
[23] I. Hoteit, T. Hoar, G. Gopalakrishnan, J. Anderson, N. Collins, B. Cornuelle, A. Köhl, and P. Heimbach. A MITgcm/DART ocean prediction and analysis system with application to the Gulf of Mexico. Dynamics of Atmospheres and Oceans, 63:1–23, Sept. 2013. doi: 10.1016/j.dynatmoce.2013.03.002
[24] C. R. Johnson. Top scientific visualization research problems. IEEE Computer Graphics and Applications: Visualization Viewpoints, 24(4):13–17, July/Aug. 2004. doi: 10.1109/MCG.2004.20
[25] C. R. Johnson and A. R. Sanderson. A next step: Visualizing errors and uncertainty. IEEE Computer Graphics and Applications, 23(5):6–10, Sept./Oct. 2003. doi: 10.1109/MCG.2003.1231171
[26] J. Kniss, G. Kindlmann, and C. Hansen. Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets. In Proceedings of the Conference on Visualization '01, pp. 255–262. IEEE Computer Society, Oct. 2001. doi: 10.1109/VISUAL.2001.964519
[27] J. Kniss, G. Kindlmann, and C. Hansen. Multidimensional transfer functions for interactive volume rendering. IEEE Transactions on Visualization and Computer Graphics, 8(3):270–285, Nov. 2002. doi: 10.1109/TVCG.2002.1021579
[28] J. M. Kniss, R. V. Uitert, A. Stephens, G.-S. L., T. Tasdizen, and C. Hansen. Statistically quantitative volume visualization. In Proceedings of IEEE Visualization 2005, pp. 287–294, Oct. 2005. doi: 10.1109/VISUAL.2005.1532807
[29] A. Knoll, Y. Hijazi, A. Kensler, M. Schott, C. Hansen, and H. Hagen. Fast ray tracing of arbitrary implicit surfaces with interval and affine arithmetic. Computer Graphics Forum, 28(1):26–40, Feb. 2009. doi: 10.1111/j.1467-8659.2008.01189.x
[30] J. Kronander, J. Unger, T. Möller, and A. Ynnerman. Estimation and modeling of actual numerical errors in volume rendering. In Computer Graphics Forum, vol. 29, pp. 893–902. Wiley Online Library, 2010.
[31] S. Liu, J. A. Levine, P.-T. Bremer, and V. Pascucci. Gaussian mixture model based volume visualization. In IEEE Symposium on Large Data Analysis and Visualization (LDAV). IEEE, Oct. 2012. doi: 10.1109/LDAV.2012.6378978
[32] S. López-Pintado and J. Romo. On the concept of depth for functional data. Journal of the American Statistical Association, 104(486):718–734, June 2009. doi: 10.1198/jasa.2009.0108
[33] C. Lundstrom, P. Ljung, A. Persson, and A. Ynnerman. Uncertainty visualization in medical volume rendering using probabilistic animation. IEEE Transactions on Visualization and Computer Graphics, 13(6):1648–1655, Nov. 2007. doi: 10.1109/TVCG.2007.70518
[34] B. Ma and A. Entezari. Volumetric feature-based classification and visibility analysis for transfer function design. IEEE Transactions on Visualization and Computer Graphics, 24(12):3253–3267, 2018.
[35] A. Mathai and S. Provost. Quadratic Forms in Random Variables. Statistics: A Series of Textbooks and Monographs. Taylor & Francis, 1992.
[36] Z. Mihajlovic, Z. Comput., L. Budin, and N. Quid. Reconstruction of gradient in volume rendering. Dec. 2003. doi: 10.1109/ICIT.2003.1290301
[37] M. Mirzargar, R. Whitaker, and R. M. Kirby. Curve boxplot: Generalization of boxplot for ensembles of curves. IEEE Transactions on Visualization and Computer Graphics, 20(12):2654–2663, Nov. 2014. doi: 10.1109/TVCG.2014.2346455
[38] T. Möller, R. Machiraju, K. Mueller, and R. Yagel. A comparison of normal estimation schemes. In Proceedings of the IEEE Conference on Visualization 1997, pp. 19–26. IEEE, Oct. 1997. doi: 10.1109/VISUAL.1997.663848
[39] M. Otto, T. Germer, H.-C. Hege, and H. Theisel. Uncertain 2D vector field topology. Computer Graphics Forum, 29(2):347–356, June 2010. doi: 10.1111/j.1467-8659.2009.01604.x
[40] M. Otto, T. Germer, and H. Theisel. Uncertain topology of 3D vector fields. In IEEE Pacific Visualization Symposium, pp. 67–74. IEEE, 2011. doi: 10.1109/PACIFICVIS.2011.5742374
[41] A. T. Pang, C. M. Wittenbrink, and S. K. Lodha. Approaches to uncertainty visualization. The Visual Computer, 13(8):370–390, Nov. 1997. doi: 10.1007/s003710050111
[42] E. Parzen. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3):1065–1076, 1962.
[43] T. Pfaffelmoser, M. Mihai, and R. Westermann. Visualizing the variability of gradients in uncertain 2D scalar fields. IEEE Transactions on Visualization and Computer Graphics, 19(11):1948–1961, June 2013. doi: 10.1109/TVCG.2013.92
[44] T. Pfaffelmoser, M. Reitinger, and R. Westermann. Visualizing the positional and geometrical variability of isosurfaces in uncertain scalar fields. Computer Graphics Forum, 30(3):951–960, June 2011. doi: 10.1111/j.1467-8659.2011.01944.x
[45] H. Pfister, B. Lorensen, C. Bajaj, G. Kindlmann, W. Schroeder, L. S. Avila, K. Raghu, R. Machiraju, and J. Lee. The transfer function bake-off. IEEE Computer Graphics and Applications, 21(3):16–22, May 2001. doi: 10.1109/38.920623
[46] K. Pöthkow and H.-C. Hege. Positional uncertainty of isocontours: Condition analysis and probabilistic measures. IEEE Transactions on Visualization and Computer Graphics, 17(10):1393–1406, Nov. 2010. doi: 10.1109/TVCG.2010.247
[47] K. Pöthkow and H.-C. Hege. Nonparametric models for uncertainty visualization. Computer Graphics Forum, 32(3.2):131–140, July 2013. doi: 10.1111/cgf.12100
[48] K. Potter, P. Rosen, and C. R. Johnson. From quantification to visualization: A taxonomy of uncertainty visualization approaches. In Uncertainty Quantification in Scientific Computing, pp. 226–249. Springer, 2012. doi: 10.1007/978-3-642-32677-6_15
[49] A. L. Read. Linear interpolation of histograms. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 425(1-2):357–360, Apr. 1999. doi: 10.1016/S0168-9002(98)01347-3
[50] M. Rosenblatt. Remarks on some nonparametric estimates of a density function. The Annals of Mathematical Statistics, 27(3):832–837, 1956.
[51] E. Sakhaee and A. Entezari. A statistical direct volume rendering framework for visualization of uncertain data. IEEE Transactions on Visualization and Computer Graphics, 23(12):2509–2520, Dec. 2017. doi: 10.1109/TVCG.2016.2637333
[52] S. Sivareddy, H. Toye, P. Zhan, S. Langodan, G. Krokos, O. Knio, and I. Hoteit. Impact of atmospheric and model physics perturbations on a high-resolution ensemble data assimilation system of the Red Sea. Journal of Geophysical Research: Oceans, 2020.
[53] S. K. Suter, B. Ma, and A. Entezari. Visual analysis of 3D data by isovalue clustering. In International Symposium on Visual Computing, pp. 313–322. Springer, 2014. doi: 10.1007/978-3-319-14249-4_30
[54] N. Svakhine, D. S. Ebert, and D. Stredney. Illustration motifs for effective medical volume illustration. IEEE Computer Graphics and Applications, 25(3):31–39, June 2005. doi: 10.1109/MCG.2005.60
[55] D. Thompson, J. Levine, J. Bennett, P. Bremer, A. Gyulassy, V. Pascucci, and P. Pebay. Analysis of large-scale scalar data using hixels. In IEEE Symposium on Large Data Analysis and Visualization (LDAV), pp. 23–30, 2011. doi: 10.1109/LDAV.2011.6092313
[56] R. Whitaker, M. Mirzargar, and R. Kirby. Contour boxplots: A method for characterizing uncertainty in feature sets from simulation ensembles. IEEE Transactions on Visualization and Computer Graphics, 19(12):2713–2722, Oct. 2013. doi: 10.1109/TVCG.2013.143
[57] P. C. Wong, H. W. Shen, C. R. Johnson, C. Chen, and R. B. Ross. The top 10 challenges in extreme-scale visual analytics. IEEE Computer Graphics and Applications, 32(4):63–67, Aug. 2012. doi: 10.1109/MCG.2012.87
[58] H. Younesy, T. Möller, and H. Carr. Improving the quality of multi-resolution volume rendering. In EUROVIS'06: Proceedings of the Eighth Joint Eurographics / IEEE VGTC Conference on Visualization, pp. 251–258. Citeseer, May 2006.
[59] T. Zuk and S. Carpendale. Theoretical analysis of uncertainty visualizations. In