Interactive Volume Illumination of Slice-Based Ray Casting
Dening Luo
College of Computer Science, Sichuan University, No. 24 South Section 1, Yihuan Road, Chengdu, Sichuan, China, 610065
[email protected]
August 17, 2020

Abstract
Volume rendering has always played an important role in the fields of medical imaging and industrial design. In recent years, realistic and interactive volume rendering with global illumination has improved the perception of shape and depth in volumetric datasets. In this paper, a novel and flexible-performance method of slice-based ray casting is proposed to implement volume illumination effects such as volume shadow and other scattering effects. The method benefits from slice-based illumination attenuation buffers, built from the whole geometry slices at the viewpoint of the light source, and from an efficient per-sample shadow or scattering coefficient calculation in ray casting. Tests show that the method obtains much better volume illumination effects and more scalable performance in contrast to local volume illumination in ray casting volume rendering or other similar slice-based global volume illumination.

Keywords: Volume rendering · Volumetric datasets · Volume illumination · Slice-based ray casting · Volume shadow
In the numerous visualization fields of volumetric datasets, medical imaging and industrial design are two of the more active domains of exploration. Computed tomography (CT) and magnetic resonance imaging (MRI) datasets are visualized intuitively with full 3D effects in medical imaging. In industrial design, some intermediate results are visually simulated from data flow or structural data analysis. Therefore, the visualization of volumetric datasets has produced great practical value.

With the development of data visualization techniques, plenty of volume rendering methods have been widely proposed. Some have been used in practical products, such as slice-based rendering and ray casting volume rendering. Different methods have both strengths and weaknesses when applied to various specific applications. Fortunately, volume rendering methods are still being studied, optimized, and improved to benefit real-world applications, among which volume illumination mainly addresses problems of the visual perception of depth and spatial structure.

Usually, volume illumination requires a complicated procedure of illumination calculation or data processing. Particularly, when the data size and algorithm complexity increase tremendously, more and more significant challenges are presented for interactive volume rendering. Nowadays, figuring out how to better utilize the GPU and novel rendering techniques to resolve performance and effect issues has also become a significant challenge.

In this paper, the interactive volume illumination of slice-based ray casting is proposed to obtain scalable performance and outstanding volume illumination effects to tackle the challenges mentioned above. The specific and promising contributions are as follows:

• This is the first time the two methods of slice-based volume rendering and ray casting volume rendering have been combined to render the global volume illumination of volumetric datasets.
In this way, the advantages of the respective algorithms can be used to improve both efficiency and effect: the slice-based method easily simplifies the calculation of the volumetric illumination process, while the ray casting method can efficiently render volumetric datasets.

• Illumination attenuation slices built from the light source can be used in volume ray casting for calculating the shadow or some scattering effects. Meanwhile, varying the number of slices and the per-slice image resolution allows for flexible performance in volume illumination rendering.

• Two types of scattering coefficient, shell and cone, approximate the calculation of the volume illumination integration in volume ray casting. The use of illumination attenuation slices together with a scattering coefficient avoids the alpha-blending problems of slice-based volume illumination.
Volume visualization has always been one of the most interesting areas in scientific visualization [1]. Specifically, it is a method of extracting meaningful information from volumetric datasets using interactive graphics and imaging techniques, and it is concerned with the representation, manipulation, and rendering of volumetric datasets. Meanwhile, with the development of display devices, volumetric datasets from numerous simulation and sampling devices, such as MRI, CT, positron emission tomography (PET), ultrasonic imaging, confocal microscopy, supercomputer simulations, geometric models, laser scanners, depth images estimated by stereo disparity, satellite imaging, and sonar, can be efficiently visualized and demonstrated on web [2], mobile [3, 4], or virtual reality [5] platforms and devices.

Volume rendering has been developed for many years, and a large number of interactive rendering techniques have been proposed and practically applied [6]. Currently, direct volume rendering has become more important due to its effectiveness compared with indirect volume rendering, which reconstructs geometry from volumetric datasets [7]. Therefore, different methods have been proposed to achieve direct volume rendering, such as slice-based rendering [8, 9], ray casting [10, 11], shear-warp [12], and splatting [13]. In summary, a clear and fast representation of the 3D structures and internal details of volumetric datasets is the key task of volume visualization.

3D texture slicing is the simplest and fastest slice-based volume rendering approach on the GPU. It approximates the volume-density function by slicing the datasets in front-to-back or back-to-front order and then blends the slices using hardware-supported blending. View-aligned 3D texture slicing [14, 15] is a classic method, and it takes advantage of functionalities implemented on graphics hardware such as rasterization, texturing, and blending. Half-angle slicing for volume shadow [16, 17] is also a global illumination technique. However, half-angle slicing requires a large number of passes; the extensive draw calls result in a high performance overhead. Meanwhile, various composition schemes [18] are used for particular purposes, including first, average, maximum intensity projection, accumulation, etc.

Ray casting volume rendering is easily implemented on the GPU in a single pass. Meanwhile, the normal at each sampling point in ray casting can be estimated to carry out the lighting calculation. Although local volume illumination is adequate for shading surfaces, it does not provide sufficient lighting characteristics for the visual appearance. Global volume illumination [19, 20] presents detailed lighting characteristics in contrast to local volume illumination. Perceptually motivated visualization techniques add additional insight when displaying volumetric datasets, since global illumination techniques can markedly improve spatial depth and shape cues, thus providing better perception [21, 22]. Subsequently, based on the directional occlusion approach by Schott et al. [23], later approaches [24, 25] used convolution-based illumination, where the directional viewing and illumination buffers can be computed in the same pass using multiple render targets.

The transfer function (TF) [26, 27] for volume rendering is a central topic in scientific visualization and is used to present more details of volumetric datasets, for example by implementing volume classification. Other central topics in scientific visualization are parallel volume rendering [28, 29] and, in the future, learning-based volume rendering [30]. Considering the better perception of volumetric datasets and the flexibility and scalability required by growing data, this paper's slice-based ray casting algorithm is proposed to meet the increasing demands of practical interactive volume rendering applications.
Volumetric datasets are usually a 3D array of volume elements, or voxels, so volume rendering is the reconstruction process of displaying each point in a volume. Voxels can represent various physical characteristics, such as density, temperature, velocity, and pressure. Typically, volume datasets store densities obtained using a cross-sectional imaging modality such as CT or MRI scans. A 3D texture, which is an array of 2D textures, is obtained by accumulating these 2D slices along the slicing axis.
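As a minimal illustration of this layout (using a NumPy array as a stand-in for a GPU 3D texture; the slice count and values are arbitrary):

```python
import numpy as np

# Hypothetical stand-in for cross-sectional scan data: 4 slices of 8x8 densities.
slices = [np.random.rand(8, 8).astype(np.float32) for _ in range(4)]

# Stack the 2D slices along a new axis to form the 3D volume,
# analogous to building a 3D texture from an array of 2D textures.
volume = np.stack(slices, axis=0)  # shape: (depth, height, width)

# A voxel is then addressed by (slice index, row, column).
density = volume[2, 3, 5]
```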
For a long time, volume rendering implementations were almost exclusively based on slice-based methods, in which axis-aligned or view-aligned texture slices are blended to approximate the volume rendering integration. The volume rendering integration is a complex process and is difficult to compute because the complete equation of light transport is computationally intensive. So, in most practical applications, simplified models are often used. The conventional models include absorption only, emission only, the emission-absorption model, single scattering with shadowing, and multiple scattering.

Light attenuates as it travels through the medium. For slice-based illumination attenuation, the illumination intensity of a slice attenuates proportionally with the distance to the light source, and the simulation is performed slice by slice according to the blend function. We can obtain the illumination attenuation per voxel along the light direction and use the idea of storing illumination attenuation slices to calculate the illumination of volumetric datasets in ray casting. Concretely, on the one hand, the illumination attenuation per voxel of the whole volume under the light source can be stored in a buffer; on the other hand, ray casting can traverse the entire volumetric dataset to reconstruct volume information per pixel in image space. Therefore, by combining the advantages of both, we can achieve better volume illumination effects.
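The slice-by-slice attenuation idea can be sketched in a few lines (a toy NumPy version; the uniform opacity value and the simple (1 - α) attenuation rule are illustrative assumptions, not the paper's exact blend function):

```python
import numpy as np

# Per-slice opacity maps as seen from the light (n_slices, H, W); toy values.
alpha = np.full((4, 2, 2), 0.3, dtype=np.float32)

# Light arriving at each slice: the first slice is fully lit; each deeper
# slice receives the previous slice's light attenuated by (1 - alpha).
light = np.empty_like(alpha)
light[0] = 1.0
for i in range(1, alpha.shape[0]):
    light[i] = light[i - 1] * (1.0 - alpha[i - 1])

# Illumination decays monotonically with distance from the light source.
assert np.all(light[1:] <= light[:-1])
```

Storing each `light[i]` map is exactly the role the illumination attenuation buffer plays for the later ray casting pass.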
Figure 1: The rendering process of slice-based ray casting. (a) Render the whole geometry slices of the volumetric datasets into the LightBuffer slice by slice at the viewpoint of the light source. (b) Reconstruct the 3D volume using volume ray casting and compute the volume illumination effects using the LightBuffer at the viewpoint of the camera.

As shown in Figure 1, slice-based ray casting first renders the whole geometry slices of the volumetric datasets into the LightBuffer slice by slice at the viewpoint of the light source. It then reconstructs the 3D volume using volume ray casting and computes the volume illumination effects using the LightBuffer at the viewpoint of the camera. Concretely, the following three approaches are used to calculate the illumination coefficient of each voxel. Directly calculating the attenuation coefficient of the sample point in ray casting is the simplest way to compute the volume shadow. The shell and cone scattering distributions are approximation approaches to the global illumination effects of volumetric datasets.
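The simplest of these, the per-sample attenuation lookup, can be sketched as follows (a toy NumPy version; names such as `light_buffer` are illustrative, and averaging the slice stands in for the actual per-texel UV fetch described later):

```python
import numpy as np

def shadow_coefficient(sample_dist, light_buffer, d_min, d_max):
    """Look up the illumination attenuation for a ray-casting sample from
    its signed distance to the light source along the light direction.
    light_buffer: (n_slices, H, W) attenuation slices."""
    n = light_buffer.shape[0]
    # Ratio of the sample's distance within the [d_min, d_max] range.
    index = int(n * (sample_dist - d_min) / (d_max - d_min))
    index = min(max(index, 0), n - 1)          # clamp to valid slices
    return float(light_buffer[index].mean())   # stand-in for the UV fetch

# Toy buffer: light decays with slice depth.
buf = np.array([[[1.0]], [[0.7]], [[0.49]]], dtype=np.float32)
coeff = shadow_coefficient(0.5, buf, d_min=0.0, d_max=1.0)  # middle slice
```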
Volume illumination can present the spatial perception of volumetric datasets, and it differs from the surface/local illumination model. Usually, the volume illumination model [31] in its differential form is solved by integrating (see Equation 1) along the light direction from the starting point s_0 to the endpoint s_e (see Figure 2). The light intensity I(s_e) at s_e consists of two parts: one is the attenuated initial intensity along the ray, and the other is the integrated contribution of all points along the ray.

Figure 2: A ray along the light direction in volume illumination, passing through the points s_0, s, and s_e.

I(s_e) = I(s_0) e^{-τ(s_0, s_e)} + ∫_{s_0}^{s_e} q(s) e^{-τ(s, s_e)} ds,   where   τ(s_1, s_2) = ∫_{s_1}^{s_2} κ(t) dt   (1)

where I(s_0) is the initial intensity at s_0, e^{-τ(s_0, s_e)} is the attenuation along the ray, τ is the optical depth, and the optical properties are κ (absorption coefficient) and q (source term describing emission).

In most practical applications, simplified models are often used because the complete equation of light transport is computationally intensive. The emission-absorption model, which is the most common in volume rendering, is used here. For the emission-absorption model, the accumulated color and opacity are computed according to Equation 2, where C_i and A_i are the color and opacity assigned by the transfer function to the data value at sample i. The opacity A_i approximates the absorption, and the opacity-weighted color C_i approximates the emission and the absorption along the ray segment between samples i and i + 1.

C = Σ_{i=1}^{n} C_i Π_{j=1}^{i-1} (1 - A_j),   A = 1 - Π_{j=1}^{n} (1 - A_j)   (2)

For the iterative computation of the discretized volume integration, the blend function differs between front-to-back and back-to-front traversal. As shown in Equation 3 and Equation 4, respectively, front-to-back means rendering proceeds in front-to-back order toward the eye, and back-to-front means in back-to-front order toward the eye. Variables with subscript src (as in "source") describe quantities introduced as inputs from the optical properties of the dataset (e.g., through a transfer function). In contrast, variables with subscript dst (as in "destination") describe output quantities that hold the accumulated colors and opacities.
C_dst ← C_dst + (1 - α_dst) C_src,   α_dst ← α_dst + (1 - α_dst) α_src   (3)

C_dst ← (1 - α_src) C_dst + C_src   (4)

The direct volume rendering integration (Equation 5) [32] cannot be solved analytically without making some confining assumptions and consequently needs to be approximated. A slightly coarse approximation, the classical α-blending, is the commonly used expansion of the original extinction coefficient into a Taylor series in which only the first two elements are considered. A closer approximation based on the original extinction coefficient (Equation 6) supplies a more accurate evaluation of the volume rendering integration on programmable GPUs, since the advantage of integrating over the exponential extinction coefficient is a summation, in contrast to the product of α-blending.

In Equation 5, a ray from s = 0 at the back of the volume to s_e at the eye position is considered. The extinction coefficient is denoted by τ(s), and E(s) is the light reflected or emitted by a volume sample at s.

I(s_e) = ∫_{0}^{s_e} E(s) τ(s) e^{-∫_{s}^{s_e} τ(t) dt} ds   (5)

In the discretization of Equation 6 using a step size Δt along the ray, instead of performing a Taylor series expansion and simplification of the extinction term, the original exponential extinction coefficient can be retained. The formulation is sufficiently simple and can easily be implemented on programmable GPUs. The additive property of τ allows for the summation of the samples in a shader in arbitrary order. The basic premise is that any light occlusion and shadowing effects arise from the attenuation of light traveling or being scattered through the volume along a ray or within some specific region. Therefore, any light attenuation stems from some extinction factor e^{-Σ_j Δt τ_j}, where the sum Σ_j Δt τ_j must be taken over a ray or region of the volume.

I(s_e) ≈ Σ_{i=0}^{s_e/Δt} E_i Δt τ_i e^{-Σ_{j=i}^{s_e/Δt} Δt τ_j}   (6)

where the reflected and emitted light E_i is typically replaced by a voxel color modulated by a simple lighting model.

The dense ray calculation of each sample toward the light source is costly, mainly with regard to light scattering. So, we consider the illumination attenuation of each voxel and compute each sample's attenuation coefficient to simplify the volume illumination calculation in Equation 7. Moreover, we use the neighborhood of a sample to estimate the necessary extinction of the light on its way. Much of the scattered light is estimated by a distance function and neighborhood voxels. A more sparsely occluded neighborhood indicates that more light is scattered to the sample, thus supporting visually plausible soft shadow borders.

I(s_e) ≈ Σ_{i=0}^{s_e/Δt} A_i E_i Δt   (7)

where the attenuation coefficient A_i is the weight of each sample point for volume illumination.

Volume illumination is often a complex and expensive process, since the illumination calculation needs to trace the propagation of large amounts of light and its interaction with the scene. The complex illumination calculations are usually simplified to meet real-time requirements. Therefore, this paper uses slice-based illumination attenuation to calculate the propagation of light within a volume. It is a scalable and effective way to calculate the illumination process of volumetric datasets, since varying the slice distances and the per-slice image resolution allows for flexible performance.

Algorithm 1 shows the process of building the illumination attenuation buffer slice by slice.
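As a quick consistency check of Equations 2 and 3, the iterative front-to-back blend reproduces the closed-form sums exactly (scalar colors and illustrative values; in practice C_i is an opacity-weighted RGB color):

```python
from math import prod

C = [0.32, 0.15, 0.10]   # opacity-weighted sample colors C_i, front to back
A = [0.4, 0.3, 0.5]      # sample opacities A_i

# Equation 2: closed-form accumulation.
C_closed = sum(C[i] * prod(1 - A[j] for j in range(i)) for i in range(len(C)))
A_closed = 1 - prod(1 - a for a in A)

# Equation 3: iterative front-to-back blending.
c_dst, a_dst = 0.0, 0.0
for c_src, a_src in zip(C, A):
    c_dst += (1 - a_dst) * c_src
    a_dst += (1 - a_dst) * a_src

assert abs(c_dst - C_closed) < 1e-12
assert abs(a_dst - A_closed) < 1e-12
```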
The primary method is to render each slice of the whole geometry slices of the volumetric datasets into the illumination attenuation array at the viewpoint of the light source and to accumulate the illumination attenuation buffer slice by slice. The illumination intensity of the current slice is determined simply by multiplying the sample opacity α by the illumination color. Each slice color is determined by blending the illumination attenuation of the previous slices into the frame buffer in back-to-front order. For illumination attenuation, the illumination intensity of a slice attenuates proportionally under the light source, as shown in Figure 3. The second-layer slice is the blended illumination intensity of the first-layer slice. The volume shadow is calculated by the illumination attenuation slice by slice.
Algorithm 1 Build illumination attenuation buffer slice by slice
  Generate the whole geometry slices;
  Initialize LightBuffer;
  Render all slices at the viewpoint of the light source;
  for each slice do
    for each sample do
      Evaluate sample opacity α;
      Multiply α by illumination color;
      Blend slice into the frame buffer in back-to-front order;
    end for
    Read the frame buffer image and write it into LightBuffer;
  end for

For the illumination attenuation buffer, the volumetric datasets need to be sliced into geometry slices and rendered slice by slice at the viewpoint of the light source. The whole geometry slices are generated by the intersections of a unit cube with the planes perpendicular to the light direction. Roughly, the following steps complete the process.
1. Calculate the min/max distance of the unit cube vertices by taking the dot product of each unit cube vertex V[i] with the light direction vector L;
2. Calculate all the possible intersection parameters λ of the planes perpendicular to the light direction with all edges of the unit cube, going from the nearest to the farthest vertex, using the min/max distances from step 1;
3. Find the intersection points of each slice by using the intersection parameter λ (from step 2) to move along the light direction; three to six intersection vertices will be generated;
4. Store the intersection points in the specified order to generate triangular primitives;
5. Update the geometry slices.

The minimum distance is set as the first slice position, and the position increment is the difference between the max and min distances divided by the total number of slices. The min/max distances can be calculated as shown in Equation 8. Meanwhile, these parameters and the sample point can help to calculate the correct slice index.

min = min_{0 ≤ i ≤ 7} { L · V[i] },   max = max_{0 ≤ i ≤ 7} { L · V[i] }   (8)

Because render to texture (RTT) only supports 16 texture units in a shader, the whole geometry slices of the volumetric datasets cannot be stored and operated on using a large number of independent texture units. A practical solution is to rely on a Texture2DArray to store an arbitrarily scalable number of geometry slices. However, this requires a step to read the rendered image from the frame buffer and write it into the LightBuffer. So, the rendered images that are read back from the frame buffer are stored into the light buffer of the Texture2DArray by slice index.

Figure 3: The LightBuffer at the viewpoint of the light source. (a) A unit cube; (b) the geometry slices; (c) an example of illumination attenuation slices.
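The min/max computation of Equation 8 and the slice placement of step 1 can be sketched as follows (the light direction and slice count are arbitrary test values; NumPy stands in for the vertex shader arithmetic):

```python
import numpy as np
from itertools import product

# The eight vertices V[i] of the unit cube centered at the origin.
V = np.array(list(product([-0.5, 0.5], repeat=3)), dtype=np.float32)

# Light direction L (need not be axis-aligned); normalized for distances.
L = np.array([1.0, 1.0, 0.0])
L /= np.linalg.norm(L)

# Equation 8: min/max signed distance of the cube vertices along L.
d = V @ L
d_min, d_max = d.min(), d.max()

# Step 1: the first slicing plane sits at d_min; the increment is
# (max - min) divided by the total number of slices.
n_slices = 8
step = (d_max - d_min) / n_slices
plane_positions = [d_min + i * step for i in range(n_slices)]

# For L = (1, 1, 0)/sqrt(2), the distance range spans the face diagonal.
assert abs((d_max - d_min) - np.sqrt(2)) < 1e-6
```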
The volume shadow of volumetric datasets can enhance the perception of shape and depth. Although volume illumination can be carried out by estimating the normal at the sampling point and applying an illumination model in ray casting, or implemented through the slice-based half-angle slicing technique, interactive volume illumination is difficult to calculate accurately through the volume equation.

In this paper, the volume shadow is computed efficiently using the illumination attenuation buffer. For each sample of ray casting, the primary color of the volumetric datasets is readily available. The problem here is how to use the LightBuffer to determine the illumination attenuation for each sample. First, each sample is required to find which slice of the LightBuffer it belongs to. Then, the sample UV within that slice needs to be calculated.

The slice location of the sample is determined by calculating the distance between the sample and the light source. For example (see
Figure 4), the red sample S in ray casting is located in one slice according to the process of building the illumination attenuation buffer. Therefore, the solution is calculated by the ratio method, and the index into the Texture2DArray is given by Equation 9.

Figure 4: The index of the attenuation buffer at a sample. The min/max distances of the unit cube vertices are calculated by taking the dot product of each unit cube vertex with the light direction vector.
Index = n (L · S_w - min) / (max - min)   (9)

where n is the number of slices and S_w is the world position of the sample.

The sample UV of the slice (Equation 10) needs to be calculated by a coordinate-space transform. Similarly to the shadow mapping algorithm, a shadow matrix is used to look up the light buffer, which is performed in the fragment shader. Texture coordinates are computed by matrix transformation, in which each vertex position in object space is multiplied by the shadow matrix to obtain the shadow texture lookup coordinates. The shadow matrix is the product of the projection and view matrices from the viewpoint of the light with the camera's view inverse and model-view matrices.

UV = L_p · L_v · E_vi · E_mv · V   (10)

where V is the object-space vertex position, L_p and L_v are the projection matrix and the view matrix from the viewpoint of the light source, and E_mv and E_vi are the model-view matrix and the view inverse matrix from the viewpoint of the camera.

• The shell scattering distribution. The discretized coefficient summation is approximated by a series of cuboid shells, as indicated in
Figure 5, where the number and size of the shells can be varied, for example Sh_1, Sh_2, Sh_3. The scattering coefficient of the current sample is calculated by estimating each voxel in the shell. In the test of our algorithm, the current sample moves by different sample steps along the x, y, and z axes to obtain the illumination of neighboring voxels. Different neighborhoods are given a certain weight to determine the current sample's illumination. A more extensive set of shells with varying diameters leads to better image quality but requires more computational overhead; moreover, a few shells are sufficient to reach an image quality that is hardly distinguishable from individually sampling a large neighborhood.

Figure 5: The shell distribution.

• The cone scattering distribution. The cone scattering distribution is sampled in a cone along the main light ray, as shown in
Figure 6(a). The cone size is determined by the sample length c_n along the light direction L_dir and the slice distance d, which is the radius centered on c_n in a plane perpendicular to L_dir, as shown in Figure 6(b). Each sample v in the cone (Equation 12) is calculated according to the axis projection A_proj of c_n (such as c_1, c_2). Meanwhile, A_proj is obtained from Rodrigues' rotation formula by rotating the projection base B_proj by an angle θ (Equation 11), and B_proj is determined by four points lying on the same plane: the sample point s, the light source, the viewer point, and the projection position v.

Figure 6: The cone scattering distribution.

A_proj = B_proj cos θ + (L_dir × B_proj) sin θ + L_dir (L_dir · B_proj)(1 - cos θ)   (11)

v = Π_{A_proj}(c_n) = ((c_n · A_proj) / (A_proj · A_proj)) A_proj   (12)

Figure 7(a)(e) shows volume rendering by ray casting: the obtained samples during the ray traversal are composited using front-to-back alpha compositing until the ray exits the volumetric datasets. Meanwhile, the normal at the sample point can be estimated to carry out a local illumination calculation using the Phong illumination model, as shown in
Figure 7(b)(f). Although local volume illumination is adequate for shading surfaces, it does not provide sufficient lighting characteristics for the visual appearance. The global volume illumination methods, half-angle slicing for volume shadow (Figure 7(c)(g)) and our slice-based ray casting (Figure 7(d)(h)), present detailed lighting characteristics in contrast to local volume illumination. For example, with global illumination the structure on the wall of the hole can be seen, and slice-based ray casting has a clearer appearance, especially when fewer slices are used compared with half-angle slicing.

In volume ray casting rendering, as the sample density increases, the results are more refined with fewer artifacts. The sample densities 1/256 and 1/512 correspond to one unit and half a unit of the volumetric dataset Engine (256³), respectively. It can also be seen from Figure 7 that as the number of slices increases, the illumination effects of both half-angle slicing and slice-based ray casting become better. However, half-angle slicing requires a large number of passes
during the whole volume illumination rendering; thus, the huge number of draw calls results in a high performance overhead, as shown in Figure 8. Our slice-based ray casting can supply high-performance volume illumination rendering: it only adds the cost of initially building the illumination buffer and the per-sample shadow computation in ray casting. Moreover, slice-based ray casting does not show much performance fluctuation as the number of slices increases.

Figure 7: (a)(e) Volume rendering by ray casting; (b)(f) Phong lighting volume shadow by ray casting; (c)(g) volume shadow by half-angle slicing; (d)(h) volume shadow by slice-based ray casting. The panel settings are (a) d:1/256, (b) d:1/256, (c) s:64, (d) d:1/256 s:64, (e) d:1/512, (f) d:1/512, (g) s:128, (h) d:1/512 s:128. The images of (c)(d)(g)(h) at the bottom left are the illumination attenuation of the last slice. 'd' is the sample density in ray casting and 's' is the number of slices of the volumetric dataset Engine (256³). The red lines are the unit cube boundary. The white point at the top right is the light position.

Moreover, our slice-based ray casting allows for flexible performance by varying the number of slices and the per-slice image resolution, as shown in
Figure 10. As the number of slices and the image resolution increase, the volume illumination effects are better, but the performance overhead increases, as shown in Figure 9. In the tests, if the number of slices exceeds 256 and the image resolution exceeds 512x512, the performance decreases sharply. So, the parameters of slice-based ray casting should be chosen for flexible performance according to the practical application and the hardware performance.

In addition to slice-based ray casting, which can quickly calculate volume shadow illumination, the illumination scattering effects can be expressed by approximating the volume illumination equation. The shell and cone sample distributions are used to generate soft shadows, as shown in Figure 11. The cone effects are better than the shell effects because the cone is integrated along the light direction rather than over the surrounding voxels of the sample point as in the shell. Meanwhile, the cone and shell performance is related to the number of samples. As shown in Figure 12, the shell uses six neighborhood sample points, and the cone uses eight sample points. So the cone's performance cost is slightly higher than the shell's, but its specular illumination is much better.

In these tests, some images are somewhat dark, which is related to the cumulative accumulation of illumination attenuation. Therefore, as the number of slices increases, multiplying each slice by (1 + α)^n (where α is the alpha value of the current slice, and n is an adjustable factor of illumination attenuation) compensates for the fact that more and more slices become over-dark due to excessive accumulation of illumination attenuation.
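The compensation described above can be sketched as follows (the opacity and intensity values are illustrative; n is the paper's adjustable factor):

```python
# A slice's illumination before compensation; deep slices become over-dark
# because attenuation is accumulated over many preceding slices.
alpha = 0.2           # alpha value of the current slice
n = 2                 # adjustable factor of illumination attenuation
raw_intensity = 0.15  # hypothetical over-attenuated slice intensity

# Multiply each slice by (1 + alpha)^n to compensate for over-accumulation.
compensated = raw_intensity * (1 + alpha) ** n

assert compensated > raw_intensity
```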
Figure 8: The performance contrast of four methods (no shadow, Phong lighting, half-angle slicing (HAS), and slice-based ray casting (SBRC)) in Figure 7; all performance numbers (ms) are averages over time. Row 1 is a, b, c, d; Row 2 is e, f, g, h.
In short, this paper proposes a flexible-performance volume illumination rendering method that combines illumination attenuation slices built from the light source with effective scattering coefficients in ray casting. The method can quickly obtain volume shadows and better illumination effects in contrast to local volume illumination or other similar slice-based global volume illumination. In comparison to other slice-based volume illumination algorithms such as half-angle slicing, slice-based ray casting improves the rendering performance of volume illumination. Meanwhile, the scalability of slice-based ray casting can also be flexibly applied to more volumetric datasets and interactive volume rendering applications. However, some open problems remain for slice-based ray casting to represent better volume illumination effects, such as the construction of better illumination attenuation slices and the calculation of better shell and cone scattering coefficients.

In the future, we will further optimize and improve the slice-based ray casting method to form a more sophisticated and more valuable volume illumination method. Meanwhile, combining multiple slices per pass, which reduces the performance overhead of building the illumination attenuation buffer, will provide a more flexible and scalable framework for volume illumination rendering.
References

[1] Johanna Beyer and Markus Hadwiger. GPU-based large-scale scientific visualization. In SIGGRAPH Asia 2018 Courses, pages 1–217. 2018.
[2] Finian Mwalongo, Michael Krone, Guido Reina, and Thomas Ertl. State-of-the-art report in web-based visualization. In Computer Graphics Forum, volume 35, pages 553–575. Wiley Online Library, 2016.
[3] Tomasz Hachaj. Real time exploration and management of large medical volumetric datasets on small mobile devices—evaluation of remote volume rendering approach. International Journal of Information Management, 34(3):336–343, 2014.
[4] Jose M Noguera and J Roberto Jimenez. Mobile volume rendering: past, present and future. IEEE Transactions on Visualization and Computer Graphics, 22(2):1164–1178, 2015.
Figure 9: Across the top are three resolutions: 128x128, 256x256, and 512x512. Down the left are the numbers of slices: 64, 128, and 256. The images at the bottom left are the illumination attenuation of the last slice.
Figure 10: The flexible performance (ms) contrast for three different resolutions (128x128, 256x256, and 512x512) and three numbers of slices (64, 128, and 256) in Figure 8.
Figure 11: The contrast of volume illumination for the shell and cone sample distributions. For the engine and foot datasets, the number of illumination slices is 128. The images at the bottom left are the illumination attenuation of the last slice.
Figure 12: The performance comparison (in ms) for the engine and foot datasets in Figure 8.