Interactive distributed cloud-based web-server systems for the smart healthcare industry
Kondybayeva Almagul Baurzhanovna
National University of Science and Technology MISiS
[email protected]

Abstract
Subject.
This work is dedicated to questions of contemporary medical image visualization, the architecture design of cloud server systems, and the use of methods for .DICOM data representation in distributed smart healthcare industry systems.
Purpose.
In modern medicine and biology, the pace of research, along with the objective need for constant growth, creates a sharp necessity for three-dimensional data representation [1], requiring high-performance methods of three-dimensional visualization, processing, decomposition, reconstruction and analysis. In many ways, this sharp growth is associated with the development of computer technologies (from quantity into quality): the quantitative characteristics of the devices and computers used, the emergence of three-dimensional technologies for processing, visualization and research of data, and, on this basis, the creation of new three-dimensional methods for human-machine interaction.
Research methodology.
This paper proposes a visualization method using direct volumetric rendering based on cubic spline interpolation within a ray emission (ray casting) technique. Modifications of these algorithms, based on block decomposition, are proposed and still under investigation. The server architecture is proposed as a cloud hypervisor server system for grid processing of the required data. (The author is a postgraduate at the National University of Science and Technology "MISiS", Cybernetics department, 4 Leninsky ave., Moscow, Russian Federation. Preprint submitted to Future Generation Computer Systems, May 5, 2020.)

Research results.
The well-known method of block decomposition of medical data was studied: the idea of block searching was implemented, and auxiliary structures reducing the volume range were used to skip empty spaces (empty space leaping). The cloud architecture for server processing was developed, and a user interface was proposed.
Scope of application.
The work aims to investigate possible contemporary interactive cloud-based solutions in the field of applied medicine for smart healthcare, as a free, open-source data visualization system distributed under the MIT license.
Conclusions.
A comparative study of a number of well-known implementations of the Ray Casting algorithm was carried out. A new numerical method, the method of spheres, is proposed for calculating the volume, together with a proposal for parallelizing the algorithm on graphics accelerators in a linearly homogeneous computing environment using block decomposition methods. For artifact control, a cubic interpolation algorithm was used. The cloud server architecture was proposed. The work is done as part of the author's PhD thesis for non-profit/non-commercial, educational/research-only purposes under the MIT License.
Keywords:
Data Science for Smart Healthcare, healthcare, internet of things, computer science, interactive visualization, distributed, web-server, cloud, parallel, systems, scientific research, cloud technologies, communications, information, cloud healthcare systems, web-services, web-technologies, information systems, experimental research, DICOM, CT, computed tomography, medical imaging, volume render.

1. Introduction
One of the modern highly informative methods in medical diagnostics is computed tomography (CT).
X-ray computed tomography, or CT, is an imaging process that reproduces a cross-sectional image representing the X-ray attenuation properties of the body. Three-dimensional reconstruction of CT image slices makes it possible to visualize in three-dimensional space the localization of blood vessels, pathologies and other features, making this technique useful as an interactive visualization tool for healthcare purposes. Nowadays, whole-body computed tomography can be performed with a slice layer thickness of less than ∼ … . Multislice CT (MSCT) is a method for volumetric studies of the entire human body (the resulting axial transverse tomograms constitute a three-dimensional data array), allowing any reconstruction of images to be performed, including multi-plane reformation, volumetric (if necessary, stereo) visualization, and virtual endoscopy [3].
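As a minimal illustration of such reconstructions, multi-plane reformation reduces to re-indexing the three-dimensional array assembled from the axial slices. This is a pure-Python toy with synthetic data; real .DICOM series reading and slice sorting are assumed away:

```python
def sagittal_reformation(axial, x):
    """Multi-plane reformation: extract the sagittal plane at column x
    from a stack of axial slices (volume indexed as axial[z][y][x])."""
    return [[row[x] for row in slc] for slc in axial]

# Toy volume: each value encodes its own (z, y, x) coordinates for checking.
axial = [[[100 * z + 10 * y + x for x in range(4)]
          for y in range(4)] for z in range(3)]
sag = sagittal_reformation(axial, 2)
print(sag[1][3])  # voxel at z=1, y=3, x=2 -> 132
```

The same re-indexing pattern yields coronal planes or arbitrary oblique reformations once interpolation is added.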
Aims of the study.
The object of research is spatial data science in the fields of medicine and the healthcare industry proper, applied to computed tomography (CT) data of existing types in .DICOM format. The subject of the study is methods and algorithms for high-quality visualization of medical spatial data, especially medical tomography image data, as well as methods and algorithms for morphological representation of surfaces and features on GPU/CPU systems, including a server-side cloud system architecture, with the following steps:
(a) to create a sustainable model of the homogeneous cloud-based server architecture,
(b) to create a reliable model of the web-server,
(c) to create the representation for the volume data render tasks (including mobile operating systems).

Figure 1: The 3D volume render example made with the cubic spline interpolation
The objects of study are models of image synthesis systems for three-dimensional models of human-machine communication in real time in the analysis of medical and biological spatial data, as well as the development and study of models and architectures for GRID computing and cloud server hypervisor implementations. This goal requires the following tasks:
• to study the existing methods of three-dimensional visualization in applied medicine, and science in general, with an analysis of approaches for improving the quality and productivity of 3D renderings,
• to develop existing methods for their complementary application on budget GPUs and to create 3D visualization technology for mass medical applications, including:
(a) modification of the method of block decomposition of gigavoxel (more than ∼ … voxels) data for visualization on the GPU, preserving the possibilities and quality of visualization of non-decomposed data (the possibility of using cubic visual interpolation, lighting, casting shadows, skipping blank areas, different integration conditions along the beam, ...),
(b) to develop methods to improve the performance of 3D visualization and to suppress various kinds of render defects (artifacts), rational under GPU conditions,
(c) to develop a method for quantifying visualization quality, for achieving the required quality and comparing the real effectiveness (in compliance with a given quality) of the proposed methods,
• to study and develop an online prototype of the software package that implements the proposed methods on modern parallel hardware GPU architectures, and to experimentally investigate the effectiveness of the methods.

Render example.
Figure 1 represents the 3D volume render data visualization made with the cubic spline interpolation used in the study, which is considered as the basic algorithm for the volume render process in this system.
2. Methodology
There are two main ways to visualize scalar field isosurfaces:
(a) restoring the isosurface geometry (usually in polygonal form). In this case, the well-known marching cubes method is usually used. For surfaces reconstructed by this method, there is an algorithm for their effective compression. The main advantage of this approach is low hardware requirements: in contrast to the ray emission method, rendering a polygonal surface does not impose high requirements on the GPU; in addition, there is no need to store the initial field data for visualizing the surface. Another advantage of the approach is access to the polygons, for example, for analysis of surface morphology. Among the disadvantages, the duration of restoring the isosurface geometry can be noted [4],
(b) visualization using the ray emission method (the Ray Casting method), which does not imply restoring and preserving the surface mesh in RAM (Random Access Memory). The method emits a ray for each pixel in the image in order to find collisions of the ray with the surface, determining the illumination of the surface at that point and thus the color of the pixel. Obviously, generating an image of an isosurface by this method is a much more resource-intensive process than rendering a polygonal surface, so the collision search is usually performed on the GPU. In this case, the advantage is that there is no need to restore the polygons of the surface, allowing interactive adjustment of the isosurface value. Often, on modern video cards, the ray emission method is superior to the usual rendering of a polygonal mesh, both in quality and in performance. However, the need to store the original data array and the high hardware requirements limit the use of this method [5].
The ray tracing method and its modification, called the ray emission method (Ray Casting, see Figure 2), allow achieving the best quality and most informative volume rendering.
In this method, for each pixel of the desired image a corresponding ray is generated, defined by a point in space (for example, the position of the observer) and a direction. Moving along this direction with a certain step, the ray accumulates the color of the pixel. An important advantage of the ray emission method is that the algorithm is easily parallelized on the graphics processing unit (GPU), since each pixel of the desired image is processed independently of the rest. A medical image of ∼ … voxels can easily fit in GPU memory, and the most efficient storage for such data is a three-dimensional texture, since the GPU provides texture access caching and automatic trilinear interpolation of data when sampling at an arbitrary point in space [6]. In this work, cubic interpolation is used. The following method is proposed in this study: a modification of the block decomposition of data in the Ray Casting algorithm, with different optimal sequences for traversing the blocks using volume-optimized auxiliary structures of the empty space leaping method (skipping empty areas), providing the ability to build local lighting and shadows combined with tables of previously integrated rendering. The task is three-dimensional visualization of scalar and vector fields in medical and scientific imaging in general: visualization of scalar fields defined on regular and irregular grids.
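The accumulation loop just described (generate a ray per pixel, march with a fixed step, composite front to back, stop early once opaque) can be sketched in a few lines of pure Python. The `sample` and `transfer` callbacks and all constants here are illustrative stand-ins, not the paper's GPU implementation:

```python
import math

def cast_ray(sample, origin, direction, step, transfer, max_steps=512):
    """Front-to-back compositing along one ray (the ray emission scheme).

    sample(x, y, z) returns the scalar field value at a point, or None
    outside the volume (interpolation is abstracted away here; the paper
    uses trilinear/tricubic interpolation on the GPU).
    transfer(v) maps a sample value to (r, g, b, alpha)."""
    n = math.sqrt(sum(d * d for d in direction))
    dx, dy, dz = (d / n for d in direction)
    x, y, z = origin
    r = g = b = 0.0
    a = 0.0
    for _ in range(max_steps):
        v = sample(x, y, z)
        if v is None:                 # ray left the volume
            break
        sr, sg, sb, sa = transfer(v)
        w = (1.0 - a) * sa            # remaining transparency times opacity
        r, g, b = r + w * sr, g + w * sg, b + w * sb
        a += w
        if a > 0.99:                  # early ray termination
            break
        x, y, z = x + step * dx, y + step * dy, z + step * dz
    return (r, g, b, a)

# A homogeneous 8-unit-deep "volume" rendered as translucent red:
inside = lambda x, y, z: 1.0 if 0 <= x < 8 else None
red = lambda v: (1.0, 0.0, 0.0, 0.5)
color = cast_ray(inside, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 1.0, red)
print(round(color[3], 4))  # opacity saturates at 1 - 0.5**7 = 0.9922
```

Since each pixel's ray is independent, the outer loop over pixels is what maps onto one GPU thread per pixel.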
Figure 2: The Ray Casting algorithm
Voxel graphics.
The method is based on displaying three-dimensional textures in space. Naturally, texture elements (voxels) are usually translucent. As a rule, scalar fields are visualized in this way. The method of direct volumetric rendering, to which this work is largely devoted, is intended to visualize voxels as a translucent medium [7].
Isosurfaces.
Isosurfaces are intended for visualization of three-dimensional scalar fields. Sometimes several isosurfaces are displayed at once; their color may depend on the field value [8]. A three-dimensional texture is an acceptable repository for volumetric data rendering performed on graphics video cards using OpenGL library extensions. However, there are restrictions on the size of the texture: the maximum size is 512³ voxels [2] [3]. Using a block view of the data circumvents this limitation. For programming on the GPU, the shader language GLSL was chosen, because at the moment the performance of implementations on CUDA and, especially, OpenCL is often inferior to shader implementations [9]. The main limitation of shaders is the lack of access to shared memory on the GPU [2]. Such access allows grouping the rays into packets (slabs). Slab-based rendering allows decomposition in image space, grouping the rays into packets with shared memory, which cannot be accessed from shaders [10]. When decomposing data, different data blocks are written into different three-dimensional textures. The textures are the same size, with the exception of those that capture only part of the source data. Using blocks of different sizes covering only the visible part of the data can give an additional performance boost and save GPU memory. The texture map method also reduces the size of the used GPU memory [2]. This implementation is limited to excluding completely empty blocks and fitting the bounding box of each block to the visible voxels. To avoid artifacts at the junctions, adjacent blocks must overlap with a thickness of at least one voxel. This is sufficient if trilinear interpolation is used in the rendering algorithm when sampling values from the data. In this implementation, an overlap of three voxels between blocks is used.
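The two optimizations just mentioned, excluding completely empty blocks and fitting each block's bounding box to its visible voxels, can be sketched as follows (pure Python on a nested-list volume; the visibility threshold is a hypothetical parameter):

```python
def visible_bbox(block, threshold=0.0):
    """Fit a tight bounding box around the visible (above-threshold)
    voxels of one block, or return None for a completely empty block.
    `block` is a nested list indexed as block[z][y][x]."""
    lo = [None, None, None]
    hi = [None, None, None]
    for z, plane in enumerate(block):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                if v > threshold:
                    for axis, c in enumerate((z, y, x)):
                        if lo[axis] is None or c < lo[axis]:
                            lo[axis] = c
                        if hi[axis] is None or c > hi[axis]:
                            hi[axis] = c
    return None if lo[0] is None else (tuple(lo), tuple(hi))

# An "air" block is excluded from rendering entirely; a block with one
# dense voxel gets a one-voxel bounding box.
air = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
spot = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
spot[1][2][3] = 1.0
print(visible_bbox(air))   # None -> block skipped
print(visible_bbox(spot))  # ((1, 2, 3), (1, 2, 3))
```

The returned boxes are what the ray marcher tests against when leaping over empty space.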
This overlap is used because, firstly, the Phong local lighting model [11] requires calculating the gradient, for which it is necessary to make additional samples from neighboring voxels, and secondly, cubic interpolation is used during sampling instead of trilinear, including for calculating the gradient, which extends the sampling radius by one more voxel. There are also approaches for stitching data blocks having different spatial resolutions. Figure 3 represents decomposing data into overlapping blocks; here the blocks overlap with a thickness of two voxels. Blocks marked in red and yellow have regular size (8 voxels in this case). The remaining blocks capture the remaining data and are reduced in size.

Figure 3: Decomposing data into overlapping blocks structure

If one wants to visualize a three-dimensional discrete data array (tomogram) by the volume rendering method, then certain optical properties must be assigned to each possible data value.
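The overlapping-block partitioning of Figure 3 (and the three-voxel overlap used in this implementation) can be sketched per axis, together with a rough estimate of how much memory the duplicated overlap voxels cost; the function names and the 512-voxel example are illustrative:

```python
def block_origins(size, block=64, overlap=3):
    """Origins of overlapping blocks along one axis: interior blocks of
    side `block` advance by (block - overlap), so neighbouring blocks
    share `overlap` voxels; the last block may be reduced in size."""
    stride = block - overlap
    return list(range(0, max(size - overlap, 1), stride))

def overlap_overhead(block, overlap):
    """Fraction of duplicated voxels for an interior block padded by
    `overlap` voxels on every face (a rough upper bound)."""
    return (block + 2 * overlap) ** 3 / block ** 3 - 1.0

# 512 voxels split into blocks of 64 with a 3-voxel overlap:
print(block_origins(512))                 # 9 origins, stride 61
print(round(overlap_overhead(64, 3), 2))  # ~0.31 extra memory
print(round(overlap_overhead(16, 3), 2))  # ~1.60 -- tiny blocks waste memory
```

The overhead numbers show why very fine partitions defeat the memory savings of blocking.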
T(x), the transfer function (or palette), in the simplest case matches each data value to a color and transparency (in practice, the opacity value is usually what is stored in memory). Postclassification is the operation scheme of the basic rendering algorithm in which coloring (classification) of a point in space occurs after interpolating the value sampled at the current point of the ray, i.e. the color is determined from the interpolated data value.
Preclassification is the operation scheme of volume rendering in which the color of a point in space is defined as the color interpolation of nearby voxels that are classified (have received a color) according to their values, i.e. coloring occurs before interpolation [2]. If one visualizes the array as many multi-colored translucent cubes, i.e. if interpolation between data cells (voxels) is not used, then the result of the 3D rendering will contain crude artifacts. For smoothness of visualization it is necessary to use interpolation between the nodes of the source (data) grid, and with it two different approaches arise. In the most widespread case, the postclassification method is used [2] [12]. The optical properties of a point in space are determined by first calculating the interpolated value V of the data at that point (usually using trilinear interpolation [2]); the transfer function is then taken at the point V, i.e. T(V), even if the interpolated value V does not occur at all in the histogram of the source data [?]. On the contrary, in the preclassification process all voxels are painted first, and the optical properties of an arbitrary point in space are considered as the result of interpolation between the optical properties (color with transparency) of the voxels, i.e.
classification occurs before interpolation, not after [7] [5] [12] [13]. In practice, the postclassification model is more often used, because it gives:
(a) significantly improved visualization quality: data stepping is significantly less noticeable than with preclassification with tricubic interpolation for rendering (preclassification gives the same artifacts),
(b) better performance or lower resource intensity: in practice, for the preclassification method, each voxel with its optical properties must either be stored in memory, or these voxel properties must be calculated during rendering.
In the first case, instead of a 12-bit array with the source data, one has to load into the GPU a 32-bit array of the same dimensions, which stores the color and transparency of the voxels instead of the original data values; and when changing the transfer function, the entire array must be re-calculated and re-loaded, instead of loading just a new transfer function. In the second case, one has to sample eight voxels (in the case of trilinear interpolation), calculate their colors and transparencies, and then find the interpolated color and transparency for the point. Whereas in postclassification one makes a single sample from the point using trilinear filtering of textures, which gives hardware trilinear interpolation practically for free [14] [15] [16]. In this work the preference is given to tricubic spline interpolation using the postclassification algorithm, due to the results from the comparison presented in Figure 4.

Figure 4: Comparison of rendering results: 1) preclassification + trilinear interpolation; 2) preclassification + tricubic interpolation; 3) postclassification + trilinear interpolation; 4) postclassification + tricubic interpolation.
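The difference between the two schemes can be shown on a toy one-dimensional example (a hypothetical two-color palette; real transfer functions map values to RGBA):

```python
def classify(v, threshold=0.5):
    """Toy transfer function: red below the threshold, blue at or above."""
    return (1.0, 0.0, 0.0) if v < threshold else (0.0, 0.0, 1.0)

def lerp(a, b, t):
    """Linear interpolation between two color tuples."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

v0, v1, t = 0.0, 1.0, 0.5   # two neighbouring voxel values, sample midway

# Postclassification: interpolate the data first, then apply the palette.
post = classify((1 - t) * v0 + t * v1)

# Preclassification: color the voxels first, then interpolate the colors.
pre = lerp(classify(v0), classify(v1), t)

print(post)  # (0.0, 0.0, 1.0) -- a color that exists in the palette
print(pre)   # (0.5, 0.0, 0.5) -- a purple the palette never contains
```

Postclassification always outputs a palette color, even at interpolated values absent from the data histogram; preclassification blends already-classified voxels and can produce colors outside the palette, which is one source of its artifacts.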
Analysis.
Analysis of the problems of implementing the block decomposition method while maintaining high-quality 3D visualization.
Pros:
(a) the ability to load large amounts of data. In addition, when dividing into blocks of 64³ voxels, one can discard the blocks that do not contain useful information. For example, for CT scans, one can usually drop ∼
40% of the blocks, because these blocks contain only air. The size of the visualized data is limited only by the capacity of the video card (in the case of a consumer video card with a memory of only 1 GB, one can visualize a data array up to ∼ … bytes in size),
(b) decomposition significantly improves visualization performance, because during the rendering of a single small block, sampling comes from a small texture, which is much faster than sampling from a large texture. Since blocks are displayed in order from the observer, some of the blocks can be occluded by previously drawn blocks; the early ray termination strategy is also applicable to block data rendering. In addition to saving GPU memory, blocking also saves the space that has to be traced with rays. As with memory savings, the space savings depend on the choice of block size and, as a rule, the smaller the size, the greater the saving. In the block representation of data, the strategy of skipping empty areas is applicable with greater efficiency, since the data is divided into small arrays, each of which is easier to fit with a bounding box than the data array as a whole. Blocking also provides additional opportunities for parallelizing rendering across multiple GPUs, which is especially important for the client-server architecture of the visualizer,
(c) despite the need to mix the rendering results of the various blocks in a specific order, each individual block can be rendered independently of the others on any GPU. Computation and data can be distributed across different GPUs. There are works on distributed visualization by the volume rendering method on cluster systems, which is especially important for scientific visualization, when numerical modeling produces huge data arrays that cannot be transferred to one local machine.
Cons:
(a) the main drawback of the block representation is the complexity of the rendering algorithm: when rendering a block, nothing is known about the data in neighboring blocks (except for the overlap layer); providing each block with access to the remaining blocks can negate the performance gain. Thus, for example, the implementation of shadow casting and various nonlocal lighting techniques [7] [5] [13] [14] [15] [12] [17] [18] [19] [20] [21], including techniques requiring the generation of secondary rays [22] [23] [24] [25], becomes more complicated [26] [27],
(b) it is worth highlighting the complication of the multi-volume rendering algorithm, in which it is necessary to perform joint rendering of two or more spatial data arrays overlapping in space, each of which has a block representation,
(c) block overlapping means that the voxels at the block boundaries are duplicated, so if the partition is too fine (with block sizes less than 32³), the GPU memory savings become ineffective,
(d) if the block is too small, the total time spent on switching between blocks noticeably increases; after rendering the next block, it is necessary to copy the rendering results from the texture into which rendering was performed to the texture from which reading will be carried out during the rendering of the following blocks. Even when copying only the area of the screen where the block was rendered, performance already drops significantly at a regular block size of 32³ voxels [28].
The work on slab-based rendering [29] is very interesting as a study demonstrating not only a new method of slicing blocks along a ray to reduce misses for a ray packet in shared memory, but also demonstrating the inconstancy of the gain and its limited amplitude (∼ …).

The method of spheres. The method of spheres, for the analysis of the morphology of complex biological objects, works in
SVR values. The Surface-to-Volume Ratio (SVR), or Area-to-Volume Ratio, is one of the most important characteristics of all biological objects, from the cell to the animal as a whole. This value characterizes the intensity of exchange between the biological object as a whole and the environment, and has a characteristic dependence on the radius R of the object as 1/R. With the same measure, one can approach the characterization of the local properties of the object. It is therefore very important to have a quantitative local characteristic of the morphology of the studied object in SVR values (SVR = S/V) in order to discover, understand and explore the functions performed by its parts and organs. This is especially relevant to brain cells such as astrocytes, due to the extraordinary complexity of their shape and the relatively poor knowledge about them [11]. However, to date no sustainable methodology for calculating local SVRs has formed. In addition, there is the problem of computing them in an acceptable time, as the computational complexity of the local SVR analysis task is proportional to the square of the number of vertices or triangles, ∼ O(n²), and astrocytes reconstructed from electron micrographs (a microscope with a resolution of units of nanometers) comprise hundreds of thousands (n ∼ 10⁵) of triangles [28].

Surface-to-volume approach.
Let us define a scalar field SVR(X) in 3D space as follows: let X be an arbitrary point in space and G the surface of the object under study (in our case, a closed polygonal surface). Construct a sphere Ω of radius R centered at the point X. Let the intersection of Ω and G be nonzero; let S be the area of the part of the object's surface lying inside the sphere Ω, and V the volume of the intersection of the domains V_G and V_Ω bounded by G and Ω (denote this locus of points by Θ). Then the value of the field SVR(X) is calculated as (1) and presented in Figure 5:

SVR(X) = S / V    (1)

In the case V = 0, i.e. if Ω ∩ G = ∅, the field at the point X is not defined. As a result, the method of spheres is defined as the method consisting in calculating the field SVR(X) at the points X belonging to the boundary of the investigated part of the object. The field values in this method are significantly influenced by the choice of the radius of the sphere. To calculate SVR(X) in accordance with definition (1), it is necessary to calculate the area S and the volume of only the part of G located inside the sphere Ω. The area S is the sum of the areas of the triangles located entirely inside Ω plus the cut-off parts of the areas of the triangles intersected by the sphere Ω. The area of the part of a triangle inside Ω (see Figure 5b) is calculated by the Monte Carlo method [30]. The volume of G inside Ω is computed in a similar way. However, there are difficulties that should be considered.

Figure 5: The cross-sectional diagram of the sphere Ω for calculating SVR(X): a) in bold, the part of the surface G whose area is calculated and the region whose volume is calculated; b) approximate calculation of the area of a triangle inside the sphere Ω by a set of random samples of points on the triangle.
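The per-triangle Monte Carlo estimate of Figure 5b can be sketched as follows (pure Python; uniform barycentric sampling of points on the triangle, counting the fraction that falls inside Ω; all names are illustrative):

```python
import math, random

def sub(p, q): return tuple(pi - qi for pi, qi in zip(p, q))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
def norm(u): return math.sqrt(sum(x * x for x in u))

def tri_area_inside_sphere(a, b, c, center, radius, n=20000,
                           rng=random.Random(0)):
    """Monte Carlo estimate of the area of triangle (a, b, c) lying inside
    the sphere of given center and radius (cf. Figure 5b)."""
    area = 0.5 * norm(cross(sub(b, a), sub(c, a)))   # full triangle area
    hits = 0
    for _ in range(n):
        # uniform point on the triangle via barycentric sampling
        r1, r2 = rng.random(), rng.random()
        s = math.sqrt(r1)
        u, v = 1 - s, s * r2
        p = tuple(u * a[i] + v * b[i] + (1 - u - v) * c[i] for i in range(3))
        if norm(sub(p, center)) <= radius:
            hits += 1
    return area * hits / n

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(tri_area_inside_sphere(*tri, (0, 0, 0), 10.0))  # whole triangle: 0.5
print(tri_area_inside_sphere(*tri, (5.0, 5.0, 5.0), 1.0))  # disjoint: 0.0
```

Summing such estimates over the triangles intersected by Ω, plus the exact areas of the triangles fully inside, gives S; V is estimated analogously by sampling points of the ball.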
Cloud architecture.
Cloud data storage is an online storage model in which data is stored on numerous servers distributed across the network and provided for use to customers, mainly by a third party. In contrast to the model of storing data on one's own dedicated servers, purchased or leased specifically for such purposes, the number and internal structure of the servers are generally not visible to the client. Data is stored and processed in the so-called cloud, which, from the user's viewpoint, represents one large virtual server. Physically, such servers can be located geographically far from each other, up to being on different continents.

Types of Cloud Computing.
The concept of cloud computing is often associated with "everything as a service" technologies, such as:
(1) "Infrastructure as a Service" (IaaS),
(2) "Platform as a Service" (PaaS),
(3) "Software as a Service" (SaaS).
The solution in this study is being developed as an open-source solution based on the SaaS model. SaaS is an application deployment model that involves delivering an application to the end user as an on-demand service. Access to such an application is carried out through the network, most often through an Internet browser. The main advantage of the SaaS model for the user is the absence of costs associated with installing, updating and maintaining the equipment and the software running on it. The SaaS model presented in Figure 6 includes the following points:
(1) the application is adapted for remote use,
(2) multiple clients can use one application,
(3) the application can be upgraded by the developers smoothly and transparently to the users.
In fact, SaaS software can be seen as a more convenient and cost-effective alternative to internal information systems. A development of the SaaS logic is the concept of WaaS (Workplace as a Service), whereby the client receives at his disposal a virtual workstation fully equipped with all the necessary software. Figure 6 presents the approximate cloud architecture.
3. Results
Figure 6: The cloud based server architecture of the SaaS solution

There is a significant complication of the visualization algorithms arising from the block decomposition of the data. In practice, its necessity inevitably arises not only for visualizing large arrays but also for speeding up rendering, especially on graphics cards with low performance. Regarding this, the method of block decomposition of data for volume visualization based on the ray emission algorithm (Ray Casting method) is still being studied. Based on the study, a method for quantifying
DVR artifacts in the ray emission (Ray Casting) method, caused by an insufficiently short ray step, in a form similar to the Peak Signal-to-Noise Ratio (PSNR), is still being developed. According to the results from the Ray Casting processing in volume rendering, the PSNR ratio brings the noise level to a logarithmic scale in dB, where values from 30 to 40 dB correspond to acceptable image synthesis quality. The causes of artifacts arising from trilinear interpolation are investigated further, as are those from cubic interpolation (including approaches to their quantitative assessment involving the assessment of the structural similarity of two images, SSIM). A new method using postclassification for eliminating volumetric errors (artifacts) in visualizations, characterized by using pre-integrated rendering in a virtual data sampling method when integrating along the ray, which is optimal for the class of visualization cases where tricubic interpolation and local lighting are proposed, is still being investigated. A study was conducted on the 3D visualization methods based on the Ray Casting (RC) algorithm. Although in the course of the experiments no RC algorithm was found that is optimal in terms of quality and performance under all visualization conditions, the evaluation method showed the necessity of optimization: the non-optimized
UDVR approach is inferior to the other approaches under all visualization conditions, despite its high performance. Applying the approaches to volume rendering of medical image data studied in this work, it was possible to design a 3D visualization software prototype and achieve interactive, high-quality volumetric visualization of a medical image of about 2 GB in size (512x512x5382 voxels) on localhost machines. In addition, the design for cloud-based grid machines was developed. The solution mostly implements hypervisor technologies for grid server computing. A method for implementing a fully functional system based on GRID cloud computing servers is proposed; examples of the client side are shown in Figures 7, 8, 9.
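Two quantitative statements in this section can be checked in a few lines of pure Python: the PSNR scale used for the artifact assessment, and the claim that the 512x512x5382 volume is about 2 GB (assuming the 12 bits per voxel cited earlier for CT source data; a sketch, not the system's code):

```python
import math

def psnr(reference, rendered, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images
    (given as flat pixel lists); 30-40 dB is treated as acceptable quality."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, rendered)) / len(reference)
    return math.inf if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of 8 grey levels sits right at the 30 dB boundary:
print(round(psnr([0] * 16, [8] * 16), 2))   # 30.07 dB

# The study volume: 512 x 512 x 5382 voxels at 12 bits per voxel.
voxels = 512 * 512 * 5382
print(round(voxels * 12 / 8 / 2 ** 30, 2))  # ~1.97 GiB, i.e. "about 2 GB"
```

Identical images give infinite PSNR, which is why the metric is applied to renders differing only in ray step or interpolation scheme.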
4. Discussion
Today there are variety of the approaches developed in the works of the fol-lowing scientists: Klaus Engel, Bernhard Kainz, Daniel Ruijters, Stefan Guthe,Johanna Beyer, Vincent Vidal, Markus Hadwiger, Daniel Weiskopf, ThomasErtl, Wolfgang Strasser, Byeonghun Lee, Jihye Yun, Jinwook Seo , Yeong-GilShin, Bohyoung Kim, Byonghyo Shim, and others, allowing to process the real-time volumetric visualization using GPU computing. The world market offersseveral tomography software systems that provide fusion and three-dimensionalvisualization of tomograms. These commercial systems use the most productive17 igure 7: The cloud based online version (prototype) versions of the commercial 3D visualizer systems [1]. The same can be saidabout the growth of information flow and the need to build productive meth-ods for processing it in the high-energy physics. The most critical situation,due to the huge amount of data, is observed in three-dimensional processingof data from particle accelerator detectors with a resolution of a few picosec-onds ( ∼ − seconds) [31]. For various mathematical models, it is necessaryto visualize the obtained data in such a way that certain field properties arerevealed. For example, for the result of numerical simulation of unsteady fluidflows, these properties can be revealed:(1) dynamics of the velocity field,(2) the formation and decay of vortices,(3) vortex flows and shock waves.Of particular difficulty is the task of constructing animations for vector velocityfields (both steady and unsteady flows). Texture methods for solving this prob-lem allow to obtain high-quality results in the visualization of two-dimensionalflows. The pinnacle of their development are methods based on the construction18 igure 8: The cloud based online version (prototype) of a Motion Map for stationary flows. 
However, when trying to apply them to three-dimensional flows, a number of problems arise, associated primarily with the high density of texture data and, importantly, with large computational costs. The bottleneck of the well-known three-dimensional texture methods is the construction of interactive animation with high-quality animated paintings. Due to the high computational complexity, it is possible to build only a certain animation sequence for the purpose of its subsequent visualization. Despite significant progress in solving the problems mentioned above, a number of problems remain unsolved:
(1) high-quality three-dimensional visualization of medical images today is tied to the tomograph due to the high performance demands on the workstation, so it is not available to the ordinary clinician and, especially, to medical students. Thus, a transition to online software for mass accessibility, without loss of visualization quality, is necessary;
(2) the volume of the medical image available for 3D reconstruction on the GPU is limited by the size of the GPU memory (today, for the mass-market office video cards on sale, the volume is about ∼ …); the constant growth of data requires the removal of restrictions on data volume and the construction of decomposition algorithms for parallel block data processing while preserving all the capabilities and quality of visualization on the side of user services;

[Figure 9: The mobile version (prototype)]

(3) despite the increase in productivity and quality of 3D visualization, there is no practice of quantitatively assessing the quality of visualization; in gaming technologies, for example, the so-called light-distribution calculation method is used [32];
(4) there are several open and commercial programs for semi-automatic three-dimensional geometric reconstruction of cells, but there are no methods and programs for automating a detailed morphological analysis of cells, while the computational complexity of such analysis in SVR (Surface-to-Volume Ratio) values is proportional to the square of the number of triangle vertices, O(n²), with characteristic values of approximately … triangles and the deviation variance of the vortex data given under the volume render [10].

Software rights. All rights to the software are under the MIT License.
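Problem (2) above, block decomposition with empty-space leaping, can be sketched as follows: the volume is split into fixed-size blocks, each block stores its min/max scalar value, and blocks whose value range does not overlap the visible range of the transfer function are skipped entirely. This is only a minimal CPU-side illustration under assumed names and sizes; the block size, indexing scheme, and function names are not taken from the paper, and a production system would keep the block structure on the GPU.

```python
# Hedged sketch of block decomposition with empty-space leaping.
# Block size and layout are illustrative assumptions.

def build_block_ranges(volume, nx, ny, nz, block=4):
    """Return {(bx, by, bz): (min, max)} for a flat scalar volume
    stored in x-fastest order: index = x + nx * (y + ny * z)."""
    ranges = {}
    for bz in range(0, nz, block):
        for by in range(0, ny, block):
            for bx in range(0, nx, block):
                vals = [volume[x + nx * (y + ny * z)]
                        for z in range(bz, min(bz + block, nz))
                        for y in range(by, min(by + block, ny))
                        for x in range(bx, min(bx + block, nx))]
                ranges[(bx, by, bz)] = (min(vals), max(vals))
    return ranges

def visible_blocks(ranges, lo, hi):
    """Keep only blocks whose value range overlaps the visible
    interval [lo, hi] of the transfer function; the rest are
    'empty space' and can be leaped over during ray traversal."""
    return [b for b, (vmin, vmax) in ranges.items()
            if vmax >= lo and vmin <= hi]
```

Because each block is processed independently, the same structure also supports the parallel block processing demanded in problem (2): blocks can be distributed across GPU threads or across the cloud grid nodes described earlier.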
References

[1] C. Vyas, C. Wheelock, Home health technologies: medical monitoring and management, remote consultations, eldercare, and health and wellness applications: global market analysis and forecasts, Tech. rep., Tractica (12 2015).
[2] P. Calhoun, B. Kuszyk, H. D., et al., Three-dimensional volume rendering of spiral CT data: theory and method (1999) 745–764.
[3] Y. Ding, Visual quality assessment for natural and medical image, Springer, Berlin, 2018. URL https://cds.cern.ch/record/2697996
[4] T. Flohr, K. Stierstorfer, S. S., et al., New technical developments in multislice CT, part 1: approaching isotropic resolution with sub-millimeter 16-slice scanning, Fortschritte auf dem Gebiete der Röntgenstrahlen und der Nuklearmedizin (2002) 174:839–845.
[5] M. Hadwiger, C. Sigg, H. Scharsach, K. Bühler, M. Gross, Real-time ray-casting and advanced shading of discrete isosurfaces 24 (3).
[6] K. Engel, M. Hadwiger, J. Kniss, C. Rezk-Salama, Real-Time Volume Graphics, A.K. Peters, Ltd., New York, USA, 2004.
[7] F. Hernell, P. Ljung, A. Ynnerman, Local ambient occlusion in direct volume rendering, IEEE Transactions on Visualization and Computer Graphics 16 (4).
[8] M. Donatelli, S. Serra-Capizzano, Computational methods for inverse problems in imaging, Springer INdAM series, Springer, Cham, 2019. URL https://cds.cern.ch/record/2704061
[9] Y. Dobashi, S. Kaji, K. Iwasaki, Mathematical insights into advanced computer graphics techniques, Mathematics for Industry, Springer, Singapore, 2018. URL https://cds.cern.ch/record/2710749
[10] S. Margret Anouncia, U. K. Wiil, Knowledge computing and its applications: knowledge computing in specific domains, Springer, Singapore, 2018. URL https://cds.cern.ch/record/2697932
[11] Y. Donchin, A. Rivkind, J. Bar-Ziv, J. Hiss, J. Almog, M. Drescher, Utility of postmortem computed tomography in trauma victims, Journal of Trauma.
[12] M. Tarini, P. Cignoni, C. Montani, Ambient occlusion and edge cueing to enhance real time molecular visualization, IEEE Transactions on Visualization and Computer Graphics 12 (5).
[13] M. Sattler, R. Sarlette, G. Zachmann, R. Klein, Hardware-accelerated ambient occlusion computation, Proc. Conf. Vision, Modeling, and Visualization.
[14] P. Shanmugam, O. Arikan, Hardware accelerated ambient occlusion techniques on GPUs, Proc. Conf. Interactive 3D Graphics and Games.
[15] A. Stewart, Vicinity shading for enhanced perception of volumetric data, Proc. IEEE Conf. Visualization.
[16] D. H. Besset, Object-oriented implementation of numerical methods: an introduction with Java and Smalltalk, [s.n.], San Francisco, CA, 1983. URL https://cds.cern.ch/record/768962
[17] U. Behrens, R. Ratering, Adding shadows to a texture-based volume renderer, Proc. IEEE Symp. Volume Visualization.
[18] S. Zhukov, A. Iones, G. Kronin, An ambient light illumination model, Rendering Techniques, G. Drettakis and N. Max, eds., Springer-Verlag Wien, 1998.
[19] P. Desgranges, K. Engel, G. Paladini, Gradient-free shading: a new method for realistic interactive volume rendering, Proc. Conf. Vision, Modelling, and Visualization.
[20] F. Hernell, P. Ljung, A. Ynnerman, Efficient ambient and emissive tissue illumination using local occlusion in multiresolution volume rendering, Proc. Eurographics/IEEE-VGTC Symp. Volume Graphics.
[21] M. D. Fairchild, Color Appearance Models, Addison Wesley Longman Inc., Boston, MA, 1998.
[22] C. Wyman, S. Parker, P. Shirley, C. Hansen, Interactive display of isosurfaces with global illumination, IEEE Transactions on Visualization and Computer Graphics 12 (2).
[23] C. Rezk-Salama, GPU-based Monte-Carlo volume raycasting, Proc. Conf. Pacific Graphics 12 (2).
[24] D. Weiskopf, K. Engel, T. Ertl, Interactive clipping techniques for texture-based volume visualization and volume shading, IEEE Transactions on Visualization and Computer Graphics 9 (3).
[25] M. Magnor, K. Hildebrand, A. Lintu, A. Hanson, Reflection nebula visualization, Proc. IEEE Conf. Visualization.
[26] J. Kniss, S. Premoze, C. Hansen, D. Ebert, Interactive translucent volume rendering and procedural modeling, Proc. IEEE Conf. Visualization.
[27] H. W. Jensen, Realistic Image Synthesis Using Photon Mapping, A.K. Peters, Ltd., Natick, MA, 2001.
[28] N. Gavrilov, High performance visualization and morphological analysis of three-dimensional data in medicine and biology, Ph.D. thesis, Nizhny Novgorod State University N.I. Lobachevsky (2013).
[29] J. Mensmann, T. Ropinski, K. Hinrichs, An advanced volume raycasting technique using GPU stream processing, in: Computer Graphics Theory and Applications.
[30] I. M. Sobol, The Monte-Carlo Method, Nauka, Moscow, 1968. URL http://math.ru/lib/plm/46
[31] G. Cheung, F. Dreyer, S. Borghi, M. Gersabeck, Monte Carlo simulation development and studies of CP violation in D → π−π+π decays, Tech. Rep. LHCb-INT-2012-030, CERN-LHCb-INT-2012-030, CERN, Geneva (Nov 2012). URL https://cds.cern.ch/record/1495074