Inspection of histological 3D reconstructions in virtual reality
Oleg Lobachev a,b,c,∗, Moritz Berthold a,1, Henriette Pfeffer d, Michael Guthe a, Birte S. Steiniger d

a University of Bayreuth, Visual Computing, Bayreuth, Germany
b Hannover Medical School, OE 4120, Carl-Neuberg-Straße 1, 30625 Hannover, Germany
c Leibniz-Fachhochschule School of Business, Expo Plaza 11, 30539 Hannover, Germany
d Philipps-University Marburg, Anatomy and Cell Biology, Marburg, Germany

∗ Corresponding author. Email address: [email protected] (Oleg Lobachev). URL: https://orcid.org/0000-0002-7193-6258.
1 Present address: BCM Solutions GmbH, Rotebühlpl. 23, 70178 Stuttgart, Germany.

Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence.
Abstract
3D reconstruction is a challenging current topic in medical research. We perform 3D reconstructions from serial sections stained by immunohistological methods. This paper presents an immersive visualisation solution to quality control (QC), inspect, and analyse such reconstructions. QC is essential to establish correct digital processing methodologies. Visual analytics, such as annotation placement, mesh painting, and a classification utility, facilitates medical research insights. We propose a visualisation in virtual reality (VR) for these purposes. In this manner, we advance the microanatomical research of human bone marrow and spleen. Both 3D reconstructions and original data are available in VR. Data inspection is streamlined by subtle implementation details and general immersion in VR.
Keywords: virtual reality, 3D reconstruction, scientific visualisation, histological serial sections
1. Introduction
Visualisation is a significant part of modern research. It is important not only to obtain images, but to be able to grasp and correctly interpret data. There is always the question whether the obtained model is close enough to reality. Further, the question arises how to discern important components and derive conclusions from the model. The method we present in this paper is a typical example of this very generic problem statement, but it is quite novel in the details.
We use virtual reality for quality control (QC) and visual analytics (VA) of our 3D reconstructions in medical research. Figure 1 positions our tool in the visual computing methodology.

Figure 1: The general life cycle of visual computing (black) and our specific case (blue). The intensity of blue shading in the circle highlights the major focus of this paper. Blue arrows show facets of our implementation.

3D reconstruction from histological serial sections closes a gap in medical research methodology. Conventional MRI, CT, and ultrasound methods do not have the desired resolution. This is also true for micro-CT and similar methods. Even worse, there is no way to identify the structures of interest in human specimens under non-invasive imaging techniques. In contrast, immunohistological staining provides a reliable method to mark specific kinds of molecules in the cells, as long as these molecules can be adequately fixed after specimen removal. It is possible to unequivocally identify, e.g., the cells forming the walls of small blood vessels, the so-termed endothelial cells. Only a thin section of the specimen—typically about 7 µm thick—is immunostained to improve recognition in transmitted light microscopy. If larger structures, such as microvessel networks, are to be observed, multiple sections in a series ('serial sections') need to be produced. These series are necessary because paraffin sections cannot be cut at more than about 30 µm thickness due to mechanical restrictions. The staining solution can penetrate more than 30 µm of tissue; however, it is not possible up to now to generate focused z-stacks from immunostained thick sections in transmitted light. Hence, the information gathered from single sections is limited. Thus, registration is a must.

For 3D reconstruction, serial sections are digitally processed after obtaining large images of each section with a special optical scanning microscope. The resolution is typically in the range of 0.11–0.5 µm/pixel; the probe may cover up to 1 cm². With our registration method (Lobachev et al., 2017b) we produce stacks of serial sections that are spatially correct. After some post-processing (e.g., morphological operations, but also an interpolation, Lobachev et al., 2017a), a surface polygon structure (a mesh) is obtained from the volume data with the marching cubes algorithm (Lorensen and Cline, 1987; Ulrich et al., 2014). Both the actual mesh construction and the post-processing operations involve some subjective decisions, most prominently the choice of an iso-value for mesh construction. Thus, it is necessary to demonstrate that the 3D reconstruction is correct.

We now present a method for controlling that 3D reconstructions tightly correspond to the original immunostained sections by directly comparing the reconstructions to the original serial sections. This method accelerates QC and mesh colouring.
QC is facilitated by showing single sections in the visualised mesh, without volume rendering. We inspect, annotate, and colour 3D models (directly compared to the original data, the serial sections) in virtual reality (VR). Figure 2 documents a QC session from 'outside'. The presented method has been extensively used in microanatomical research (Steiniger et al., 2018a,b; Lobachev, 2018; Lobachev et al., 2019; Steiniger et al., 2020).

Our domain experts are much better trained in distinguishing and analysing details in stained histological sections than in reconstructed meshes. However, only 3D reconstructions provide an overview of continuous structures spanning multiple sections, e.g., blood vessels. Further, the reconstructed mesh permits novel findings. In our prior experience, domain experts often encounter problems when trying to understand 3D arrangements using conventional mesh viewers. For this reason, we previously showed pre-rendered videos to the experts to communicate the reconstruction results and to enable detailed inspection. Videos are, however, limited by the fixed direction of movement and the fixed camera angle. Our experience with non-immersive interactive tools has been negative, both with standard (Cignoni et al., 2008) and with custom (Fig. 16) software. Further, our data suffer from a high degree of self-occlusion. In VR the user can move freely and thus intuitively control the angle of view and the movement through the model. In our experience, immersion allows for a much easier and more thorough inspection of the visualised data. Occlusion of decisive structures in the reconstruction no longer poses a problem.

Contributions
We present a modern, immersive VR approach for inspection, QC, and VA of histological 3D reconstructions. Some of the reconstructed meshes are highly self-occluding; we are able to cope with this problem. For QC, the original data is simultaneously displayed with the reconstruction. User annotations and mesh painting facilitate VA. With our application, novel research results concerning the microanatomy of human spleens became viable for the first time; our findings have been established and published in a series of papers (Steiniger et al., 2018a,b, 2020).

Figure 2: A user in VR. The large display mirrors the headset image.
2. Related work
Immersive visualisation is not at all a new idea; Brooks Jr. (1999) quotes a vision of Sutherland (1965, 1968, 1970). However, an immersive scientific visualisation was quite hard to obtain in earlier years, if a multi-million-dollar training simulation was to be avoided (van Dam et al., 2000). The availability of inexpensive hardware such as the Oculus Rift or HTC Vive head-mounted displays (HMDs) has massively changed the game recently. This fact (and the progress in GPU performance) allows for good VR experiences on commodity hardware. Immersive visualisation has been previously suggested for molecular visualisation (Stone et al., 2010), for medical volumetric data (Shen et al., 2008; Scholl et al., 2018), for dentistry (Shimabukuro and Minghim, 1998; Xia et al., 2013), and for computational fluid dynamics (Quam et al., 2015). More relevant to our approach are the visualisations of the inside of large arterial blood vessels (Forsberg et al., 2000; Egger et al., 2020). There is a trend to utilise VR in medical education and training (Walsh et al., 2012; Chan et al., 2013; Mathur, 2015; Moro et al., 2017; Bouaoud et al., 2020; López Chávez et al., 2020; Pieterse et al., 2020); Uruthiralingam and Rea (2020) and Duarte et al. (2020) provide an overview. The availability of head-mounted displays has sparked some new research (Chen et al., 2015; Choi et al., 2016; Inoue et al., 2016) in addition to the already covered fields. Mann et al. (2018) present a taxonomy of various related approaches (virtual reality, augmented reality, etc.). Checa and Bustillo (2020) review VR applications in the area of serious games.

A radically different approach is to bypass mesh generation altogether and to render the volumes directly. Scholl et al. (2018) do so in VR, although with quite small volumes.

Figure 3: From section images to final results: A human spleen section is stained for SMA (brown), CD34 (blue), and either CD271 (red) or CD20 (red); this is the 'Sheaths alternating' data set. (a): The region of interest (ROI), staining of B-lymphocytes (CD20) in red. (b): The ROI, staining of capillary sheaths (CD271) in red. (c): Result of colour deconvolution for CD271 of (b), a single image. (d): Same, but for CD34. (e): A volume rendering of the first 30 sheath-depicting sections of the ROI. (f): Final meshes. The colours highlight the different functions. The arterial blood vessels are in blue and red. The red colour is highlighting a specific tree of blood vessels. The sheaths related to this tree are green, the unrelated sheaths are dark green. The follicular dendritic cells (that are also weakly CD271+) are depicted in light green. The SMA mesh was used for a heuristic to find arterioles among blood vessels. SMA and B-lymphocytes are not shown in the rendering.

Although there have been precursors long before the 'VR boom', e.g., Tomikawa et al. (2010), most relevant publications on the use of VR in medical research, training, and in clinical applications appeared after 2017. This section focuses on medical research. Stets et al. (2017) work with a point cloud. We work with surface meshes. Esfahlani et al. (2018) reported on non-immersive VR in rehab. We use immersive VR in medical research. Uppot et al. (2019) describe VR and AR for radiology; we use histological sections as our input data. Knodel et al. (2018) discuss the possibilities of VR in medical imaging. Stefani et al. (2018) show confocal microscopy images in VR. We use images from transmitted light microscopy.
Calì et al. (2019) visualise glial and neuronal cells in VR. We visualise blood vessels and accompanying cell types in lymphatic organs, mostly in the spleen.

A visualisation support for the HTC Vive in popular medical imaging toolkits has been presented before (Egger et al., 2017). Unlike our approach, this method is tied into existing visualisation libraries. Our method is a stand-alone application, even if it is easily usable in our tool pipeline. Further, visualising both reconstructed meshes and original input data was a must in our use case. We also implemented a customised mesh painting module for visual analytics. Both our approach and the works of Egger et al. (2017, 2020) generate meshes prior to the visualisation. We discuss the differences between Egger et al. (2020) and our approach in Section 6.

El Beheiry et al. (2019) analyse the consequences of VR for research. In their opinion, VR means navigation, but it also allows for better education and provides options for machine learning. They can place annotations in their program, but focus on (immersed) measurements between the selected points. El Beheiry et al. perform some segmentations in VR, but primarily work with image stacks. Our mesh painting in VR can be seen as a form of segmentation, but we perform it on the 3D models, not on image data. Mesh painting uses geodesic distances, as detailed in Section 5.7.

Daly (2018, 2019a,b) has similar goals to this work; however, he uses a radically different tool pipeline, relying more on off-the-shelf software—which alleviates a large part of the software development, but is also a limitation in the number of features. Daly (and also others, e.g., Preim and Saalfeld, 2018) also focuses a lot on teaching; we use our system at the moment mostly for research purposes.

Dorweiler et al. (2019) discuss the implications of VR, AR, and further technologies in blood vessel surgery. We are concerned with the analysis of microscopic blood vessels in removed probes.

The work by Liimatainen et al. (2020) allows the user to inspect 3D reconstructions from histological sections (created in a radically different manner from how we section: they skip a lot of tissue in an effort to cover a larger volume). The user can view the sections and 'interact with single areas of interest'. This is elaborated to be a multi-scale selection of the details, allowing the user to zoom in. We stay mostly at the same detail level, but allow for more in-depth analysis. They put histological sections of tumors in their correct location in the visualisation, which was also one of the first requirements for our tool, as Section 3.1 details.

We are not aware of other implementations of advanced VR- and mesh-based interactions, such as our mesh painting that follows blood vessels (Section 5.7). To our knowledge, annotations have never before been implemented in the manner we use: the markers are preserved after the VR session and can be used in mesh space for later analysis. This paper presents both of these features. In general, most VR-based visualisations focus on presentation and exploration of the data. We do not stop there, but also perform a lot of visual analytics.

While enough non-VR tools for medical visualisation exist, such as 3D Slicer (Pieper et al., 2004; Kikinis et al., 2014), ParaView (Ahrens et al., 2005; Ayachit, 2015), or MeVisLab (Silva et al., 2009; Ritter et al., 2011), we are proponents of VR-based visualisation.
Rudimentary tasks in QC can be done, e.g., in 3D Slicer or using our previous work, a custom non-VR tool (detailed in Section 6), but in our experience our VR-based QC was much faster and also easier for the user. (Bouaoud et al. (2020) and López Chávez et al. (2020) report similar experiences.) The navigation and the generation of insights are a larger problem with non-VR tools. The navigation in VR is highly intuitive. A lot of insight can be gathered by simply looking at the model from various views.

The relation of implementation efforts to the usability impact was favourable for our VR tool. The complexity of software development of large standard components also plays a role here. We base our VR application heavily on available components, such as Steam VR and the VCG mesh processing library, as Section 4.1 details. However, our tool is not an amalgamation of multiple existing utilities (e.g., using Python or shell as a glue), but a stand-alone application, written in C++.

Merely paging through the registered stack of serial sections does not convey a proper 3D perception. Single entities in individual sections (e.g., capillaries) have a highly complex shape and are entangled among similar objects. While it is possible to trace a single entity through a series, gaining a full 3D perception is impossible without a full-fledged 3D reconstruction. An inspection of our reconstructions in VR (Steiniger et al., 2018a,b, 2020) was much faster than a typical inspection of 3D data without VR (Steiniger et al., 2016), as Section 6 details.

3. Background

Our domain experts provided feedback on the earlier versions of the software in order to shape our application. The following features were deemed necessary by the medical researchers:
• Load multiple meshes corresponding to parts of the model and switch between them. This allows for the analysis of multiple 'channels' from different stainings.
• Load the original data as a texture on a plane and blend it in VR at will at the correct position. The experts need to discriminate all details in the original stained sections.
• Remove the reconstructed mesh to see the original section underneath.
• Provide a possibility to annotate a 3D position in VR. Such annotations are crucial for communication and analysis.
• Adjust the perceived roles of parts of the meshes by changing their colour. Colour changes form the foundation of visual analytics.
• Cope with very complex, self-occluding reconstructions. Otherwise it is impossible to analyse the microvasculature in thicker reconstructions (from about 200 µm in the z axis onward).
• Free user movement. This issue is essential for long VR sessions. Basically, no movement control (e.g., flight) is imposed on the user. In our experience, free user movement drastically decreases the chances of motion sickness.
• Provide a possibility for voice recording in annotations (work in progress).
• Design a method for sharing the view point and current highlight with partners outside VR (trivial with Steam VR and its display mirroring), and for communicating the findings from live VR sessions as non-moving 2D images in research papers (an open question).
In a short summary, our method includes:
1. Biological processing: tissue acquisition, fixation, embedding, sectioning, staining, coverslipping.
2. Digital data acquisition: serial scanning in an optical scanning microscope.
3. Coarse registration: fitting the sections to each other.
4. Selection of regions of interest (ROIs).
5. Fine-grain registration (Lobachev et al., 2017b). The decisive step for maintaining the connectivity of capillaries.
6. Colour processing, e.g., channel selection or colour deconvolution.
7. Optional: healing of damaged regions (Lobachev, 2020).
8. Interpolation to reduce anisotropy (Lobachev et al., 2017a).
9. Volume filtering, e.g., a closing filter and a blur.
10. Mesh construction (Ulrich et al., 2014).
11. Mesh processing, e.g., decimation or repair (Ju, 2004).
12. Mesh colouring, e.g., colouring of selected components (Steiniger et al., 2018a,b, 2020) or visualisation of the shape diameter function (Steiniger et al., 2016).
13. QC as initial visual analytics. If the reconstruction, e.g., interrupts microvessels or includes non-informative components, identify the cause and repeat from there.
14. Further visual analytics, e.g., mesh colouring.
15. Final rendering.
Figure 3 showcases some important steps of this pipeline.
The human spleen is a secondary lymphatic organ which serves to immunologically monitor the blood. In order to intensify the contact between blood-borne molecules, which provoke immune reactions, and the specific immunocompetent lymphocytes and macrophages, the spleen harbours a so-termed 'open circulation system'. This system is unique to the spleen. It does not comprise continuous arteries, arterioles, capillaries, venules and veins as in other organs, but there is an open vessel-free space between capillaries and the beginning of the draining venous system, which is passed by all constituents of the blood. In addition, the initial venous vessels which re-collect the blood into the circulation system are also organ-specific and are termed 'sinuses'.

In humans, the initial parts of the splenic capillary network are covered by peculiar multicellular arrangements termed 'capillary sheaths'. The detailed function of these sheaths is unknown, but comparative anatomy suggests that they collect certain foreign molecules from the blood and guide the immigration of special immunocompetent lymphocytes (B-lymphocytes) into the spleen.

Prior to our works (Steiniger et al., 2018a,b), the arrangement of capillary sheaths had never been shown in three dimensions. It has been unknown whether all splenic capillaries are covered by sheaths, how long the sheaths are, what shape they have and, finally, which cell types they consist of. In addition, the location of the sheaths with respect to the open ends of capillaries feeding the 'open circulation' has remained enigmatic.

During recent years our research (Steiniger et al., 2018a,b, 2020) has clarified many of these questions. We now report an advanced study comprising 3D models derived from up to 150 serial paraffin sections stained for conventional transmitted light microscopy utilising three different chromogens (brown, blue and red) to visualise four molecules by immunohistological methods. In detail, we demonstrate smooth muscle alpha-actin (SMA), CD34, CD271 and CD20 (Table 1).
Table 1: Cell distribution of the molecules detected in human spleens.

Target | SMA | CD34 | CD271 | CD20 | CD141
Colour in (Steiniger et al., 2018a) | – | brown | blue | – | –
Colour in (Steiniger et al., 2018b) | brown | brown | blue | – | –
Colour in (Steiniger et al., 2020) | brown | blue | red | red | –
Colour in the 'sinus' data set | – | blue | red | – | brown
endothelial cells (inner vessel lining) | – | + | – | – | –
smooth muscle cells (wall of arteries, arterioles) | + | – | – | – | –
fibroblasts (connective tissue cells) at surface of follicles | + | – | +/– | – | +
ubiquitous fibroblasts in red pulp | +/– | – | +/– | – | –
fibroblastic capillary sheath cells | – | – | + | – | –
fibroblasts in trabeculae | + | +/– | – | – | –
B-lymphocytes | – | – | – | + | –
sinus endothelia | – | +/– | – | – | +
Notes: stained in alternating serial sections; except endothelial cells of most sinuses; most cells; expression varies by proximity to a follicle.

The input data was generated from the registered stack of serial sections. Typical data volume was about 2k × 2k × 161 voxels, with z-axis interpolation. The original data typically featured 21 to 24 sections, but we have used up to 150 sections in some reconstructions. The final meshes typically had 1.7M–2.3M vertices. (Most of the GPU memory used by the application was occupied by textures anyway.) Real-time rendering was possible at Vive-native resolution at 90 fps. Our experiments with the Valve Index ran at even higher frame rates. Original data were projected as textures, typically at about 2k × 2k pixels. Although we experimented with showing all sections at once, in productive use only one section was shown at a time.

We quality control the following data sets derived from the bone marrow of a 53-year-old male and from the spleen of a 22-year-old male. Acquisition of the specimens complied with the ethical regulations of Marburg University Hospital at the time of processing.
1. 'Bone marrow': stained with anti-CD34 plus anti-CD141 (both brown), 4 ROIs of 3500 × 3500 pixel, 21 serial sections (Steiniger et al., 2016; Figs. 4, 16).
2. 'Follicle-double': spleen sections stained with anti-CD34 (brown) followed by anti-CD271 (blue), one ROI of 2300 × 2300 pixel, 24 serial sections (Steiniger et al., 2018a; Fig. 7).
3. 'Follicle-single': spleen sections stained with anti-CD34 (brown), one ROI of 4k × 4k pixel at 0.3 µm/pixel, 24 serial sections (Steiniger et al., 2018a; Fig. 8).
4. 'Red pulp': spleen sections stained with anti-CD34 plus anti-SMA (both brown), followed by anti-CD271 (violet-blue, a different pigment than above), 11 ROIs of 2k × 2k pixel at 0.5 µm/pixel, 24 serial sections (Steiniger et al., 2018b; Figs. 9, 13).
5. 'Sheaths alternating': 148 immunostained sections (anti-SMA, anti-CD34 and anti-CD271) plus 2 HE-stained sections; in every other section CD271 was replaced by CD20. We processed 4 ROIs at 2k × 2k pixel at 0.5 µm/pixel, of which 82 to 148 sections were used (Lobachev et al., 2019; Steiniger et al., 2020; Lobachev, 2020); see also Figs. 3, 5, 10, 14, 15.
6. 'Sinus': 21 spleen sections immunostained with anti-CD141 (brown, not shown here), anti-CD34 (blue), and anti-CD271 (red); work in progress with currently two ROIs of 2k × 2k pixel at 0.44 µm/pixel, shown in Figs. 11, 12. A phenotypical investigation was performed in Steiniger et al. (2007), but without serial sections and capillary sheaths.

Figure 4: Images, renderings, and VR screenshots showing mesh reconstructions of blood vessels in a human bone marrow specimen, stained with anti-CD34 plus anti-CD141. (a): A single section. The staining colour is brown for both molecules in the original data. (b): A volume rendering of 21 consecutive serial sections. (c): The reconstructed mesh. It shows shape diameter function values, colour-coded from red to green. (d): We annotate a position of interest in the mesh in VR. An original section is seen in the background. (e): We have found the section containing the annotation; the mesh is still visible. (f): Only the section with the annotation is shown in VR. Domain experts can now reason on the stained tissue at the marked position.
4. System architecture and features
Our application makes use of existing software libraries. We load meshes with the VCG library. Multiple meshes with vertex colours are supported. We utilise OpenGL 4.2.

Figure 5: Working with our annotation tool. We show VR screenshots of our application, in the human spleen 'sheaths alternating' data set (Steiniger et al., 2020). In (b) the front plane clipping is evident, viz. Fig. 10. Notice the Valve Index controller with a ball, showing an anatomical structure. In this manner, the VR user can clarify some morphological details or demonstrate an issue to an audience outside VR. All images are produced with our VR tool. Similar illustrations can be found in (Steiniger et al., 2020).
We enable back face culling, multisampling, and mipmaps. The section textures are loaded with the FreeImage library. The Steam VR library is used for interaction with the Vive controllers and the headset.

With respect to hardware, the system consists of a desktop computer with a Steam VR-capable GPU and a Steam VR-compatible headset with controllers.
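The render state mentioned above amounts to only a few OpenGL calls. The following is a minimal sketch, not our exact code; the helper names and the use of GLEW as the function loader are assumptions:

```cpp
#include <GL/glew.h>   // any GL 4.2 loader works; GLEW is an assumption

// Basic render state as described above (assumes a current GL 4.2 context).
void setupRenderState() {
  glEnable(GL_DEPTH_TEST);
  glEnable(GL_CULL_FACE);    // back-face culling for the meshes
  glCullFace(GL_BACK);       // switched to GL_FRONT when looking inside the section cuboid
  glEnable(GL_MULTISAMPLE);  // MSAA against aliasing on thin mesh structures
}

// A section image (decoded, e.g., with FreeImage) becomes a mipmapped texture.
// 'pixels' is assumed to be tightly packed 8-bit BGRA data of size width x height.
GLuint createSectionTexture(int width, int height, const unsigned char* pixels) {
  GLuint tex = 0;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
               GL_BGRA, GL_UNSIGNED_BYTE, pixels);
  glGenerateMipmap(GL_TEXTURE_2D);  // mipmaps reduce shimmering at oblique viewing angles
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  return tex;
}
```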
For control, a simple keyboard-and-mouse interface (for debugging outside VR), an XBox One controller, and Steam VR-compatible controllers can all be used. Our initial idea was to use an XBox 360 or an XBox One controller, as such controllers provide a simple control metaphor. However, the expert users were not acquainted with gaming controllers and could not see the XBox One controller in VR. Thus, initial error rates were high when they, e.g., tried to simultaneously use an 'X' key and the 'D-pad' blindly. Hence, a more intuitive approach with the native Vive controllers was targeted. We have kept the keyboard-and-mouse and the XBox controller options, but duplicated the required input actions with the Vive controllers. The native HTC Vive controllers proved their benefits. Although the metaphors were much more complicated, the intuitive control paid off immediately. Further, the visibility of the tracked controllers in VR helped a lot. Later on, we extended the application support to Valve Index knuckle controllers.

Further spectators can follow the domain expert from 'outside' of the immersion, as the HMD feed is mirrored on a monitor. Further, annotations significantly improve communication (Figs. 7–9). The main controller actions of the domain expert are:
• Blending the original data (the stained sections) in or out;
• Blending the mesh(es) in or out;
• Advancing the currently displayed section of original data;
• Placing an annotation;
• Mesh painting.
The most significant user interaction happens with intuitive movements of the immersed user around (and through) the displayed entities in VR.

Figure 6: This figure demonstrates why we need geodesic distances for mesh painting in our VR application. The yellow circle is the painting tool. We would like to mark the green blood vessels inside the circle, but do not want to co-mark the red blood vessel, even if it is also inside the circle. Red and green blood vessels might even be connected somewhere outside the circle, but the geodesic distance from the centre of the circle (the black dot) to any vertex of the red blood vessel is too large, even if they are reachable. Conversely, a part of the green blood vessel is selected, as a vertex of the green mesh is closest to the centre of the circle. As many vertices are selected as the geodesic distance (corresponding to the radius of the circle with some heuristics) allows for.
Without a beacon visible in VR, it is almost impossible to understand what the expert tries to show. With a VR controller and our annotation tool, interesting areas in the visualisation can be shown to the outside spectators in real time.
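At the API level, this kind of interaction is a per-frame poll of the tracked controller. The following OpenVR sketch is an illustration only, not our implementation: the marker offset from the controller origin and the debouncing of the trigger are omitted, and the names `Annotation` and `pollController` are hypothetical.

```cpp
#include <openvr.h>
#include <vector>

struct Annotation { float x, y, z; };  // a saved marker position in tracking space

// Per-frame polling of one controller. When the trigger is pressed, an
// annotation is dropped at the controller position.
void pollController(vr::IVRSystem* sys,
                    const vr::TrackedDevicePose_t* poses,  // from IVRCompositor::WaitGetPoses
                    std::vector<Annotation>& annotations) {
  vr::TrackedDeviceIndex_t idx =
      sys->GetTrackedDeviceIndexForControllerRole(vr::TrackedControllerRole_RightHand);
  if (idx == vr::k_unTrackedDeviceIndexInvalid || !poses[idx].bPoseIsValid) return;

  vr::VRControllerState_t state;
  if (!sys->GetControllerState(idx, &state, sizeof(state))) return;

  const bool triggerPressed =
      (state.ulButtonPressed & vr::ButtonMaskFromId(vr::k_EButton_SteamVR_Trigger)) != 0;
  if (triggerPressed) {
    const vr::HmdMatrix34_t& m = poses[idx].mDeviceToAbsoluteTracking;
    annotations.push_back({m.m[0][3], m.m[1][3], m.m[2][3]});  // translation column = controller position
  }
}
```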
We designed a spherical selection tool for marking points in space (Fig. 5). The sphere is located at the top front of a Vive or Index controller and can be seen in the virtual space (and, by proxy, also in the mirrored display, Fig. 2). We need to note, however, that the annotation sphere appears much more vivid to the VR user than it appears on screenshots. The user's movement and live feedback are, in our opinion, a major reason for this difference in perception. Figures 4d–4f, 5, 7b, and 8 show our annotation tool in images captured from VR sessions.

The annotations and mesh modifications are saved for further analysis. For example, after the domain expert has marked suspicious areas, the 3D reconstruction expert can inspect them in a later VR session. Reconstruction improvements can be deduced from this information.

If a 'direct' rendering approach is used, there is a very dominant aliasing effect at certain points. We used multisampling (MSAA) on meshes and mipmaps on textures to alleviate this problem.

Consider the interplay between the model and the original serial sections. A section is not an infinitely thin plane. We show the original data as an opaque cuboid that is one section thick in the z direction and spans the full surface in the xy plane. The actual data points of the mesh corresponding to the displayed section are inside the opaque block. Decisive parts of the mesh are occluded by the front face of the cuboid. On the one hand, this is, of course, not desired and requires correction. On the other hand, the behaviour of the model spanning multiple sections in front of the current section is best studied when it ends at the front face of the cuboid. The solution is to enable or disable front face culling of the original data display at will.

With front face culling enabled, the user can look inside the opaque block with the original section texture. This is well suited for the inspection of lesser details and small artefacts. (Figure 9(d), (e) features a real-life example; observe the edge of the section shown.) The general behaviour of the model across multiple sections can be tracked more easily with the front faces of the original section on display. The presence of both representations accelerates QC.

We also implemented a VR-based mesh painting facility, mostly based on the MeshLab code base (Cignoni et al., 2008). In this mode the colour spheres, which our user can place with the controller, produce a geodesically coloured region on the mesh instead of an annotation. These two functions, annotations and mesh painting, are conveyed to be clearly different to the user.

The selected colour is imposed on all vertices inside the geodesic radius from the centre of the sphere. We would like to paint on, for example, a part of a blood vessel that has a specific property. At the same time, we would not like to colour other blood vessels that might be inside the painting sphere, but are not immediately related to the selected blood vessel. This is facilitated with geodesic distances, as Figure 6 shows. The markings from mesh painting lead to the final separation of the entities (such as blood vessel types, kinds of capillary sheaths, etc.) in the visualisation.
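Concretely, such a geodesically bounded selection can be realised as a Dijkstra-style traversal of the mesh graph that stops at the brush radius. The sketch below is a simplification under stated assumptions: plain adjacency lists instead of the VCG/MeshLab data structures our tool builds on, and the seed is assumed to be the vertex nearest to the marker centre.

```cpp
#include <cmath>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct PaintMesh {
  std::vector<Vec3> pos;              // vertex positions
  std::vector<std::vector<int>> adj;  // one-ring vertex adjacency
  std::vector<unsigned> colour;       // per-vertex colour (packed RGBA)
};

static float edgeLength(const Vec3& a, const Vec3& b) {
  return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y) +
                   (a.z - b.z) * (a.z - b.z));
}

// Paint every vertex whose geodesic (edge-path) distance from 'seed' stays
// below 'radius'. Vertices of other blood vessels that merely lie inside the
// brush sphere are never reached (cf. Fig. 6).
void paintGeodesic(PaintMesh& m, int seed, float radius, unsigned newColour) {
  std::vector<float> dist(m.pos.size(), std::numeric_limits<float>::max());
  using Item = std::pair<float, int>;  // (distance, vertex index)
  std::priority_queue<Item, std::vector<Item>, std::greater<Item>> queue;
  dist[seed] = 0.f;
  queue.push({0.f, seed});
  while (!queue.empty()) {
    auto [d, v] = queue.top();
    queue.pop();
    if (d > dist[v] || d > radius) continue;  // stale entry or outside the brush
    m.colour[v] = newColour;                  // paint this vertex
    for (int n : m.adj[v]) {
      const float dn = d + edgeLength(m.pos[v], m.pos[n]);
      if (dn < dist[n] && dn <= radius) { dist[n] = dn; queue.push({dn, n}); }
    }
  }
}
```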
Figure 7: Real or artefact? The models are derived from human spleen sections from the 'follicle-double' data set. These sections were stained for CD34 (brown in staining, yellow in the reconstruction) and for CD271 (blue). In VR we spotted and annotated putative capillaries inside follicles (large blue structures, a, b). We can look at the meshes only (c) or also show the original data (d). A closer view (e), (f) confirms: the reconstruction is correct, these structures are CD34+ objects inside the follicle. As the structures in question continue through multiple sections, they do not represent single CD34+ cells. Hence the objects in question must be blood vessels. The reconstruction is correct, the brown structures are real. All images in this figure are screenshots from our application. Similar results can be found in (Steiniger et al., 2018a).

Figure 8: A VR screenshot showing mesh reconstructions of blood vessels in a human spleen specimen, anti-CD34 staining, 'follicle-single' data set (Steiniger et al., 2018a). Unconnected mesh components were set to distinct colours. The user is highlighting a smaller blood vessel that follows larger ones with the HTC Vive controller.

The classic view frustum in computer graphics consists of six planes: four 'display edges' building the frustum sides, a back plane (the farthest visible boundary), and the front plane, the closest boundary. The clipping planes are recomputed when the camera (i.e., the user) is moving. In cases where there are too many self-occluding objects in the scene, the observer cannot 'pierce through' further than the few closest objects. In other words, the observer can only see the closest objects. (This fact motivates occlusion culling.)

Such an occlusion was the case with our denser data sets. With a simple change in the code, we moved the front plane of the view frustum further away, in an adjustable manner. Basically, the user 'cuts' parts of the reconstruction in front of their eyes, allowing for the detailed inspection of the inside of the reconstruction.

This adjustment is very minor from the computer graphics point of view, but it was very much welcomed by our actual users, the medical experts. With an appropriate front plane clipping set at about 60 cm from the camera, it becomes possible to inspect very dense medical data sets from 'inside'. (Figs. 5, 10, 14, 15 demonstrate this effect.) The user 'cuts away' the currently unneeded layers with their movements.
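As a sketch of how such an adjustable front plane can be realised with OpenVR (the function and parameter names are illustrative, not taken from our code base): the per-eye projection is simply requested with a user-controlled near distance.

```cpp
#include <algorithm>
#include <openvr.h>

// 'Front plane clipping' is merely an adjustable near distance when the per-eye
// projection is requested from OpenVR. userNearClip is assumed to be driven by a
// controller or keyboard binding elsewhere in the application.
vr::HmdMatrix44_t projectionForEye(vr::IVRSystem* sys, vr::EVREye eye,
                                   float userNearClip, float farClip = 100.f) {
  // Clamp so the user cannot push the plane to absurd values; around 0.6 m
  // worked well for our dense data sets.
  const float nearClip = std::clamp(userNearClip, 0.05f, 2.0f);
  return sys->GetProjectionMatrix(eye, nearClip, farClip);
}
```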
5. Results
We conducted most of our investigations on a 64-bit Intel machine with an i7-6700K CPU at 4 GHz, 16 GB RAM, and Windows 10. We used an NVidia GTX 1070 with 8 GB VRAM and an HTC Vive.

Our VR application was initially developed with the HTC Vive in mind; it performed well on other headsets, such as the HTC Vive Pro Eye and the Valve Index. We observed convincing performance on an Intel i7-9750H at 2.6 GHz with 64 GB RAM (MacBook Pro 16″) and an NVidia RTX 2080 Ti with 11 GB VRAM in a Razer Core X eGPU with the HTC Vive Pro Eye, as well as on an AMD Ryzen 2700X, 32 GB RAM, NVidia RTX 2070 Super with 8 GB VRAM with the Valve Index. Our application should also perform well with further headsets such as the Oculus Rift. It was possible to use previous-generation GPUs; we also tested our application with an NVidia GTX 960. Overall, it is possible to work with our application using an inexpensive setup.

The largest limiting factor seems to be the VRAM used by the uncompressed original image stack. The second largest limitation is the number of vertices of the visualised meshes and the rasteriser performance in the case of very large, undecimated reconstructions.
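A back-of-the-envelope estimate illustrates why the image stack dominates. Assuming uncompressed RGBA8 textures of roughly 2000 × 2000 pixels and on the order of 150 sections,

\[
\underbrace{2000 \cdot 2000}_{\text{pixels}} \cdot \underbrace{4\,\mathrm{B}}_{\text{RGBA8}}
\approx 16\,\mathrm{MB\ per\ section},
\qquad
150 \cdot 16\,\mathrm{MB} \approx 2.4\,\mathrm{GB},
\]

plus roughly one third more for the mipmap chain, which is already a large share of an 8 GB card before any mesh data is uploaded.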
We have reconstructed the 3D shape of smaller and larger microvessels in hard, undecalcified, methacrylate-embedded human bone serial sections (Steiniger et al., 2016). The shape diameter function on the reconstructed mesh allows us to distinguish capillaries from sinuses. Figure 4 shows (a) a single section (part of the input data to the reconstruction), (b) a volume rendering of all 21 sections, and (c) our 3D reconstruction. In Steiniger et al. (2016) we did not use VR. Here, we use the same data set to showcase some features of our VR-based method. Back then, without VR, it took us much more manpower and time to validate the reconstructions, as Section 6 details (Fig. 16).

The process of annotation is demonstrated in Fig. 4(d). The next subfigures show further investigation of the annotated area in VR, either in a combined mesh-section view (e) or showing the corresponding section only (f). To discriminate between capillaries (smaller, coloured red) and sinuses (larger, coloured green), we computed the shape diameter function on the reconstructed meshes and colour-coded the resulting values on the mesh, as shown in (c)–(e). The handling of the reconstruction and serial section data in VR showcases the annotation process.
The human spleen contains accumulations of special migratory lymphocytes, the so-termed follicles. We reconstructed the capillaries inside and outside the follicles (Steiniger et al., 2018a). We show some results from this work in this section and in the next. Fig. 7 presents one of three ROIs that were quality controlled.

Our 3D reconstruction demonstrates that follicles are embedded in a superficial capillary meshwork resembling a basketball basket. Figure 7 shows that our VR tool enables easy annotation and projection of the original data, leading to further results (Steiniger et al., 2018a). In Fig. 7(e), some brown dots have been marked inside a follicle. The 3D model shows that the dots indeed represent capillaries cut orthogonally to their long axis. Thus, we additionally find that infrequent capillaries also occur inside the follicles. The superficial capillary network of the follicles is thus connected to very few internal capillaries and to an external network of capillaries in the red pulp. We observed the latter two networks to have a shape which is totally different from the …

Figure 9: Investigating the annotated regions, VR screenshots of our application. The human spleen 'red pulp' data set is used (Steiniger et al., 2018b); we have annotated some ends of capillary sheaths in meshes reconstructed from human spleen data. (a): Overview. (b)–(d): Original data and an annotation. Experts can more easily reason on such visualisations because of 3D perception and intuitive navigation. (e): The same annotation as in (d), showing additionally the mesh for sheaths.

To continue the investigation of capillaries inside and outside the follicles, Fig. 8 shows that the annotated elongated structures in the follicles and in the T-cell zone at least partially belong to long capillaries, which accompany the outside of larger arteries, so-termed vasa vasorum. With our VR-based method, we investigated this 4k × 4k ROI at 0.3 µm/pixel and three further ROIs (not shown) of 1600 × 1600 pixel (Steiniger et al., 2018a). Fig. 8 also shows a Vive controller tracing one of the longer capillaries with the annotation ball, as a form of communication of the findings to spectators outside VR.

The location of capillary sheaths in human spleens has not been clarified in detail until recently (Steiniger et al., 2018b). Our 3D reconstructions indicate that sheaths primarily occur in a post-arteriolar position in the part of the organ which does not contain lymphocyte accumulations (the so-termed red pulp), although length and diameter of the sheaths are variable. Many sheaths are interrupted by the boundaries of the ROI. (The remedy was a longer series of sections, as presented in Section 5.6.) For this reason it makes sense to collect only sheaths which are completely included in the reconstruction. Such a selection was done with our VR classification tool.

Figure 9(a) shows an overview of the annotations. In Figs. 9(b)–(d) it becomes clear that the sheaths indeed end at the marked positions. Notice the enabled front face culling on the section cuboid in the closeups. Figure 9(e) additionally shows the reconstructed meshes for the sheaths. We show a single ROI at 2k × 2k pixels. We have inspected 11 such ROIs in VR.

The 'sheaths alternating' data set with up to 150 sections was created to further investigate the morphology and (to some extent) the function of capillary sheaths (Steiniger et al., 2020). The resulting 3D data set was extremely dense.
The increased number of 'channels' and the nature of the study (tracking the blood vessels) were a big challenge. The amount of reconstructed blood vessels and their self-occlusion prohibited any possible insight when viewing them from the outside. Here we utilised front plane clipping (Section 4.8). Figures 5b, 10, and 14 (and also Fig. 11 for the 'sinus' data set) showcase this minor, but important adjustment. Figs. 14 and 15 further demonstrate the complexity of the 'sheaths alternating' data set.
As already seen in Figs. 5, 7(a), (b), and 8, we can point to anatomical structures with the Valve Index controller. Similarly, annotations can be placed and mesh fragments can be painted in different colours. An example of real-life mesh painting with geodesic distances is shown in Figure 12. The arrows show a part of a structure already painted by the user in red in (a). It is painted back to blue in (b).

Figure 10: Showcasing front plane clipping on the 'sheaths alternating' data set (Steiniger et al., 2020). (a): A complete data set in a frontal visualisation. (b): The user cuts into objects of interest using clipping. (c)–(d): Utilisation of clipping during the exploration of the data set. All images are produced with our VR tool, either directly or with the Steam VR interface. (a), (b) were featured in a poster (Lobachev et al., 2019). (a), (c), (d): Similar illustrations can be found in (Steiniger et al., 2020).

In this manner we have refined an automatic heuristic for arterioles (larger blood vessels, red in Fig. 13) to be always correct and to lead up to the capillary sheaths. With a similar tool, working on unconnected components, we changed the colour of the sheaths in the 'red pulp' and 'sheaths alternating' data sets. The colour change effected the classification of the sheaths. The sheaths were initially all blue in our visualisations of the 'red pulp' data set. Sheaths around capillaries, following known arterioles, were then coloured green. We also annotated the very few capillaries that should have a sheath, but did not (white). Figure 13 shows one of the final results, a connected vascular component with accompanying sheaths, follicle structures, and smooth muscle actin.

Figure 11: Cutting structures open with the front clipping plane, using the 'sinus' data set. Capillaries are blue, capillary sheaths are green in the reconstruction. An original section is visible on the right.

Figure 14 underlines the complexity of the 'sheaths alternating' data set. Both images in this figure show the final results of mesh painting; the actual work is already done. Still, they convey how interwoven and obscure the original situation is. CD271+ cells, mostly present in capillary sheaths, are in various shades of green in this figure. Figure 14 is highly complex; the fact that something can be seen in still images at all is the merit of the applied annotations and mesh painting, of an active front plane clipping, and of a proper choice of view point. A viewer in VR has no problem navigating such data sets because of the active control of view direction and of movement dynamics. The latter also means control of the front plane clipping dynamics.

Figure 15 shows a further development: a separated arteriole with its further capillary branches and capillary sheaths (Steiniger et al., 2020). The sheath in the figure was cut open by front plane clipping. This 'separation' was generated from user inputs in VR, similar to the previous figure. Now, however, the complexity is reduced to a degree that allows showing the still image in a medical research publication.

Summarising, our visualisations in VR were used to obtain insights (Steiniger et al., 2018b, 2020) on the position of the capillary sheaths—a problem that was initially discussed (Schweigger-Seidel, 1862) more than 150 years ago!
6. Discussion
The novelty of this work stems from how VR streamlines and facilitates better QC and VA of our reconstructions. The presented VR-based tool plays an important role in our pipeline. 'Usual' 3D reconstructions from serial histological sections are known, but are quite rare, because they involve cumbersome tasks. The reasons for this are threefold: technical difficulties in creating them; struggles to QC the reconstructions; and investigation and comprehension problems in dense, self-occluding reconstructions.

Figure 12: An ongoing session of mesh painting with geodesic distances, as VR screenshots. We use the 'sinus' data set. (a): Notice the huge annotation ball on the controller; the bright red dot is its centre. This centre is the starting point of the geodesic computation, initiated by the trigger on the controller. The large radius of the marking tool is bounded by the connectivity: vertices which are within the radius, but are not connected to the starting point or are 'too far' geodesically, are not painted. (b): For better visibility, we show a crop from the left eye view of (a). The white arrow shows a point of interest. (c): An excessive marking (white arrow) is removed with a repeated painting operation. On the bottom left in (a), (c) a Valve Index controller is visible. The background colour in this figure signifies the highlighting mode used.

Figure 13: Final result of mesh annotation and painting, 'red pulp' data set (Steiniger et al., 2018b). Blood vessels are yellow. Certain support structures in the spleen that feature smooth muscle actin are also reconstructed and displayed in yellow. (A trained histologist can discern these structures from various kinds of blood vessels, though.) Some of the blood vessels (red) lead from larger blood vessels (arterioles) to capillary sheaths (green). Some sheaths are fed by arterioles not traced in the reconstruction. These sheaths are marked blue. Finally, while some capillaries are red (having green sheaths), some other capillaries, coming from the same arteriole, do not have a sheath at all. Such capillaries are coloured in white. The background is black to better discern the white colour. A similar, but different image appeared in (Steiniger et al., 2018b) under a CC-BY 4.0 license.

A proper registration, correct reconstruction options, and possibly also inter-slice interpolation are necessary for creating a satisfying reconstruction. For QC we need to visually ensure the correctness of processing, identify what the reconstruction actually shows, and keep artefacts at bay by creating a better reconstruction if needed. Finding a good reconstruction from previously unexplored data with a new staining is an iterative process. While we create the reconstructions quite efficiently, QC was a lot of work in the past. With an immersive VR application, QC is much easier and faster, in our experience.

Annotations and mesh colouring provide visual analytics abilities, and these facilitate a better distinction between various aspects of the reconstruction. To give an example, capillary sheaths that surely follow arterioles can be separated from capillary sheaths whose origins or ends lie outside of the ROI. Such distinctions allow for better understanding in microanatomical research.

Our experience emphasises the importance of VR-based QC. Our older 3D reconstruction study (Steiniger et al., 2016) featured
3500 × 3500 × 21 voxels in four regions. From each reconstruction a further version was derived. These versions did not need to be quality controlled again, but their inspection was crucial to produce a diagnosis for medical research. We used both a non-VR visualisation tool for QC and pre-rendered videos for inspection. It took a group of 3 to 5 experts multiple day-long meetings to QC these reconstructions with the non-VR tool (Fig. 16). Deducing the anatomical findings from pre-rendered videos was also not easy for the domain experts.

Figure 14: The natural complexity of the 'sheaths alternating' data set (Steiniger et al., 2020) is very high. With mesh annotation, painting, and removal of irrelevant details we were able to keep the complexity at a tolerable level. (a): The capillary network of the splenic red pulp is blue. Arterioles have been highlighted in red. Capillary sheaths are green and dark green, depending on whether they belong to the red vessels. Special supportive cells in a follicle are light green. An arteriole (red) is entering from the left; one of the branches is splitting up into capillaries (still red) that immediately enter capillary sheaths. One such sheath (green) is cut open, showing the capillary inside. (b): Arterioles and sheathed capillaries are light blue, capillary sheaths are green. The open-ended side branches of sheathed capillaries are red. This figure shows a single system, starting with an arteriole. It has been separated from other arterial vessel systems in the surroundings. Front plane clipping opens the capillary sheath and shows the blue capillary inside. We see some open green capillary sheaths with light blue 'main line' blood vessels inside. Similar, but different figures can be found in Steiniger et al. (2020).

Figure 15: Final result of mesh annotation, painting, and removal of irrelevant details. The complexity is now greatly reduced. This is the 'sheaths alternating' data set (Steiniger et al., 2020). The meshes are cut open by the front clipping plane. Blood vessels are red, capillary sheaths are green, cells in the follicle are light green. An arteriole (red) is entering from the left in the proximity of a follicle; this arteriole splits further, and one of the branches is splitting up into capillaries (still red) that immediately enter capillary sheaths. One such sheath (green) is curved around the follicle. The sheath is cut open, showing the capillary inside. A similar, but different figure, depicting a different sheath, can be found in Steiniger et al. (2020).

We found free user movement essential for the long-term usability of our application—our users spend hours immersed in consecutive sessions. Basically, the model is not otherwise translated, rotated, or scaled in productive use, but only in response to tracking, i.e., to the user's own movements. Such free user movement allows the immersed user to utilise their brain's systems for spatial orientation and spatial memory. In their turn, the recognition and annotation of structures become easier. Free user movement also distinguishes our application from Egger et al. (2020): they used a VR flight mode on 3D models from CT.

We first found the benefits of VR-based visualisation during the preparation of Steiniger et al. (2018a). Unlike the bone marrow data set (Steiniger et al., 2016), in our next work (Steiniger et al., 2018b) the total number of voxels was slightly larger, and QC was much faster with the new method. Our domain expert alone quality controlled with our VR-based method eleven regions of
2000 × 2000 × 24 voxels per day in one instance (Steiniger et al., 2018b) and two to four regions of ≈ 2000 × 2000 × 84 voxels per day in another instance (Steiniger et al., 2020). These sum up to slightly more than 10⁹ voxels per day in the first case and up to about 1.3 · 10⁹ voxels per day in the second case. We would like to highlight that these amounts of data were routinely quality controlled by a single person in a single day. Thus VR immersion saved an order of magnitude of man-hours for QC of our medical research 3D reconstructions (Steiniger et al., 2018a,b, 2020).

Our immersive application also enabled VA of the same reconstructions. Without immersive VA and (later on) interactive 'cutting' into the reconstructions with front plane clipping in VR, it would be exorbitantly harder or even impossible for us to obtain the research results summarised in Figs. 13–15 (Steiniger et al., 2018b, 2020).
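Taking the per-region sizes given above at face value (eleven regions of 2000 × 2000 × 24 voxels; two to four regions of roughly 2000 × 2000 × 84 voxels), the daily totals follow directly:

\[
11 \cdot (2000 \cdot 2000 \cdot 24) \approx 1.1 \cdot 10^{9},
\qquad
4 \cdot (2000 \cdot 2000 \cdot 84) \approx 1.3 \cdot 10^{9}
\quad \text{voxels per day.}
\]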
7. Conclusions
3D reconstructions from histological serial sections require quality control (QC) and further investigation. Domain experts were not satisfied by previously existing QC methods. We present a VR-based solution to explore mesh data. Our application also allows superimposing the original serial sections. Such a display is essential for QC. In our experience, immersion accelerates QC by an order of magnitude. Our users can annotate areas of interest and communicate the annotations. VR-powered VA allowed for a faster and more exact distinction and classification of various microanatomical entities, such as post-arteriolar capillaries and other kinds of capillaries. The classification of arterial blood vessels in its turn facilitated the classification of capillary sheaths. Summarising, our VR tool greatly enhances productivity and allows for more precise reconstructions that enable new insights (Steiniger et al., 2018a,b, 2020) in microanatomical research.
Making our application an even better visual analytics tool is always viable. Minor improvements to user input handling include more input combinations and gestures. A planned feature is to record a spoken memo for every annotation marker. Recorded memos would facilitate a better explanation of the markings when they are revisited. The application has the potential to evolve in the direction of a non-medical 3D sculpting utility. Better maintainability of the code base through an extensive use of software product lines (Apel et al., 2016) is an important goal. Not all builds need all features, and software product lines can accommodate this point.

Improvements of the rendering performance are both important and viable. Possible points of interest are better occlusion culling (e.g., Mattausch et al., 2008; Hasselgren et al., 2016) and progressive meshes (e.g., Derzapf and Guthe, 2012). There are further ways to improve the anti-aliasing and thus further improve the immersive user experience. A possibility to consider is an advanced interpolation for higher internal frame rates.

A promising idea is to learn better view angles (similar to Burns et al., 2007) from the transformation matrices saved as parts of annotations. Better pre-rendered videos might be produced in this manner. (Gutiérrez et al., 2018, have a similar motivation.) Texture compression in general and volume compression techniques in particular (e.g., Guthe et al., 2002; Guthe and Goesele, 2016; Guarda et al., 2017) would help to reduce the GPU memory consumption caused by the data for the original slices.

Figure 16: Showcasing our non-VR volume renderer. Endothelia of blood vessels are stained brown in the 'bone marrow' data set. The blended-in mesh is blue. The volume renderer played an important role in data verification for our publication (Steiniger et al., 2016). (a) shows the volume data representation, (b) presents the visualisation of the final, filtered mesh vs. the corresponding single section.

VR might be the pivotal instrument for better understanding in teaching complex 3D structures (Philippe et al., 2020), e.g., in medicine or in machine engineering. The effect of VR in training and education in such professions (and also in other areas, e.g., Calvert and Abadia, 2020) might need a more detailed assessment.

Of course, viable future work includes applications of our visualisations to reconstructions of further organs and tissues (e.g., further bone marrow, lung, heart, or tonsil probes) and expansion to further modalities of medical (such as MRI or CT) or non-medical data. Recently, we experimented with the VR presentation of serial block-face electron microscopy data. Multi-modality is an interesting topic, too (Tang et al., 2020). Possible examples of further applications include materials science, computational fluid dynamics, and, most surely, computer graphics.

Acknowledgements
Most of this work was done when the first two authors were members of the University of Bayreuth. We would like to thank Vitus Stachniss, Verena Wilhelmi, and Christine Ulrich (Philipps-University of Marburg) for their efforts with the non-VR QC tool. We thank Christian Mühlfeld (Hannover Medical School) for the possibility to test our application on a MacBook Pro 16″. Paul Schmiedel (then: University of Bayreuth) worked on the codebase of the VR tool in Bayreuth. Figure 2 depicts Lena Voß; we would like to thank her for the permission to use this image.

References
Ahrens, J., Geveci, B., Law, C., 2005. ParaView: An end-user tool for large data visualization. The Visualization Handbook, 717. URL: http://datascience.dsscale.org/wp-content/uploads/2016/06/ParaView.pdf. LA-UR-03-1560.
Apel, S., Batory, D., Kästner, C., Saake, G., 2016. Feature-oriented software product lines. Springer.
Ayachit, U., 2015. The ParaView guide: a parallel visualization application. Kitware, Inc.
Berg, L.P., Vance, J.M., 2017. Industry use of virtual reality in product design and manufacturing: a survey. Virtual Real. 21, 1–17.
Bouaoud, J., El Beheiry, M., Jablon, E., Schouman, T., Bertolus, C., Picard, A., Masson, J.B., Khonsari, R.H., 2020. DIVA, a 3D virtual reality platform, improves undergraduate craniofacial trauma education. Journal of Stomatology, Oral and Maxillofacial Surgery.
Brooks Jr., F.P., 1999. What's real about virtual reality? IEEE Comput. Graph. 19, 16–27.
Burns, M., Haidacher, M., Wein, W., Viola, I., Gröller, M.E., 2007. Feature emphasis and contextual cutaways for multimodal medical visualization, in: Proceedings of the 9th Joint Eurographics/IEEE VGTC Conference on Visualization, EG. pp. 275–282.
Calvert, J., Abadia, R., 2020. Impact of immersing university and high school students in educational linear narratives using virtual reality technology. Computers & Education 159, 104005.
Calì, C., Kare, K., Agus, M., Veloz Castillo, M.F., Boges, D., Hadwiger, M., Magistretti, P., 2019. A method for 3D reconstruction and virtual reality analysis of glial and neuronal cells. JoVE 151, e59444.
Chan, S., Conti, F., Salisbury, K., Blevins, N.H., 2013. Virtual reality simulation in neurosurgery: Technologies and evolution. Neurosurgery 72, A154–A164.
Checa, D., Bustillo, A., 2020. A review of immersive virtual reality serious games to enhance learning and training. Multimed. Tools Appl. 79, 5501–5527.
Chen, X., Xu, L., Wang, Y., Wang, H., Wang, F., Zeng, X., Wang, Q., Egger, J., 2015. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. J. Biomed. Inform. 55, 124–131.
Choi, K.S., Chan, S.T., Leung, C.H.M., Chui, Y.P., 2016. Stereoscopic three-dimensional visualization for immersive and intuitive anatomy learning, in: IEEE 8th International Conference on Technology for Education, IEEE. pp. 184–187.
Cignoni, P., Callieri, M., Corsini, M., Dellepiane, M., Ganovelli, F., Ranzuglia, G., 2008. MeshLab: an open-source mesh processing tool, in: Eurographics Italian Chapter Conference, pp. 129–136.
Daly, C.J., 2018. The future of education? Using 3D animation and virtual reality in teaching physiology. Physiology News 111, 43.
Daly, C.J., 2019a. From confocal microscope to virtual reality and computer games; technical and educational considerations. Infocus Magazine 54, 51–59. URL: http://eprints.gla.ac.uk/188105/.
Daly, C.J., 2019b. Imaging the vascular wall: From microscope to virtual reality, in: Touyz, R.M., Delles, C. (Eds.), Textbook of Vascular Medicine. Springer, Cham, pp. 59–66.
van Dam, A., Forsberg, A.S., Laidlaw, D.H., LaViola, J.J., Simpson, R.M., 2000. Immersive VR for scientific visualization: a progress report. IEEE Comput. Graph. 20, 26–52.
Derzapf, E., Guthe, M., 2012. Dependency-free parallel progressive meshes. Comput. Graph. Forum 31, 2288–2302.
Dorweiler, B., Vahl, C.F., Ghazy, A., 2019. Zukunftsperspektiven digitaler Visualisierungstechnologien in der Gefäßchirurgie. Gefässchirurgie 24, 531–538.
Gef¨asschirurgie 24, 531–538.doi: .Duarte, M.L., Santos, L.R., Guimar˜aes J´unior, J.B., Peccin, M.S., 2020. Learninganatomy by virtual reality and augmented reality. A scope review. Morphologiedoi: .Egger, J., Gall, M., Wallner, J., Boechat, P., Hann, A., Li, X., Chen, X., Schmalstieg, D.,2017. HTC Vive MeVisLab integration via OpenVR for medical applications. PLOSONE 12, 1–14. doi: .Egger, J., Gunacker, S., Pepe, A., Melito, G.M., Gsaxner, C., Li, J., Ellermann, K., Chen,X., 2020. A comprehensive workflow and framework for immersive virtual endoscopyof dissected aortae from CTA data, in: Fei, B., Linte, C.A. (Eds.), Medical Imaging2020: Image-Guided Procedures, Robotic Interventions, and Modeling, SPIE. pp.774–779. doi: .El Beheiry, M., Doutreligne, S., Caporal, C., Ostertag, C., Dahan, M., Masson, J.B.,2019. Virtual reality: Beyond visualization. J. Mol. Biol. 431, 1315–1321.doi: .Esfahlani, S.S., Thompson, T., Parsa, A.D., Brown, I., Cirstea, S., 2018. ReHabgame: Anon-immersive virtual reality rehabilitation system with applications in neuroscience.Heliyon 4, e00526. doi: .29aludi, B., Zoller, E.I., Gerig, N., Zam, A., Rauter, G., Cattin, P.C., 2019. Direct visualand haptic volume rendering of medical data sets for an immersive exploration invirtual reality, in: Shen, D., Liu, T., Peters, T.M., Staib, L.H., Essert, C., Zhou, S., Yap,P.T., Khan, A. (Eds.), Medical Image Computing and Computer Assisted Intervention– MICCAI 2019, Springer, Cham. pp. 29–37. doi: .Forsberg, A.S., Laidlaw, D.H., van Dam, A., Kirby, R.M., Karniadakis, G.E., Elion,J.L., 2000. Immersive virtual reality for visualizing flow through an artery, in:Visualization, IEEE Computer Society Press. pp. 457–460.Guarda, A.F.R., Santos, J.M., da Silva Cruz, L.A., Assunc¸ ˜ao, P.A.A., Rodrigues,N.M.M., de Faria, S.M.M., 2017. A method to improve HEVC lossless codingof volumetric medical images. Signal Process.-Image 59, 96–104. doi: .Guthe, S., Goesele, M., 2016. Variable length coding for GPU-based direct volumerendering, in: Vision, Modeling & Visualization, EG. pp. 77–84. doi: .Guthe, S., Wand, M., Gonser, J., Strasser, W., 2002. Interactive rendering of largevolume data sets, in: IEEE Visualization, pp. 53–60. doi: .Guti´errez, J., David, E., Rai, Y., Callet, P.L., 2018. Toolbox and dataset for thedevelopment of saliency and scanpath models for omnidirectional / .Hasselgren, J., Andersson, M., Akenine-M¨oller, T., 2016. Masked software occlusionculling, in: Proceedings of High Performance Graphics, EG. pp. 23–31.Inoue, A., Ikeda, Y., Yatabe, K., Oikawa, Y., 2016. Three-dimensional sound-fieldvisualization system using head mounted display and stereo camera. Proceedings ofMeetings on Acoustics 29, 025001–1–13. doi: .Ju, T., 2004. Robust repair of polygonal models. ACM T. Graphic. 23, 888–895.Kikinis, R., Pieper, S.D., Vosburgh, K.G., 2014. 3D Slicer: A platform for subject-specific image analysis, visualization, and clinical support, in: Jolesz, F.A. (Ed.),Intraoperative Imaging and Image-Guided Therapy, Springer. pp. 277–289. doi: .Knodel, M.M., Lemke, B., Lampe, M., Ho ff er, M., Gillmann, C., Uder, M., Hillengaß,J., Wittum, G., B¨auerle, T., 2018. Virtual reality in advanced medical immersiveimaging: a workflow for introducing virtual reality as a supporting tool in medicalimaging. Comput. Visual Sci. 18, 203–212. doi: .Liimatainen, K., Latonen, L., Valkonen, M., Kartasalo, K., Ruusuvuori, P., 2020. Virtualreality for 3D histology: multi-scale visualization of organs with interactive featureexploration. 
arXiv:2003.11148 [cs, eess] URL: http://arxiv.org/abs/2003.11148 . arXiv: 2003.11148. 30obachev, O., 2018. On Three-dimensional reconstruction. Habilitation thesis.University of Bayreuth.Lobachev, O., 2020. The tempest in a cubic millimeter: Image-based refinementsnecessitate the reconstruction of 3D microvasculature from a large series of damagedalternately-stained histological sections. IEEE Access 8, 13489–13506. doi: .Lobachev, O., Pfe ff er, H., Guthe, M., Steiniger, B.S., 2019. Inspecting human 3Dhistology in virtual reality. Mesoscopic models of the splenic red pulp microvas-culature computed from immunostained serial sections, in: 114 th Annual Meeting,Anatomische Gesellschaft, Institut f¨ur Anatomie und Zellbiologie der Universit¨atW¨urzburg. Poster.Lobachev, O., Steiniger, B.S., Guthe, M., 2017a. Compensating anisotropy inhistological serial sections with optical flow-based interpolation, in: Proceedingsof the 33 rd Spring Conference on Computer Graphics, ACM. pp. 14:1–14:11.doi: .Lobachev, O., Ulrich, C., Steiniger, B.S., Wilhelmi, V., Stachniss, V., Guthe, M., 2017b.Feature-based multi-resolution registration of immunostained serial sections. Med.Image Anal. 35, 288–302. doi: .Lorensen, W.E., Cline, H.E., 1987. Marching cubes: A high resolution 3D surfaceconstruction algorithm. ACM SIGGRAPH Computer Graphics 21, 163–169. doi: .L´opez Ch´avez, O., Rodr´ıguez, L.F., Gutierrez-Garcia, J.O., 2020. A comparativecase study of 2D, 3D and immersive-virtual-reality applications for healthcare edu-cation. International Journal of Medical Informatics 141, 104226. doi: .Mann, S., Furness, T., Yuan, Y., Iorio, J., Wang, Z., 2018. All reality: Virtual, augmented,mixed (x), mediated (x, y), and multimediated reality. arXiv:1804.08386 . arXivpreprint.Mathur, A.S., 2015. Low cost virtual reality for medical training, in: 2015 IEEE VirtualReality (VR), pp. 345–346. doi: .Mattausch, O., Bittner, J., Wimmer, M., 2008. CHC ++ : Coherent hierarchical cullingrevisited. Comput. Graph. Forum 27, 221–230. doi: .Misiak, M., Schreiber, A., Fuhrmann, A., Zur, S., Seider, D., Nafeie, L., 2018. IslandViz:A tool for visualizing modular software systems in virtual reality, in: 2018 IEEEWorking Conference on Software Visualization (VISSOFT), IEEE. pp. 112–116.doi: . 31oro, C., ˇStromberga, Z., Raikos, A., Stirling, A., 2017. The e ff ectiveness of virtualand augmented reality in health sciences and medical anatomy. Anat Sci Educ 10,549–559. doi: .Philippe, S., Souchet, A.D., Lameras, P., Petridis, P., Caporal, J., Coldeboeuf, G., Duzan,H., 2020. Multimodal teaching, learning and training in virtual reality: a review andcase study. Virtual Reality & Intelligent Hardware 2, 421–442. doi: .Pieper, S., Halle, M., Kikinis, R., 2004. 3D Slicer, in: IEEE International Symposiumon Biomedical Imaging: Nano to Macro, pp. 632–635. doi: .Pieterse, A.D., Hierck, B.P., de Jong, P.G.M., Kroese, J., Willems, L.N.A., Reinders,M.E.J., 2020. Design and implementation of ‘AugMedicine: Lung Cases,’ anaugmented reality application for the medical curriculum on the presentation ofdyspnea. Front. Virtual Real. 1. doi: . publisher:Frontiers.Preim, B., Saalfeld, P., 2018. A survey of virtual human anatomy education systems.Comput. Graph. 71, 132–153. doi: .Quam, D.J., Gundert, T.J., Ellwein, L., Larkee, C.E., Hayden, P., Migrino, R.Q., Otake,H., LaDisa, Jr., J.F., 2015. Immersive visualization for enhanced computationalfluid dynamics analysis. J. Biomech. Eng.-T. 
ASME 137, 031004–031004–12.doi: .Ritter, F., Boskamp, T., Homeyer, A., Laue, H., Schwier, M., Link, F., Peitgen, H.O.,2011. Medical image analysis. IEEE Pulse 2, 60–70. doi: .Rizvic, S., Boskovic, D., Okanovic, V., Kihic, I.I., Sljivo, S., 2019. Virtual realityexperience of Sarajevo war heritage, in: Rizvic, S., Rodriguez Echavarria, K. (Eds.),Eurographics Workshop on Graphics and Cultural Heritage, EG. doi: .Scholl, I., Suder, S., Schi ff er, S., 2018. Direct volume rendering in virtual reality,in: Maier, A., Deserno, T.M., Handels, H., Maier-Hein, K.H., Palm, C., Tolxdor ff ,T. (Eds.), Bildverarbeitung f¨ur die Medizin 2018, Springer, Berlin, Heidelberg. pp.297–302. doi: .Schweigger-Seidel, F., 1862. Untersuchungen ¨uber die Milz. Arch. Pathol. Anat. Ph.23, 526–570. doi: .Shen, R., Boulanger, P., Noga, M., 2008. MedVis: A real-time immersive visualizationenvironment for the exploration of medical volumetric data, in: Information Visualiz-ation in Medical and Biomedical Informatics, pp. 63–68. doi: . 32himabukuro, M.H., Minghim, R., 1998. Visualisation and reconstruction in dentistry,in: Information Visualization, pp. 25–31. doi: .Silva, S., Santos, B.S., Madeira, J., Silva, A., 2009. Processing, visualization andanalysis of medical images of the heart: An example of fast prototyping usingMeVisLab, in: 2009 2 nd International Conference in Visualisation, pp. 165–170.doi: .Slater, M., Gonzalez-Liencres, C., Haggard, P., Vinkers, C., Gregory-Clarke, R., Jelley,S., Watson, Z., Breen, G., Schwarz, R., Steptoe, W., Szostak, D., Halan, S., Fox, D.,Silver, J., 2020. The ethics of realism in virtual and augmented reality. Front. virtualreal. 1, 1. doi: .Stefani, C., Lacy-Hulbert, A., Skillman, T., 2018. ConfocalVR: Immersive visualizationfor confocal microscopy. J. Mol. Biol. 430, 4028–4035. doi: .Steiniger, B., Stachniss, V., Schwarzbach, H., Barth, P.J., 2007. Phenotypic di ff erencesbetween red pulp capillary and sinusoidal endothelia help localizing the open spleniccirculation in humans. Histochem. Cell. Biol. 128, 391–398. doi: .Steiniger, B.S., Pfe ff er, H., Guthe, M., Lobachev, O., 2020. Exploring human splenicred pulp vasculature in virtual reality. Details of sheathed capillaries and the opencapillary network. Histochem. Cell Biol. URL: https://rdcu.be/b8KgZ , doi: .Steiniger, B.S., Stachniss, V., Wilhelmi, V., Seiler, A., Lampp, K., Ne ff , A., Guthe,M., Lobachev, O., 2016. Three-dimensional arrangement of human bone marrowmicrovessels revealed by immunohistology in undecalcified sections. PLOS ONE 11,1–25. doi: .Steiniger, B.S., Ulrich, C., Berthold, M., Guthe, M., Lobachev, O., 2018a. Capillarynetworks and follicular marginal zones in human spleens. Three-dimensional modelsbased on immunostained serial sections. PLOS ONE 13, 1–21. doi: .Steiniger, B.S., Wilhelmi, V., Berthold, M., Guthe, M., Lobachev, O., 2018b. Locatinghuman splenic capillary sheaths in virtual reality. Sci. Rep. 8, 15720. doi: .Stets, J.D., Sun, Y., Corning, W., Greenwald, S.W., 2017. Visualization and labeling ofpoint clouds in virtual reality, in: SIGGRAPH Asia 2017 Posters — SA ’17, ACM,Bangkok, Thailand. pp. 1–2. doi: .Stone, J.E., Kohlmeyer, A., Vandivort, K.L., Schulten, K., 2010. Immersive molecularvisualization and interactive modeling with commodity hardware, in: Bebis, G.,Boyle, R., Parvin, B., Koracin, D., Chung, R., Hammound, R., Hussain, M., Kar-Han, T., Crawfis, R., Thalmann, D., Kao, D., Avila, L. (Eds.), Advances in VisualComputing, Springer. pp. 382–393. doi: .33utherland, I.E., 1965. 
Appendix A. Short glossary of medical terms

• Histology: the science studying biological tissues at the microscopic level.

• Histological section: a thin slice of a tissue that can be inspected under a microscope. The sectioning happens on a device called a 'microtome'. Sections should be stained (coloured) for better results.

• Serial sections: a consecutive series of histological sections. After registration, the series forms volume data (see the sketch after this glossary).

• Vasculature: a network of blood vessels.

• Capillary: the smallest type of blood vessel.

• Sinus: a capillary with large diameter and specialised wall structure in specific organs, such as spleen or bone marrow. Depending on the organ, sinuses differ in wall structure and function. Sinuses only occur in bone marrow regions where blood cells are generated ('hematopoietic regions'). Sinuses are ubiquitous in the spleen. Whether (venous) spleen sinuses are connected to capillaries of the arterial side is an open question.

• Follicle: generally, a round structure.
In spleen and tonsils, for example, follicles are important dynamic structures where specific white blood cells often appear. This makes follicles important for understanding immune system functions.

• Spleen: one of the organs which essentially function as a blood filter. The spleen has a unique vasculature, where blood flow also occurs outside blood vessels. It features some unique structures both with respect to the vasculature and to the arrangement of two major types of migratory white blood cells called lymphocytes, which either occupy follicles or T-cell zones.

• Bone marrow: the inside of bones not only contains fatty tissue, but also regions responsible for the generation of new blood cells. Bone marrow features a unique vasculature with capillaries and sinuses.

• Staining: for better visual inspection, specific parts of the tissue sections can be coloured.

• Immunohistology: specific detection of different molecules in tissue sections using antibody solutions. Binding of an antibody to a tissue component is finally visualised by deposition of a coloured insoluble polymerisation product of a previously uncoloured soluble stain. The stainings can, for example, detect membrane glycoproteins in the innermost cells of blood vessels, the so-termed endothelial cells. Membrane glycoproteins have been numbered in the order of their discovery using the CD (cluster of differentiation) nomenclature.

• MRI, magnetic resonance imaging, and CT, computed tomography from a series of X-ray images, are non-invasive imaging techniques that revolutionised diagnostics. Unfortunately, the spatial resolution and selectivity of these techniques are not sufficient for our goals.

• Anti-CD34 staining: primarily stains endothelial cells of arterial vessels and capillaries in human spleen and bone marrow. The typically used colour is brown. Some stem cells in bone marrow are also stained. It is also weakly present in sinus endothelia in the proximity of follicles.

• Anti-CD141 staining: stains sinus endothelial cells in human bone marrow and in human spleen. The typically used colour is brown.

• Anti-SMA staining: stains smooth muscle alpha-actin. It is present, e.g., in the walls of larger blood vessels on the 'input' arterial side, the so-called 'arterioles'. The typically used staining colour is brown.

• Anti-CD271 staining: stains capillary sheath cells and additional fibroblast-like cells in human spleen. The sheaths are multi-cellular structures around the initial segment of human splenic capillaries. Sheath cells obviously represent the sessile fibroblast-derived part of capillary sheaths. The typically used staining colour is blue or red. Specialised fibroblasts inside the follicles are more weakly stained with this antibody.

• Anti-CD20 staining: stains B-lymphocytes. The typically used colour is red.
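To make the glossary entries 'serial sections' and 'volume data' more concrete, the following is a minimal sketch of how a registered series of section images could be stacked into a volume and turned into a surface mesh. The directory and file names, the grey-value threshold, and the voxel spacing are hypothetical placeholders, and the sketch assumes scikit-image, tifffile, and NumPy are available; it is an illustration, not the pipeline used for the reconstructions in this paper.

```python
# Minimal sketch: registered serial sections -> volume data -> surface mesh.
# Directory name, file pattern, threshold, and spacing are hypothetical placeholders.
from pathlib import Path

import numpy as np
import tifffile
from skimage import measure

# One image file per registered section; all sections are assumed to have
# identical dimensions after registration.
section_files = sorted(Path("registered_sections").glob("section_*.tif"))
sections = [tifffile.imread(f) for f in section_files]

# Stacking the registered series yields volume data with axes (z, y, x).
volume = np.stack(sections, axis=0).astype(np.float32)

# Voxel spacing in micrometres: section thickness vs. in-plane pixel size
# (placeholder values; the real spacing depends on sectioning and scanning).
spacing = (7.0, 0.5, 0.5)

# A common way to obtain a surface mesh from such volume data is iso-surface
# extraction (marching cubes); the iso-value is a subjective choice.
verts, faces, normals, values = measure.marching_cubes(volume, level=128.0, spacing=spacing)
print("volume", volume.shape, "->", len(verts), "vertices,", len(faces), "triangles")
```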
Appendix B. Supplementary Material
The supplementary video for this paper is available under https://zenodo.org/record/4268535. It shows the details of our visualisations in motion. The video also showcases various approaches towards visual analytics. All imagery in the video is a real-time feed from the VR headset.

The datasets analysed in this study can be found in the Zenodo repositories https://zenodo.org/record/1039241, https://zenodo.org/record/1229434, and https://zenodo.org/record/4059595.
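For readers who want to retrieve the cited records programmatically, the following is a minimal sketch that queries the public Zenodo REST API for each dataset record listed above and saves its files. It assumes the usual Zenodo API layout (record metadata under /api/records/<id> with a list of files); the function name and output directory are illustrative, and the sketch is not part of the published tooling.

```python
# Sketch: download the files of the Zenodo records cited above via the public REST API.
# Assumes record metadata is served at https://zenodo.org/api/records/<id>
# with a "files" list whose entries carry a file name ("key") and a download
# link ("links"/"self"); adjust if the API layout differs.
import pathlib

import requests

DATASET_RECORDS = ["1039241", "1229434", "4059595"]  # supplementary video: 4268535


def download_record(record_id: str, out_dir: str = "zenodo_data") -> None:
    meta = requests.get(f"https://zenodo.org/api/records/{record_id}", timeout=30)
    meta.raise_for_status()
    target = pathlib.Path(out_dir) / record_id
    target.mkdir(parents=True, exist_ok=True)
    for entry in meta.json().get("files", []):
        name = entry["key"]
        url = entry["links"]["self"]
        with requests.get(url, stream=True, timeout=300) as resp:
            resp.raise_for_status()
            with open(target / name, "wb") as fh:
                for chunk in resp.iter_content(chunk_size=1 << 20):
                    fh.write(chunk)
        print(f"downloaded {name} from record {record_id}")


if __name__ == "__main__":
    for rid in DATASET_RECORDS:
        download_record(rid)
```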