Learning Soft Tissue Behavior of Organs for Surgical Navigation with Convolutional Neural Networks
Micha Pfeiffer, Carina Riediger, Jürgen Weitz, Stefanie Speidel
Abstract
Purpose: In surgical navigation, pre-operative organ models are presented to surgeons during the intervention to help them efficiently find their target. In the case of soft tissue, these models need to be deformed and adapted to the current situation using intra-operative sensor data. A promising method to realize this are real-time capable biomechanical models.
Methods: We train a fully convolutional neural network to estimate the displacement field of all points inside an organ when given only the displacement of a part of the organ's surface. The network trains on entirely synthetic data of random organ-like meshes, which allows us to generate much more data than is otherwise available. The input and output data are discretized into a regular grid, allowing us to fully utilize the capabilities of convolutional operators and to train and infer in a highly parallelized manner.
Results: The system is evaluated on in-silico liver models, phantom liver data and human in-vivo breathing data. We test the performance with varying material parameters, organ shapes and amounts of visible surface. Even though the network is only trained on synthetic data, it adapts well to the various cases and gives a good estimation of the internal organ displacement. Inference runs at over 50 frames per second.
Conclusions: We present a novel method for training a data-driven, real-time capable deformation model. Its accuracy is comparable to that of other registration methods, it adapts very well to previously unseen organs and it does not need to be re-trained for every patient. The high inference speed makes this method useful for many applications such as surgical navigation and real-time simulation.
Keywords
Surgical Navigation · Soft Tissue · Biomechanical Model · Organ Deformation · Convolutional Neural Network
Micha Pfeiffer (micha.pfeiff[email protected]) · Stefanie Speidel
National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany
Carina Riediger · Jürgen Weitz
Department for Visceral, Thoracic and Vascular Surgery, University Hospital, Technical University Dresden

Fig. 1 Liver displacement estimation. a) We present a discretized liver geometry to our convolutional neural network along with a zero-displacement boundary condition (red) and a visible surface area displacement (orange). b) The network estimates the displacement of every organ point (blue) from the known surface displacement. c) The actual displacement is known from a previously run simulation (blue, maximum magnitude of 7.3 cm). d) The error between the estimated displacement and the real displacement (visualized here for an internal slice through the liver) shows that points close to the visible surface area are displaced correctly, while the maximum error of roughly 2.5 cm occurs on the opposite side of the organ.
1 Introduction

An important prerequisite for soft tissue navigation during surgery is the simulation and prediction of tissue behavior. Here, the goal is to aid surgeons by visualizing information which is typically hidden - such as the position of tumors and blood vessels - in a context-sensitive manner.

The usual workflow is to first generate a pre-operative model from CT or MRI data, containing structures of interest such as vessels and tumors. During the surgery, these structures are visualized for navigation. However, as organs deform during a surgery, the models have to be adapted to correctly reflect the intra-operative scene. This requires intra-operative sensor data such as video, CT or ultrasound to capture the state of the organs, which can then be used to deform the pre-operative model so that it matches the current situation. Since this task involves many sub-stages which should ideally all run in real-time (image processing, registration of organs, soft tissue simulation, rendering of virtual reality), it is desirable to use systems which require as little computational resources as possible.

When acquiring intra-operative sensor data to feed into the model, there are many challenges. Besides the limited field of view and noisy data, there are often unknown parameters which are either difficult or impossible to obtain during a surgery, such as knowledge about unseen organ surfaces, friction between organs and material parameters such as the tissue's elasticity. Approximations have to be used and the deformation model needs to be able to deal with uncertain information.

The real-time simulation of deformable elastic bodies is a complex problem and an active area of research. Many pre- to intra-operative registration systems are based on the Finite Element Method (FEM), which can accurately calculate deformations of organs [27,24,22,16,20,26].
The FEM has several benefits, like the possibility to precisely represent complex meshes by using small irregular volume elements and the ability to compute physically accurate solutions. It can be made real-time capable [30,2,3], but due to the large, often sparse matrices, the speed gain obtained by running the code on the GPU is limited. Changes in the mesh topology (such as a cut) require a computationally expensive update of the topology and the matrices [30]. It is also difficult to incorporate unknown material parameters or uncertain boundary conditions [21].

Recently, convolutional neural networks (CNNs) have proven to be a powerful tool in building many data-driven models [8], with the benefit of parallelizing well on modern GPUs. In this work, we explore the usage of a CNN to estimate an organ's internal deformation from known surface deformation. The network is trained on synthetic FEM simulation data and learns to interpret the mesh structure of an organ as well as boundary conditions to calculate the displacement of internal points. Our goal is to determine the displacement of internal structures when knowing the displacement of (some) surface points. We assume that surface correspondences between pre-operative and intra-operative models have been computed using data from an intra-operative modality such as the laparoscope.

Our contribution consists of a novel method to generate a real-time capable soft tissue model which immediately generalizes to new patients, by training on a large number of synthetic meshes. We show that our model performs well on both synthetic and real data even without knowing the precise nature of the underlying material model. Since the training data makes very few assumptions about the structure of the organ and the type of deformation, our method shows that neural networks can indeed be taught how soft tissue deformation works in a general setting.
In effect, we substitute a data-driven model for a highly engineered computational one, and in doing so we reach a very low computation time. Our code is publicly available at https://gitlab.com/nct_tso_public/cnn-deformation-estimation.

1.1 Related Work

The idea of using neural networks to estimate the outcome of an FEM simulation is not new. Hambli et al. [10] have used a fully connected net to predict the velocity and angle of a tennis ball after hitting a racket. A similar approach has been adopted by Tonutti et al. [29] for predicting the movement of a tumor during brain surgery, but the displacement of the healthy tissue is not considered. Rechowicz et al. [23] estimate the displacement of a patient's rib cage, but only predict the displacement of surface nodes. In contrast, Morooka et al. [18] estimate the displacement of a full liver model using neural networks by superimposing basic deformation modes. These approaches show that neural networks can indeed be trained to estimate soft tissue behavior. However, they work with the data of a single patient, requiring re-training (and in some cases even re-designing) the network for every new patient. Additionally, they use the acting surface forces as input, which are very difficult to obtain intra-operatively.

Yamamoto et al. [31] show that neural networks can estimate the deformation of a liver from the known displacement of a partial surface. They report very small errors while using only 3% of the liver surface, but also design their network for specific patients and evaluate their method on the same liver mesh that was used during training.
Approaches which generalize to new patients were made by Lorente et al. [13], who estimate the liver's deformation from breathing motion using various machine learning methods, and Martínez-Martínez et al. [15], who estimate breast compression. However, both methods focus on single scenarios where the main direction of the forces stays similar throughout all experiments.

Our method is inspired by the work of Guo et al. [9], who have used neural networks to estimate the results of fluid dynamics simulations for real-time applications. One major difference is that fluid dynamics are often computed on regular grids with a fixed number of cells, which makes them easier to handle with neural networks. In contrast to this, we need to re-sample the irregular simulation domain before we pass the data to the network.
2 Methods

The goal of this work is to estimate the current displacement of an organ and its internal structures when given a) the pre-operative geometric model of the organ and b) the displacement of the visible part of the organ's surface as seen by an intra-operative sensor. Since CNNs work best with a regular grid, we discretize all data at regularly spaced points. In doing so, our network's input becomes a cube of voxels and the network's output becomes a displacement field with a three-dimensional displacement vector for each of these voxels.

We sample the cubical volume of side length L into a grid G of N × N × N points of interest. For each point p ∈ G we determine three properties:

- Organ structure: To represent the mesh, we determine the signed distance s(p) ∈ R of each point p to the nearest surface of the organ, in meters.
- Visible displacement: We assume that the displacement of part of the organ surface is known, for example by tracking features with a (stereo-)laparoscope. For these surface points, we assign a visible displacement vector u_vis ∈ R^3. For all other points, u_vis(p) = (0, 0, 0)^T.
- Zero displacement: We assign a binary value z(p) which is set to one if a point of the surface is fixed to surrounding tissue and should be considered static. For all other points, z is set to zero.

Knowing s, u_vis and z, the goal is to find a displacement vector u ∈ R^3 for each point p. The following sections explain how we randomly generate training data, how we build the network and how the network is trained to generate the displacement field u for an organ.

2.1 Synthetic Training Data

Since training neural networks can be very time consuming, it is often not feasible to train a network before surgery using patient-specific data. Instead, we generate synthetic datasets which are based on simulations of random, organ-like meshes (see Fig. 2). First, a random surface mesh is generated by extruding and deforming a mesh primitive multiple times.
The volume inside this surface is then filled with tetrahedral elements using the Gmsh [6] software.

Fig. 2 a) Random mesh structure, b) signed distance function s on the grid G, c) visible displacement u_vis (blue, magnitude) and zero displacement z (red), d) target displacement u_tar (magnitude). In b), half of the grid is hidden. In c) and d), points outside the organ (signed distance function s > 0) are clipped away to show only the organ structure.

We choose boundary conditions which are inspired by laparoscopic surgeries: a zero-displacement boundary condition is applied to a random surface region with a radius ranging from 2.5 cm to 5.5 cm (indicating areas where the organ is fixed to other organs) and a random force between 0 and 1 N is applied to another random surface region with a radius ranging from 1.5 cm to 2.5 cm (simulating instruments which manipulate the organ). A non-linear, homogeneous, isotropic material model is chosen for the mesh, because we expect large deformations and expect to know very little about a patient's specific tissue parameters. The Young's modulus is set to 1.7 kPa and the Poisson ratio to 0.35. The Elmer simulation software [14] is used to run the steady-state simulations, resulting in a known target displacement vector u_tar for each vertex (internal and external) of the random meshes.

The grid G with side length L = 30 cm and N = 64 is generated, and z and u_tar are calculated for each point p via interpolation with a Gaussian kernel. To generate the visible displacement u_vis, another random surface region is selected and the target displacement vector u_tar is copied to u_vis for each point in the area (for other points, u_vis(p) = (0, 0, 0)^T).

This process is repeated for 10 000 random meshes. If a randomly extruded mesh lies partly outside the 30 cm cube or if it has self-intersections, the sample is discarded. The same is done if the deformation is larger than 10 cm, since such samples are much rarer and result in a very unevenly distributed dataset. We augment the training data by flipping the grid along the X, Y and Z axes or any combination of the three, resulting in a factor-eight increase in the number of training samples. Of the resulting dataset, roughly 90% (37 440 samples) are used for training and the remaining data is used for validation. Before passing the data to the network, we scale s and z by a constant factor. Each point p ∈ G has five values assigned to it (s, z and the three components of u_vis), so the input to the network contains N × N × N × 5 values. Since the network estimates a displacement vector u for each p, the output of the network is of size N × N × N × 3.
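As an illustration, the five-channel voxel input described above (signed distance s, zero-displacement flag z and the three components of u_vis) can be assembled as follows. This is a toy sketch, not the paper's data pipeline: the spherical "organ" with its analytic signed distance, and the locations of the fixed and visible patches, are placeholders.

```python
import numpy as np

N, L = 64, 0.30  # grid resolution and cube side length in meters, as in the paper

# Regularly spaced grid points p in a cube of side L, centered at the origin.
xs = (np.arange(N) + 0.5) / N * L - L / 2
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")

# Stand-in "organ": a sphere of radius 8 cm, so the signed distance s(p)
# is analytic (negative inside the organ, positive outside).
s = (np.sqrt(X**2 + Y**2 + Z**2) - 0.08).astype(np.float32)

# Zero-displacement flag z(p): 1 where the organ is fixed to surrounding
# tissue; here a hypothetical band of near-surface voxels on the -x side.
near_surface = np.abs(s) < L / N
z = (near_surface & (X < -0.05)).astype(np.float32)

# Visible surface displacement u_vis(p): a known 3-vector on a visible
# patch (+x side here), (0, 0, 0)^T everywhere else.
u_vis = np.zeros((3, N, N, N), dtype=np.float32)
visible = near_surface & (X > 0.05)
u_vis[0][visible] = 0.01  # e.g. 1 cm displacement along x on the patch

# Network input: five channels (s, z, u_vis) per grid point.
inp = np.concatenate([s[None], z[None], u_vis], axis=0)
print(inp.shape)  # (5, 64, 64, 64)
```

The network's output would then be a matching (3, 64, 64, 64) displacement field.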
Fig. 3 Network architecture. Each bar represents the output or input to a convolutional layer. The network downsamples the input data to lower resolutions while increasing the number of channels per grid point; then it increases the resolution while decreasing the number of channels. The number of channels is shown above the bars and the side lengths of the voxel grids are shown underneath (only changes are indicated). Skip connections allow the network to access earlier information by copying feature maps to later layers. Some layers generate additional (downsampled) outputs (yellow bars).
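The encoder-decoder layout of Fig. 3 can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the exact per-layer channel counts are garbled in this copy, so the ones below are placeholders; only the overall structure (5 input channels, 8^3 bottleneck, skip connections, nearest-neighbour upsampling, SoftSign activations, multi-resolution outputs) follows the text.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Two 3x3x3 convolutions (kernel size 3, padding 1) with SoftSign activations."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(c_in, c_out, 3, padding=1), nn.Softsign(),
            nn.Conv3d(c_out, c_out, 3, padding=1), nn.Softsign())
    def forward(self, x):
        return self.net(x)

class DeformNet(nn.Module):
    """Encoder-decoder over a 64^3 grid: 5 input channels (s, z, u_vis),
    3 output channels (displacement), plus downsampled side outputs for
    the intermediate supervision. Channel counts are illustrative."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = Block(5, 16), Block(16, 32), Block(32, 64)
        self.bottleneck = Block(64, 128)  # resolution 8^3
        self.dec3 = Block(128 + 64, 64)
        self.dec2 = Block(64 + 32, 32)
        self.dec1 = Block(32 + 16, 16)
        self.pool = nn.AvgPool3d(2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        # 1x1x1 heads producing a 3-vector per voxel at each resolution
        self.heads = nn.ModuleList(nn.Conv3d(c, 3, 1) for c in (128, 64, 32, 16))

    def forward(self, x):
        e1 = self.enc1(x)                   # 64^3
        e2 = self.enc2(self.pool(e1))       # 32^3
        e3 = self.enc3(self.pool(e2))       # 16^3
        b = self.bottleneck(self.pool(e3))  # 8^3
        d3 = self.dec3(torch.cat([self.up(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        # side outputs at 8^3, 16^3 and 32^3, plus the full-resolution output
        return [h(t) for h, t in zip(self.heads, (b, d3, d2, d1))]

outs = DeformNet()(torch.zeros(1, 5, 64, 64, 64))
print([tuple(o.shape[2:]) for o in outs])  # [(8, 8, 8), (16, 16, 16), (32, 32, 32), (64, 64, 64)]
```

The list of four outputs maps directly onto the multi-resolution training loss described in the next section.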
2.2 Network Architecture

When learning displacement fields, a force acting on one side of the mesh usually influences not only the displacement of nearby points but can potentially have an effect on all organ points, even those on the far side of the organ. This means that in our network, each output must have the potential to be influenced by each input point (i.e. each output point must have a receptive field spanning the entire input).

The network uses an architecture similar to U-Net [25], with an encoder which reduces the input resolution and learns a high-level representation of the data and a decoder which reconstructs the high-resolution output. We also use skip connections which copy features forward without modifying them, to allow the decoder to incorporate more detail into the computed output. Unlike U-Net, our network works with three-dimensional input data and all convolutions are calculated across the three dimensions.

The encoder uses average pooling layers to decrease the resolution of the input data, leading to a bottleneck with a resolution of 8 × 8 × 8. In the bottleneck, the convolutional kernels are large enough to carry information across the computational domain, meeting the requirement for the large field of view in the output layers. The decoder has three upsampling layers, each of which performs a simple nearest-neighbour interpolation to double the resolution, ensuring that the network's output resolution is the same as the input resolution. All convolutions have a kernel side length k_s of 3 and a padding of 1, and each is followed by a SoftSign non-linear activation function. For all points p which lie outside the organ (s(p) > 0), the estimated displacement u_est(p) is set to zero. The final network architecture is depicted in Fig. 3. It has about 9.1 million learnable parameters.

2.3 Training

Additional supervision at intermediate resolutions encourages the network to estimate correct displacements in the encoding and bottleneck layers while letting the decoding layers focus on increasing the resolution.
Thus the network computes additional lower-resolution outputs u_i at intermediate steps (compare Fig. 3), and a down-scaled version of the target displacement u_tar,i is created for each resolution. The mean square error is then calculated for each resolution i:

L_i(u_i, u_tar,i) = (1/N_i) Σ_p O(p) ‖u_i(p) − u_tar,i(p)‖²    (1)

where N_i is the number of points at resolution i, O(p) is zero if p is outside the organ and one otherwise, and ‖·‖ denotes the magnitude of a vector. In practice, i ∈ {0, 1, 2, 3}. The final error is the weighted sum of the errors over the four different resolutions:

L(u, u_tar) = Σ_i λ_i L_i(u_i, u_tar,i)    (2)

where the λ_i are weighting factors. We choose λ_0 = λ_1 = λ_2 = λ_3 = 1. The network is trained using the Adam [5] optimizer until the error on the validation dataset no longer decreases.

3 Experiments

We perform multiple experiments using the displacement estimation network. In a first in-silico test, we test whether the network can generalize from the synthetic training structures to the shape of a real liver as segmented from CT data, and how the network deals with varying amounts of visible surface. Secondly, the network is tested on CT data of a phantom liver model undergoing a large deformation.
In a final test, we use human in-vivo data showing liver deformation due to breathing and let the network register the liver in the inhaled state to the liver in the exhaled state.

All of these experiments are very different from the training data, as they contain never before seen mesh structures, material parameters and deformations. Thus, we implicitly evaluate the network's ability to generalize from the synthetic training data to settings which are closer to real-world scenarios.

Training as well as experiments were carried out on a personal computer with four Intel i7 cores (4.20 GHz) and an Nvidia GeForce GTX 1080 GPU with 8 GB of video RAM.

3.1 In-Silico: Liver Mesh from CT

We generate a new dataset with the same methods previously described for the training data (Section 2.1), but instead of a random mesh, we use the mesh of a patient's liver (OpenHELP phantom [11]). Again, random zero displacement and forces act on the organ and the Elmer software calculates displacements. After separating out samples with a displacement greater than 10 cm, this process results in 1334 deformed liver samples. Each of these samples has the same mesh structure but a different u_tar, z and u_vis. The network runs on each sample, generating an estimated displacement field u_est (see Fig. 1).

Fig. 4 Visualization of metrics for the deformed liver dataset, at three different depths (left column: d(p) = 2 ± 0.5 cm, center column: d(p) = 4 ± 0.5 cm, right column: d(p) = 6 ± 0.5 cm). For all six plots, the abscissa indicates the magnitude of u_tar, i.e. by how much a point should be displaced. The top row of plots shows how the average error increases with increasing target displacement as well as with increasing depth. To generate the bottom row of plots, the points were sorted into bins of size 0.2 cm according to the magnitudes of their u_tar and u_est, and the color of the plots indicates how many points end up in each bin. Most points (note the logarithmic scale) lie close to the diagonal, indicating that the network displaced them by the correct amount. As the distance from the visible area increases (rightmost plot), so does the average error. It can also be seen that the network is more likely to over-estimate the displacement than to under-estimate it. For all plots, points outside the organ (s(p) > 0) are ignored.
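The metrics visualized in Fig. 4 (a per-point error restricted to organ-interior points, grouped by depth) can be sketched in a few lines of NumPy. This is an illustrative toy example, not the original evaluation code; the point coordinates and bin width below are placeholders.

```python
import numpy as np

def displacement_error(u_tar, u_est, organ_mask):
    """Per-point error E(p) = ||u_tar(p) - u_est(p)||, restricted to points
    inside the organ (where s(p) <= 0)."""
    err = np.linalg.norm(u_tar - u_est, axis=-1)
    return err[organ_mask]

def mean_error_by_depth(err, depth, bin_width=0.01):
    """Average E(p) over points grouped by depth d(p), the distance to the
    nearest visible surface point (depth and bin_width in meters)."""
    bins = (depth / bin_width).astype(int)
    return {b * bin_width: err[bins == b].mean() for b in np.unique(bins)}

# Toy example: 4 organ points, each with a 1 cm estimation error, at two depths.
u_tar = np.zeros((4, 3))
u_est = np.full((4, 3), [0.01, 0.0, 0.0])
err = displacement_error(u_tar, u_est, np.ones(4, dtype=bool))
print(mean_error_by_depth(err, np.array([0.005, 0.005, 0.025, 0.025])))
```

Binning by the magnitude of u_tar (the bottom row of Fig. 4) works the same way, with the depth array replaced by the target magnitudes.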
For each point p inside the liver, a displacement error E(p) can be calculated given the actual displacement u_tar(p) and the estimated displacement u_est(p) as E(p) = ‖u_tar(p) − u_est(p)‖. On average, this error is expected to increase as the distance to the visible displacement increases. To quantify and visualize this effect, the points are sorted by their distance to the nearest visible surface point in the current sample. We call this distance the depth d(p) and collect all points from all 1334 data samples which have a similar depth. Fig. 4 shows how the average errors increase with the depth as well as with the target displacement u_tar(p).

The average error is also expected to depend on the amount of visible surface. To show how the error behaves as a larger percentage of the surface becomes visible, we also sort the samples by percentage of visible surface and plot the average error (see Fig. 5).

Fig. 5 Average error in the 1334 in-silico liver samples by their amount of visible surface. To compute the averages, samples were first sorted into bins with a width of 10% each. The largest drop in error occurs when approximately 20% of the whole organ surface is visible. Incidentally, this corresponds to the amount of surface used in [27] and is roughly the amount of surface which is accessible in a laparoscopic setting.

3.2 Phantom: Silicone Liver Deformation

To test the network's ability to transfer its learned displacement estimation, we used the data of a silicone liver undergoing a large deformation (liver registration dataset, Suwelack et al. [27], open-cas.org). The dataset contains the surface S_O of the original, undeformed liver and a second surface S_D which shows the same liver after it has been deformed by applying a strong force to its side using a spherical object (Fig. 6, left). Six small Teflon markers were placed into the liver and their positions before and after the deformation can be used to determine a
target registration error. Due to the deformation, they move by up to 46.6 mm (mean 23.9 mm).

Fig. 6 Left: Original, undeformed CT model (grey) deformed by a strong force (white arrow), resulting in a large deformation (blue). Yellow arrows indicate the manually annotated sparse surface point correspondences. Center: The surface areas where a visible displacement (orange, values interpolated from the sparse surface displacement) and a zero-displacement condition (red) are set. Right: The network's estimated displacement field, applied to the points.

Again, we calculate the signed distance function s for every grid point using the original surface S_O. The zero-displacement condition z is set for approximately 21% of the liver surface, where the organ was fixed to neighboring structures. To generate the visible displacement u_vis, we manually annotate 13 points on the anterior side of the surface S_O and their correspondences on S_D. This sparse displacement information is then interpolated to other surface points, giving us an approximated dense surface displacement for roughly 18.5% of the organ surface (Fig. 6, center).

We use the network to estimate the internal displacement of the organ. The final registration error for the Teflon markers is 5.1 mm on average, with a maximum of 7.6 mm. Suwelack et al. [27] used 19% of the surface area and also reported a mean error of 5.1 mm (maximum: 6.2 mm). While our method takes only a small fraction of the computation time (20 milliseconds compared to multiple seconds), we note that their method also solves the surface correspondence problem, which we assume as given.

Fig. 7
Human breathing motion experiment. In all images, blue represents the source (inhaled) state and grey is the target (exhaled) state. Left: After rigidly aligning the livers, there is still a non-rigid deformation of up to 2 cm, as seen in the liver surfaces (background) and the portal veins (zoomed box). Center: Using the full visible displacement from CPD, the network estimates a displacement field which is applied to the vessels, resulting in a good registration for all three main branches. Right: We decrease the amount of visible surface area (orange, shown here for 18% of visible surface). The registration is still accurate close to the visible surface (right vessel branch) and becomes slightly less accurate as we go deeper into the liver.

3.3 In-Vivo: Human Breathing Motion

We use a publicly available human dataset (ircad.fr/research/3d-ircadb-02/). This dataset contains surface meshes of abdominal organs in an inhaled and an exhaled state. Our goal is to register the liver in the inhaled state to the exhaled (target) state.

First, we perform a rigid registration using the ICP algorithm [4]. To estimate the surface displacement, we then use the coherent point drift (CPD) algorithm [19] on the liver surface model of the inhaled state to register it onto the liver model in the exhaled state. We generate our grid G from the inhaled state and set the zero-displacement condition in the area where the vena cava touches the liver. The CPD's output is interpolated into G to generate the visible displacement u_vis.

We iteratively decrease the amount of visible surface by moving a plane from the patient's posterior to their anterior and discarding all visible displacement information on the posterior side of the plane. We run the network on each of these steps, giving us a displacement field estimation for each amount of visible surface. The generated displacement fields are used to deform the segmented portal vein tree inside the inhaled liver and compared to the portal veins of the exhaled liver.
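The plane sweep used to reduce the visible surface can be illustrated with a one-dimensional toy example. This is a sketch only: the coordinate axis, grid size and displacement values below are placeholders, not the dataset's geometry.

```python
import numpy as np

def reduce_visible_surface(u_vis, visible_mask, coord, plane_pos):
    """Discard visible-displacement information posterior of a sweeping plane:
    keep u_vis only where the (posterior-to-anterior) coordinate exceeds plane_pos;
    everywhere else the visible displacement reverts to (0, 0, 0)^T."""
    keep = visible_mask & (coord > plane_pos)
    u_out = np.where(keep[None], u_vis, 0.0)
    return u_out, keep

# Toy grid: 64 samples along a stand-in posterior-to-anterior axis (meters).
Y = np.linspace(-0.15, 0.15, 64)
visible = Y > 0.0  # hypothetical visible patch on the anterior half
u_vis = np.where(visible[None], 0.02, 0.0) * np.ones((3, 64))

# Sweep the plane from posterior to anterior, shrinking the visible area;
# the network would be re-run once per step.
for plane_pos in np.linspace(-0.15, 0.15, 5):
    u_out, keep = reduce_visible_surface(u_vis, visible, Y, plane_pos)
    print(plane_pos, int(keep.sum()))
```

In the experiment itself, the same masking is applied to the voxel grid G before each network run.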
Since contrast agent was injected before the experiment, the two vessel trees differ considerably in appearance. This makes them difficult to compare quantitatively, but the results can be compared qualitatively (Fig. 7).

4 Discussion

Our experiments show that the network can easily generalize to organ structures which it has never seen during training. In fact, when we tried further training the network with the patient-specific mesh, no significant improvement was found.
Even though we used the same material parameters for all training samples, the network performed well on evaluation data with different material properties. The accuracy of the model depends mainly on the amount of visible surface and on how close this visible area is to the area of largest deformation. This is in accordance with other research which indicates that the boundary conditions and a good geometric model have a higher influence on correct tissue modeling than the used material model [17, 28].

The surface information available in the phantom experiment was very sparse, yet the very simple interpolation scheme we used to generate the dense surface displacement did not stop the network from creating satisfactory results. Similarly, in all experiments, the network deals well with the boundary conditions even though they are interpolated into our relatively coarse grid, indicating that it is not very susceptible to noise.

When training on purely artificial data, care must be taken that the data is representative of the real-world problem which the network should solve. We make multiple assumptions and simplifications which need to be addressed in the future:

First, during our discretization, we place a grid point roughly every 4.7 mm. While this resolution is similar to that of many real-time biomechanical models inside the organ, it results in a rough approximation of the organ surface.

Secondly, the training dataset does not simulate gravity and varying atmospheric pressure. The network shows promising results on the phantom and the human liver, both of which feature gravity, but adding body forces to the training dataset could further improve the network's ability to simulate real situations.

Thirdly, the network assumes that the zero-displacement boundary condition is known. This is usually not the case in a real intra-operative setting, where only rough assumptions can be made.
Our experience shows, however, that the model is robust to changes in the size of the zero-displacement area.

We also assume that surface correspondences between the pre- and intra-operative meshes are known. In the example of laparoscopic liver surgery, finding correspondences is far from trivial due to smooth, textureless surfaces and the large deformations. One approach would be to compute an initial non-rigid registration using a method such as [27], which matches the two surfaces without needing to compute features. Subsequent real-time tracking of intra-operative surface features [7] could be used to keep the correspondences up to date. To circumvent the difficulty of finding low-level geometric and texture features, current research focuses on using high-level cues - such as anatomical landmarks, the silhouette of the organ in the laparoscopic camera image and shading - to update a biomechanical model [1,12]. Since these approaches compute displacement vectors to update their biomechanical model iteratively until they find a good registration, a method such as ours could be incorporated directly into a similar registration approach.
5 Conclusion

In this work, we employ a novel, data-driven model to estimate displacement information which is usually calculated using highly specialized, hand-engineered models. Due to its high speed, our method opens up many new possibilities in real-time applications. Besides being used in laparoscopic navigation, it could easily be extended to tackle brain shift in neurosurgery or could potentially be used for motion compensation during radiotherapy. The method could also be used to estimate unknown boundary conditions and material parameters.

We have shown that a data-driven model can be used to model soft tissue deformation without seeing patient-specific data during the training phase. The model's accuracy is similar to that of other models, it is robust to simplifications in its input and it is very fast.
Conflict of interest The authors declare that they have no conflict of interest.

Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.
References
1. Adagolodjo, Y., Trivisonne, R., Haouchine, N., Cotin, S., Courtecuisse, H.: Silhouette-based Pose Estimation for Deformable Organs: Application to Surgical Augmented Reality. In: IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems. Vancouver, Canada (2017)
2. Allard, J., Courtecuisse, H., Faure, F.: Implicit FEM Solver on GPU for Interactive Deformation Simulation. In: W.m.W. Hwu (ed.) GPU Computing Gems Jade Edition. Elsevier (2011)
3. Bui, H.P., Tomar, S., Chouly, F., Lozinski, A., Bordas, S.: Real-time Patient Specific Surgical Simulation using Corotational Cut Finite Element Method: Application to Needle Insertion Simulation. In: 13th World Congress in Computational Mechanics. New York, United States (2018)
4. Chen, Y., Medioni, G.: Object modelling by registration of multiple range images. Image and Vision Computing (3) (1992)
5. Kingma, D.P., Ba, J.L.: Adam: A Method for Stochastic Optimization. In: International Conference on Learning Representations (ICLR), San Diego (2015)
6. Geuzaine, C., Remacle, J.F.: Gmsh: A 3-D finite element mesh generator with built-in pre- and post-processing facilities. International Journal for Numerical Methods in Engineering (11), 1309-1331 (2009)
7. Giannarou, S., Visentini-Scarzanella, M., Yang, G.: Probabilistic Tracking of Affine-Invariant Anisotropic Regions. IEEE Transactions on Pattern Analysis and Machine Intelligence (1) (2013)
8. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016)
9. Guo, X., Li, W., Iorio, F.: Convolutional Neural Networks for Steady Flow Approximation. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16. ACM (2016)
10. Hambli, R., Chamekh, A., Salah, H.B.H.: Real-time deformation of structure using finite element and neural networks in virtual reality applications. Finite Elements in Analysis and Design (2006)
11. Kenngott, H.G., Wünscher, J.J., Wagner, M., Preukschas, A., Wekerle, A.L., Neher, P., Suwelack, S., Speidel, S., Nickel, F., Oladokun, D., Maier-Hein, L., Dillmann, R., Meinzer, H.P., Müller-Stich, B.P.: OpenHELP (Heidelberg laparoscopy phantom): development of an open-source surgical evaluation and training tool. Surgical Endoscopy (2015)
12. Koo, B., Özgür, E., Le Roy, B., Buc, E., Bartoli, A.: Deformable Registration of a Preoperative 3D Liver Volume to a Laparoscopy Image Using Contour and Shading Cues. In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2017. Springer International Publishing (2017)
13. Lorente, D., Martínez-Martínez, F., Rupérez, M.J., Lago, M.A., Martínez-Sober, M., Escandell-Montero, P., Martínez-Martínez, J.M., Martínez-Sanchis, S., Serrano-López, A.J., Monserrat, C., Martín-Guerrero, J.D.: A framework for modelling the biomechanical behaviour of the human liver during breathing in real time using machine learning. Expert Systems with Applications (2017)
14. Malinen, M., Råback, P.: Elmer Finite Element Solver for Multiphysics and Multiscale Problems, pp. 101-113. Multiscale Modelling Methods for Applications in Materials Science. Forschungszentrum Jülich (2013)
15. Martínez-Martínez, F., Rupérez-Moreno, M.J., Martínez-Sober, M., Solves-Llorens, J.A., Lorente, D., Serrano-López, A.J., Martínez-Sanchis, S., Monserrat, C., Martín-Guerrero, J.D.: A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time. Computers in Biology and Medicine (2017)
16. Mendizabal, A., Duparc, R.B., Bui, H.P., Paulus, C.J., Peterlik, I., Cotin, S.: Face-based smoothed finite element method for real-time simulation of soft tissue. Proc. SPIE (2017)
17. Misra, S., Macura, K.J., Ramesh, K.T., Okamura, A.M.: The importance of organ geometry and boundary constraints for planning of medical interventions. Medical Engineering & Physics (2) (2009)
18. Morooka, K., Chen, X., Kurazume, R., Uchida, S., Hara, K., Iwashita, Y., Hashizume, M.: Real-Time Nonlinear FEM with Neural Network for Simulating Soft Organ Model Deformation. In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2008. Springer Berlin Heidelberg (2008)
19. Myronenko, A., Song, X., Carreira-Perpiñán, M.Á.: Non-rigid point set registration: Coherent Point Drift. Advances in Neural Information Processing Systems (2006)
20. Peterlik, I., Courtecuisse, H., Rohling, R., Abolmaesumi, P., Nguan, C., Cotin, S., Salcudean, S.E.: Fast Elastic Registration of Soft Tissues under Large Deformations. Medical Image Analysis (2017)
21. Peterlik, I., Haouchine, N., Ručka, L., Cotin, S.: Image-driven Stochastic Identification of Boundary Conditions for Predictive Simulation. In: 20th International Conference on Medical Image Computing and Computer Assisted Intervention. Québec, Canada (2017)
22. Plantefève, R., Peterlik, I., Haouchine, N., Cotin, S.: Patient-Specific Biomechanical Modeling for Guidance During Minimally-Invasive Hepatic Surgery. Annals of Biomedical Engineering (1) (2016)
23. Rechowicz, K.J., McKenzie, F.D.: Development and validation methodology of the Nuss procedure surgical planner. SIMULATION (12) (2013)
24. Reichard, D., Häntsch, D., Bodenstedt, S., Suwelack, S., Wagner, M., Kenngott, H., Müller-Stich, B., Maier-Hein, L., Dillmann, R., Speidel, S.: Projective biomechanical depth matching for soft-tissue registration in ... International Journal of Computer Assisted Radiology and Surgery (IJCARS) (7) (2017)
25. Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS, vol. 9351. Springer (2015)
26. Simpson, A.L., Dumpuri, P., Jarnagin, W.R., Miga, M.I.: Model-Assisted Image-Guided Liver Surgery Using Sparse Intraoperative Data.
Springer Berlin Heidelberg, Berlin, Hei-delberg (2012)27. Suwelack, S., R¨ohl, S., Bodenstedt, S., Reichard, D., Dillmann, R., dos Santos, T., Maier-Hein, L., Wagner, M., W¨unscher, J., Kenngott, H., M¨uller, B.P., Speidel, S.: Physics basedshape matching for intraoperative image guidance. Medical Physics (41) (2014)28. Suwelack, S., Talbot, H., R¨ohl, S., Dillmann, R., Speidel, S.: A biomechanical liver modelfor intraoperative soft tissue registration. Progress in Biomedical Optics and Imaging -Proceedings of SPIE (2011)29. Tonutti, M., Gras, G., Yang, G.Z.: A machine learning approach for real-time modellingof tissue deformation in image-guided neurosurgery. Artificial Intelligence in Medicine (2017)30. Wu, J., Westermann, R., Dick, C.: Real-Time Haptic cutting of high-resolution soft tissues.Studies in health technology and informatics (2014)31. Yamamoto, U., Nakao, M., Ohzeki, M., Matsuda, T.: Deformation estimation of an elasticobject by partial observation using a neural network. CoRR abs/1711.10157abs/1711.10157