Lagrangian Neural Style Transfer for Fluids
Byungsoo Kim, Vinicius C. Azevedo, Markus Gross, Barbara Solenthaler
Fig. 1. Our Lagrangian neural style transfer enables novel artistic manipulations, such as time-coherent stylization of smoke, multiple fluids and liquids.
Artistically controlling the shape, motion and appearance of fluid simulations poses major challenges in visual effects production. In this paper, we present a neural style transfer approach from images to 3D fluids formulated in a Lagrangian viewpoint. Using particles for style transfer has unique benefits compared to grid-based techniques. Attributes are stored on the particles and hence are trivially transported by the particle motion. This intrinsically ensures temporal consistency of the optimized stylized structure and notably improves the resulting quality. Simultaneously, the expensive, recursive alignment of stylization velocity fields of grid approaches is unnecessary, reducing the computation time to less than an hour and rendering neural flow stylization practical in production settings. Moreover, the Lagrangian representation improves artistic control as it allows for multi-fluid stylization and consistent color transfer from images, and the generality of the method enables stylization of smoke and liquids likewise.

CCS Concepts: • Computing methodologies → Physical simulation; Neural networks.

Additional Key Words and Phrases: physically-based animation, fluid simulation, deep learning, neural style transfer
Authors' addresses: Byungsoo Kim, ETH Zurich, [email protected]; Vinicius C. Azevedo, ETH Zurich, [email protected]; Markus Gross, ETH Zurich, [email protected]; Barbara Solenthaler, ETH Zurich, [email protected].

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
© 2020 Association for Computing Machinery.
0730-0301/2020/7-ART1 $15.00
https://doi.org/10.1145/3386569.3392473
ACM Reference Format:
Byungsoo Kim, Vinicius C. Azevedo, Markus Gross, and Barbara Solenthaler. 2020. Lagrangian Neural Style Transfer for Fluids. ACM Trans. Graph. 39, 4, Article 1 (July 2020), 10 pages. https://doi.org/10.1145/3386569.3392473
1 INTRODUCTION
In visual effects production, physics-based simulations are not only used to realistically re-create natural phenomena, but also as a tool to convey stories and trigger emotions. Hence, artistically controlling the shape, motion and appearance of simulations is essential for providing directability for physics. Specifically to fluids, the major challenge is the non-linearity of the underlying fluid motion equations, which makes optimizations towards a desired target difficult. Keyframe matching either through expensive fully-optimized simulations [McNamara et al. 2004; Pan and Manocha 2017; Treuille et al. 2003] or simpler distance-based forces [Nielsen and Bridson 2011; Raveendran et al. 2012] provides control over the shape of fluids. The fluid motion can be enhanced with turbulence synthesis approaches [Kim et al. 2008; Sato et al. 2018] or guided by coarse grid simulations [Nielsen and Bridson 2011], while patch-based texture composition [Gagnon et al. 2019; Jamriška et al. 2015] enables manipulation over appearance by automatic transfer of input 2D image patterns.

The recently introduced Transport-based Neural Style Transfer (TNST) [Kim et al. 2019a] takes flow appearance and motion control to a new level: arbitrary styles and semantic structures given by 2D input images are automatically transferred to 3D smoke simulations. The achieved effects range from natural turbulent structures
Fig. 2. Overview of our LNST method. We optimize particle positions $x^\circ$ and attributes $\lambda^\circ$ to stylize a given density field $d^*$. We transfer information from particles to the grid with the splatting operation $\mathcal{I}_{p \to g}$, and jointly update loss functions and attributes. The black arrows show the direction of the feed-forward pass to the loss network $\mathcal{L}$, and the gray arrows indicate backpropagation for computing gradients. For grid-based simulation inputs, we sample and re-simulate particles in a multi-scale manner (Algorithm 1).

to complex artistic patterns and intricate motifs. The method extends traditional image-based Neural Style Transfer [Gatys et al. 2016] by reformulating it as a transport-based optimization. Thus, TNST is physically inspired, as it computes the density transport from a source input smoke to a desired target configuration, allowing control over the amount of dissipated smoke during the stylization process. However, TNST faces challenges when dealing with time coherency due to its grid-based discretization. The stylization velocity field is computed independently for each time step, and the individually computed velocities are recursively aligned for a given window size. Large window sizes are required, rendering the recursive computation expensive while still accumulating inaccuracies in the alignment that can manifest as discontinuities. Moreover, transport-based style transfer is only able to advect density values that are present in the original simulation, and therefore it does not inherently support color information or stylizations that undergo heavy structural changes.

Thus, in this work, we reformulate Neural Style Transfer in a Lagrangian setting (see Figure 2), demonstrating its superior properties compared to its Eulerian counterpart. In our Lagrangian formulation, we optimize per-particle attributes such as positions, densities and color.
This intrinsically ensures better temporal consistency, as shown for example in Figure 3, eliminating the need for the expensive recursive alignment of stylization velocity fields. The Lagrangian approach reduces the computational cost of enforcing time coherency, increasing the speed of results from one day to a single hour. The Lagrangian style transfer framework is completely oblivious to the underlying fluid solver type. Since the loss function is based on filter activations from pre-trained classification networks, we transfer the information back and forth between particles and grids, where loss functions and attributes can be jointly updated. We propose regularization strategies that help to conserve the mass of the underlying simulations, avoiding oversampling of stylization particles. Our results demonstrate novel artistic manipulations, such as stylization of liquids, color stylization, stylization of multiple fluids, and time-varying stylization.

Fig. 3. Neural color stylization [Christen et al. 2019] using the input RedCanna applied to a smoke scene with TNST (top) and LNST (bottom). The close-up views (dashed box, frames 60 and 66) reveal that LNST is more time-coherent than TNST (dashed circle).
2 RELATED WORK
Lagrangian Fluids have become popular for simulating incompressible fluids and interactions with various materials. Since the introduction of SPH to computer graphics [Desbrun and Gascuel 1996; Müller et al. 2003], various extensions have been presented that made it possible to efficiently simulate millions of particles on a single desktop computer. Accordingly, particle methods have reached an unprecedented level of visual quality, where fine-scale surface effects and flow details are reliably captured. To enforce incompressibility, the original state equation based method [Becker and Teschner 2007; Monaghan 2005] has been replaced by pressure Poisson equation (PPE) solvers using either a single source term for density invariance [Ihmsen et al. 2014; Solenthaler and Pajarola 2009] or two PPEs to additionally account for divergence-free velocities [Bender and Koschier 2015]. Solvers closely related to PPE have been presented, such as Local Poisson SPH [He et al. 2012], Constraint Fluids [Servin
et al. 2012] and Position-based Fluids [Macklin and Mueller 2013]. Boundary handling is computed with particle-based approaches that sample boundary geometry (e.g. [Gissler et al. 2019]) or implicit methods that typically use a signed distance field (e.g. [Koschier and Bender 2017]). Extensions include highly viscous fluids (e.g. [Peer et al. 2015]), and multiple phases and fluid mixing (e.g. [Ren et al. 2014]). An overview of recent developments in SPH can be found in the course notes of Koschier et al. [2019].
Hybrid Lagrangian-Eulerian Fluids combine the versatility of the particle representation for tracking transported quantities with the capacity of grids to enforce incompressibility. Among popular approaches, the Fluid Implicit Particle method (FLIP) [Brackbill et al. 1988] was first employed in graphics to animate sand and water [Zhu and Bridson 2005]. Due to its ability to accurately capture sub-grid details, it has been widely adopted for liquid simulations, being extended to the animation of turbulent water [Kim et al. 2006], coupled with SPH for modelling small-scale splashes [Losasso et al. 2008], improved for efficiency [Ando et al. 2013; Ferstl et al. 2016], used in fluid control [Pan et al. 2013], and enhanced with better particle distributions [Ando and Tsuruno 2011; Um et al. 2014]. The Material Point Method (MPM) [Stomakhin et al. 2013] was used to simulate a wide class of solid materials [Jiang et al. 2016]. Recent work on hybrid approaches extended the information tracked by the particles by affine [Jiang et al. 2015] and polynomial [Fu et al. 2017] transformations. For a thorough discussion of hybrid continuum models, we refer to Hu et al. [2019b].
Patch-based Appearance Transfer methods compute similarities between source and target datasets in local neighborhoods, modifying the appearance of the source by transferring best-matched features from the target dataset. Kwatra et al. [2005] employ local similarity measures in an energy-based optimization, enabling texture patches animated by flow fields. This approach was further extended to liquid surfaces [Bargteil et al. 2006; Kwatra et al. 2006], and improved by modifying the texture based on visually salient features of the liquid mesh [Narain et al. 2007]. Jamriška et al. [2015] improved previous work with better temporal coherency and matching precision for obtaining high-quality 2D textured fluids. Texturing liquid simulations was also implemented in a Lagrangian framework by using individually tracked surface patches [Gagnon et al. 2016, 2019; Yu et al. 2011]. Image and video-based approaches also take inspiration from fluid transport. Bousseau et al. [2007] proposed a bidirectional advection scheme to reduce patch distortions. Regenerative morphing and image melding techniques were combined with patch-based tracking to produce in-betweens for artist-stylized keyframes [Browning et al. 2014]. Recent advances in patch-based appearance transfer often rely on evaluating the underlying 3D geometric information; examples include improving template matching by a novel similarity measure [Talmi et al. 2017], patch matching for illumination effects [Fišer et al. 2016], extensions to texture mapping [Bi et al. 2017] and intricate texture motifs [Diamanti et al. 2015]. While these approaches were successful in 2D settings and for texturing liquids, they cannot inherently support 3D volumetric data.
Velocity Synthesis methods augment flow simulations with velocity fields, which manipulate or enhance volumetric data. Due to the inability of pressure-velocity formulations to properly conserve different energy scales of flow phenomena, sub-grid turbulence [Kim et al. 2008; Narain et al. 2008; Schechter and Bridson 2008] was modelled for better energy conservation. These approaches were extended to model turbulence in the wake of solid boundaries [Pfaff et al. 2009], liquid surfaces [Kim et al. 2013] and example-based turbulence synthesis [Sato et al. 2018]. In order to merge fluids of different simulation instances [Thuerey 2016] or separated by void regions [Sato et al. 2018], velocity fields were synthesized by solving an unconstrained energy minimization problem. Lastly, the Transport-based Neural Style Transfer (TNST) [Kim et al. 2019a] can also be seen as a velocity synthesis method: at each time step, the method optimizes a velocity field that transports the smoke towards a desired stylization.
Machine Learning & Fluids was first introduced to graphics by Ladický et al. [2015]. They used Regression Forests to predict positions of fluid particles over time, resulting in a substantial performance gain compared to traditional Lagrangian solvers. CNN-based architectures were employed in Eulerian-based solvers to substitute the pressure projection step [Tompson et al. 2017; Yang et al. 2016] and to synthesize flow simulations from a set of reduced parameters [Kim et al. 2019b]. An LSTM architecture [Wiewel et al. 2019] predicted changes on pressure fields for multiple subsequent time steps, speeding up the pressure projection step. Differentiable fluid solvers [Holl et al. 2020; Hu et al. 2020, 2019a; Schenck and Fox 2018] have been introduced that can be automatically coupled with deep learning architectures and provide a natural interface for image-based applications. Patch-based [Chu and Thuerey 2017] and GAN-based [Xie et al. 2018] fluid super-resolution enhance coarse simulations with rich turbulence details, while also being computationally inexpensive. While these approaches produce detailed, high-quality results, they do not support transfer of arbitrary smoke styles.
Differentiable Rendering and Stylization is used in Neural Style Transfer algorithms to transfer the style of a source image to a target image by matching features of a pre-trained classification network [Gatys et al. 2016]. However, stylizing 3D data requires a differentiable renderer to map the representation to image space. Loper and Black [2014] proposed the first fully differentiable renderer with automatically computed derivatives, while a novel differentiable volume sampling was implemented by Yan et al. [2016]. Raster-based differentiable rendering of meshes for stylization with approximate [Kato et al. 2018] and analytic [Liu et al. 2018] derivatives was proposed to approximate visibility changes and mesh filters, respectively. A cubic stylization algorithm [Liu and Jacobson 2019] was implemented by minimizing a constrained energy formulation and applied to mesh stylization. Closer to our work, Kim et al. [2019a] define an Eulerian framework for a transport-based neural style transfer of smoke. Their approach computes individually stylized velocity fields per frame, and temporal coherence is enforced by aligning subsequent stylization velocity fields and performing smoothing. We compare the Eulerian approach with our method in the subsequent sections. For an overview of differentiable rendering and neural style transfer, we refer to Yifan et al. [2019] and Jing et al. [2019], respectively.
3 TRANSPORT-BASED NEURAL STYLE TRANSFER
We briefly review the previous Eulerian-based TNST [Kim et al. 2019a] for completeness and to better compare against our novel Lagrangian approach. Transport-based Neural Style Transfer (TNST) extends the original NST algorithm to transfer the style of a given image to a flow-based 3D smoke density. As opposed to NST, where individual pixels of the target image are optimized, TNST optimizes a velocity field that modifies density values through indirect smoke transport. The velocity field $\hat{v}$ that stylizes the input density $d$ is defined by a loss function $\mathcal{L}$ computed from a pre-trained image classification CNN as

$$\hat{v} = \arg\min_{v} \sum_{\theta \in \Theta} \mathcal{L}\big(\mathcal{R}_\theta(\mathcal{T}(d, v)), p\big), \qquad (1)$$

where $\mathcal{T}$ is a transport function that advects $d$ with $v$, generating the stylized density $\hat{d} = \mathcal{T}(d, \hat{v})$; $\mathcal{R}$ is a differentiable renderer converting the density field to image space for a specific view $\theta$ by $I = \mathcal{R}_\theta(\hat{d})$, and $p$ denotes the set of user-defined parameters used in the stylization process. The velocity field contributions are individually computed per view, resulting in a 3D volumetric smoke stylization. While the authors separate the velocity field into its irrotational and incompressible parts, which can be optimized independently, we omit this here for simplicity.

The loss function is subdivided into semantic and style losses for additional control over artistic stylization given a rendered density field. Style transfer considers an input image and user-selected activation layers (levels of features), while semantic transfer selects a CNN layer with desirable attributes that will be transferred to the target stylized smoke. Since the smoke is advected towards a target objective, this guarantees that the original smoke shape and semantics are enforced without matching its original content loss, as in traditional NST algorithms [Gatys et al. 2015].
For simplicity, we restrict our discussion to the style loss, which is given by

$$\mathcal{L}_s(I, p_s) = \sum_l^{L} \left[ \frac{1}{C_l (H_l \times W_l) C_l} \sum_{m,n} \left( G^l_{mn}(I) - G^l_{mn}(I_s) \right)^2 \right], \qquad (2)$$

where the Gram matrix $G$ computes correlations between different filter responses. The Gram matrix is calculated for a given layer $l$ and two channels $m$ and $n$ by iterating over all pixels of the flattened 1-D feature map $\hat{F}^l(I)$ as

$$G^l_{mn}(I) = \sum_i^{H_l \times W_l} \hat{F}^l_{mi}(I)\, \hat{F}^l_{ni}(I). \qquad (3)$$

Extending the single-frame stylization in a time-coherent fashion is expensive and inaccurate when computed in an Eulerian framework. TNST aligns stylization velocities by recursively advecting them with the simulation velocities for a given window size, as shown in Figure 4. The recursive nature renders this computation inefficient time- and memory-wise, especially when large window sizes are employed to enable smooth transitions between consecutive frames. Due to the large memory requirement, this operation often has to be computed on the CPU, which generates additional overhead by the use of expensive data transfer operations.

Fig. 4. Recursive temporal alignment in TNST. For a window size $w$, $(w - 1)/2$ recursive temporal alignment steps are performed for each stylization velocity $\hat{v}$. Colors indicate the distance to frame $t$, and arrows refer to advection steps (with recursive steps shown as dashed lines).

4 LAGRANGIAN NEURAL STYLE TRANSFER
In contrast to its Eulerian counterpart, the Lagrangian representation uses particles that carry quantities such as position, density and color value. Neural style transfer methods compute loss functions based on filter activations from pre-trained classification networks, which are trained on image datasets. Thus, we have to transfer the information back and forth between particles and grids, where loss functions and attributes can be jointly updated.
We take inspiration from hybrid Lagrangian-Eulerian fluid simulation pipelines that use grid-to-particle $\mathcal{I}_{g \to p}$ and particle-to-grid $\mathcal{I}_{p \to g}$ transfers as

$$\lambda^\circ = \mathcal{I}_{g \to p}(x^\circ, \lambda^+) \quad \text{and} \quad \lambda^+ = \mathcal{I}_{p \to g}(x^\circ, \lambda^\circ, h, x^+), \qquad (4)$$

where $\lambda^\circ$ and $\lambda^+$ are attributes defined on the particles and the grid, respectively, $x^\circ$ refers to all particle positions, $x^+$ are the grid nodes to which values are transferred, and $h$ is the support size of the particle-to-grid transfer.

Our grid-to-particle transfer employs a regular grid cubic interpolant, while the particle-to-grid transfer uses standard radial basis functions. Regular Cartesian grids facilitate finding grid vertices around an arbitrary particle position. For this, we extended a differentiable point cloud projector [Insafutdinov and Dosovitskiy 2018] to arbitrary grid resolutions, neighborhood sizes and custom kernel functions. Given all the neighboring particles $j \in \partial\Omega_x$ around a grid node $x$, a grid attribute $\lambda^+$ is computed by summing up weighted particle contributions as

$$\lambda^+(x) = \frac{\sum_{j \in \partial\Omega_x} \lambda^\circ_j\, W(\|x - x^\circ_j\|, h)}{\sum_{j \in \partial\Omega_x} W(\|x - x^\circ_j\|, h)}, \qquad (5)$$

where we chose $W$ to be the cubic B-spline kernel, which is also often used in SPH simulations [Monaghan 2005]:

$$W_{\text{cubic}}(r, h) = \begin{cases} 6(q^3 - q^2) + 1, & 0 \le q \le \tfrac{1}{2}, \\ 2(1 - q)^3, & \tfrac{1}{2} < q \le 1, \\ 0, & q > 1, \end{cases} \qquad q = r/h. \qquad (6)$$

We now have all the necessary elements to convert the previous Eulerian style transfer (Equation (1)) into a Lagrangian framework. Given a set of Lagrangian attributes $\Lambda^\circ$, the optimization objective for a single frame is

$$\hat{\Lambda}^\circ = \arg\min_{\Lambda^\circ} \sum_{\theta \in \Theta} \sum_{\lambda^\circ \in \Lambda^\circ} w_{\lambda^\circ}\, \mathcal{L}\big(\mathcal{R}_\theta(\mathcal{I}_{p \to g}(x^\circ, \lambda^\circ)), p\big), \qquad (7)$$

where $w_{\lambda^\circ}$ are weights for the losses that include Lagrangian attributes. In case the particle positions $x^\circ$ are given as the target quantity $\lambda^\circ$, we use the SPH density $\mathcal{I}_{p \to g}(x^\circ) = \sum_{j \in \partial\Omega_x} m_j W(\|x - x^\circ_j\|, h)$, where $m_j$ represents the mass of the $j$-th particle [Bender 2016].
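As a concrete illustration of the particle-to-grid transfer in Equations (5) and (6), the following NumPy sketch splats particle attributes onto grid nodes with the normalized cubic B-spline kernel. It is our own minimal reconstruction: the function names are hypothetical, the neighbor search is a brute-force pairwise distance computation rather than the paper's differentiable point-cloud projector, and nothing here is auto-differentiable.

```python
import numpy as np

def w_cubic(r, h):
    """Cubic B-spline kernel shape of Equation (6), q = r/h."""
    q = r / h
    out = np.zeros_like(q)
    m1 = q <= 0.5
    m2 = (q > 0.5) & (q <= 1.0)
    out[m1] = 6.0 * (q[m1] ** 3 - q[m1] ** 2) + 1.0
    out[m2] = 2.0 * (1.0 - q[m2]) ** 3
    return out

def splat_p2g(x_p, lam_p, x_g, h):
    """Normalized particle-to-grid transfer of Equation (5).

    x_p: (N, 3) particle positions; lam_p: (N,) particle attributes;
    x_g: (M, 3) grid node positions; h: kernel support radius.
    Brute-force over all pairs; a real implementation would bin
    particles into cells to find the neighbors j in dOmega_x.
    """
    d = np.linalg.norm(x_g[:, None, :] - x_p[None, :, :], axis=-1)  # (M, N)
    w = w_cubic(d, h)
    den = w.sum(axis=1)
    lam_g = (w * lam_p[None, :]).sum(axis=1)
    return np.where(den > 0, lam_g / np.maximum(den, 1e-12), 0.0)
```

Because of the normalization, a particle sitting exactly on a grid node reproduces its attribute there, while nodes farther than $h$ from all particles receive nothing.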
Note that our losses are evaluated similarly as in the original Eulerian method, since the gradients computed in image space also modify grid values ($\lambda^+$). However, these gradients are automatically propagated back to the particles by auto-differentiating the particle-to-grid function $\mathcal{I}_{p \to g}$. Thus, our method only reformulates the domain of the optimization, sharing the same stylization possibilities (semantic and content transfers) as the original TNST.

Since the Lagrangian optimization is completely oblivious to the underlying solver type, the chosen attributes for creating stylizations can be arbitrarily combined, enabling a wide range of artistic manipulations in different scene setups. We outline two strategies and demonstrate their impact on the stylization. The first one is particularly suitable for participating volumetric data, which are often simulated with grid-based solvers. It involves optimizing a scalar value carried by the Lagrangian stylization particles through Equation (7). For most of our smoke scenes, this scalar value is the density, though it can also be the color or emission. The regularization term

$$\mathcal{L}(\lambda^\circ)_\rho = \Big(\sum \Delta\lambda^\circ\Big)^2 - \sum \log \|\Delta\lambda^\circ\| \qquad (8)$$

reinforces the conservation of the original amount of smoke. It minimizes the total net smoke change, preventing the stylization from undesirably fading out particles, while keeping changes non-zero by minimizing a cross-entropy loss at the same time. Figure 5 demonstrates the impact of different regularizer weights.

Fig. 5. Different weights for the density regularization show the trade-off between pronounced structures and conservation of mass. The images on the left show results with zero, low, and high weights, respectively, and the right image is the ground truth.
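A minimal sketch of how the density regularizer might be evaluated, following our reading of Equation (8) and assuming $\Delta\lambda^\circ$ is simply the vector of per-particle attribute changes; the `eps` guard on the logarithm is our own numerical addition, not part of the paper.

```python
import numpy as np

def density_regularizer(delta_lam, eps=1e-8):
    """Mass-conservation regularizer sketch (Equation (8)).

    The squared net change penalizes any overall gain or loss of
    smoke, while the negative log term keeps individual per-particle
    changes away from zero.
    """
    delta_lam = np.asarray(delta_lam, dtype=float)
    net = np.sum(delta_lam) ** 2
    spread = np.sum(np.log(np.abs(delta_lam) + eps))
    return net - spread
```

A balanced set of changes (equal amounts of added and removed smoke) keeps the first term at zero, while near-zero changes blow up the log term, matching the trade-off shown in Figure 5.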
The second strategy is suitable if the underlying fluid solver is particle-based or hybrid, which is often the case for liquids. For these simulations, we can define particle position displacements as the optimized Lagrangian attributes. However, generating stylizations by modifying particle displacements may cause cluttering or regions with insufficient particles. The regularization penalizes irregular distributions of particle positions and is defined as

$$\mathcal{L}(x^\circ)_{\Delta x} = \|\mathcal{I}_{p \to g}(x^\circ) - \rho^+\|, \qquad (9)$$

where $\rho^+$ corresponds to the rest density for cells that contain particles, and is zero otherwise. Note that Equation (9) does not account for the particle deficiency near fluid surfaces. This could be addressed by adding virtual particles [Schechter and Bridson 2008] or applying (variants of) the Shepard correction to the kernel function [Reinhardt et al. 2019]. We show the impact of this regularizer on the particle sampling in Figure 6, highlighting the trade-off between uniform distribution and stylization strength.

Fig. 6. Different weights for the position regularization show the trade-off between pronounced structures and uniform sampling. The images on the left show results with zero, low, and high weights, respectively, and the right image is the ground truth.
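Analogously, Equation (9) can be sketched by splatting an SPH density from the particle positions and penalizing its deviation from the rest density. Again this is a brute-force, non-differentiable stand-in for the paper's pipeline, with hypothetical names and an assumed norm.

```python
import numpy as np

def sph_density(x_p, m_p, x_g, h):
    """SPH density at grid nodes: rho(x) = sum_j m_j W(||x - x_j||, h),
    using the (unnormalized) cubic B-spline shape of Equation (6)."""
    d = np.linalg.norm(x_g[:, None, :] - x_p[None, :, :], axis=-1)
    q = d / h
    w = np.where(q <= 0.5, 6 * (q**3 - q**2) + 1,
        np.where(q <= 1.0, 2 * (1 - q)**3, 0.0))
    return (w * m_p[None, :]).sum(axis=1)

def position_regularizer(x_p, m_p, x_g, h, rho_rest):
    """Equation (9) sketch: distance between the splatted density and
    the rest density (which is zero for empty cells)."""
    return np.linalg.norm(sph_density(x_p, m_p, x_g, h) - rho_rest)
```

Particles sitting where the rest density expects them yield a near-zero penalty; drifting away from occupied cells increases it.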
We notice that both regularizations in Equations (8) and (9) are different incarnations of the mass conservation property commonly used in fluid simulations. In TNST, mass conservation is enforced by decomposing the stylization velocities into their irrotational and incompressible parts, which can be optimized independently. Both techniques enable a high degree of artistic control over the content manipulation.
If the input is a grid-based simulation, we have to sample and re-simulate particles. We can use a sparse representation with only one particle per voxel, in contrast to hybrid liquid simulations that usually sample 8 particles per voxel to properly capture momentum conservation [Zhu and Bridson 2005]. Combining a low number of particles with a position integration algorithm that accumulates errors over time will yield irregularly distributed particles [Ando and Tsuruno 2011]. This manifests in a rendered image as smoke with overly dense or void regions. We therefore solve the following optimization problem:

$$\hat{x}^\circ, \hat{\rho}^\circ = \arg\min_{x^\circ, \rho^\circ} \sum_t \|\mathcal{I}_{p \to g}(x^\circ_t, \rho^\circ_t) - \rho^+_t\|. \qquad (10)$$

The optimization problem presented above is not only severely under-constrained, but also has a time-varying objective term, and optimizing Equation (10) is challenging if tackled jointly for both particle positions $x^\circ$ and densities $\rho^\circ$. Thus, we use a heuristic approach for solving this optimization, subdividing it into two steps: position optimization and multi-scale density update (Section 4.1.1). Firstly, we minimize the irregular distribution of particle positions by employing a position-based update, optimizing particle distributions using Equation (9) as the objective. The distribution of the particles is optimized per frame and serves as an input for optimizing subsequent frames, enabling temporally coherent position updates. Equation (9) can be automatically computed by our fully differentiable pipeline. In addition to the position update, we also compute smoke densities individually carried by the particles to further eliminate small gaps that may appear due to the sparse discretization, further enhancing the solution of Equation (10). Owing to the low number of sampled particles and the mismatches
Fig. 7. Comparison of different re-simulation strategies. (a): ground truth density, (b): constant density carried by particles, (c): (b) with redistribution by Equation (9), (d): single-scale sampled density, (e): (d) with redistribution, (f): multi-scale ($n_s$) sampled density with redistribution (final method).

between grid and particle transfers, carrying a constant density will either produce grainy (Figure 7, (b)) or diffuse (Figure 7, (c)) volumetric representations, depending on whether particle re-distribution (Equation (9)) is applied or not. A simple approach is to interpolate density values directly from the grid over time. Larger kernel sizes could be used to remedy sparse sampling, but would excessively smooth structures and degrade quality.

We take inspiration from Laplacian pyramids, where distinct grid resolution levels are treated separately. In our case, we compute residuals of different support kernel sizes of the particle-to-grid transfer. This efficiently captures both low- and high-frequency information, covering potentially empty smoke regions while also providing sharp reconstruction results. The residual computation of kernels of varying support sizes is synergistically coupled with matching grid resolutions, which creates an efficient multi-scale representation of the smoke.

The multi-scale reconstruction works as follows: we first sample grid densities to the particles. This represents the smoke's low-frequency information, which we interpolate to the particle variables $\rho^\circ_0$. The variables above the first level (e.g., $\rho^\circ_1$, $\rho^\circ_2$) will carry residual information computed between subsequent levels. The Lagrangian representations vary between each level because they perform grid-to-particle transfers with progressively reduced kernel support sizes. To compare residuals between Lagrangian representations, we make use of particle-to-grid transfers, which act as a low-pass filter, similarly to the blurring operations of Laplacian pyramids.
This process is performed until the original grid resolution is matched. Our multi-scale density representation is summarized in Algorithm 1. Figure 7 illustrates the impact of using a single scale without (d) and with (e) particle re-distribution (Equation (9)). The multi-scale result with re-distribution (f), which corresponds to our final method, has a higher PSNR (31.89) than its single-scale counterpart (31.39) and is very close to the ground truth (a).

Algorithm 1: Multi-scale Density Reconstruction
Data: Particle positions $x^\circ$ optimized by Equation (9);
      original grid-based smoke simulation $\rho^+$;
      grid node positions $x^+$;
      coarsest support kernel radius $r$;
      number of pyramid subdivisions $n_s$
Result: Multi-scale residual densities $\rho^\circ$ stored on particles
  $\rho^\circ_0 \leftarrow \mathcal{I}_{g \to p}(x^\circ, \rho^+)$
  $\rho^+_* \leftarrow \mathcal{I}_{p \to g}(x^\circ, \rho^\circ_0, r, x^+)$
  for $i \leftarrow 1$ to $n_s$ do
      $\rho^+_\Delta \leftarrow \rho^+ - \rho^+_*$
      $\rho^\circ_i \leftarrow \mathcal{I}_{g \to p}(x^\circ, \rho^+_\Delta)$
      $r \leftarrow r / 2$
      $\rho^+_* \leftarrow \rho^+_* + \mathcal{I}_{p \to g}(x^\circ, \rho^\circ_i, r, x^+)$
  end

The major advantage of our Lagrangian discretization is the inexpensive enforcement of temporal coherency. Since quantities are carried individually per particle, it is intrinsically simple to track how attributes change over time. Neural style gradients are computed on the grid and need to be updated once the neighborhood of a particle changes. To ensure smooth transitions, we apply a Gaussian filter over the density changes of a particle, as shown in Figure 8. Besides being sensitive to density neighborhood changes, stylization gradients are also influenced by the density carried by the particle itself (Section 4.1.1).
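A 1D toy version of Algorithm 1 can be written in NumPy under two assumptions of ours: the radius update (garbled in the source) is read as halving the support per level, and the grid-to-particle interpolation is replaced by linear interpolation instead of the paper's cubic interpolant.

```python
import numpy as np

def w_cubic(r, h):
    """Cubic B-spline kernel shape from Equation (6), q = |r|/h."""
    q = np.abs(r) / h
    return np.where(q <= 0.5, 6 * (q**3 - q**2) + 1,
           np.where(q <= 1.0, 2 * (1 - q)**3, 0.0))

def p2g(x_p, val_p, x_g, h):
    """Normalized 1D particle-to-grid transfer (Equation (5))."""
    w = w_cubic(x_g[:, None] - x_p[None, :], h)
    den = w.sum(axis=1)
    num = (w * val_p[None, :]).sum(axis=1)
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

def g2p(x_p, val_g, x_g):
    """Grid-to-particle transfer; linear interpolation as a stand-in."""
    return np.interp(x_p, x_g, val_g)

def multiscale_density(x_p, rho_g, x_g, r0, n_s):
    """Algorithm 1 in 1D: store residuals of progressively finer
    kernel supports on the particles (support halved per level)."""
    levels = [g2p(x_p, rho_g, x_g)]        # rho_0: coarse base level
    recon = p2g(x_p, levels[0], x_g, r0)   # rho+_*: running reconstruction
    r = r0
    for _ in range(n_s):
        resid = rho_g - recon              # rho+_delta: current residual
        rho_i = g2p(x_p, resid, x_g)       # residual carried by particles
        levels.append(rho_i)
        r = r / 2                          # finer support per level
        recon = recon + p2g(x_p, rho_i, x_g, r)
    return levels, recon
```

Each added level corrects what the coarser splats smoothed away, so the reconstruction error shrinks as levels are added.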
Fig. 8. Particle density (circles) variation for a single particle over time, plotted together with the stylization gradients and the smoothed gradients. Temporal coherency is enforced by smoothing density gradients used for stylization from adjacent frames.
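The temporal smoothing of per-particle stylization gradients can be sketched as a small Gaussian convolution over the frame axis; the filter width and the reflect padding below are our choices, as the paper does not specify them.

```python
import numpy as np

def smooth_gradients(grads, sigma=1.0, radius=2):
    """Gaussian smoothing of one particle's stylization gradients
    over frames; grads has shape (T,). sigma/radius are assumptions."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    # reflect-pad so the first and last frames are smoothed consistently
    padded = np.pad(grads, radius, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")
```

A constant gradient signal passes through unchanged, while a single-frame spike is spread over its neighbors, which is the behavior illustrated in Figure 8.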
To further improve efficiency, and in contrast to TNST, we can keyframe stylizations, i.e., apply the stylization to keyframes and interpolate particle attributes in-between. In practice, we reduced the stylization frames by a factor of 2 at most, but more drastic approximations could be used. Sparse keyframes still show temporally smooth transitions, but quality is degraded. Nevertheless, sparse keyframing would still be useful for generating quick previews of the simulation. The impact of sparse keyframing (every 10 frames) is shown in Figure 9.
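The keyframed stylization described above amounts to interpolating per-particle attributes between stylized frames; here is a linear-interpolation sketch (the paper does not state the interpolation scheme, so the function and its clamping behavior are assumptions):

```python
import numpy as np

def interpolate_keyframes(key_frames, key_values, frame):
    """Linearly interpolate per-particle attributes between stylized
    keyframes.

    key_frames: sorted stylized frame indices, e.g. [0, 10, 20]
    key_values: (K, N) per-particle attributes at those frames
    frame: frame index to reconstruct (clamped to the keyframe range)
    """
    key_frames = np.asarray(key_frames, dtype=float)
    key_values = np.asarray(key_values, dtype=float)
    i = np.searchsorted(key_frames, frame, side="right") - 1
    i = int(np.clip(i, 0, len(key_frames) - 2))
    t = (frame - key_frames[i]) / (key_frames[i + 1] - key_frames[i])
    t = np.clip(t, 0.0, 1.0)
    return (1.0 - t) * key_values[i] + t * key_values[i + 1]
```

Stylizing every k-th frame and filling the rest this way trades a small quality loss for a proportional reduction in optimization cost.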
Fig. 9. Stylization of every frame (left three images) versus keyframed stylization every 10 frames (images on the right). Sparse keyframing is visually similar and can be useful for quick previews.
5 RESULTS
We implemented the method with the TensorFlow framework and computed results on a TITAN Xp GPU (12 GB). We used mantaflow [Thuerey and Pfaff 2018] for smoke scene generation, a 3D smoke dataset from Kim et al. [2019a] for comparisons with TNST, a 2D smoke dataset from Jamriška et al. [2015] for color stylization, SPlisHSPlasH [Bender 2016; Koschier et al. 2019] for liquid simulations, and Houdini for rendering.
Performance. Using particles for stylization eliminates the need for recursively aligning stylization velocities from subsequent frames, which notably improves the computational performance. In combination with our sparse particle representation for smoke (1 particle per cell), simulations of size 200 × ⋯ × 200 can now be stylized within an hour instead of a day (TNST). The computation time per frame is 0.66 minutes for the Smoke Jet scene shown in Figure 11, which is a speed-up of a factor of 20.41 compared to TNST. This improvement allows artists to more easily test different reference structures (input images) and hence renders neural flow stylization better applicable in production environments. Table 1 gives an overview of the timings and parameters for the individual test scenes. Keyframing (every other frame) was applied to the Smoke Jet (Figure 11) and Double Jets (Figure 12) examples.
Table 1. Performance table.

Scene                    | Resolution    | Particles | Time (m/f)
Moving Sphere (Fig. 10)  | – × – × 192   | 237K      | 0.8
Smoke Jet (Fig. 11)      | – × – × 200   | 1.2M      | 0.66
Double Jets (Fig. 12)    | – × – × 200   | 2M/2M     | 0.45
Chocolate (Fig. 13)      | – × – × 200   | 80K       | 0.05
Colored Smoke (Fig. 3)   | – × 800       | 136K      | 1.21
Dam Break (Fig. 14)      | – × –         | –         | –
Double Dam (Fig. 15)     | – × –         | –         | –

Time-coherency.
To illustrate the benefit of the Lagrangian formulation, we use a simple test scene where we initialize a smoke sphere with a uniform density. We then move the smoke artificially to the right, and apply the neural stylization to every frame of the sequence. We compare the results of LNST and TNST for different time instances in Figure 10. The top row shows the results of TNST. It can be seen that TNST is not able to preserve constant stylized textures in regions where the density function does not change. This is due to the recursive alignment of stylization gradients, which accumulates errors, especially for bigger window sizes. The second row shows the corresponding results with LNST, demonstrating consistent stylization over time since gradients are constant. Also when a shearing deformation is applied to the sphere, as shown in the third row, structures remain coherent. If an artist prefers to have changing structures in such situations, noise can be added to the densities carried by the particles, which in turn will induce stylization gradients as shown in the last row.

Fig. 10. Selected frames of a stylized moving smoke sphere. From top to bottom: TNST with structures changing over time, LNST with temporally coherent structures, LNST result with applied shearing, and LNST result with noise-added density inducing style variation over time.
Smoke Stylization. Figure 11 shows a direct comparison of LNST and TNST applied to the smoke jet dataset of Kim et al. [2019a]. Since the resulting structures inherently depend on the underlying representation, they naturally differ and cannot be directly compared with each other. It can be observed, however, that the Lagrangian stylization may lead to more pronounced structures, well visible in the semantic transfer net and the style transfer blue strokes, and that boundaries are smoother, noticeable in the Seated Nude example.
Multi-fluid Stylization. Stylization of multiple fluids is naturally enabled by stylizing different sets of particles with different input images. Figure 12 shows a simulation of two smoke jets colliding, where the left one is stylized with the semantic feature net and the right one with the style transfer of the input image spirals. Transferred structures are retained per fluid type even if the flow undergoes complex mixing effects.
Stylization of Liquids. We use a simple differentiable renderer for stylization of liquids. Unlike the smoke renderer, which integrates media radiance scattered in the medium, we compute the amount of diffused light, i.e., the absorbed light except that transmitted by the liquid volume [Ihmsen et al. 2012], which is given by

$\tau(x, r) = e^{-\gamma \int_0^{x} d(r)\,dr}, \qquad I_{ij} = 1 - \tau(r_{\max}, r).$    (11)
Fig. 11. Semantic transfer applied to the smoke jet simulation of [Kim et al. 2019a] (leftmost column). Stylized results are shown for our LNST (top) and TNST (bottom) for the semantic feature transfer net (second column) and input images blue strokes, Seated Nude, and fire (last three columns).

Fig. 12. Two colliding smoke jets, which are stylized individually with the semantic feature net and input image spirals. The Lagrangian representation enables coherent stylization of multiple fluids even if the flow undergoes complex mixing.

Figure 13 shows the results of a stylized SPH simulation computed with SPlisHSPlasH [Bender 2016]. We applied the patterns spiral and diagonal to a thin sheet simulation.
Fig. 13. Thin sheet SPH simulation computed with SPlisHSPlasH [Bender 2016] stylized with the patterns spiral and diagonal.

Color Transfer.
We transfer color information from input images to flow fields by storing a color value per particle and optimizing it with Equation (7). This can be applied to any grid-based or particle-based smoke or liquid simulation. In Figure 14 we applied the color stylization to a 2D dam break simulation using different example images, and in Figure 15 to two liquids with distinct types (and hence colors). The accompanying videos show that local color structures change very smoothly over time, which is attributed to the improved time-coherency of the Lagrangian stylization. This is especially well visible in Figure 3, where two subsequent frames are shown for TNST and LNST. In this example, we have transferred the style blue strokes to a smoke scene. The close-up views reveal discontinuities for TNST, while LNST shows smooth transitions for color structures.

Image sources: http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html, https://github.com/byungsook/neural-flow-style
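The per-particle color optimization relies on rendering particle colors into image space in a differentiable way. A 1D illustration under assumed density-scaled Gaussian weights (not the paper's exact renderer):

```python
import numpy as np

def splat_colors(x, colors, densities, n_pixels, r=1.0):
    # Blend per-particle RGB colors into pixels with density-scaled Gaussian
    # weights. The output is linear in `colors`, so the gradient of an
    # image-space style loss with respect to each particle's color is just
    # the normalized weight matrix -- exactly what the optimization needs.
    pix = np.arange(n_pixels, dtype=float)[:, None]                   # (n_pixels, 1)
    w = densities[None, :] * np.exp(-((pix - x[None, :]) ** 2) / (2.0 * r * r))
    w = w / (w.sum(axis=1, keepdims=True) + 1e-12)                    # Shepard normalization
    return w @ colors                                                 # (n_pixels, 3)
```

Since the colors live on the particles, they are transported with the flow for free, which is what keeps the color structures in Figures 14 and 15 temporally coherent.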
Fig. 14. Lagrangian color stylization applied to a 2D particle-based liquid simulation using the input images Kanagawa Wave, Red Canna and Starry Night.

Fig. 15. Lagrangian color stylization applied to a mixed 2D particle-based liquid simulation using the input images
Kanagawa Wave and fire.

We have presented a Lagrangian approach for neural flow stylization and have demonstrated benefits with respect to quality (improved temporal coherence), performance (stylization per frame in less than a minute), and art-directability (multi-fluid stylization, color transfer, liquid stylization). A key property of our approach is that it is not restricted to any particular fluid solver type (i.e., grids, particles, hybrid solvers). To enable this, we have introduced a strategy for grid-to-particle transfer (and vice versa) to efficiently update attributes and gradients, and a re-simulation that can be effectively applied to grid and particle fluid representations. This generality of our method facilitates seamless integration of neural style transfer into existing content production workflows.

A current limitation of the method is that we use a simple differentiable renderer for liquids. While this works well for some scenarios, a dedicated differentiable renderer for liquids would improve the resulting quality and especially also support a wider range of liquid simulation setups. Similar to the smoke renderer, such a liquid renderer must be differentiable, as gradients are back-propagated in the optimization. It must also be efficient, as the renderer is invoked in each step of the optimization. Although the complexity of the renderer has a direct influence on the quality of the results, we suspect that, analogously to our smoke renderer, a lightweight renderer that can recover the core flow structures is sufficient for stylizing liquids.

We have shown that LNST enables novel effects and a high degree of art-directability, which renders flow stylization more practical in production workflows. However, we have not tested the method on large-scale simulations that are typically used in such settings. While our method can handle up to 2 million particles, larger scenes are restricted by the available memory.
Moreover, in practical settings the scene complexity is higher, which potentially poses challenges with respect to artist control of the stylization. Reducing the computation time for stylizing an entire simulation from one day with TNST to a single hour with LNST renders the method much more practical for digital artists. However, for testing different input structures, a real-time method would be desirable. Recent concepts presented for neural image stylization might be mapped to 3D simulations to further improve efficiency.
ACKNOWLEDGMENTS
The authors would like to thank Fraser Rothnie for his artistic contributions. The work was supported by the Swiss National Science Foundation under Grant No. 200021_168997.
REFERENCES
Ryoichi Ando, Nils Thürey, and Chris Wojtan. 2013. Highly adaptive liquid simulations on tetrahedral meshes. ACM Transactions on Graphics 32, 4 (jul 2013), 1.
Ryoichi Ando and Reiji Tsuruno. 2011. A particle-based method for preserving fluid sheets. In Proceedings of SCA '11. 7. https://doi.org/10.1145/2019406.2019408
Adam W. Bargteil, Funshing Sin, Jonathan E. Michaels, Tolga G. Goktekin, and James F. O'Brien. 2006. A Texture Synthesis Method for Liquid Animations. In Proceedings of SCA '06. 345–351. http://dl.acm.org/citation.cfm?id=1218064.1218111
Markus Becker and Matthias Teschner. 2007. Weakly compressible SPH for free surface flows. In Symposium on Computer Animation. 1–8.
Jan Bender. 2016. SPlisHSPlasH. https://github.com/InteractiveComputerGraphics/SPlisHSPlasH.
Jan Bender and Dan Koschier. 2015. Divergence-Free Smoothed Particle Hydrodynamics. In Symposium on Computer Animation. 1–9.
Sai Bi, Nima Khademi Kalantari, and Ravi Ramamoorthi. 2017. Patch-based optimization for image-based texture mapping. ACM ToG 36, 4 (jul 2017), 1–11.
Adrien Bousseau, Fabrice Neyret, Joëlle Thollot, and David Salesin. 2007. Video watercolorization using bidirectional texture advection. ACM ToG 26, 3 (2007).
J. U. Brackbill, D. B. Kothe, and H. M. Ruppel. 1988. FLIP: A low-dissipation, particle-in-cell method for fluid flow. Computer Physics Communications 48, 1 (1988), 25–38.
Mark Browning, Connelly Barnes, Samantha Ritter, and Adam Finkelstein. 2014. Stylized keyframe animation of fluid simulations. In Proceedings of the Workshop on Non-Photorealistic Animation and Rendering. ACM, 63–70.
Fabienne Christen, Byungsoo Kim, Vinicius C. Azevedo, and Barbara Solenthaler. 2019. Neural Smoke Stylization with Color Transfer. (dec 2019). arXiv:1912.08757 http://arxiv.org/abs/1912.08757
Mengyu Chu and Nils Thuerey. 2017. Data-driven synthesis of smoke flows with CNN-based feature descriptors. ACM Transactions on Graphics 36, 4 (jul 2017), 1–14.
Mathieu Desbrun and Marie-Paule Gascuel. 1996. Smoothed Particles: A new paradigm for animating highly deformable bodies. In Eurographics Workshop on Computer Animation and Simulation. 61–76.
Olga Diamanti, Connelly Barnes, Sylvain Paris, Eli Shechtman, and Olga Sorkine-Hornung. 2015. Synthesis of Complex Image Appearance from Limited Exemplars. ACM Transactions on Graphics 34, 2 (mar 2015), 1–14.
Florian Ferstl, Ryoichi Ando, Chris Wojtan, Rüdiger Westermann, and Nils Thuerey. 2016. Narrow Band FLIP for Liquid Simulations. CGF 35, 2 (2016), 225–232.
Jakub Fišer, Ondřej Jamriška, Michal Lukáč, Eli Shechtman, Paul Asente, Jingwan Lu, and Daniel Sýkora. 2016. StyLit: illumination-guided example-based stylization of 3D renderings. ACM ToG 35 (2016), 1–11. https://doi.org/10.1145/2897824.2925948
Chuyuan Fu, Qi Guo, Theodore Gast, Chenfanfu Jiang, and Joseph Teran. 2017. A polynomial particle-in-cell method. ACM ToG 36, 6 (nov 2017), 1–12.
Jonathan Gagnon, François Dagenais, and Eric Paquette. 2016. Dynamic lapped texture for fluid simulations. The Visual Computer 32, 6-8 (jun 2016), 901–909.
Jonathan Gagnon, Julián E. Guzmán, Valentin Vervondel, François Dagenais, David Mould, and Eric Paquette. 2019. Distribution Update of Deformable Patches for Texture Synthesis on the Free Surface of Fluids. CGF 38, 7 (2019), 491–500.
Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2015. A neural algorithm of artistic style. Nature Communications (2015).
Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2016. Image Style Transfer Using Convolutional Neural Networks. In Proceedings of the IEEE Conference on CVPR. 2414–2423.
C. Gissler, A. Peer, S. Band, J. Bender, and M. Teschner. 2019. Interlinked SPH pressure solvers for strong fluid-rigid coupling. ACM ToG 38, 1 (2019), 5:1–5:13.
Xiaowei He, Ning Liu, Sheng Li, Hongan Wang, and Guoping Wang. 2012. Local Poisson SPH for Viscous Incompressible Fluids. CGF 31 (2012), 1948–1958.
Philipp Holl, Nils Thuerey, and Vladlen Koltun. 2020. Learning to Control PDEs with Differentiable Physics. In ICLR. https://openreview.net/forum?id=HyeSin4FPB
Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Fredo Durand. 2020. DiffTaichi: Differentiable Programming for Physical Simulation. In ICLR. https://openreview.net/forum?id=B1eB5xSFvr
Yuanming Hu, Jiancheng Liu, Andrew Spielberg, Joshua B. Tenenbaum, William T. Freeman, Jiajun Wu, Daniela Rus, and Wojciech Matusik. 2019a. ChainQueen: A real-time differentiable physical simulator for soft robotics. In ICRA. 6265–6271.
Yuanming Hu, Xinxin Zhang, Ming Gao, and Chenfanfu Jiang. 2019b. On hybrid Lagrangian-Eulerian simulation methods: practical notes and high-performance aspects. In ACM SIGGRAPH 2019 Courses. 16.
Markus Ihmsen, Nadir Akinci, Gizem Akinci, and Matthias Teschner. 2012. Unified spray, foam and air bubbles for particle-based fluids. The Visual Computer 28, 6-8 (2012), 669–677.
Markus Ihmsen, Jens Cornelis, Barbara Solenthaler, Christopher Horvath, and Matthias Teschner. 2014. Implicit incompressible SPH. IEEE TVCG 20, 3 (2014), 426–436.
Eldar Insafutdinov and Alexey Dosovitskiy. 2018. Unsupervised Learning of Shape and Pose with Differentiable Point Clouds. In NeurIPS.
Ondřej Jamriška, Jakub Fišer, Paul Asente, Jingwan Lu, Eli Shechtman, and Daniel Sýkora. 2015. LazyFluids: appearance transfer for fluid animations. ACM Transactions on Graphics (TOG) 34, 4 (2015), 92.
Chenfanfu Jiang, Craig Schroeder, Andrew Selle, Joseph Teran, and Alexey Stomakhin. 2015. The affine particle-in-cell method. ACM ToG 34, 4 (jul 2015), 51:1–51:10.
Chenfanfu Jiang, Craig Schroeder, Joseph Teran, Alexey Stomakhin, and Andrew Selle. 2016. The material point method for simulating continuum materials. In ACM SIGGRAPH 2016 Courses. 1–52.
Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, and Mingli Song. 2019. Neural style transfer: A review. IEEE TVCG (2019).
Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Neural 3D mesh renderer. In Proceedings of the IEEE Conference on CVPR. 3907–3916.
Byungsoo Kim, Vinicius C. Azevedo, Markus Gross, and Barbara Solenthaler. 2019a. Transport-based neural style transfer for smoke simulations. ACM Transactions on Graphics 38, 6 (nov 2019), 1–11. https://doi.org/10.1145/3355089.3356560
Byungsoo Kim, Vinicius C. Azevedo, Nils Thuerey, Theodore Kim, Markus Gross, and Barbara Solenthaler. 2019b. Deep Fluids: A Generative Network for Parameterized Fluid Simulations. Computer Graphics Forum 38, 2 (2019).
Janghee Kim, Deukhyun Cha, Byungjoon Chang, Bonki Koo, and Insung Ihm. 2006. Practical Animation of Turbulent Splashing Water. In Proceedings of SCA '07. 335–344.
Theodore Kim, Jerry Tessendorf, and Nils Thuerey. 2013. Closest point turbulence for liquid surfaces. ACM Transactions on Graphics (TOG) 32, 2 (2013), 15.
Theodore Kim, Nils Thürey, Doug James, and Markus Gross. 2008. Wavelet turbulence for fluid simulation. In ACM Transactions on Graphics (TOG), Vol. 27. ACM, 50.
D. Koschier and J. Bender. 2017. Density maps for improved SPH boundary handling. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 1–10.
Dan Koschier, Jan Bender, Barbara Solenthaler, and Matthias Teschner. 2019. Smoothed Particle Hydrodynamics Techniques for the Physics Based Simulation of Fluids and Solids. In Eurographics 2019 - Tutorials.
Vivek Kwatra, David Adalsteinsson, Nipun Kwatra, Mark Carlson, and Ming C. Lin. 2006. Texturing fluids. In ACM SIGGRAPH '06 Sketches. 63.
Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra. 2005. Texture optimization for example-based synthesis. In ACM SIGGRAPH '05. 795.
Ľubor Ladický, SoHyeon Jeong, Barbara Solenthaler, Marc Pollefeys, and Markus Gross. 2015. Data-driven fluid simulations using regression forests. ACM Transactions on Graphics 34, 6 (oct 2015), 1–9. https://doi.org/10.1145/2816795.2818129
Hsueh-Ti Derek Liu and Alec Jacobson. 2019. Cubic Stylization. ACM ToG (2019).
Hsueh-Ti Derek Liu, Michael Tao, and Alec Jacobson. 2018. Paparazzi: Surface Editing by way of Multi-View Image Processing. ACM Transactions on Graphics (2018).
Matthew M. Loper and Michael J. Black. 2014. OpenDR: An Approximate Differentiable Renderer. 154–169. https://doi.org/10.1007/978-3-319-10584-0_11
F. Losasso, J. O. Talton, N. Kwatra, and R. Fedkiw. 2008. Two-Way Coupled SPH and Particle Level Set Fluid Simulation. IEEE TVCG 14, 4 (jul 2008), 797–804.
Miles Macklin and Matthias Mueller. 2013. Position Based Fluids. ACM Transactions on Graphics 32, 4 (2013), 104:1–104:12.
Antoine McNamara, Adrien Treuille, Zoran Popović, and Jos Stam. 2004. Fluid control using the adjoint method. In ACM SIGGRAPH '04. 449.
J. J. Monaghan. 2005. Smoothed Particle Hydrodynamics. Reports on Progress in Physics 68, 8 (2005), 1703–1759.
Matthias Müller, David Charypar, and Markus Gross. 2003. Particle-Based Fluid Simulation for Interactive Applications. In Symposium on Computer Animation.
Rahul Narain, Vivek Kwatra, Huai-Ping Lee, Theodore Kim, Mark Carlson, and Ming C. Lin. 2007. Feature-guided Dynamic Texture Synthesis on Continuous Flows. In Proceedings of the 18th EGSR. 361–370.
Rahul Narain, Jason Sewall, Mark Carlson, and Ming C. Lin. 2008. Fast animation of turbulence using energy transport and procedural synthesis. ACM Transactions on Graphics 27, 5 (dec 2008), 1. https://doi.org/10.1145/1409060.1409119
Michael B. Nielsen and Robert Bridson. 2011. Guide shapes for high resolution naturalistic liquid simulation. In ACM SIGGRAPH '11. 1.
Zherong Pan, Jin Huang, Yiying Tong, Changxi Zheng, and Hujun Bao. 2013. Interactive localized liquid motion editing. ACM ToG 32, 6 (nov 2013), 1–10.
Zherong Pan and Dinesh Manocha. 2017. Efficient Solver for Spacetime Control of Smoke. ACM Trans. Graph. 36, 4, Article 68 (July 2017), 13 pages.
A. Peer, M. Ihmsen, J. Cornelis, and M. Teschner. 2015. An Implicit Viscosity Formulation for SPH Fluids. ACM Transactions on Graphics 34, 4 (2015), 1–10.
Tobias Pfaff, Nils Thuerey, Andrew Selle, and Markus Gross. 2009. Synthetic turbulence using artificial boundary layers. ACM Transactions on Graphics 28, 5 (dec 2009), 1.
Karthik Raveendran, Nils Thuerey, Chris Wojtan, and Greg Turk. 2012. Controlling Liquids Using Meshes. In Proceedings of the SCA. 255–264.
Stefan Reinhardt, Tim Krake, Bernhard Eberhardt, and Daniel Weiskopf. 2019. Consistent Shepard Interpolation for SPH-Based Fluid Animation. ACM ToG 38 (2019).
Bo Ren, Chenfeng Li, Xiao Yan, Ming C. Lin, Javier Bonet, and Shi-Min Hu. 2014. Multiple-Fluid SPH Simulation Using a Mixture Model. ACM ToG 33, 5 (2014), 1–11.
Syuhei Sato, Yoshinori Dobashi, Theodore Kim, and Tomoyuki Nishita. 2018. Example-based turbulence style transfer. ACM Trans. Graph. 37, 4 (2018), 84.
Hagit Schechter and Robert Bridson. 2008. Evolving Sub-Grid Turbulence for Smoke Animation. 1–7. https://doi.org/10.2312/SCA/SCA08/001-007
Connor Schenck and Dieter Fox. 2018. SPNets: Differentiable Fluid Dynamics for Deep Neural Networks. In Conference on Robot Learning. 317–335.
M. Servin, K. Bodin, and C. Lacoursiere. 2012. Constraint Fluids. IEEE TVCG 18, 3 (mar 2012), 516–526. https://doi.org/10.1109/TVCG.2011.29
Barbara Solenthaler and Renato Pajarola. 2009. Predictive-corrective incompressible SPH. ACM Trans. Graph. 28, 3 (2009), 40:1–40:6.
Alexey Stomakhin, Craig Schroeder, Lawrence Chai, Joseph Teran, and Andrew Selle. 2013. A Material Point Method for Snow Simulation. ACM ToG 32, 4 (2013).
Itamar Talmi, Roey Mechrez, and Lihi Zelnik-Manor. 2017. Template matching with deformable diversity similarity. In Proceedings of the IEEE CVPR. 175–183.
Nils Thuerey. 2016. Interpolations of Smoke and Liquid Simulations. ACM Transactions on Graphics 36, 1 (sep 2016), 1–16. https://doi.org/10.1145/2956233
Nils Thuerey and Tobias Pfaff. 2018. MantaFlow. http://mantaflow.com.
Jonathan Tompson, Kristofer Schlachter, Pablo Sprechmann, and Ken Perlin. 2017. Accelerating Eulerian fluid simulation with convolutional networks. In Proceedings of the 34th ICML, Vol. 70. JMLR.org, 3424–3433.
Adrien Treuille, Antoine McNamara, Zoran Popović, and Jos Stam. 2003. Keyframe control of smoke simulations. ACM Transactions on Graphics 22, 3 (jul 2003), 716.
Kiwon Um, Seungho Baek, and JungHyun Han. 2014. Advanced Hybrid Particle-Grid Method with Sub-Grid Particle Correction. CGF 33, 7 (oct 2014), 209–218.
Steffen Wiewel, Moritz Becher, and Nils Thuerey. 2019. Latent space physics: Towards learning the temporal evolution of fluid flow. In CGF, Vol. 38. 71–82.
You Xie, Erik Franz, Mengyu Chu, and Nils Thuerey. 2018. tempoGAN: A temporally coherent, volumetric GAN for super-resolution fluid flow. ACM ToG 37, 4 (2018).
Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. 2016. Perspective transformer nets: Learning single-view 3D object reconstruction without 3D supervision. In Advances in NIPS. 1696–1704.
Cheng Yang, Xubo Yang, and Xiangyun Xiao. 2016. Data-driven projection method in fluid simulation. CAVW 27, 3-4 (may 2016), 415–424.
Wang Yifan, Felice Serena, Shihao Wu, Cengiz Öztireli, and Olga Sorkine-Hornung. 2019. Differentiable Surface Splatting for Point-based Geometry Processing. (jun 2019). https://doi.org/10.1145/3355089.3356513 arXiv:1906.04173
Q. Yu, F. Neyret, E. Bruneton, and N. Holzschuch. 2011. Lagrangian Texture Advection: Preserving both Spectrum and Velocity Field. IEEE TVCG 17, 11 (nov 2011).
Yongning Zhu and Robert Bridson. 2005. Animating sand as a fluid.