A study on a feedforward neural network to solve partial differential equations in hyperbolic-transport problems
Eduardo Abreu and Joao B. Florindo

Institute of Mathematics, Statistics and Scientific Computing, University of Campinas, Rua Sérgio Buarque de Holanda, 651, Cidade Universitária "Zeferino Vaz", Distr. Barão Geraldo, CEP 13083-859, Campinas, SP, Brazil
{eabreu,florindo}@unicamp.br

Abstract.
In this work we present an application of modern deep learning methodologies to the numerical solution of partial differential equations in transport models. More specifically, we employ a supervised deep neural network that takes into account the equation and initial conditions of the model. We apply it to Riemann problems over the inviscid nonlinear Burgers equation, whose solutions might develop discontinuity (shock wave) and rarefaction, as well as to the classical one-dimensional Buckley-Leverett two-phase problem. The Buckley-Leverett case is slightly more complex and interesting because it has a non-convex flux function with one inflection point. Our results suggest that a relatively simple deep learning model is capable of achieving promising results in such challenging tasks, providing numerical approximations of entropy solutions with very good precision, consistent both with classical and with recently proposed numerical methods in these particular scenarios.
Keywords: Neural networks · Partial differential equations · Transport models · Numerical approximation methods for PDEs · Approximation of entropy solutions.
Supported by São Paulo Research Foundation (FAPESP), National Council for Scientific and Technological Development, Brazil (CNPq), and PETROBRAS, Brazil.

Deep learning techniques have been applied to a variety of problems in science during the last years, with numerous examples in image recognition [12], natural language processing [24], self-driving cars [7], virtual assistants [14], healthcare [16], and many others. More recently, we have seen a growing interest in applying those techniques to the most challenging problems in mathematics, and the solution of differential equations, especially partial differential equations (PDEs), is a canonical example of such a task [19].

Despite the success of recent learning-based approaches to solving PDEs in relatively "well-behaved" configurations, these methodologies and their applications still have points that deserve more profound discussion, both in theoretical and practical terms. One such point is that many of these models are based on complex neural network structures, sometimes comprising a large number of layers, recurrences, and other "ad hoc" mechanisms that make them difficult to train and interpret. Furthermore, we have seen little discussion of more challenging problems, such as those involving discontinuities and "shock" solutions in the numerical approximation of entropy solutions of hyperbolic-transport problems; see, e.g., [3,8,9,13,17,23,2,10] and references cited therein.

This is the motivation for the study accomplished in this work, where we investigate a simple feed-forward architecture, based on the physics-informed model proposed in [19], applied to complex problems involving PDEs in transport models. More specifically, we analyze the numerical solutions of four initial-value problems: three problems on the inviscid nonlinear Burgers PDE (involving a shock wave and smooth/rarefaction-fan solutions for distinct initial conditions) and one on the one-dimensional Buckley-Leverett equation for a two-phase configuration, which is a little more complex and interesting because it has a non-convex flux function with one inflection point. The neural network consists of 9 stacked layers with tanh activation and is geared towards minimizing the approximation error both for the initial values and for values of the PDE functional calculated by automatic differentiation.

The achieved results are promising. We managed to obtain average quadratic errors of 0.0005, 0.0018, 0.0001, and 0.0021, respectively, for the rarefaction, shock, smooth, and Buckley-Leverett problems.
Such results are quite interesting if we consider the low complexity of the neural model and the challenge involved in these discontinuous cases. They also strongly suggest more in-depth studies on deep learning models that account for the underlying equation, which seem to be a promising line to be explored for challenging problems arising in physics, engineering, and many other areas.
Hyperbolic partial differential equations in transport models describe a wide range of wave-propagation and transport phenomena arising from the scientific and industrial engineering areas. This is fundamental research in active progress, since it involves complex multiphysics and advanced simulations due to the lack of a general mathematical theory for closed-form analytical solutions (see, e.g., [13,15,5,3,8,23,9] and references cited therein). A basic fact about nonlinear hyperbolic transport problems is the possible loss of regularity of their solutions: even solutions which are initially smooth (i.e., smooth initial datum) may become discontinuous within finite time (blow up in finite time), and after this singular time the nonlinear interaction of shocks and rarefaction waves plays an important role in the dynamics. For the sake of simplicity and without loss of generality, we consider the scalar 1D Cauchy problem

$$\frac{\partial u}{\partial t} + \frac{\partial H(u)}{\partial x} = 0, \quad x \in \mathbb{R},\ t > 0, \qquad u(x,0) = u_0(x), \tag{1}$$

where $H \in C(\Omega)$, $H : \Omega \to \mathbb{R}$, $u_0(x) \in L^\infty(\mathbb{R})$ and $u = u(x,t) : \mathbb{R} \times \mathbb{R}^+ \to \Omega \subset \mathbb{R}$. In several applications, the flux function $H(u)$ is smooth and has a finite number of inflection points, namely

$$H(u) = \frac{u^2}{u^2 + a(1-u)^2}, \quad 0 < a < 1,$$

as is the case of the classical Buckley-Leverett equation (it has a non-convex flux function with one inflection point) in petroleum engineering (e.g., [10]). Another interesting model is the inviscid Burgers equation, where $H(u) = u^2/2$, which is used to model, for example, gas dynamics and traffic flow, and to investigate the appearance of shock waves, especially in fluid mechanics, for nonlinear wave propagation (see, e.g., [2]).

By using an argument in terms of traveling waves to capture the viscous profile at shocks, one can conclude that the solutions of (1) satisfying Oleinik's entropy condition (see, e.g., [17]) are limits of solutions $u^\epsilon(x,t) \to u(x,t)$, where $u(x,t)$ is given by (1) and $u^\epsilon(x,t)$ is given by the augmented parabolic equation [18]

$$\frac{\partial u^\epsilon}{\partial t} + \frac{\partial H(u^\epsilon)}{\partial x} = \epsilon \frac{\partial^2 u^\epsilon}{\partial x^2}, \quad x \in \mathbb{R},\ t > 0, \qquad u^\epsilon(x,0) = u^\epsilon_0(x), \tag{2}$$

with $\epsilon > 0$. The flux function $H(u)$ associated with the fundamental prototype models (1) and (2) depends on the application under consideration, for instance the modeling of slow erosion phenomena in granular flow, fluid mechanics, or flow in porous media (see, e.g., [2,10,3,13] and references cited therein). Moreover, it is noteworthy that in practice the calibration of the function $H(u)$ can be difficult to achieve due to unknown parameters; thus, data assimilation (i.e., a regression method) can be an efficient way of calibrating these flux function models $H(u)$ [4,22]. We intend to design a unified approach which combines both partial differential equation (PDE) modeling and fine-tuned machine learning techniques, aiming as a first step at an effective tool for advanced simulations related to hyperbolic problems in transport models.

From the above discussion, and for comparison purposes, we will provide qualitatively correct entropy approximation solutions for model problem (1) by using two numerical schemes, namely, the conservative method

$$U_j^{n+1} = U_j^n - \frac{k}{h}\left[ F(U_j^n, U_{j+1}^n) - F(U_{j-1}^n, U_j^n) \right], \tag{3}$$

with the associated classical Lax-Friedrichs numerical flux found elsewhere,

$$F(U_j^n, U_{j+1}^n) = \frac{1}{2}\left[ \frac{h}{k}\left( U_j^n - U_{j+1}^n \right) + \left( H(U_{j+1}^n) + H(U_j^n) \right) \right], \tag{4}$$

as well as by the Lagrangian-Eulerian numerical flux ([2,1])

$$F(U_j^n, U_{j+1}^n) = \frac{1}{4}\left[ \frac{h}{k}\left( U_j^n - U_{j+1}^n \right) + 2\left( H(U_{j+1}^n) + H(U_j^n) \right) \right]. \tag{5}$$

Here, both schemes (4) and (5) should satisfy the Courant-Friedrichs-Lewy (CFL) stability condition

$$\max_j \left\{ |H'(U_j^n)| \right\} \frac{k}{h} < 1, \tag{6}$$

for all time steps $n$, where $k = \Delta t^n$ and $h = \Delta x$, and $H'(U_j^n)$ is the derivative $\frac{\partial H(u)}{\partial u}$ evaluated at each $U_j^n$ in the mesh grid.
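As a minimal illustration, the conservative update (3) with the fluxes (4) and (5) for the Burgers flux $H(u) = u^2/2$ can be sketched as below. The grid size, boundary handling (edge padding), and demo parameters are our own illustrative choices, not the paper's exact setup.

```python
import numpy as np

def H(u):
    return 0.5 * u ** 2          # inviscid Burgers flux H(u) = u^2/2

def dH(u):
    return u                     # H'(u), used in the CFL bound (6)

def lax_friedrichs_flux(ul, ur, h, k):
    # Classical Lax-Friedrichs flux, eq. (4)
    return 0.5 * ((h / k) * (ul - ur) + (H(ur) + H(ul)))

def lagrangian_eulerian_flux(ul, ur, h, k):
    # Lagrangian-Eulerian flux, eq. (5)
    return 0.25 * ((h / k) * (ul - ur) + 2.0 * (H(ur) + H(ul)))

def solve(u0, h, T, cfl, flux):
    """March the conservative scheme (3) to time T, choosing each
    time step k from the CFL condition (6)."""
    u, t = u0.astype(float).copy(), 0.0
    while t < T:
        k = cfl * h / max(np.max(np.abs(dH(u))), 1e-12)
        k = min(k, T - t)
        um = np.concatenate([u[:1], u[:-1]])   # U_{j-1} (edge padding)
        up = np.concatenate([u[1:], u[-1:]])   # U_{j+1}
        u = u - (k / h) * (flux(u, up, h, k) - flux(um, u, h, k))
        t += k
    return u

# Demo: Riemann shock data (u = 1 for x < 0, u = 0 for x > 0); the
# entropy solution is a shock moving with Rankine-Hugoniot speed 1/2.
x = np.linspace(-4.0, 4.0, 201)
h = x[1] - x[0]
u0 = np.where(x < 0.0, 1.0, 0.0)
u_lf = solve(u0, h, T=1.0, cfl=0.4, flux=lax_friedrichs_flux)
u_le = solve(u0, h, T=1.0, cfl=0.2, flux=lagrangian_eulerian_flux)
```

Both schemes are monotone under their CFL bounds, so the computed profiles stay within the range of the initial data while the shock is smeared over a few cells.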
We are interested in the study of a unified approach which combines both data-driven models (regression methods from machine learning) and physics-based models (PDE modeling). Our approach is substantially distinct from the current trend of merely data-driven discovery-type methods for recovering governing equations by using machine learning and artificial intelligence algorithms in a straightforward manner. We envisage the use of novel methods, fine-tuned machine learning algorithms, and very fine mathematical and numerical analysis to improve the comprehension of regression methods, aiming to identify the potential for reliable prediction in advanced simulation of hyperbolic problems in transport models, as well as the estimation of financial returns and economic benefits. In this regard, we mention the very interesting works [20,11,4,22,19], which introduced physics-informed neural networks that are trained to solve supervised learning tasks while respecting conceptual foundations of physics, as well as providing data assimilation for data-driven discovery of partial differential equations. Related to the hyperbolic problems in transport models (1) and (2), we mention the very recent review paper [6], which discusses machine learning for fluid mechanics, highlighting that such an approach could augment existing efforts for the study, modeling, and control of fluid mechanics, keeping in mind the importance of honoring engineering principles and the governing equations, i.e., the mathematical and physical foundations, driven by unprecedented volumes of data from experiments and advanced simulations at multiple spatiotemporal scales. We also mention the work [21], where the issue of domain knowledge is addressed as an essential prerequisite to gain explainability and to enhance the scientific consistency of machine learning with physics-based foundations given in terms of mathematical equations and physical laws.
However, we have seen much less discussion of more challenging PDE modeling problems, such as those involving discontinuities and shock solutions in the numerical approximation of entropy solutions of hyperbolic-transport problems, in which the issue of conservative numerical approximation of entropy solutions is crucial and mandatory [3,8,9,13,17,23,2,10,1].
The neural network employed here is based on that described in [19]. It follows a classical feed-forward architecture, with 9 hidden layers, each one using a hyperbolic tangent as activation function.

The general problem solved here has the form

$$u_t + N(u) = 0, \quad x \in \Omega,\ t \in [0,T], \tag{7}$$

where $N(\cdot)$ is a non-linear operator and $u(x,t)$ is the desired solution. Unlike the methodology described in [19], here we do not have an explicit boundary condition, and the neural network is optimized only over the initial conditions of each problem.

We focus on four problems: the inviscid nonlinear Burgers equation

$$u_t + \left( \frac{u^2}{2} \right)_x = 0, \tag{8}$$

with shock initial condition

$$u(x,0) = 1,\ x < 0, \qquad u(x,0) = 0,\ x > 0, \tag{9}$$

with discontinuous initial data (hereafter rarefaction fan initial condition)

$$u(x,0) = -1,\ x < 0, \qquad u(x,0) = 1,\ x > 0, \tag{10}$$

and with a smooth initial condition

$$u(x,0) = u_0(x), \quad u_0 \text{ smooth}, \tag{11}$$

as well as the two-phase Buckley-Leverett equation

$$u_t + \left( \frac{u^2}{u^2 + a(1-u)^2} \right)_x = 0, \qquad u(x,0) = 1,\ x < 0, \qquad u(x,0) = 0,\ x > 0. \tag{12}$$

In this problem we take $a = 1$.

For the optimization of the neural network we define $f$ as the left-hand side of each PDE, i.e.,

$$f := u_t + N(u), \tag{13}$$

such that

$$N(u) = \left( \frac{u^2}{2} \right)_x \tag{14}$$

for the inviscid Burgers equation and

$$N(u) = \left( \frac{u^2}{u^2 + a(1-u)^2} \right)_x \tag{15}$$

for the Buckley-Leverett equation. Here we also have an important novelty, which is the introduction of a derivative (w.r.t. $x$) in $N(u)$, not present in [19]. The function $f$ is responsible for capturing the physical structure of the problem (i.e., selecting the qualitatively correct entropy solution) and inputting that structure as a primary element of the machine learning problem. The neural network computes the expected solution $u(x,t)$, and its output and the derivatives present in the calculation of $f$ are obtained by automatic differentiation.

Two quadratic loss functions are defined over $f$, $u$, and the initial condition:

$$L_f(u) = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f(x_f^i, t_f^i) \right|^2, \qquad L_u(u) = \frac{1}{N_u} \sum_{i=1}^{N_u} \left| u(x_u^i, t_u^i) - u^i \right|^2, \tag{16}$$

where $\{x_f^i, t_f^i\}_{i=1}^{N_f}$ correspond to collocation points for $f$, whereas $\{x_u^i, t_u^i, u^i\}_{i=1}^{N_u}$ correspond to the initial values at pre-defined points. Finally, the solution $u(x,t)$ is approximated by minimizing the sum of both objective functions at the same time, i.e.,

$$u(x,t) \approx \arg\min_u \left[ L_f(u) + L_u(u) \right]. \tag{17}$$

In the following we present results for the solutions of the investigated problems obtained by the neural network model. We compare these solutions with two numerical schemes: Lagrangian-Eulerian and Lax-Friedrichs. These are very robust numerical methods with a solid mathematical basis; here we use one scheme to validate the other. In fact, the solutions obtained by each scheme are very similar. For that reason, we opted for graphically showing curves only for the Lagrangian-Eulerian solution.
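The assembly of the objective (16)-(17) can be sketched as follows. This is a deliberately small stand-in: the paper uses a 9-hidden-layer tanh network trained through automatic differentiation, whereas here a tiny random tanh network is evaluated and the derivatives in the residual $f = u_t + (u^2/2)_x$ are approximated by central finite differences, just to make the loss structure concrete. All sizes and sampling choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    # Random tanh-network parameters; the small stack here only
    # illustrates the loss assembly, not the paper's 9-layer model.
    return [(rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def u_net(params, x, t):
    a = np.stack([x, t], axis=-1)        # network input (x, t)
    for W, b in params[:-1]:
        a = np.tanh(a @ W + b)
    W, b = params[-1]
    return (a @ W + b)[..., 0]           # scalar output u(x, t)

def pde_residual(params, x, t, eps=1e-4):
    # f = u_t + (u^2/2)_x for Burgers, eqs. (13)-(14). Central finite
    # differences replace the automatic differentiation used in [19].
    u_t = (u_net(params, x, t + eps) - u_net(params, x, t - eps)) / (2 * eps)
    flux = lambda xx: 0.5 * u_net(params, xx, t) ** 2
    flux_x = (flux(x + eps) - flux(x - eps)) / (2 * eps)
    return u_t + flux_x

def total_loss(params, x_f, t_f, x_u, u0):
    # L_f + L_u from (16)-(17): mean squared PDE residual at the
    # collocation points plus the misfit to the initial data at t = 0.
    L_f = np.mean(pde_residual(params, x_f, t_f) ** 2)
    L_u = np.mean((u_net(params, x_u, np.zeros_like(x_u)) - u0) ** 2)
    return L_f + L_u

params = init_mlp([2, 20, 20, 1])
x_f = rng.uniform(-4.0, 4.0, 100)        # collocation points
t_f = rng.uniform(0.0, 1.0, 100)
x_u = np.linspace(-4.0, 4.0, 100)        # initial-data points
u0 = np.where(x_u < 0.0, -1.0, 1.0)      # rarefaction data (10)
loss = total_loss(params, x_f, t_f, x_u, u0)
```

In practice this objective is minimized with a gradient-based optimizer (e.g., L-BFGS or Adam) inside an automatic-differentiation framework, precisely so that $u_t$ and $(H(u))_x$ are exact derivatives of the network rather than finite-difference approximations.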
However, we exhibit the errors of the proposed methodology in comparison with both Lagrangian-Eulerian (EEL) and Lax-Friedrichs (ELF). Here such an error corresponds to the average quadratic error, i.e.,

$$\mathrm{ELF}(t) = \frac{\sum_{i=1}^{N_u} \left( u_{NN}(x_i,t) - u_{LF}(x_i,t) \right)^2}{N_u}, \qquad \mathrm{EEL}(t) = \frac{\sum_{i=1}^{N_u} \left( u_{NN}(x_i,t) - u_{LE}(x_i,t) \right)^2}{N_u}, \tag{18}$$

where $u_{NN}$, $u_{LF}$, and $u_{LE}$ correspond, respectively, to the neural network, Lax-Friedrichs, and Lagrangian-Eulerian solutions. In our tests we used $N_f = 10$ unless otherwise stated, and $N_u = 100$. For the numerical reference schemes we adopted a CFL number of 0.4 for Lax-Friedrichs and 0.2 for Lagrangian-Eulerian. We also used $\Delta x = 0.$

For the rarefaction case, we observed that using $N_f = 10$ collocation points was sufficient to provide good results. In this scenario, we also varied the number of neurons, testing 40 and 60 neurons. Figure 1 shows the obtained solution compared with the reference, together with the respective errors. Interestingly, the error decreases as time increases, which is a consequence of the solution behavior: it becomes smoother (smaller slope) for larger times, showing good accuracy and providing evidence that we are computing the correct solution in our numerical simulation.
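The error measures in (18) reduce to a mean squared difference between the network solution and a reference scheme sampled at the same $N_u$ grid points; a one-function sketch (with illustrative argument names):

```python
import numpy as np

def avg_quadratic_error(u_nn, u_ref):
    """Average quadratic error of eq. (18) between the network solution
    u_NN and a reference solution (Lax-Friedrichs or Lagrangian-Eulerian)
    evaluated at the same N_u spatial points for a fixed time t."""
    u_nn = np.asarray(u_nn, dtype=float)
    u_ref = np.asarray(u_ref, dtype=float)
    return float(np.mean((u_nn - u_ref) ** 2))
```

For example, ELF(t) is obtained by passing the Lax-Friedrichs profile as `u_ref`, and EEL(t) by passing the Lagrangian-Eulerian one.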
Fig. 1. Burgers equation, rarefaction initial condition: neural network solution vs. Lagrangian-Eulerian reference at t = 0.5, 1.5, 3.5, for 40 and 60 neurons; the ELF/EEL errors decrease with time.
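The decreasing error is consistent with the exact entropy solutions of the two Burgers Riemann problems, which follow from standard theory (e.g., Oleinik's condition [17]), assuming the initial jump is placed at $x = 0$:

```latex
% Shock data (9): u_L = 1 > u_R = 0, Rankine-Hugoniot speed
s = \frac{H(u_L) - H(u_R)}{u_L - u_R} = \frac{1/2 - 0}{1 - 0} = \frac{1}{2},
\qquad
u(x,t) =
\begin{cases}
1, & x < t/2,\\
0, & x > t/2.
\end{cases}

% Rarefaction data (10): u_L = -1 < u_R = 1, self-similar fan
u(x,t) =
\begin{cases}
-1, & x < -t,\\
x/t, & -t \le x \le t,\\
1,  & x > t,
\end{cases}
```

whose interior slope $1/t$ decays in time, matching the smaller errors observed at later times in Figure 1.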
Figure 2 illustrates the performance of the neural network model for the inviscid Burgers equation with shock initial condition. Here we had to add a small viscous term (a small multiple of $u_{xx}$) for better stabilization, in view of the modeling problems (1) and (2). Such an underlying viscous mechanism did not bring a significant reduction in error, but the general structure of the obtained solution is better, attenuating spurious fluctuations around the discontinuities. It is crucial to mention at this point that the numerical approximation of entropy solutions (with respect to the neural network) to hyperbolic-transport problems also requires the notion of an entropy-satisfying weak solution. It was also interesting to see that the addition of more neurons did not reduce the error for this initial condition. This is a typical example of overfitting caused by over-parameterization; an explanation for that is the relative simplicity of the initial condition, which assumes only two possible values.

Fig. 2. Burgers equation, shock initial condition: neural network solution vs. reference at t = 0.5, 1.5, 3.5, for different numbers of neurons, with and without the small viscous term.

Figure 3 depicts the solutions for the smooth initial condition in the inviscid Burgers equation. Here, unlike the previous case, increasing the number of neurons actually reduced the error, which was expected considering that now both the initial condition and the solution are more complex. Nevertheless, we identified that tuning only the number of neurons was not enough to achieve a satisfactory solution in this situation. Therefore we also tuned the parameter $N_f$. In particular, we discovered that combining the same small viscous term used for the shock case with $N_f = 10$ provided excellent results, with quite promising precision in comparison with our reference solutions.
Fig. 3. Burgers equation, smooth initial condition: neural network solution vs. reference at t = 0.5, 1.5, 3.5, for different numbers of neurons and values of $N_f$.
Another case characterized by solutions with more complex behavior is Buckley-Leverett with shock initial condition (Figure 4). Similar to what happened in the smooth case, the combination of $N_f = 10$ with the small viscous term was again more effective than any increase in the number of neurons. While the introduction of the small viscous term attenuated fluctuations in the solution when using 40 neurons, at the same time, when using $N_f = 10$, we observe that increasing the number of neurons increases the delay between the solution provided by the network and the reference.
Fig. 4. Buckley-Leverett equation, Riemann datum (rarefaction + shock): neural network solution vs. reference at t = 0.5, 1.5, 3.5, for different numbers of neurons, with and without the small viscous term.
Generally speaking, the neural networks studied here were capable of achieving promising results in challenging situations, involving different types of discontinuities and nonlinearities. Moreover, our results also carry important theoretical implications. In particular, the neural networks obtained results quite close to those provided by entropic numerical schemes like Lagrangian-Eulerian and Lax-Friedrichs. Going beyond the analysis in terms of raw precision, these results give us evidence that our neural network model possesses some type of entropic property, which from the viewpoint of a numerical method is a fundamental and desirable characteristic.
This work presented an application of a feed-forward neural network to solve challenging hyperbolic problems in transport models. More specifically, we solved the inviscid Burgers equation with shock, smooth, and rarefaction initial conditions, as well as the Buckley-Leverett equation with classical Riemann datum, which leads to the well-known solution comprising (from left to right) a rarefaction and a (discontinuous) shock wave. Our network was tuned for each problem, and interesting findings were observed. First, our neural network model was capable of providing solutions quite similar to those obtained by the two numerical schemes used here as references: Lagrangian-Eulerian and Lax-Friedrichs. Besides, the general structure of the obtained solutions also behaved as expected, which, considering the intrinsic challenge of these problems, is a remarkable achievement. In fact, the investigated neural networks showed evidence of an entropic property, which is an important attribute of a numerical scheme, especially in problems like those investigated here.

In summary, the obtained results have both practical and theoretical implications. In practical terms, they confirm the potential of a relatively simple deep learning model in the solution of an intricate numerical problem. In theoretical terms, this also opens an avenue for formal and rigorous studies of these networks as mathematically valid and effective numerical methods.
Acknowledgements
J. B. Florindo gratefully acknowledges the financial support of São Paulo Research Foundation (FAPESP) (Grant
References
1. Abreu, E., Matos, V., Pérez, J., Rodríguez-Bermúdez, P.: A class of Lagrangian-Eulerian shock-capturing schemes for first-order hyperbolic problems with forcing terms. Journal of Scientific Computing (to appear) (2021)
2. Abreu, E., Pérez, J.: A fast, robust, and simple Lagrangian-Eulerian solver for balance laws and applications. Computers & Mathematics with Applications (9), 2310-2336 (2019)
3. Alibaud, N., Andreianov, B., Ouédraogo, A.: Nonlocal dissipation measure and L¹ kinetic theory for fractional conservation laws. Communications in Partial Differential Equations (9), 1213-1251 (2020)
4. Berg, J., Nyström, K.: Data-driven discovery of PDEs in complex datasets. Journal of Computational Physics, 239-252 (2019)
5. Bressan, A., Chiri, M.T., Shen, W.: A posteriori error estimates for numerical solutions to hyperbolic conservation laws (2020). Available at https://arxiv.org/abs/2010.00428
6. Brunton, S.L., Noack, B.R., Koumoutsakos, P.: Machine learning for fluid mechanics. Annual Review of Fluid Mechanics (1), 477-508 (2020)
7. Chen, C., Seff, A., Kornhauser, A., Xiao, J.: DeepDriving: Learning affordance for direct perception in autonomous driving. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2722-2730 (2015)
8. Chen, G.Q.G., Glimm, J.: Kolmogorov-type theory of compressible turbulence and inviscid limit of the Navier-Stokes equations in R³. Physica D: Nonlinear Phenomena, 132138 (2019)
9. Dafermos, C.M.: Hyperbolic Conservation Laws in Continuum Physics. Springer (2016)
10. Galvis, J., Abreu, E., Díaz, C., Pérez, J.: On the conservation properties in multiple scale coupling and simulation for Darcy flow with hyperbolic-transport in complex flows. Multiscale Modeling and Simulation (4), 1375-1408 (2020)
11. Grossi, M.D., Kubat, M., Özgökmen, T.M.: Predicting particle trajectories in oceanic flows using artificial neural networks. Ocean Modelling, 101707 (2020)
12. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778 (2016)
13. Hoel, H., Karlsen, K.H., Risebro, N.H., Storrosten, E.B.: Numerical methods for conservation laws with rough flux. Stochastics and Partial Differential Equations: Analysis and Computations (1), 186-261 (2020)
14. Kepuska, V., Bohouta, G.: Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). In: 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), pp. 99-103 (2018)
15. Lellis, C.D., Kwon, H.: On non-uniqueness of Hölder continuous globally dissipative Euler flows (2020). Available at https://arxiv.org/abs/2006.06482
16. Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., van der Laak, J.A.W.M., van Ginneken, B., Sánchez, C.I.: A survey on deep learning in medical image analysis. Medical Image Analysis, 60-88 (2017)
17. Oleinik, O.A.: Discontinuous solutions of nonlinear differential equations. Uspekhi Matematicheskikh Nauk 12, 3-73 (1957). English translation: Transactions of the American Mathematical Society (2), 95-172 (1963)
18. Quinn, B.: Solutions with shocks: an example of an L¹-contraction semigroup. Communications on Pure and Applied Mathematics (2), 125-132 (1971)
19. Raissi, M., Perdikaris, P., Karniadakis, G.: Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 686-707 (2019)
20. Regazzoni, F., Dedè, L., Quarteroni, A.: Machine learning of multiscale active force generation models for the efficient simulation of cardiac electromechanics. Computer Methods in Applied Mechanics and Engineering, 113268 (2020)
21. Roscher, R., Bohn, B., Duarte, M.F., Garcke, J.: Explainable machine learning for scientific insights and discoveries. IEEE Access, 42200-42216 (2020). https://doi.org/10.1109/ACCESS.2020.2976199
22. Rudy, S.H., Brunton, S.L., Proctor, J.L., Kutz, J.N.: Data-driven discovery of partial differential equations. Science Advances (4) (2017)
23. Serre, D., Silvestre, L.: Multi-dimensional Burgers equation with unbounded initial data: Well-posedness and dispersive estimates. Archive for Rational Mechanics and Analysis, 1391-1411 (2019)
24. Young, T., Hazarika, D., Poria, S., Cambria, E.: Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine 13