Global field reconstruction from sparse sensors with Voronoi tessellation-assisted deep learning
Kai Fukami, Romit Maulik, Nesar Ramachandra, Koji Fukagata, Kunihiko Taira
1. Mechanical and Aerospace Engineering, University of California, Los Angeles
2. Mechanical Engineering, Keio University
3. Argonne Leadership Computing Facility, Argonne National Laboratory
4. High Energy Physics Division, Argonne National Laboratory

January 5, 2021

ABSTRACT
Achieving accurate and robust global situational awareness of a complex time-evolving field from a limited number of sensors has been a longstanding challenge. This reconstruction problem is especially difficult when sensors are sparsely positioned in a seemingly random or unorganized manner, which is often encountered in a range of scientific and engineering problems. Moreover, these sensors can be in motion and can become online or offline over time. The key leverage in addressing this scientific issue is the wealth of data accumulated from the sensors. As a solution to this problem, we propose a data-driven spatial field recovery technique founded on a structured-grid-based deep-learning approach for arbitrarily positioned sensors of any number. It should be noted that the naïve use of machine learning becomes prohibitively expensive for global field reconstruction and is furthermore not adaptable to an arbitrary number of sensors. In the present work, we consider the use of Voronoi tessellation to obtain a structured-grid representation from sensor locations, enabling the computationally tractable use of convolutional neural networks. One of the central features of the present method is its compatibility with deep-learning-based super-resolution reconstruction techniques for structured sensor data that are established for image processing. The proposed reconstruction technique is demonstrated for unsteady wake flow, geophysical data, and three-dimensional turbulence. The current framework is able to handle an arbitrary number of moving sensors, and thereby overcomes a major limitation of existing reconstruction methods. The presented technique opens a new pathway towards the practical use of neural networks for real-time global field estimation.
Spatial field reconstruction from limited local sensor information is a major challenge in the analysis, estimation, control, and design of high-dimensional complex physical systems. For complex physics including geophysics [1], astrophysics [2], atmospheric science [3, 4], and fluid dynamics [5], traditional linear theory-based tools have faced challenges in reconstructing global fields from a limited number of sensors. Neural networks have emerged as promising nonlinear alternatives to reconstruct chaotic data from sparse measurements in an efficient manner [6, 7]. However, there are key limitations associated with neural networks for field reconstruction. One of the biggest difficulties is the applicability of neural network-based methods to unstructured grid data. Almost all practical experimental measurements or numerical simulations rely on unstructured grids or non-uniform/random sensor placements. These grids are not compatible with convolutional neural network (CNN)-based methods, which are founded on training data being structured and uniformly arranged [8, 9]. While a multi-layer perceptron (MLP) [10] can handle unstructured data, its use is sometimes impractical due to its globally connected network structure. Moreover, MLPs cannot handle sensors that may go offline or move in space. On the other hand, graph convolutional networks (GCNs) have been utilized to perform convolutions on unstructured data [11]. However, GCNs are also known to scale poorly, and their state-of-the-art applications have been limited in the number of degrees of freedom they can handle [12]. Even such applications of GCNs have required distributed learning on several hardware accelerators. This restricts their utility for practical field reconstructions. A greater limitation stems from the fact that all of these methods fail to handle spatially moving sensors. This implies that applications of these conventional tools are limited to the fixed sensor arrangement used in the training process. This limitation is a major hindrance to the practical use of these reconstruction techniques, since experimental sensor locations commonly evolve over time. A framework that integrates convolutional architectures with time-varying unstructured data is crucial for bridging the gap between structured field reconstructions and practical problems [13].

Figure 1: Voronoi tessellation-aided global data recovery from discrete sensor locations for a two-dimensional cylinder wake. The input Voronoi image is constructed from 8 sensors. The Voronoi image is then fed into a convolutional neural network together with the mask image. In the mask image, a grid point holding a sensor (blue circle) takes the value 1, and 0 otherwise.

In response to the aforementioned challenges, we propose a method that incorporates sparse sensor data into a CNN by approximating the local information onto a structured representation, while retaining the information of spatial sensor locations. This is achieved by constructing a Voronoi tessellation of the unstructured data set and adding the input data field corresponding to spatial sensor locations through a mask. The Voronoi tessellation projects local sensor information onto the structured-grid field based on the Euclidean distance. The current technique achieves accurate field reconstructions from arbitrary sensor locations and varying numbers of sensors with existing CNN architectures. The present formulation will impact a wide range of research fields that rely on fusing information from discrete sensors.
Our objective is to reconstruct a two-dimensional global field variable q ∈ R^(n_x × n_y) from local sensor measurements s ∈ R^n at locations x_{s_i} ∈ R^2, i = 1, ..., n. Here, n_x and n_y respectively denote the number of grid points in the horizontal and vertical directions of the high-resolution field, and n indicates the number of local sensor measurements. The challenge here is to handle arbitrary numbers of sensors at any locations over the field. The number of sensors can change in time and the sensors can be moving. The reconstruction should be performed with only a single machine learning model to avoid retraining when sensors move or change in number.

To achieve the goal of the present study, we utilize two input data fields (images) for the CNN:

1. Local sensor measurements projected onto a Voronoi tessellation, s_V = s_V(s) ∈ R^(n_x × n_y).
2. A mask image s_m = s_m({x_{s_i}}_{i=1}^n) ∈ R^(n_x × n_y), which contains the local sensor positions, defined as s_m(x) = 1 if x = x_{s_i} for any i, and 0 otherwise.

The two input images above are provided to a machine learning model F such that q = F(s_V, s_m) ∈ R^(n_x × n_y), where q is the desired high-resolution field. With these two inputs holding the magnitude and position information of the sensors, the present idea can deal with arbitrary sensor locations and arbitrary numbers of sensors. It should be noted that such reconstruction cannot be achieved with conventional methods, including MLPs and CNNs, due to their structural constraints. In what follows, we introduce the Voronoi tessellation and the machine learning framework, which are the two key components of the present approach. A sample code will be made publicly available at the time of publication.
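The two inputs above can be sketched in a few lines. The following is a minimal NumPy/SciPy illustration (the function name and the convention that sensor coordinates are given in grid units are assumptions for this sketch, not the authors' released code). On a uniform grid, assigning every grid point the value of its nearest sensor is exactly the Voronoi partition of the sensor set.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_inputs(sensor_xy, sensor_values, nx, ny):
    """Build the two CNN input channels from scattered sensors.

    sensor_xy     : (n, 2) sensor coordinates in grid units (x, y)
    sensor_values : (n,) measurements at those sensors
    Returns s_V (Voronoi image) and s_m (mask image), both (ny, nx).
    """
    # Each grid point takes the value of its nearest sensor; on a
    # uniform grid this nearest-neighbor assignment reproduces the
    # Voronoi tessellation of the sensor set.
    gx, gy = np.meshgrid(np.arange(nx), np.arange(ny))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    _, nearest = cKDTree(sensor_xy).query(grid)
    s_V = sensor_values[nearest].reshape(ny, nx)

    # Mask image: 1 at grid points coinciding with a sensor, else 0.
    s_m = np.zeros((ny, nx))
    ij = np.rint(sensor_xy).astype(int)
    s_m[ij[:, 1], ij[:, 0]] = 1.0
    return s_V, s_m
```

Stacking the two returned arrays along a channel axis, e.g. np.stack([s_V, s_m], axis=-1), yields the n_x × n_y × 2 input described below.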
To use a machine learning framework, the sensor data needs to be projected onto an image in an appropriate manner. The Voronoi tessellation [14] is a simple and spatially optimal projection of local sensor measurements onto the spatial domain. This tessellation optimally partitions a given space E into n regions G = {g_1, g_2, ..., g_n} using boundaries determined by the distances among the n sensors s [15]. Using a distance function d, the Voronoi tessellation can be expressed as

g_i = { x ∈ E | d(x, s_i) < d(x, s_j), j ≠ i }. (1)

Hence, for a Euclidean domain, the Voronoi boundaries between sensors are their bisectors.

The Voronoi tessellation has two important characteristics which provide the foundation for the present approximation constructed from local sensor measurements [15]. One is that each region of a Voronoi tessellation is convex. This property enables us to establish a Voronoi tessellation using bisections in a simple manner. The other is that a Voronoi region g_i contains no other sensors when a circle centered at a vertex of the region passes through the neighboring sensors (the empty-circle property). This implies that each Voronoi region g_i is optimal for its sensor s_i. Additional details on the mathematical theory of the Voronoi tessellation can be found in the survey by Aurenhammer [15].

The spatial domain discretized by the Voronoi tessellation and the high-resolution data are taken to be of the same size. All grid points in each region of the Voronoi image take the value of its representative sensor. Since the Voronoi tessellation provides a structured-grid representation of measurements from arbitrarily placed sensors, the present approach enables us to use existing CNNs devised for structured-grid data. Note that the Voronoi tessellation needs to be performed only once if the sensors are stationary.
If the number of sensors changes over time, only the local regions in the direct vicinity of added or removed sensors need to undergo tessellation in an adaptive manner.

The aforementioned Voronoi tessellation enables the use of deep learning through a structured-grid CNN [8, 9, 16]. Our CNN design is composed of convolutional layers as shown in figure 1. The convolutional layer extracts key features of the input data through filtering operations,

q^(l)_{ijm} = ψ( Σ_{k=0}^{K−1} Σ_{p=0}^{H−1} Σ_{c=0}^{H−1} q^(l−1)_{i+p−C, j+c−C, k} h_{pckm} + b_m ), (2)

where C = floor(H/2); q^(l−1)_{ijk} and q^(l)_{ijm} are the input and output data at layer l, respectively; h_{pckm} represents a filter of size H × H × K, and b_m is the bias. In this study, the number of channels K is set to 1. The number of layers l_max, the filter size H, and the number of filters M are set to 9, 7, and 48, respectively, for the present study. The output of each filter operation is passed through an activation function ψ, which is chosen to be the rectified linear unit (ReLU), ψ(z) = max(0, z) [17]. Moreover, the Adam optimizer [18] is utilized with an early stopping criterion [19] for the training process, which undergoes a three-fold cross validation [20]. Errors are assessed using the L2 norm and ensemble evaluations.

For the input to F, we provide the Voronoi tessellation s_V ∈ R^(n_x × n_y) for the sensor readings, together with the mask image for the input sensor placements s_m ∈ R^(n_x × n_y), i.e., q^(0) = {s_V, s_m} ∈ R^(n_x × n_y × 2). The output of the CNN is a high-resolution field q ∈ R^(n_x × n_y × 1) that corresponds to the global field. In summary, the learning process of the present CNN can mathematically be formulated as

w = argmin_w || q − F({s_V, s_m}; w) ||_2, (3)

where w denotes the weights (filters) of the CNN.
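The filtering operation of Eq. (2) can be written out explicitly. The following is a minimal NumPy sketch of a single convolutional layer with zero padding and ReLU activation; it is a hypothetical reference implementation for clarity (a trained model would use an optimized deep learning library), not the authors' code.

```python
import numpy as np

def conv_layer(q, h, b):
    """One convolutional layer as in Eq. (2).

    q : (ny, nx, K)   input feature maps
    h : (H, H, K, M)  filters of size H x H over K channels, M outputs
    b : (M,)          biases
    Returns (ny, nx, M) feature maps after ReLU activation.
    """
    H, _, K, M = h.shape
    C = H // 2  # C = floor(H/2), centers the filter on (i, j)
    ny, nx, _ = q.shape
    # Zero-pad so the output keeps the same spatial size as the input.
    qp = np.pad(q, ((C, C), (C, C), (0, 0)))
    out = np.zeros((ny, nx, M))
    for m in range(M):
        for p in range(H):
            for c in range(H):
                # q^{(l-1)}_{i+p-C, j+c-C, k} * h_{pckm}, summed over p, c, k
                out[:, :, m] += np.einsum(
                    "ijk,k->ij", qp[p:p + ny, c:c + nx, :], h[p, c, :, m])
        out[:, :, m] += b[m]
    return np.maximum(out, 0.0)  # ReLU: psi(z) = max(0, z)
```

With a centered unit impulse as the filter, the layer reduces to ReLU(q), which is a quick sanity check of the index convention.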
We are now able to take measurements and locations by projecting themonto Voronoi tessellation s V and the mask image s m , and reconstruct the global field variable q with deep learning F . We demonstrate the use of the present Voronoi-based CNN for global fluid flow reconstruction. Our examples includelaminar cylinder wake, geophysical data, and three-dimensional wall-bounded turbulence, which contain strongnonlinear dynamics over a wide range of spatio-temporal scales.3
Figure 2: Voronoi-tessellation-aided spatial data recovery of a two-dimensional cylinder wake with n_sensor = 8 and 16. The input Voronoi image, the input mask image holding the sensor locations, and the reconstructed vorticity field are shown. The values underneath the reconstructed vorticity contours indicate the L2 error norms. The dependence of the reconstruction ability on the number of training snapshots n_snapshot is also shown.

We first consider the Voronoi-based fluid flow reconstruction for a two-dimensional unsteady laminar cylinder wake at a diameter-based Reynolds number Re_D = 100. The training data set is prepared with a direct numerical simulation (DNS) [21, 22] which numerically solves the incompressible Navier–Stokes equations. In this study, we consider the flow field data in a region around the cylinder body for training and demonstration, with (N_x, N_y) = (192, 112) grid points. The vorticity field is used for both the input and output attributes of the CNN model in this case. The training data spans approximately 4 vortex shedding periods. We examine the dependence of the reconstruction on the amount of training data. The number of sensors n_sensor is set to 8 and 16 with fixed input sensor locations for both training and testing.

The reconstructed fields are shown in figure 2. The reconstructed vorticity fields are in excellent agreement with the reference vorticity field, also in terms of the L2 error norm ε = ||ω_ref − ω_ML||_2 / ||ω_ref||_2. It can be seen from the reconstructed vorticity field that the vortices and shear layers in the near and far wakes are recovered by the present deep learning technique with great accuracy and detail. The vorticity field for n_sensor = 8 shows some low-level reconstruction error due to the low number of sensors.
When n_sensor is doubled to 16, we observe the reconstruction error reduced by half, with accurate recovery of the global flow field. Furthermore, reasonable data recovery can be achieved with n_sensor = 16 using as few as 50 training snapshots.

Next, let us consider the NOAA sea surface temperature data collected from satellite and ship-based observations. The data is comprised of weekly observations of the sea surface temperature with a spatial resolution of 360 × 180. We use 1040 snapshots spanning from 1981 to 2001 for training, while the test snapshots are taken from 2001 to 2018. For this example, we take the sensors to be placed randomly over the water. The number of sensors for training is set to n_sensor,train = {10, 20, 30, 50, 100} with 5 different arrangements of sensor locations, amounting to 25 cases. For the test data, we also consider unseen cases with 70 and 200 sensors, such that n_sensor,test = {10, 20, 30, 50, 70, 100, 200}. These numbers of sensors for the test data correspond to {0.0154%, 0.0309%, 0.0463%, 0.0772%, 0.108%, 0.154%, 0.309%} of the number of grid points over the field. We emphasize that only a single machine learning model is trained and used for all combinations of sensor numbers and sensor placements.

Figure 3: Comparison of spatial data recovery for NOAA sea surface temperature with n_sensor = 30. The L2 error norms are 0.0617 for the present ML reconstruction, 0.245 for linear interpolation, and 0.252 for cubic interpolation.
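The convex-hull limitation of the classical interpolants compared in figure 3 is easy to check numerically. The toy data below is illustrative (not the NOAA data), and scipy.interpolate.griddata stands in for the linear/cubic interpolation: linear interpolation is undefined (NaN) outside the convex hull of the sensors, while the nearest-sensor Voronoi image assigns a value to every grid point.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.spatial import cKDTree

# Hypothetical toy data: 20 sensors clustered in the middle of a 10 x 10 grid.
rng = np.random.default_rng(0)
pts = rng.uniform(3.0, 7.0, size=(20, 2))     # sensor locations
vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])  # toy "temperature" readings

gx, gy = np.meshgrid(np.arange(10.0), np.arange(10.0))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Linear interpolation is undefined (NaN) outside the sensors' convex hull.
lin = griddata(pts, vals, grid, method="linear")

# The Voronoi (nearest-sensor) image covers the whole domain.
_, nearest = cKDTree(pts).query(grid)
vor = vals[nearest]

outside = np.isnan(lin)  # e.g. the domain corners, far from all sensors
```

The CNN operating on the Voronoi image therefore receives a defined input everywhere, which is one reason it can recover the full field while the interpolants cannot.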
Figure 4: Voronoi-based spatial data recovery of the NOAA sea surface temperature. We show representative reconstructed fields with n_sensor = 100, which corresponds to a number of sensors contained in the training data, and n_sensor = {70, 200}, which correspond to cases not available in the training data set.

Let us demonstrate the global sea surface temperature reconstruction in figure 3. As a test case, we use a low number of sensors, n_sensor = 30. The reconstructed global temperature field by the present model shows great agreement with the reference data. This figure also reports the L2 error norm ε = ||T_ref − T_ML||_2 / ||T_ref||_2, where T_ref and T_ML are respectively the reference and reconstructed temperature fields. We also compare our results to standard linear and cubic interpolation. Since these are simple methods, fine structures cannot be recovered and the L2 errors are larger than that of the present method. These trends are noticeable when comparing the zoomed-in temperature contours. The interpolation schemes are unable to reconstruct the fine-grained features of the temperature fields accurately, whereas the proposed technique performs very well. In addition to enhanced reconstruction, the present method is able to recover the whole field, while classical interpolation methods cannot extrapolate beyond the convex hull covered by the sensors, as evident from figure 3. This observation also speaks to a significant advantage of the present model.

Next, let us assess how the current approach performs when the number of sensors is changed and when the sensors are in motion. We present the results for these cases in figure 4. With n_sensor = 100 being a number of sensors observed during training, the reconstructed sea surface temperature field is in agreement with the reference field for both trained (left) and unseen (middle) sensor placements. The unseen placement cases correspond to instances of sensors having moved.
Despite the input Voronoi tessellation being significantly modified by the displaced sensors, the reconstruction is still successful. This highlights the effective use of the mask image holding the information on sensor locations. What is also noteworthy is that successful reconstructions can be achieved with n_sensor = 70 and 200, which are numbers of sensors unseen during training. This corresponds to situations where sensors may come online or go offline during deployment.

The relationship between the number of input sensors and the L2 error norm is also investigated. The error level of the test data with unseen placements (orange curve) is higher than that with trained placements (blue curve), as shown in figure 4. However, the reconstruction capability is comparable to the cases with n_sensor = 100, even when the number of unseen sensors reaches a larger number of 200. This result suggests that the present approach employing the Voronoi input and the mask image is robust for data sets where the number of sensors and the sensor placements vary considerably. It also demonstrates the advantage of the present idea that a single trained model alone can handle the global field reconstruction for an arbitrary number of sensors and time-varying positions.

The above two problems contain strong periodicity in time, appearing as periodic vortex shedding and seasonal periodicity. To further challenge the present approach, let us consider a chaotic and dynamically rich phenomenon: turbulent channel flow. The flow field data is obtained by a direct numerical simulation of incompressible flow in a channel at a friction Reynolds number of Re_τ = 180. Here, the x, y, and z directions are taken to be the streamwise, wall-normal, and spanwise directions. The size of the computational domain is (L_x, L_y, L_z) = (4πδ, 2δ, 2πδ) with N_x = 256 grid points in the streamwise direction, where δ is the half-width of the channel. Details of the simulation can be found in [23]. For the present study, an x−y section of a subspace is used for the training process, i.e., x, y ∈ [0, πδ] × [0, δ] with (N*_x, N*_y) = (128, 48). The extracted subdomain maintains the same turbulent characteristics as the channel flow over the original domain, due to the symmetry of statistics in the y direction and homogeneity in the x direction [16]. We consider the fluctuating component of the x−y sectional streamwise velocity u′ as the variable of interest. For training, we use n_snapshot = 10000. The numbers of sensors for the training data are chosen to be n_sensor,train = {50, 100, 200} with 5 different cases of sensor placements.

Figure 5: Voronoi tessellation-aided data recovery of turbulent channel flow. Considered are x−y cross sections of the streamwise velocity fluctuation u′ reconstructed with n_sensor = 200 (a trained number of sensors) and n_sensor = {150, 250} (untrained numbers of sensors). The error convergence over n_sensor is also shown.
For the test data, we also consider the use of 150 and 250 sensors with both trained and untrained sensor placements, such that n_sensor,test = {50, 100, 150, 200, 250}. This setting allows us to assess the robustness of our approach for varied numbers of sensor inputs, analogous to the sea surface temperature example.

The performance of the Voronoi-assisted CNN-based spatial data recovery for turbulent channel flow is summarized in figure 5 for n_sensor = {150, 200, 250}. These numbers of sensors amount to 2.44%, 3.26%, and 4.07% with respect to the number of grid points over the field. We observe that fine flow features can be accurately reconstructed from just 200 sensors, indicating a remarkable degree of sparsity in the measurements. Although the error level for unseen placements (i.e., the same number of sensors but at different locations) is higher than that for trained sensor placements, similar trends are obtained. Notably, reasonable reconstruction is achieved for both unseen numbers of sensors and unseen input sensor placements, as shown in the results for n_sensor = {150, 250}. As these results suggest, the present approach is a powerful tool for global reconstruction of complex flow fields from sparse sensor measurements.
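The reason a single trained model can serve every sensor count is that the network never sees the sensors directly: whatever n is, the input is always the same fixed-shape (n_y, n_x, 2) array. A small self-contained sketch (names and toy dimensions are illustrative, not the authors' code):

```python
import numpy as np
from scipy.spatial import cKDTree

def cnn_input(sensor_xy, sensor_values, nx, ny):
    """Stack the Voronoi (nearest-sensor) image and the 0/1 mask image."""
    gx, gy = np.meshgrid(np.arange(nx), np.arange(ny))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    _, nearest = cKDTree(sensor_xy).query(grid)
    s_V = sensor_values[nearest].reshape(ny, nx)
    s_m = np.zeros((ny, nx))
    ij = np.rint(sensor_xy).astype(int)
    s_m[ij[:, 1], ij[:, 0]] = 1.0
    return np.stack([s_V, s_m], axis=-1)

# Whether 150, 200, or 250 sensors report, the model always receives an
# (ny, nx, 2) array, so one trained network serves every sensor count.
rng = np.random.default_rng(1)
inputs = {n: cnn_input(rng.uniform(0, 31, size=(n, 2)),
                       rng.standard_normal(n), nx=32, ny=32)
          for n in (150, 200, 250)}
```

No retraining or architectural change is needed when sensors are added, removed, or moved; only the two input images are rebuilt.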
Figure 6: Robustness of the present reconstruction technique to noisy input for the example of turbulent channel flow. Input Voronoi tessellations (top) and reconstructed x−y sectional flow fields (bottom) are shown for Cases I and II at three noise magnitudes κ. The reference solution is shown in figure 5.

To further assess the practical applicability of the present model, we analyze the effect of input noise on the reconstruction. Here, we consider the influence of two types of perturbation to the training data. Case I: add noise to the local sensor measurements s before performing the Voronoi tessellation; and Case II: add noise to the Voronoi tessellation input s_V. For Cases I and II, the error is assessed as

ε_I = ||q_ref − F(s_V(s + κn), s_m)||_2 / ||q_ref||_2, (4)
ε_II = ||q_ref − F(s_V + κn, s_m)||_2 / ||q_ref||_2, (5)

respectively, where κ is the magnitude of the noise and n is Gaussian-distributed noise.

The reconstructed turbulent flows for n_sensor = 200 with noisy inputs are summarized in figure 6. As shown, the input Voronoi tessellations of Case II exhibit noisier features than those of Case I, since the noise for Case II is added after the preparation of the Voronoi tessellations. The influence of noise for Case I is larger than that for Case II. This is caused by the fact that the present CNN is trained to learn the relationship between the input Voronoi images and the high-resolution flow fields, which implies that large perturbations to the sensor measurements produce greater error in the input images. If robustness is desired, we recommend adding noise to the sensor measurements prior to the preparation of the Voronoi-based input images during the training process. Overall, the present reconstruction technique is found to be robust against noise.
Reconstruction of a global field variable from an arbitrary collection of sensors has been a longstanding challenge in engineering and the sciences. To address this problem, we presented a data-driven global reconstruction technique comprised of Voronoi tessellation and a CNN. The present method relies on inputs of mask images holding the sensor locations and Voronoi images representing the sensor measurements. The use of the Voronoi tessellation translates the input sensor data into a representation on a uniform grid, which in turn enables the application of CNNs to derive deep-learning-based reconstruction models. Three examples of global flow reconstruction from local sensor measurements demonstrated the accuracy and robustness of the current method.

Since CNNs have a large collection of toolsets from the image processing community, the present reconstruction approach is significant from the point of view of merging image processing with sensor data analytics. This perspective now allows engineers and scientists to mine the wealth of local and global measurements using data-driven techniques. Furthermore, the Voronoi tessellation has the beneficial property of being able to discretize the spatial information in an adaptive manner, updating only where the sensor arrangement changes in time. This provides computational savings and an opportunity to develop spatially adaptive techniques.

The present data-driven approach was demonstrated with a set of local sensor measurements. As this method performs spatial field reconstruction at each instant in time, changes in the number of sensors or motion of the sensors are easily accommodated. This flexibility allows for extensions that incorporate other types of measurements, such as under-resolved satellite-based measurements or particle image velocimetry.
The current formulation also opens a path to incorporate intelligent sensor placement [1] to further reduce the reconstruction error and enhance robustness through data redundancy. The power and simplicity of the present approach will support scientific endeavors across a wide range of studies by providing real-time knowledge of the global field.
Acknowledgement
KF and KF thank the support from the Japan Society for the Promotion of Science (18H03758). RM and NR were supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. RM acknowledges support from the ALCF Margaret Butler Fellowship. KT acknowledges the support from the US Army Research Office (W911NF-17-1-0118) and the US Air Force Office of Scientific Research (FA9550-16-1-0650).
References

[1] K. Manohar, B. W. Brunton, J. N. Kutz, and S. L. Brunton. Data-driven sparse sensor placement for reconstruction: Demonstrating the benefits of exploiting known patterns. IEEE Control Systems Magazine, 38(3):63–86, 2018.
[2] K. Akiyama, A. Alberdi, W. Alef, K. Asada, R. Azulay, A.-K. Baczko, D. Ball, M. Baloković, J. Barrett, D. Bintley, et al. First M87 Event Horizon Telescope results. III. Data processing and calibration. The Astrophysical Journal Letters, 875(1):L3, 2019.
[3] M. T. Alonso, P. López-Dekker, and J. J. Mallorquí. A novel strategy for radar imaging based on compressive sensing. IEEE Transactions on Geoscience and Remote Sensing, 48(12):4285–4295, 2010.
[4] K. V. Mishra, A. Kruger, and W. F. Krajewski. Compressed sensing applied to weather radar. Pages 1832–1835. IEEE, 2014.
[5] K. Fukami, K. Fukagata, and K. Taira. Assessment of supervised machine learning methods for fluid flows. Theor. Comput. Fluid Dyn., 34(4):497–519, 2020.
[6] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[7] S. L. Brunton, B. R. Noack, and P. Koumoutsakos. Machine learning for fluid mechanics. Annu. Rev. Fluid Mech., 52:477–508, 2020.
[8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278–2324, 1998.
[9] K. Fukami, K. Fukagata, and K. Taira. Super-resolution reconstruction of turbulent flows with machine learning. J. Fluid Mech., 870:106–120, 2019.
[10] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
[11] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and S. Y. Philip. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2020.
[12] T. Mallick, P. Balaprakash, E. Rask, and J. Macfarlane. Transfer learning with graph neural networks for short-term highway traffic forecasting. arXiv:2004.08038, 2020.
[13] X. Chai, H. Gu, F. Li, H. Duan, X. Hu, and K. Lin. Deep learning for irregularly and regularly missing data reconstruction. Sci. Rep., 10(1):1–18, 2020.
[14] G. Voronoi. New applications of continuous parameters to the theory of quadratic forms. First thesis. On some properties of perfect positive quadratic forms. Journal für die reine und angewandte Mathematik, 133:97–178, 1908.
[15] F. Aurenhammer. Voronoi diagrams—a survey of a fundamental geometric data structure. ACM Computing Surveys, 23(3):345–405, 1991.
[16] K. Fukami, K. Fukagata, and K. Taira. Machine-learning-based spatio-temporal super resolution reconstruction of turbulent flows. J. Fluid Mech., 909:A9, 2021.
[17] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. Proc. Int. Conf. Mach. Learn., pages 807–814, 2010.
[18] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
[19] L. Prechelt. Automatic early stopping using cross validation: Quantifying the criteria. Neural Netw., 11(4):761–767, 1998.
[20] S. L. Brunton and J. N. Kutz. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press, 2019.
[21] K. Taira and T. Colonius. The immersed boundary method: A projection approach. J. Comput. Phys., 225(2):2118–2137, 2007.
[22] T. Colonius and K. Taira. A fast immersed boundary method using a nullspace approach and multi-domain far-field boundary conditions. Comput. Methods Appl. Mech. Eng., 197:2131–2146, 2008.
[23] K. Fukagata, N. Kasagi, and P. Koumoutsakos. A theoretical prediction of friction drag reduction in turbulent flow by superhydrophobic surfaces.