Active Learning over DNN: Automated Engineering Design Optimization for Fluid Dynamics Based on Self-Simulated Dataset
Yang Chen*
Cranbrook Schools
Abstract
Optimizing fluid-dynamic performance is an important engineering task. Traditionally, experts design shapes based on empirical estimations and verify them through expensive experiments. This costly process, in both time and material, can explore only a limited number of shapes and may lead to sub-optimal designs. In this research, a test-proven deep learning architecture is applied to predict performance under various restrictions and to search for better shapes by optimizing the learned prediction function. The major challenge is the vast amount of data a Deep Neural Network (DNN) demands, which is prohibitively expensive to simulate. To remedy this drawback, a Frequentist active learning method is used to explore the regions of the output space that the DNN predicts to be promising. This reduces the number of required data samples from 8000 (Stephan Eismann, 2017) to 625. The final stage, a user interface, makes the model capable of optimizing for a given user input of minimum area and viscosity. Flood fill is used to define a boundary-area function so that the optimal shape does not bypass the minimum area. Stochastic Gradient Langevin Dynamics (SGLD) is employed to ensure the ultimate shape is optimized while circumventing the required area. Jointly, shapes with extremely low drag are found through a practical user interface, with no human domain knowledge and modest computation overhead.
*Independent first author of the presented work. Secondary school diploma candidate at Cranbrook Kingswood Upper School, Bloomfield Hills, Michigan.

Fluid mechanics is a resource-consuming field. The Navier-Stokes equation has no closed-form solutions because of its chaotic nature, and no general procedure exists for solving it numerically.

Traditionally, fluid-dynamic optimization relies on experts estimating designs from empirical accounts and testing candidate shapes with wind tunnel testing (WTT). This approach is both costly and uncertain: prototyping consumes excessive building material, and the final result may not be provably optimal because of the incompleteness of human judgment.

Newer attempts to expedite the design process replace WTT with computational simulations, using, for instance, MATLAB. Still, the numerical solutions of Navier-Stokes are unstable, and the drawbacks of empirical design remain. Automation attempts are represented by Bayesian optimization (Stephan Eismann, 2017), in which a Bayesian optimization model automates the design process. However, that model is sample-heavy, which leads to undesirably exorbitant simulation cost.

To remedy these drawbacks, the presented study explores automatic design algorithms that produce shapes with the desired aerodynamic performance at reduced human input and computation cost. By incorporating active learning ideas into a DNN that maps shape samples, given as latent dimensions, to drag coefficients, the model in this study is able to search for optimal shapes. A Frequentist active learning method facilitates model fitting by focusing only on the promising regions of the output space, reducing the cost of producing excessive data points.
This research differs from and builds on top of previous studies in the following ways:

1. MATLAB is utilized to form a new self-simulated dataset covering various settings and object shapes, creating a template that can generate physical models and thus reduce the cost of actually building desired objects and evaluating their attributes in reality. This approach saves both time and material.

2. A new algorithmic system is proposed to make the engineering optimization process as automatic, and as computationally and sample efficient, as possible. A DNN forms the core of the presented system, with self-determinant conditions deciding where the search should focus. In this way, the best fit of the correlation between the θs and drags can be found, while problems such as local minima and out-of-the-reasonable-range results are avoided.

3. The manner in which machine learning is applied to aerodynamic engineering optimization is new. The latest such work takes a different approach to the problem (Stephan Eismann, 2017). The simulation process differs in that this research targets a template that simulates the physical properties of given fluid environments. We employ a DNN that gives only moderate consideration to bias to find the correlation between θ and drag with a much smaller dataset, while still predicting on a comparably accurate basis. Other works are more engineering-based and do not contribute significantly to automating such industrial design.

Fluid dynamics is the study of moving air or fluid (Dunbar, 2015). It investigates the interactions between a fluid environment and a solid object moving through it (Timmermans, 2015). It is important because of its vital applications in fields such as aerospace engineering and vehicle production (NASA, 2016). By studying the effects of fluid moving past a solid object, engineers can optimize their designs of aerodynamic machines (Shieh, 2009).
One core objective in this process is to minimize the drag of solid objects under a number of realistic restraints (Leeham, 2017). However, the key equation of fluid mechanics, the Navier-Stokes equation itself, is problematic. Not only does it lack closed-form solutions because of its chaotic nature, but general rules for solving it numerically also do not exist (Fefferman, 2017). Moreover, the equation remains one of the six mathematically unproven hypotheses listed by the Clay Mathematics Institute (Jaffe, 2000). These difficulties mean that experts have to design shapes based on their experience and verify the estimated shapes in a simulator or wind tunnel test (WTT) over multiple rounds.

Figure 1: Wind tunnel concept. An illustration of a wind tunnel testing the streamline design and physical data of a ship (NASA).

In a WTT, the test subject is placed in an open chamber. Examining how air propelled by a specialized fan flows around the subject (Smith, 2008) pulls together a complete picture of the aerodynamic forces and other physical conditions on the model (Rossiter, 1964). Nevertheless, such an expensive method still requires expert knowledge, and the results of these empirical estimations may not be accurate even after repeated prototyping. In general, the overall relationship between shapes and drags is costly to search for under realistically given restrictions (Jenkins et al., 2016).

A group of researchers from Stanford University uses Bayesian optimization to accomplish this goal. Compared with the usual human search (J. Snoek & Adams, 2012b), the Bayesian model requires far fewer samples (Stephan Eismann, 2017).
This recent research in engineering optimization utilizes a statistical model to ease the complex enumeration of samples while applying Bayesian optimization to find a desirable aerodynamic design.

Figure 2: Bayesian optimization fitting process, performed as a form of active learning.

Bayesian optimization is also a symbolic representative of the application of active learning in machine learning (Martin Pelikan, 1999). It is by nature a Gaussian method trained with parameter families based on ax + b, with priors a ∼ N(0, σ²) and b ∼ N(0, σ²), as is seen in Figure 2 (Kaul et al., 2017). The regression is narrowed down sequentially with the addition of each new data point, chosen in the specific region that would most advance the fit. This method significantly reduces the required number of data points by actively searching for desirable new input data for further training. However, Eismann's study is largely based on processing simulated two-dimensional images; because a Bayesian model often shows an exponential increase of complexity with additional dimensions, this requires a higher number of training data points to achieve complete training.
The core of our research is a DNN incorporated into a self-determinant loop that checks conditions and produces the optimized shape of the object under any given circumstance, which is set in MATLAB.
[Flow-chart stages of Figure 3:]

- Data simulation (MATLAB): physical conditions; simulation
- Simulation preprocessing: check for any necessary preprocessing; remove non-representative data; visualization with a linear regression model
- Core: multi-layer deep network, with ReLU as the activation function
- Integration: automated parameter modifications; self-determinant training loop; output of a fool-proofed optimized shape

*More branches of the steps are elaborated and illustrated below.
Figure 3: Flow of the system (procedure outline). Self-produced.

Figure 3 shows the flow of the process. Our research focuses on building a comprehensive system that automatically finds an optimized shape under any given fluid environment. The process of aerodynamic optimization is simplified and made more digital.
Resistances of objects of certain shapes in fluids, such as air and water, are studied so that this research is generalizable. The performance of an individual design is defined by its drag coefficient (C_D) (Landau & Lifshitz, 2013), which stands for the resistance an object encounters in a fluid-dynamic environment:

C_D = F_D / (½ ρ V² A)

For reference, F_D stands for the drag force created by the fluid environment in the direction opposing the movement of the object. C_D serves as the drag coefficient, which depends on velocity, viscosity, and other parameters of the reference area. A, ρ, and V are environmental factors reflecting the reference area, the density of the fluid, and the flow velocity relative to the object, respectively.

Incompressible flow in a volume satisfies the Navier-Stokes equation (Constantin & Foias, 1988):

ϱ(∂u/∂t + (u · ∇)u) = −∇p + ν∇²u

In this equation, u represents the velocity of the flow and is a vector field V ↦ R³ for three-dimensional flow problems, or V ↦ R² for two-dimensional flow problems; p represents the pressure in the volume and is a function V ↦ R; ν is the viscosity of the fluid (in this study, we set the viscosity to ν = 0.2, as for liquid water at room temperature, to generalize our study).

The Navier-Stokes equation is the equivalent of Newton's second law for fluids. To interpret the equation, we remark that ∂u/∂t is the change in the flow with respect to time, and (u · ∇)u represents the convective acceleration of the fluid. The right-hand side represents the forces acting on the fluid: ∇²u is the difference between the velocity at a point and the mean velocity of its neighborhood, a term that encourages the vector field to become uniform in the absence of other influences; ∇p is the gradient of the pressure, and drives fluid motion.

MATLAB is used to simulate the dynamics of an object traveling through a fluid environment.
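As a concrete check of the drag-coefficient definition above, C_D can be computed directly from the drag force and the environmental factors. A minimal Python sketch; the numeric inputs are illustrative, not values from this study:

```python
def drag_coefficient(f_d, rho, v, a):
    """C_D = F_D / (0.5 * rho * V**2 * A): drag force F_D normalized by
    dynamic pressure (0.5 * rho * V**2) times the reference area A."""
    return f_d / (0.5 * rho * v ** 2 * a)

# Illustrative values: 10 N of drag in a water-like fluid at 2 m/s over 0.5 m^2.
cd = drag_coefficient(f_d=10.0, rho=1000.0, v=2.0, a=0.5)
```

Doubling F_D at fixed environment doubles C_D, which matches the linear role of F_D in the definition.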
To simulate the object, a geometry representing the object and the relevant boundary conditions is created. We then convert the geometry into a mesh that represents the state of each point in the system. QuickerSim simulates the fluid dynamics according to the Navier-Stokes equation, and end results, including drag, are read off through the provided API. Based on this setup, MATLAB generates a dataset of shapes and their corresponding drags.

Linear regression is a linear combination of the features (Freedman, 2009). It takes the form below:

f(x) = β₀ + Σ_{j=1}^{N} x_j β_j

It can also be written in matrix form as f(x) = aᵀx. The loss of such a regression is measured by the residual sum of squares, written as follows:
RSS(β) = Σ_{i=1}^{n} (y_i − β₀ − Σ_{j=1}^{p} x_{ij} β_j)²

This model is used first to approximate the correlation with the actual drag values. We choose this model because of its relatively low computational complexity, as well as its ability to straightforwardly graph the relationship between actual and predicted drag values, if there is any. Later, by fitting our four-dimensional θ data again with a DNN, we can directly visualize whether there is any veritable improvement in our new model compared to the traditional ones.

Figure 4: Linear regression result compared to real drag values, with object width set to 0.18. Self-produced.

Figure 4 shows how the linear regression result fits the real drag values. The closer the blue dots are to the red curve, f(x) = x, the more accurate the linear regression prediction is. Some pattern is reflected at a primitive level, as shown in Figure 4.

Cross-validation, also known as rotation estimation or out-of-sample testing, is a statistical method (Geisser, 1993). It tests the generalizability of predictions, which serves the purpose of this research well. In our research, the entire dataset of 625 θ-drag pairs serves directly as training data; every four-dimensional θ has a corresponding drag label.

Figure 5: Cross-validation and resampling. Rosaen (2018).

A comparatively expensive model such as a feed-forward DNN is suitable in our case because the required computation is not as complicated as in typical image-processing tasks, for two reasons. As stated in the MATLAB simulation explanation above, we require only 625 samples to reach a fair, generalizable prediction; this number is significantly smaller than that of digital image processing, which usually demands around 10000 two-dimensional samples. Moreover, our input data consist of a list of arrays, which has a lower computational cost than two-dimensional images.
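The least-squares fit described above can be sketched in a few lines of Python. The data here are synthetic stand-ins for the 625 θ-drag pairs; the coefficients and noise level are assumptions for illustration only:

```python
import numpy as np

# Hypothetical stand-in for the 625 theta-drag pairs: each row of X is one
# four-dimensional shape parameter vector theta.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(625, 4))
true_beta = np.array([0.3, -0.1, 0.5, 0.2])          # assumed coefficients
y = 1.0 + X @ true_beta + rng.normal(0.0, 0.01, size=625)  # synthetic drags

# Fit f(x) = beta_0 + sum_j x_j beta_j by minimizing the RSS.
A = np.hstack([np.ones((625, 1)), X])                # prepend intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ beta
rss = float(np.sum((y - pred) ** 2))                 # residual sum of squares
```

Plotting `pred` against `y` reproduces the actual-versus-predicted comparison of Figure 4: points near the line f(x) = x indicate an accurate fit.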
Therefore, given the remarkably lower computational cost inherent in our dataset, a DNN is feasible.

The DNN is composed of fully-connected layers (FCL), which can be expressed in the following form:

drag_j^k = f(Σ_i w_{ij}^k · x_i^{k−1} + b_j^k)

Here k denotes a specific layer of the DNN, while j denotes the specific neuron the variables refer to. w_{ij}^k is the parameter for the connection between the j-th neuron on the k-th layer and the i-th neuron on the (k−1)-th layer, and b_j^k is the bias adjuster. As the shape indicates, the DNN can to some extent be regarded like a CNN in its computation.

The formula extends to the series of equations below:

Z^(1) = f^(1)(W^(0) X + b^(0)),
Z^(2) = f^(2)(W^(1) Z^(1) + b^(1)),
···
Z^(L) = f^(L)(W^(L−1) Z^(L−1) + b^(L−1)),
Y(X) = W^(L) Z^(L) + b^(L).

While training, a suitable activation function is necessary. Rectified Linear Units (ReLU), represented by the expression f(x) = max(0, x) (Vinod Nair, 2010), are suitable in this model. ReLU is a fairly popular activation function in neural networks. As a single-sided function, ReLU has a constant slope when x ≥ 0, so it does not suffer from the sigmoid's predicament of vanishing gradients. With ReLU, only multiplications and comparisons are processed, so we achieve a faster and more accountable convergence of results.

The active learning (AL) used in this research is a Frequentist approach. It differs from Gaussian AL, whose complexity increases exponentially with the number of dimensions. In this study, the applied model maps the input space in a manner that explores only the specific regions indicated by the DNN, so that bias is traded for a lower computation cost (μ = (1/n) Σ x_i in the variance-significant case; μ = 0 in the bias-significant case), thus decreasing the required sample size to 625 data points.
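The stack of fully-connected layers above can be written out directly. The sketch below is a forward pass only; the layer widths are placeholders, not the trained values from this study:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x): cheap, and avoids the sigmoid's vanishing gradient
    return np.maximum(0.0, x)

def forward(theta, weights, biases):
    """Forward pass Z^(k) = f(W^(k-1) Z^(k-1) + b^(k-1)), linear output layer."""
    z = theta
    for W, b in zip(weights[:-1], biases[:-1]):
        z = relu(W @ z + b)
    return weights[-1] @ z + biases[-1]        # Y(X) = W^(L) Z^(L) + b^(L)

# Illustrative layer sizes: 4 latent dimensions in, scalar drag out.
sizes = [4, 32, 32, 32, 32, 1]
rng = np.random.default_rng(0)
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

drag_pred = forward(np.array([0.2, 0.5, 0.1, 0.7]), weights, biases)
```

With randomly initialized weights the output is meaningless; training would adjust `weights` and `biases` to minimize the regression loss on the θ-drag pairs.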
Restrictions for Self-Adjusting Parameters

Generally speaking, two parameters determine the way our model trains itself: the train step and the variable initializations. Under the following conditions, our model automatically decides to retrain itself: when the computed derivative is found to be zero, or when the model is found to be stuck in a local minimum.
After the first training round, if no single final optimized case is found, there are two other possibilities, elaborated below:
Algorithm 1
Decreasing Loss Curve
Input: dimension (m × 4 matrix), drag (m × 1 matrix)
Output: optimized step size

power ← 1; STEP_SIZE ← {}; SCORE ← {}
while step_size within the logical range do
  step_size ← 10^(−power)
  STEP_SIZE ← STEP_SIZE ∪ {step_size}
  power ← power + 1
  L ← {loss per epoch}
  score ← 0
  for i ← 1 to length(L) do
    if L[i] > L[i − 1] then
      score ← score + 1
    end if
  end for
  SCORE ← SCORE ∪ {score}
  temp ← ∞
  for element in SCORE do
    if element ≤ temp then
      temp ← element
      holder ← index of element
    end if
  end for
end while
step_size ← STEP_SIZE[holder]
return step_size

Firstly, there may exist no reasonable loss convergence among the tested cases. If the loss does not decrease in a reasonable distribution, or is not reduced at all, our system starts another round of training. In this round, the two train-step cases are 1e-2 and 1e-6, and so on, until there is a reasonable convergence of loss. Figure 7 shows an example of how our model adjusts on its own in search of optimization. However, in some cases there may be no reliable convergence within the range of logical train-step sizes. At such times, the system automatically refreshes its initializer so that a new initialization avoids zero derivatives.

Secondly, too many reasonable convergences may be produced. On the one hand, to collate the performance accuracies, we juxtapose the test and train accuracies of each plausible training and find which convergence has the best general performance among all. On the other hand, to compare the stabilities, we take the difference between the test and train accuracies of each reasonable convergence. The one that maintains relatively better train and test performance is selected, with accuracy as the priority factor of judgment.
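Algorithm 1 can be read as a grid search over powers of ten, scoring each candidate step size by how often its loss curve rises. A hypothetical Python rendering, where `train_fn` stands in for one training run and is assumed to return the list of per-epoch losses:

```python
def choose_step_size(train_fn, max_power=6):
    """Sketch of Algorithm 1: try step sizes 1e-1 .. 1e-max_power, score each
    by the number of epochs where the loss rose, keep the most stable one."""
    candidates, scores = [], []
    for power in range(1, max_power + 1):
        step = 10.0 ** (-power)
        losses = train_fn(step)
        # count epochs where the loss went up instead of down
        score = sum(1 for i in range(1, len(losses)) if losses[i] > losses[i - 1])
        candidates.append(step)
        scores.append(score)
    return candidates[scores.index(min(scores))]
```

A run whose loss decreases monotonically scores zero and is preferred; a diverging run accumulates a high score and is discarded.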
Undesirable initialization, incomplete model fit, and other unlisted factors may also lead to sub-optimal shapes. To prevent these situations, the mechanism in Figure 8 illustrates the test of whether the found optimized shape actually has the minimized drag in the given environment. Firstly, if the found shape's drag is smaller than that of all existing shapes, it is added to the original dataset and fed back to the training system for retraining; if the shape is also the global minimum in the new model, it is selected. If the found optimized shape's drag is in fact not the smallest compared to the previous drag values, the found value is likewise added to the training dataset, and our system runs another round of retraining until a veritably drag-reduced shape is ensured.
Algorithm 2
Drag-Coefficient Minimization
Input: min_drag, min_drag′, dimension (m × 4 matrix), drag (m × 1 matrix)
Output: min_drag

while min_drag′ ≥ min_drag do
  dimension ← dimension ∪ {dimension′}
  drag ← drag ∪ {drag′}
  train f : dimension → drag
  predict min_drag′
end while            ▷ repeat one more time once false
min_drag ← min_drag′
return min_drag

The final stage of the research centers on the implementation of a user interface that accepts boundary restrictions as user input and produces the optimal shape that fits them.
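Algorithm 2's retraining loop can be sketched as follows. `train_and_search` and `simulate` are hypothetical stand-ins for the DNN search and the MATLAB simulation; this is a simplification of the mechanism above, not its exact implementation:

```python
def minimize_drag(train_and_search, simulate, dataset, max_rounds=20):
    """Retrain until the shape the model proposes really has the smallest
    simulated drag seen so far.

    train_and_search(dataset) fits the model and returns a candidate shape;
    simulate(shape) returns that shape's drag from the simulator;
    dataset is a list of (shape, drag) pairs, and is grown in place."""
    best_drag = min(d for _, d in dataset)
    for _ in range(max_rounds):
        shape = train_and_search(dataset)
        drag = simulate(shape)
        dataset.append((shape, drag))      # feed the new sample back in
        if drag < best_drag:
            best_drag = drag               # genuine improvement: keep going
        else:
            break                          # no further improvement: stop
    return best_drag
```

Every proposed shape is appended to the dataset whether or not it improved, matching the feedback step described above.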
Industrial users may put in restrictions to produce an ideal shape for their specific requirements, including the viscosity of the fluid environment and the minimum body room area.

Viscosity. Viscosity is defined in MATLAB, and user input is provided through the Python interface; refer to the MATLAB simulation in Section 3.1.

Minimum Area.
To ensure that the result produced by the model encloses enough area to contain user-provided shapes, we apply the flood fill algorithm, as visualized in Figure 6 (Torbert, 2016). With these judgments made in the process, a function g : r ↦ {0, 1} maps the vector direction of descent to either 1, meaning the shape is within the boundary, or 0, meaning the shape is not within the boundary.

Figure 6: Flood Fill Visualization (JavaScript, 2019).

This algorithm ensures the parameter surrounds a given inner shape. In practice, each square in the figure stands for a pixel: if the red dot is a user input, flood fill saturates the area around the dot; when there is no open section through which the fill simply penetrates the boundary, the minimum-shape prerequisite is satisfied.

Figure 7: Optimization with Restrictions. Self-produced.
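A minimal sketch of the flood fill check, assuming a grid representation: `1` cells mark the candidate shape's boundary, `0` cells are empty, and the seed point plays the role of the red dot in Figure 6:

```python
from collections import deque

def flood_fill(grid, start):
    """BFS fill of the connected 0-region around `start` (4-connectivity)."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

def encloses(grid, start):
    """g: r -> {0, 1}. Returns 1 if the filled region never reaches the grid
    edge, i.e. the shape's boundary fully encloses the seed area."""
    region = flood_fill(grid, start)
    rows, cols = len(grid), len(grid[0])
    return int(all(0 < r < rows - 1 and 0 < c < cols - 1 for r, c in region))
```

A single gap in the boundary lets the fill leak to the grid edge, so `encloses` returns 0 and the candidate shape is rejected.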
A method called Stochastic Gradient Langevin Dynamics (SGLD) is used to find the optimum while avoiding the restricted areas given by the user (Welling & Yee W., 2011). The original gradient-descent direction is given by r′ = r − η∇_r L. To prevent it from bypassing the restricted area, we add a half-Gaussian noise term:

r′ = r − η∇_r L + n, n ∼ N(0, σ²)

based on a Gaussian/normal distribution (Rasmussen, 2004). The noise is "half" because it is limited to forming vectors pointing within 90° of the descent direction, ensuring the update still descends; this is enforced by requiring arccos(⟨x, y⟩ / (‖x‖ ‖y‖)) ≤ 90°. The resulting path of these vectors travels dynamically, with a general trajectory down the gradient direction.

We use MATLAB to create 3D environmental simulations, project the three-dimensional dynamic system into a two-dimensional graph, and fit a surface.

Figure 8: MATLAB simulation examples. Top:
a color-coded map represents the relative velocity of the environment against the object. Bottom: geometrical meshing of the surroundings against the moving object. Self-produced.

In this case, only 625 sample points are required for each case of our training. A dataset of this size can be simulated with MATLAB on a single local laptop in less than 3 h (2.2 GHz Intel Core i7, 16 GB 2400 MHz DDR4). As Figure 8 shows, the MATLAB template produces such simulations on a random, non-repetitive basis, along with their corresponding drag forces, for further training. B-splines serve well to visualize the boundary conditions of the 2D representation created.
Pre-training selection first takes place to eliminate simulations that are too close to singularity, or too far off scale, to merit meaningful consideration. At this point, more accurate predictions of drag values from the θs are desirable, so we develop a fully-connected DNN (FCDNN) built upon six FCLs, which shows optimized accuracy compared to its peers.

[Figure 9 layout: four latent-dimension inputs → input layer → hidden layers 1-4 → output layer.]
Figure 9: Optimized DNN architecture in the case of 0.18 width. Self-produced.

Table 1 below compares the performance of networks with different numbers of layers:

Table 1: Performance of DNN Architectures for Regression Loss

    Num. of Layers | Train Loss (0.15 Width) | Test Loss (0.15 Width) | Train Loss (0.18 Width) | Test Loss (0.18 Width)
    Seven          | …                       | …                      | …                       | …
    Six            | …                       | …                      | …                       | …
    Five           | …                       | …                      | …                       | …
    Four           | …                       | …                      | …                       | …

With two width cases × four deep-network architectures, the eight cases indicate where the optimized deep learning architecture is achieved. As other cases have also been tested, there is a common trend shown by the two sets of examples in Table 1: the six-layer DNN significantly improves performance in terms of MSE compared to architectures with fewer layers, while an additional seventh layer shows no perceptible reduction of MSE in the 0.18 width case and actually increases MSE in the 0.15 width case. Therefore, the six-layer DNN is superior in its notably low MSE and its effectiveness.

In both width cases and more, our AL-over-DNN model with its project-specific architecture is proven to make more accurate predictions than traditional regression models, as shown in Table 2:

Table 2: Performance of Different Models

                            | Linear Reg. | Our Model
    0.15 Width Case, Train  | …           | …
    0.15 Width Case, Test   | …           | …
    0.18 Width Case, Train  | …           | …
    0.18 Width Case, Test   | …           | …

Table 3 compares the computation costs of our study and a previous work based on Bayesian optimization. We reduce both simulation and training time by extracting dimensional information from 2D objects through the fitted spline and training with the DNN model.

Table 3: Computation Cost Comparison with Related Study

    Method                | CPU                      | Memory | Simulation (hr) | Training (hr)
    Bayesian Optimization | Intel i7-3520M, 2.90 GHz | 16 GB  | 16              | 1
    This Research         | Intel i7-2400M, 2.20 GHz | 16 GB  | …               | …
    Comparison            |                          |        | less            | less
Stacking multiple different machine learning methods provides an insignificant boost to accuracy. Stacking and ensembling function to remove the unshared shortcomings of each method, and this study is not a suitable case for that. As for better accuracy from the intrinsic advantages of other methods themselves, our method shows better performance in most cases through direct comparisons. So, with many factors taken into account, the six-layer FCDNN in this study is concluded to be an accountable model with consistent performance and a high level of accuracy in both training and testing. This architecture improves the loss of our prediction to generally under 0.0005.

Automated Engineering Optimization
The Frequentist AL boosts the automation of our system. When the search for a drag-minimized shape falls into a local minimum, or when the initialization of the training leads to a zero derivative, the prediction result is not at its best. Our system succeeds in avoiding such conditions.

Figure 10: Active learning advancement process. Top:
simulation results of the predicted optimized shape after the first round of training; bottom: the predicted optimized result after the final round of training. Self-produced.

As shown in Figure 10, the drag from the first effort to optimize the shape, Figure 10(a), is apparently not yet optimized, and this is captured by our mechanism. The system therefore automatically enters a second round of training, which reports the result below. After a few rounds of active search, our system automatically arrives at the shape in Figure 10(b), whose drag is then proven to be the actual minimum. This reliability is similarly observed in cases with other widths, which supports the efficacy of this method.
The deep learning architecture in this research is new in its accuracy and stability. It perceptibly reduces the prediction loss and thus increases efficiency. As visualized with the data above, the presented model is able to predict the relationship between the θs and drags accurately and precisely.

The application of AI to fit the correlation between θs and drags is new to this research. We map a matrix of four-dimensional θ arrays to a column vector of one-dimensional drags, instead of fitting the entire process of objects moving through the given fluid environment. A relatively accurate result thus requires only a small data size.

The optimization system filters cases that are not yet fully trained and makes sure that the optimized shape is actually found from the simulated information. This also means that our system can avoid bugs present in similar traditional algorithms.

The following are potential directions for our system to dig deeper into this field of work.
Since we often need to set restrictions on the shapes we optimize for specific tasks, the ultimate shape we get may not be close to the original global minimum of drag of our trained model. Supplementary parts may therefore be used to boost the engineering performance of the shape, and a method for finding such appropriate add-ons could be studied and incorporated into our system.
Other features may also be desired: for example, the ability of an object to maintain its current height, or its stability or agility. With varying purposes of optimization, these conditions can be set in the MATLAB environment and run through and tested the same way as we do with the drags.
In the end, our research successfully demonstrates that a more systematic and automatic aerodynamic engineering optimization is feasible, achieving a regression mean error below 0.0005 for most preset width cases and producing actually drag-reduced shapes with the fitted model through our systematic approach. Compared to previous research, we have the advantages of requiring fewer training samples and less computation cost and time, while improving the automation of engineering design and avoiding training bugs. On top of what we have already done, the detailed additions described in the discussion section can further increase the comprehensiveness of our system; these future features would be achieved in much the same way as we handle the drags, except with different simulation features.

Our research successfully innovates on two things: finding a trend that relates shapes to drag values, and searching for a drag-minimized shape. This provides insight into how drag is influenced by an object's shape using only our machine learning prediction. This process can be more straightforward and practical than other methods that attempt to find such a correlation.
ACKNOWLEDGEMENTS
The author would like to offer his cordial gratitude for the assistance of the research mentor, Shengjia Zhao, on this research. He offered constructive advice to the author whenever challenges and confusions arose.
References
Peter Constantin and Ciprian Foias. Navier-Stokes Equations. University of Chicago Press, 1988.

B. Dunbar. What is aerodynamics? 2015.

Charles L. Fefferman. Existence and smoothness of the Navier-Stokes equation. Clay Mathematics Institute, 2017.

David A. Freedman. Statistical Models: Theory and Practice. p. 26, 2009.

Seymour Geisser. Predictive Inference. 1993.

J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951-2959, 2012b.

Arthur M. Jaffe. The millennium grand challenge in mathematics. Volume 53, pp. 652-660. Clay Mathematics Institute, 2000.

Eloquent JavaScript. Flood fill. January 2019.

Bruce Jenkins, Principal Analyst, and Ora Research. Time for automotive OEMs to transition from wind tunnel testing to simulation-driven design. Ora Research, 2016.

Puneith Kaul, Daniel Golovin, and Greg Kochanski. Hyperparameter tuning in Cloud Machine Learning Engine using Bayesian optimization. Google Cloud Platform, August 2017.

L. D. Landau and E. M. Lifshitz. Fluid Mechanics. Elsevier, October 2013. ISBN 9781483140506.

L. Leeham. Bjorn's corner: Aircraft drag reduction, part 2. 2017. URL https://leehamnews.com/2017/10/27/bjorns-corner-aircraft-drag-reduction-part-2.

Martin Pelikan, David E. Goldberg, and Erick Cantú-Paz. BOA: the Bayesian optimization algorithm. Volume 1, pp. 525-532. GECCO'99: Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation, 1999. ISBN 1-55860-611-4.

NASA. Aerodynamics & aeroacoustics. 2016.

Carl Edward Rasmussen. Gaussian processes in machine learning. Advanced Lectures on Machine Learning, pp. 63-71, 2004.

Karl Rosaen. K-fold cross-validation. 2018. URL http://karlrosaen.com/ml/learning-log/2016-06-20/.

J. E. Rossiter. Wind tunnel experiments on the flow over rectangular cavities at subsonic and transonic speeds. RAE Technical Report No. 64037, 1964.

Jyh-Sherg Shieh. Fundamentals of fluid mechanics. 2009.

Richard Smith. Wind tunnels and CFD. 2008.

Stephan Eismann, Stefan Bartzsch, and Stefano Ermon. Shape optimization in laminar flow with a label-guided variational autoencoder. arXiv preprint arXiv:1712.03599v1, 2017.

M. Timmermans. Wind tunnel testing. SHOL, 2015.

Shane Torbert. Applied Computer Science (2nd ed.), June 2016.

Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. 2010.

Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. 2011.