Julien Straubhaar
University of Neuchâtel
Publications
Featured research published by Julien Straubhaar.
Computers & Geosciences | 2012
Alessandro Comunian; Philippe Renard; Julien Straubhaar
One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from two-dimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained with the impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references according to a number of comparison criteria. The CPU time required to simulate with the s2Dcd method is two to four orders of magnitude smaller than that required by an MPS simulation performed with a 3D training image, while the results obtained are comparable. This computational efficiency and the possibility of using MPS for 3D simulation without the need for a 3D training image facilitate the inclusion of MPS in Monte Carlo, uncertainty-evaluation, and stochastic inverse-problem frameworks.
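The slice-by-slice sequencing behind s2Dcd can be sketched in a few lines (illustrative Python only; the trivial 0/1 slice filler below is a stand-in for an actual 2D MPS simulation, and all names are hypothetical):

```python
import numpy as np

def simulate_slice(sl, rng):
    """Stand-in for a 2D MPS simulation: fill not-yet-simulated cells
    (NaN) with facies 0/1, keeping cells already set by earlier slices
    as conditioning data."""
    unset = np.isnan(sl)
    sl[unset] = rng.integers(0, 2, size=int(unset.sum()))
    return sl

def s2dcd_sketch(shape, rng):
    """Toy illustration of the s2Dcd sequencing: a 3D grid is filled by
    simulating alternating 2D slices, each constrained by the values
    produced at the previous simulation steps."""
    grid = np.full(shape, np.nan)
    for k in range(0, shape[2], 2):     # every second xy-slice first
        simulate_slice(grid[:, :, k], rng)
    for j in range(shape[1]):           # then xz-slices, conditioned on them
        simulate_slice(grid[:, j, :], rng)
    return grid

rng = np.random.default_rng(0)
cube = s2dcd_sketch((8, 8, 8), rng)
```

Because numpy basic slicing returns views, each slice simulation writes directly into the 3D grid, so later slices automatically see earlier results as conditioning data.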
Computers & Geosciences | 2013
Eef Meerschman; Guillaume Pirot; Gregoire Mariethoz; Julien Straubhaar; Marc Van Meirvenne; Philippe Renard
The Direct Sampling (DS) algorithm is a recently developed multiple-point statistical simulation technique. It directly scans the training image (TI) for a given data event instead of storing the training probability values in a catalogue prior to simulation. By using distances between the given data events and the TI patterns, DS can simulate categorical, continuous and multivariate problems. Benefiting from the wide spectrum of potential applications of DS requires an understanding of the user-defined input parameters. We therefore list the most important parameters and assess their impact on the generated simulations. Real-case TIs are used, including an image of ice-wedge polygons, a marble slice and snow crystals, all three as continuous and categorical images. We also use a 3D categorical TI representing a block of concrete to demonstrate the capacity of DS to generate 3D simulations. First, a quantitative sensitivity analysis is conducted on the three parameters balancing simulation quality and CPU time: the acceptance threshold t, the fraction of the TI to scan f and the number of neighbors n. In addition to a visual inspection of the generated simulations, the performance is analyzed in terms of speed of calculation and quality of pattern reproduction. Whereas decreasing the CPU time through t and n comes at the expense of simulation quality, reducing the scanned fraction of the TI allows substantial computational gains without degrading the quality, as long as the TI contains enough reproducible patterns. We also illustrate the quality improvement resulting from post-processing, and the potential of DS to simulate bivariate problems and to honor conditioning data. We provide a comprehensive guide to performing multiple-point statistical simulations with the DS algorithm, with recommendations on how to set the input parameters appropriately.
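A minimal 1D caricature of the DS loop shows where the three parameters t, f and n enter (hedged sketch, not the reference implementation; the absolute-difference distance is just one possible choice):

```python
import numpy as np

def direct_sampling_1d(ti, length, n=3, t=0.05, f=0.5, seed=0):
    """Toy 1D Direct Sampling: grow a series value by value, each time
    scanning a random fraction f of the TI for the pattern closest to
    the last n simulated values, and accepting the first pattern whose
    distance falls below the threshold t."""
    rng = np.random.default_rng(seed)
    ti = np.asarray(ti, float)
    sim = [ti[rng.integers(len(ti))]]       # random seed value
    max_scan = max(1, int(f * len(ti)))     # parameter f: TI fraction
    while len(sim) < length:
        ev = np.array(sim[-n:])             # parameter n: neighbors
        best_val, best_dist = None, np.inf
        for start in rng.permutation(len(ti) - ev.size)[:max_scan]:
            d = np.mean(np.abs(ti[start:start + ev.size] - ev))
            if d < best_dist:
                best_val, best_dist = ti[start + ev.size], d
            if d <= t:                      # parameter t: acceptance
                break
        sim.append(best_val)
    return np.array(sim)

ti = np.sin(np.linspace(0.0, 20.0, 200))
sim = direct_sampling_1d(ti, 50)
```

Every simulated value is drawn directly from the TI, which is what makes DS equally applicable to categorical and continuous variables.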
Mathematical Geosciences | 2013
Julien Straubhaar; Alexandre Walgenwitz; Philippe Renard
Multiple-point statistics are widely used for the simulation of categorical variables because the method allows a conceptual model to be integrated via a training image and complex heterogeneous fields to be simulated. The multiple-point statistics inferred from the training image can be stored in several ways. The tree structure used in classical implementations has the advantage of being efficient in terms of CPU time, but is very demanding in RAM, which limits the size of the template and therefore makes proper reproduction of complex structures difficult. Another technique consists in storing the multiple-point statistics in lists. This alternative requires much less memory and allows for a straightforward parallel algorithm. Nevertheless, the list structure does not benefit from the shortcuts given by the branches of the tree for retrieving the multiple-point statistics; hence, a serial algorithm based on the list structure is generally slower than a tree-based algorithm. In this paper, a new approach using both list and tree structures is proposed. The idea is to index the lists by trees of reduced size: the leaves of the tree correspond to distinct sublists that constitute a partition of the entire list. The size of the indexing tree can be controlled, so the resulting algorithm keeps memory requirements low while efficiency in terms of CPU time is significantly improved. Moreover, this new method benefits from the parallelization of the list approach.
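The list-indexed-by-tree idea can be illustrated with a dictionary playing the role of a depth-limited indexing tree (illustrative sketch; names and the prefix-key encoding are assumptions):

```python
# Index a flat list of (pattern, count) pairs by the first `depth`
# pattern values: each key is a path in the reduced indexing tree, and
# each dictionary value is the sublist stored at the corresponding leaf.

def build_index(patterns, depth):
    """Partition the pattern list into sublists keyed by their
    first `depth` entries."""
    index = {}
    for pat, count in patterns:
        key = pat[:depth]                    # path in the indexing tree
        index.setdefault(key, []).append((pat, count))
    return index

def retrieve(index, query, depth):
    """Scan only the sublist whose tree path matches the query prefix,
    instead of the whole list."""
    return [c for p, c in index.get(query[:depth], []) if p == query]

patterns = [((0, 1, 0), 5), ((0, 1, 1), 2), ((1, 0, 0), 7), ((0, 0, 1), 3)]
idx = build_index(patterns, depth=2)
```

A deeper indexing tree means smaller sublists (faster retrieval) but more tree nodes (more memory), which is exactly the trade-off the controllable tree size addresses.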
Mathematical Geosciences | 2014
Julien Straubhaar; Duccio Malinverni
Multiple-point statistics (MPS) allows simulations that reproduce the structures of a conceptual model given by a training image (TI) to be generated within a stochastic framework. In classical implementations, fixed search templates are used to retrieve the patterns from the TI. A multiple-grid approach allows the large-scale structures present in the TI to be captured while keeping the search template small. The technique consists in decomposing the simulation grid into several grid levels: each grid level is composed of every second node of the grid level one rank finer. Each grid level is then simulated in turn, from the coarsest level to the finest (the simulation grid itself), using the correspondingly rescaled search template. For a conditional simulation, a basic method (as in snesim) to honor the hard data consists in assigning the data to the closest nodes of the current grid level before simulating it. This paper presents in detail another method (implemented in impala), which consists in assigning the hard data to the closest nodes of the simulation grid (fine level) and then spreading them up to the coarse grids using simulations based on the MPS inferred from the TI. We study the effect of conditioning and show that the first method leads to systematic biases depending on the location of the conditioning data relative to the grid levels, whereas the second method deals properly with conditional simulation in a multiple-grid approach.
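The multiple-grid decomposition itself is easy to sketch in 1D (illustrative; a real implementation works in 2D or 3D and rescales the search template at each level):

```python
import numpy as np

def grid_levels(n, n_levels):
    """1D multiple-grid decomposition: level g (listed coarse to fine)
    keeps every 2**g-th node of the simulation grid, so coarse levels
    capture large-scale structures with a small rescaled template.
    Note the coarse nodes need not coincide with hard-data locations,
    which is the source of the bias discussed above when data are
    simply relocated to the current level."""
    return [np.arange(n) % (2 ** g) == 0 for g in range(n_levels - 1, -1, -1)]

levels = grid_levels(9, 3)
```

Each coarse level is nested inside the next finer one, so values simulated early are reused, not overwritten, as the resolution doubles.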
Environmental Modelling and Software | 2015
Gregoire Mariethoz; Julien Straubhaar; Philippe Renard; Tatiana Chugunova; Pierre Biver
In recent years, the use of training images to represent spatial variability has emerged as a viable concept. Among the possible algorithms dealing with training images, those using distances between patterns have been successful in applications to subsurface modeling and earth-surface observation. However, one limitation of these algorithms is that they do not provide precise control over the local proportion of each category in the output simulations. We present a distance perturbation strategy that addresses this issue. During the simulation, the distance to a candidate value is penalized if it does not result in proportions that tend toward a target given by the user. The method is illustrated with applications to remote sensing and pore-scale modeling. These examples show that the approach offers increased user control over the simulation by making it easy to impose trends or proportions that differ from the proportions in the training image. Highlights: a method to control local proportions in training-image-based geostatistical simulations; non-stationary features can be imposed with a stationary training image; global as well as local proportions can be accurately controlled.
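One way to picture the distance perturbation is as a penalty added to a candidate's pattern distance when accepting it would pull the simulated proportions away from the target (hedged sketch; the penalty form and weight below are assumptions, not the paper's exact formulation):

```python
# Penalize a candidate value's pattern distance by the proportion
# mismatch that accepting it would create. All names are illustrative.

def perturbed_distance(pattern_dist, candidate, counts, n_done, target, w=1.0):
    """Return the perturbed distance: the original pattern distance plus
    w times the total deviation of the (updated) category proportions
    from the user-given target proportions."""
    new_counts = dict(counts)
    new_counts[candidate] = new_counts.get(candidate, 0) + 1
    total = n_done + 1
    mismatch = sum(abs(new_counts.get(k, 0) / total - p)
                   for k, p in target.items())
    return pattern_dist + w * mismatch

# Category 0 is over-represented (8 of 10), so a candidate of category 1
# receives a smaller perturbed distance than an equally good candidate 0.
target = {0: 0.5, 1: 0.5}
counts = {0: 8, 1: 2}
d0 = perturbed_distance(0.1, 0, counts, 10, target)
d1 = perturbed_distance(0.1, 1, counts, 10, target)
```

With a spatially varying `target`, the same mechanism imposes local (non-stationary) proportions even when the training image is stationary.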
Water Resources Research | 2015
Guillaume Pirot; Julien Straubhaar; Philippe Renard
A new method is proposed to produce three-dimensional facies models of braided-river aquifers based on analog data. The algorithm consists of two steps. The first step involves building the main geological units. The production of the principal inner structures of the aquifer is achieved by stacking Multiple-Point-Statistics simulations of successive topographies, thus mimicking the major successive flooding events responsible for the erosion and deposition of sediments. The second step of the algorithm consists of generating fine scale heterogeneity within the main geological units. These smaller-scale structures are generated by mimicking the trough-filling process occurring in braided rivers; the imitation of the physical processes relies on the local topography and on a local approximation of the flow. This produces realistic cross-stratified sediments, comparable to what can be observed in outcrops. The three main input parameters of the algorithm offer control over the proportions, the continuity and the dimensions of the deposits. Calibration of these parameters does not require invasive field measurements and can rely partly on analog data.
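The first step, stacking successive topographies with erosion, can be caricatured in 1D (toy sketch only; elevations are given directly here rather than produced by MPS simulation of topographies):

```python
import numpy as np

def stack_topographies(surfaces):
    """Caricature of the unit-building step: each successive topography
    (one flooding event) erodes the older surfaces, i.e. truncates them
    to lie at or below the younger one; the volume between consecutive
    eroded surfaces then defines one geological unit."""
    eroded = [np.asarray(surfaces[0], float)]
    for s in surfaces[1:]:
        s = np.asarray(s, float)
        eroded = [np.minimum(old, s) for old in eroded]  # erosion
        eroded.append(s)                                 # deposition
    return eroded

# A flat older surface cut by a younger channel-like surface.
units = stack_topographies([[3, 3, 3], [2, 4, 2]])
```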
Water Resources Research | 2017
C. Jäggli; Julien Straubhaar; Philippe Renard
Solving inverse problems in a complex, geologically realistic, and discrete model space from a sparse set of observations is a very challenging task. Extensive exploration by Markov chain Monte Carlo (McMC) methods often requires considerable computational effort. Most optimization methods, on the other hand, are limited to linear (continuous) model spaces and the minimization of an objective function, which often proves insufficient. To overcome these problems, we propose a new ensemble-based exploration scheme for geostatistical prior models generated by a multiple-point statistics (MPS) tool. The principle of our method is to expand an existing set of models by using posterior facies information for conditioning new MPS realizations. The algorithm is independent of the physical parametrization. It is tested on a simple synthetic inverse problem. When compared to two existing McMC methods (iterative spatial resampling (ISR) and interrupted Markov chain Monte Carlo (IMcMC)), the required number of forward model runs was divided by a factor of 8–12.
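The expansion principle can be sketched as a weighted resampling loop (illustrative; `mutate` stands in for generating a new MPS realization conditioned on facies from the selected member, and the exponential misfit weighting is an assumption):

```python
import numpy as np

def expand_ensemble(models, misfits, n_new, mutate, rng):
    """Sketch of the ensemble expansion loop: members with low data
    misfit are preferentially chosen to condition new realizations."""
    w = np.exp(-np.asarray(misfits, float))   # pseudo-posterior weights
    w /= w.sum()
    idx = rng.choice(len(models), size=n_new, p=w)
    return models + [mutate(models[i], rng) for i in idx]

# Two members, one fitting the data much better; three new members are
# conditioned (here `mutate` trivially copies the selected parent).
models = [np.zeros(4), np.ones(4)]
rng = np.random.default_rng(1)
out = expand_ensemble(models, [10.0, 0.0], 3, lambda m, r: m.copy(), rng)
```

Because only misfits and a conditioning mechanism are needed, the loop is indeed independent of the physical parametrization, as the abstract notes.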
International Journal of Computer Mathematics | 2007
Julien Straubhaar
This paper is devoted to the study of some preconditioners for the conjugate gradient algorithm used to solve large sparse symmetric positive definite linear systems. The construction of a preconditioner based on the Gram–Schmidt orthogonalization process and the least squares method is presented. Some results on the condition number of the preconditioned system are provided. Finally, numerical comparisons are given for different preconditioners.
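For context, a generic preconditioned conjugate gradient iteration looks as follows (standard textbook PCG; the Jacobi preconditioner used in the example is a simple stand-in for the Gram–Schmidt/least-squares preconditioners studied in the paper):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Standard preconditioned conjugate gradient for an SPD matrix A.
    M_inv(r) applies the inverse of the preconditioner to a residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, lambda r: r / np.diag(A))
```

A better preconditioner lowers the condition number of the preconditioned system, which is exactly the quantity the paper's theoretical results bound.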
Environmental Modelling and Software | 2016
Fabio Oriani; Andrea Borghi; Julien Straubhaar; Gregoire Mariethoz; Philippe Renard
The direct sampling (DS) multiple-point statistical technique is proposed as a non-parametric missing-data simulator for hydrological flow-rate time series. The algorithm uses the patterns contained in a training data set to reproduce the complexity of the missing data. The proposed setup is tested on the reconstruction of a flow-rate time series under several missing-data scenarios, as well as in a comparative test against an ARMAX-type time-series model. The results show that DS generates more realistic simulations than ARMAX, better recovering the statistical content of the missing data. The predictive power of both techniques increases markedly when a correlated flow-rate time series is used, but DS can also use incomplete auxiliary time series with comparable prediction power. This makes the technique a handy simulation tool for practitioners dealing with incomplete data sets. Highlights: a resampling technique is applied to the simulation of missing flow-rate data; the proposed technique generates realistic temporal data patterns; the statistical content is generally recovered entirely, even in large gaps; the setup can use an auxiliary time series to condition the simulation; an incomplete auxiliary time series can be used, with increased prediction power.
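The gap-filling principle can be sketched as a nearest-pattern lookup (simplified; the published setup also handles auxiliary variables and randomized TI scanning):

```python
import numpy as np

def fill_gaps(series, ti, n=2, seed=0):
    """Sketch of pattern-based gap filling: each missing value (NaN) is
    replaced by the training value that follows the training pattern
    closest to the n values preceding the gap position. Illustrative
    only; names and the exhaustive scan are simplifying assumptions."""
    rng = np.random.default_rng(seed)
    out = series.astype(float).copy()
    for i in np.where(np.isnan(out))[0]:
        ev = out[max(0, i - n):i]              # conditioning data event
        ev = ev[~np.isnan(ev)]
        if ev.size == 0:                       # no context: random draw
            out[i] = ti[rng.integers(len(ti))]
            continue
        k = ev.size
        dists = [np.mean(np.abs(ti[j:j + k] - ev))
                 for j in range(len(ti) - k)]
        out[i] = ti[int(np.argmin(dists)) + k]  # value after best match
    return out

# Periodic training series; the gap after (1, 2) should be filled by 3.
ti = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
out = fill_gaps(np.array([1.0, 2.0, np.nan, 1.0]), ti)
```

Gaps are filled left to right, so earlier reconstructed values serve as conditioning data for later ones, which is what lets the method bridge large gaps.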
Parallel Computing | 2008
Julien Straubhaar
This paper is devoted to the study of some preconditioned conjugate gradient algorithms on parallel computers. The preconditioners considered (presented in [J. Straubhaar, Preconditioners for the conjugate gradient algorithm using Gram-Schmidt and least squares methods, Int. J. Comput. Math. 84 (1) (2007) 89-108]) are based on incomplete Gram-Schmidt orthogonalization and least squares methods. The construction of the preconditioner and the solution of the system are treated separately. Numerical tests are performed and speed-up curves are presented in order to evaluate the performance of the algorithms.