Featured Research

Computational Engineering Finance And Science

Bayesian Surface Warping Approach For Rectifying Geological Boundaries Using Displacement Likelihood And Evidence From Geochemical Assays

This paper presents a Bayesian framework for manipulating mesh surfaces with the aim of improving the positional integrity of the geological boundaries that they seek to represent. The assumption is that these surfaces, created initially using sparse data, capture the global trend and provide a reasonable approximation of the stratigraphic, mineralisation and other types of boundaries for mining exploration, but they are locally inaccurate at scales typically required for grade estimation. The proposed methodology makes local spatial corrections automatically to maximise the agreement between the modelled surfaces and observed samples. Where possible, vertices on a mesh surface are moved to provide a clear delineation, for instance, between ore and waste material across the boundary based on spatial and compositional analysis, using assay measurements collected from densely spaced, geo-registered blast holes. The maximum a posteriori (MAP) solution ultimately considers the likelihood of the chemistry observations in a given domain. Furthermore, it is guided by an a priori spatial structure which embeds geological domain knowledge and determines the likelihood of a displacement estimate. The results demonstrate that increasing surface fidelity can significantly improve grade estimation performance based on large-scale model validation.
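
As a rough illustration of the MAP formulation described above, the following snippet scores candidate vertical displacements for a single vertex by combining an assay-agreement likelihood with a Gaussian displacement prior. It is a minimal sketch, not the authors' implementation; the function name, the inputs (assay depths, ore/waste labels, the current boundary depth) and all parameter values are hypothetical.

    import numpy as np

    def map_displacement(sample_z, is_ore, z0, candidates=np.linspace(-5, 5, 201),
                         sigma_prior=2.0, p_correct=0.9):
        # sample_z: depths of nearby blast-hole assays; is_ore: labels derived
        # from grade thresholds; z0: current depth of the boundary vertex.
        log_post = []
        for d in candidates:
            boundary = z0 + d
            # Likelihood: samples above the shifted boundary should be ore and
            # samples below should be waste, each with probability p_correct.
            predicted_ore = sample_z < boundary           # assumes ore sits above the contact
            agree = predicted_ore == is_ore
            log_lik = np.sum(np.where(agree, np.log(p_correct), np.log(1 - p_correct)))
            # Prior: small displacements are more plausible than large ones.
            log_prior = -0.5 * (d / sigma_prior) ** 2
            log_post.append(log_lik + log_prior)
        return candidates[int(np.argmax(log_post))]

    # Hypothetical assays straddling a contact near 20 m, with the boundary modelled at 18 m:
    z = np.array([17.0, 18.5, 19.0, 21.0, 22.5])
    ore = np.array([True, True, True, False, False])
    print(map_displacement(z, ore, z0=18.0))              # suggests shifting the vertex downward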

Read more
Computational Engineering Finance And Science

Bayesian Verification of Chemical Reaction Networks

We present a data-driven verification approach that determines whether a given chemical reaction network (CRN) satisfies a given property, expressed as a formula in a modal logic. Our approach consists of three phases, integrating formal verification over models with learning from data. First, we consider a parametric set of possible models based on a known stoichiometry and classify them against the property of interest. Second, we utilise Bayesian inference to update a probability distribution of the parameters within a parametric model with data gathered from the underlying CRN. In the third and final stage, we combine the results of both steps to compute the probability that the underlying CRN satisfies the given property. We apply the new approach to a case study and compare it to Bayesian statistical model checking.
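
The three phases can be pictured on a deliberately simple, single-parameter example: a degradation reaction X -> 0 with unknown rate k and the property "X(T) < threshold". This is a toy sketch, not the paper's case study; the observation times, measurements and noise level are assumptions.

    import numpy as np

    x0, T, threshold = 100.0, 5.0, 10.0
    ks = np.linspace(0.01, 2.0, 500)                  # parametric set of models

    # Phase 1: classify each parameter against the property "X(T) < threshold".
    satisfies = x0 * np.exp(-ks * T) < threshold

    # Phase 2: Bayesian update of the rate from noisy observations of X(t).
    t_obs = np.array([1.0, 2.0, 3.0])
    x_obs = np.array([55.0, 32.0, 18.0])              # assumed measurements
    sigma = 3.0
    log_lik = np.array([-0.5 * np.sum((x_obs - x0 * np.exp(-k * t_obs)) ** 2) / sigma ** 2
                        for k in ks])
    posterior = np.exp(log_lik - log_lik.max())
    posterior /= posterior.sum()                      # flat prior over the grid

    # Phase 3: probability that the underlying CRN satisfies the property.
    print(f"P(property) ~ {posterior[satisfies].sum():.3f}")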

Read more
Computational Engineering Finance And Science

Bayesian graph neural networks for strain-based crack localization

A common shortcoming of vibration-based damage localization techniques is that localized damage, i.e. small cracks, has a limited influence on the spectral characteristics of a structure. In contrast, even the smallest of defects, under particular loading conditions, cause localized strain concentrations with a predictable spatial configuration. However, the effect of a small defect on strain decays quickly with distance from the defect, making strain-based localization rather challenging. In this work, an attempt is made to approximate, in a fully data-driven manner, the posterior distribution of a crack location, given arbitrary dynamic strain measurements at arbitrary discrete locations on a structure. The proposed technique leverages Graph Neural Networks (GNNs) and recent developments in scalable learning for Bayesian neural networks. The technique is demonstrated on the problem of inferring the position of an unknown crack via patterns of dynamic strain field measurements at discrete locations. The dataset consists of simulations of a hollow tube under random time-dependent excitations with randomly sampled crack geometry and orientation.
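
One way to realise such a data-driven posterior, shown below as a sketch only, is to run message passing over a sensor graph and use Monte Carlo dropout as a stand-in for the scalable Bayesian inference scheme referenced in the abstract. The architecture, sensor layout, and all dimensions are hypothetical.

    import torch
    import torch.nn as nn

    class StrainGNN(nn.Module):
        def __init__(self, in_dim=1, hidden=32, p_drop=0.2):
            super().__init__()
            self.lin1 = nn.Linear(in_dim, hidden)
            self.lin2 = nn.Linear(hidden, hidden)
            self.drop = nn.Dropout(p_drop)
            self.head = nn.Linear(hidden, 2)          # predicted (x, y) crack location

        def forward(self, x, adj):
            # adj: row-normalised sensor adjacency; x: per-sensor strain features.
            h = torch.relu(self.lin1(adj @ x))
            h = self.drop(torch.relu(self.lin2(adj @ h)))
            return self.head(h.mean(dim=0))           # pool over sensors

    # Hypothetical data: 8 strain sensors arranged along the structure.
    n = 8
    adj = torch.eye(n) + torch.diag(torch.ones(n - 1), 1) + torch.diag(torch.ones(n - 1), -1)
    adj = adj / adj.sum(dim=1, keepdim=True)
    strain = torch.randn(n, 1)

    model = StrainGNN()
    model.train()                                     # keep dropout active to sample the posterior
    samples = torch.stack([model(strain, adj) for _ in range(100)])
    print("posterior mean:", samples.mean(0), "posterior std:", samples.std(0))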

Read more
Computational Engineering Finance And Science

Bayesian neural networks for weak solution of PDEs with uncertainty quantification

Solving partial differential equations (PDEs) is the canonical approach for understanding the behavior of physical systems. However, the large-scale solution of PDEs using state-of-the-art discretization techniques remains an expensive proposition. In this work, a new physics-constrained neural network (NN) approach is proposed to solve PDEs without labels, with a view to enabling high-throughput solutions in support of design and decision-making. Distinct from existing physics-informed NN approaches, where the strong or weak form of the PDEs is used to construct the loss function, we write the loss function of NNs based on the discretized residual of PDEs through an efficient, convolutional operator-based, and vectorized implementation. We explore an encoder-decoder NN structure for both deterministic and probabilistic models, with Bayesian NNs (BNNs) for the latter, which allow us to quantify both epistemic uncertainty from model parameters and aleatoric uncertainty from noise in the data. For BNNs, the discretized residual is used to construct the likelihood function. In our approach, both deterministic and probabilistic convolutional layers are used to learn the applied boundary conditions (BCs) and to detect the problem domain. As both Dirichlet and Neumann BCs are specified as inputs to the NNs, a single NN can solve similar physics with different BCs and on a number of problem domains. The trained surrogate PDE solvers can also make interpolating and, to a certain extent, extrapolating predictions for BCs that they were not exposed to during training. Such surrogate models are of particular importance for problems where similar types of PDEs need to be solved repeatedly with slight variations. We demonstrate the capability and performance of the proposed framework by applying it to steady-state diffusion, linear elasticity, and nonlinear elasticity.
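
The residual-based loss can be pictured with a small convolutional example for steady-state diffusion on a uniform grid, shown below for the deterministic case (in the BNN variant the same residual would enter the likelihood). The stencil, grid size, and soft Dirichlet penalty are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn.functional as F

    # 5-point Laplacian stencil (grid spacing h = 1), applied as a convolution.
    laplace = torch.tensor([[[[0., 1., 0.],
                              [1., -4., 1.],
                              [0., 1., 0.]]]])

    def residual_loss(u_pred, source, bc_value):
        # u_pred: (1, 1, H, W) field predicted by an encoder-decoder network.
        # Residual of -lap(u) = f on interior points; no labelled solutions needed.
        interior_residual = F.conv2d(u_pred, laplace) + source[:, :, 1:-1, 1:-1]
        loss_pde = (interior_residual ** 2).mean()
        # Dirichlet BC enforced softly on the outer ring of the grid.
        boundary = torch.cat([u_pred[:, :, 0, :], u_pred[:, :, -1, :],
                              u_pred[:, :, :, 0], u_pred[:, :, :, -1]], dim=-1)
        loss_bc = ((boundary - bc_value) ** 2).mean()
        return loss_pde + loss_bc

    u = torch.zeros(1, 1, 32, 32)                     # stand-in for a network output
    f = torch.ones(1, 1, 32, 32)
    print(residual_loss(u, f, bc_value=0.0))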

Read more
Computational Engineering Finance And Science

Beating the market with a bad predictive model

It is a common misconception that in order to make consistent profits as a trader, one needs to possess some extra information leading to an asset value estimate more accurate than that reflected by the current market price. While the idea makes intuitive sense and is also well substantiated by the widely popular Kelly criterion, we prove that it is generally possible to make systematic profits with a completely inferior price-predicting model. The key idea is to alter the training objective of the predictive models to explicitly decorrelate them from the market, enabling them to exploit inconspicuous biases in the market maker's pricing and to profit from the inherent advantage of the market taker. We introduce the problem setting across the diverse domains of stock trading and sports betting to provide insights into the common underlying properties of profitable predictive models, their connections to standard portfolio optimization strategies, and the commonly overlooked advantage of the market taker. Consequently, we prove the desirability of the decorrelation objective across common market distributions, translate the concept into a practical machine learning setting, and demonstrate its viability with real-world market data.
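
The decorrelation idea can be written as a one-line modification of a standard training loss, as in the sketch below. The penalty form, its weight, and the variable names (with `market` standing for the market-implied estimate, e.g. a bookmaker probability) are assumptions for illustration, not the paper's exact objective.

    import torch

    def decorrelated_loss(pred, target, market, lam=1.0):
        base = torch.mean((pred - target) ** 2)       # ordinary predictive loss
        p, m = pred - pred.mean(), market - market.mean()
        corr = (p * m).mean() / torch.sqrt((p ** 2).mean() * (m ** 2).mean() + 1e-8)
        return base + lam * corr ** 2                 # penalise correlation with the market

    pred = torch.rand(64, requires_grad=True)
    target = torch.rand(64)
    market = torch.rand(64)
    print(decorrelated_loss(pred, target, market))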

Read more
Computational Engineering Finance And Science

Bending behavior of additively manufactured lattice structures: numerical characterization and experimental validation

Selective Laser Melting (SLM) technology has undergone significant development in recent years, providing unique flexibility for the fabrication of complex metamaterials such as octet-truss lattices. However, the microstructure of the final parts can exhibit significant variations due to the high complexity of the manufacturing process. Consequently, the mechanical behavior of these lattices is strongly dependent on the process-induced defects, raising the importance of incorporating as-manufactured geometries into the computational structural analysis. This, in turn, challenges traditional mesh-conforming methods, making the computational costs prohibitively large. In the present work, an immersed image-to-analysis framework is applied to efficiently evaluate the bending behavior of additively manufactured (AM) lattices. To this end, we employ the Finite Cell Method (FCM) to perform a three-dimensional numerical analysis of the three-point bending test of a lattice structure and compare the as-designed to as-manufactured effective properties. Furthermore, we undertake a comprehensive study on the applicability of dimensionally reduced beam models to the prediction of the bending behavior of lattice beams and validate classical and strain gradient beam theories applied in combination with the FCM. The numerical findings suggest that the SLM octet-truss lattices exhibit size effects, thus requiring a flexible framework to incorporate high-order continuum theories.
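
The core mechanism of an immersed method such as the FCM, integrating over a simple background mesh while an indicator function switches the material off outside the scanned geometry, can be sketched in one dimension. The example below (a bar on [0, 0.6] embedded in a fictitious mesh on [0, 1], with assumed material and load values) is only a toy illustration of that idea, not the paper's three-dimensional image-to-analysis pipeline.

    import numpy as np

    n_cells, L, L_phys, E = 20, 1.0, 0.6, 1.0
    alpha_fict = 1e-8                                 # near-zero stiffness outside the part
    nodes = np.linspace(0.0, L, n_cells + 1)
    K = np.zeros((n_cells + 1, n_cells + 1))
    f = np.zeros(n_cells + 1)
    gauss_pts, gauss_wts = np.array([-1.0, 1.0]) / np.sqrt(3), np.array([1.0, 1.0])

    for e in range(n_cells):
        x0, x1 = nodes[e], nodes[e + 1]
        h = x1 - x0
        for xi, w in zip(gauss_pts, gauss_wts):
            x = 0.5 * (x0 + x1) + 0.5 * h * xi
            alpha = 1.0 if x <= L_phys else alpha_fict   # indicator of the physical domain
            B = np.array([-1.0, 1.0]) / h                # strain-displacement, linear element
            N = np.array([0.5 * (1 - xi), 0.5 * (1 + xi)])
            K[e:e + 2, e:e + 2] += alpha * E * np.outer(B, B) * 0.5 * h * w
            f[e:e + 2] += alpha * 1.0 * N * 0.5 * h * w  # unit body load inside the bar

    # Clamp the left end and solve the immersed problem.
    K[0, :] = 0.0; K[:, 0] = 0.0; K[0, 0] = 1.0; f[0] = 0.0
    u = np.linalg.solve(K, f)
    print("displacement at the physical end of the bar:", u[np.searchsorted(nodes, L_phys)])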

Read more
Computational Engineering Finance And Science

Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction

We introduce Bi-GNN for modeling biological link prediction tasks such as drug-drug interaction (DDI) and protein-protein interaction (PPI). Taking drug-drug interaction as an example, existing machine learning methods either only utilize the link structure between drugs without using the graph representation of each drug molecule, or only leverage the individual drug compound structures without using the graph structure of the higher-level DDI graph. The key idea of our method is to fundamentally view the data as a bi-level graph, where the higher-level graph represents the interactions between biological entities (the interaction graph), and each biological entity is further expanded into its intrinsic graph representation (the representation graphs), which can be either flat, as for a drug compound, or hierarchical, as for a protein with an amino-acid-level graph, secondary structure, tertiary structure, etc. Our model not only allows the use of information from both the high-level interaction graph and the low-level representation graphs, but also offers a baseline for future research opportunities to address the bi-level nature of the data.
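
A minimal sketch of the bi-level idea (not the authors' architecture) is to let one GNN embed each molecular graph and a second GNN propagate those embeddings over the interaction graph before scoring candidate links; all graphs, dimensions, and the single-layer aggregation below are hypothetical.

    import torch
    import torch.nn as nn

    class SimpleGNN(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):                    # one round of mean aggregation
            return torch.relu(self.lin(adj @ x))

    atom_dim, emb_dim = 8, 16
    low = SimpleGNN(atom_dim, emb_dim)                # representation graphs (molecules)
    high = SimpleGNN(emb_dim, emb_dim)                # interaction graph (DDI)

    # Hypothetical data: 5 drugs, each a random molecular graph with 6 atoms.
    drug_graphs = [(torch.randn(6, atom_dim), torch.rand(6, 6)) for _ in range(5)]
    drug_emb = torch.stack([low(x, a / a.sum(1, keepdim=True)).mean(0)
                            for x, a in drug_graphs])
    ddi_adj = torch.rand(5, 5)
    ddi_adj = ddi_adj / ddi_adj.sum(1, keepdim=True)
    node_emb = high(drug_emb, ddi_adj)
    print(torch.sigmoid(node_emb[0] @ node_emb[1]))   # predicted interaction score for drugs 0 and 1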

Read more
Computational Engineering Finance And Science

BioDynaMo: a general platform for scalable agent-based simulation

Motivation: Agent-based modeling is an indispensable tool for studying complex biological systems. However, existing simulators do not always take full advantage of modern hardware and often have a field-specific software design. Results: We present a novel simulation platform called BioDynaMo that alleviates both of these problems. BioDynaMo features a general-purpose and high-performance simulation engine. We demonstrate that BioDynaMo can be used to simulate use cases in neuroscience, oncology, and epidemiology. For each use case, we validate our findings with experimental data or an analytical solution. Our performance results show that BioDynaMo performs up to three orders of magnitude faster than the state-of-the-art baseline. This improvement makes it feasible to simulate each use case with one billion agents on a single server, showcasing the potential BioDynaMo has for computational biology research. Availability: BioDynaMo is an open-source project under the Apache 2.0 license and is available at this http URL. Instructions to reproduce the results are available in the supplementary information. Contact: [email protected], [email protected], [email protected], [email protected]. Supplementary information: Available at this https URL

Read more
Computational Engineering Finance And Science

Blade Envelopes Part I: Concept and Methodology

Blades manufactured through flank and point milling will likely exhibit geometric variability. Gauging the aerodynamic repercussions of such variability, prior to manufacturing a component, is challenging enough, let alone trying to predict what the amplified impact of any in-service degradation will be. While rules of thumb that govern the tolerance band can be devised based on expected boundary layer characteristics at known regions and levels of degradation, it remains a challenge to translate these insights into quantitative bounds for manufacturing. In this work, we tackle this challenge by leveraging ideas from dimension reduction to construct low-dimensional representations of aerodynamic performance metrics. These low-dimensional models can identify a subspace which contains designs that are invariant in performance -- the inactive subspace. By sampling within this subspace, we design techniques for drafting manufacturing tolerances and for quantifying whether a scanned component should be used or scrapped. We introduce the blade envelope as a visual and computational manufacturing guide for a blade. In this paper, the first of two parts, we discuss its underlying concept and detail its computational methodology, assuming one is interested only in the single objective of ensuring that the loss of all manufactured blades remains constant. To demonstrate the utility of our ideas we devise a series of computational experiments with the von Karman Institute's LS89 turbine blade.
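
The inactive-subspace construction can be pictured with a synthetic performance function standing in for an expensive CFD loss evaluation, as in the sketch below; the toy function, its gradients, and the sampling parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 10                                            # number of geometry parameters
    w = np.zeros(d); w[:2] = [3.0, 1.0]               # only two parameters matter in this toy case

    def loss(x):                                      # stand-in for the aerodynamic loss
        return np.sin(w @ x)

    def grad_loss(x):
        return np.cos(w @ x) * w

    # Average outer product of gradients, then eigendecompose.
    X = rng.uniform(-1, 1, size=(200, d))
    C = np.mean([np.outer(grad_loss(x), grad_loss(x)) for x in X], axis=0)
    eigval, eigvec = np.linalg.eigh(C)                # ascending eigenvalues
    W_inactive = eigvec[:, :-2]                       # directions with (near-)zero eigenvalues

    # Perturbations confined to the inactive subspace leave the loss (nearly) unchanged.
    x_nominal = rng.uniform(-1, 1, d)
    x_perturbed = x_nominal + 0.5 * W_inactive @ rng.normal(size=d - 2)
    print(loss(x_nominal), loss(x_perturbed))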

Read more
Computational Engineering Finance And Science

Blade Envelopes Part II: Multiple Objectives and Inverse Design

Blade envelopes offer a set of data-driven tolerance guidelines for manufactured components based on aerodynamic analysis. In Part I of this two-part paper, a workflow for the formulation of blade envelopes is described and demonstrated. In Part II, this workflow is extended to accommodate multiple objectives. This allows engineers to prescribe manufacturing guidelines that take into account multiple performance criteria. The quality of a manufactured blade can be correlated with features derived from the distribution of primal flow quantities over the surface. We show that these distributions can be accounted for in the blade envelope using vector-valued models derived from discrete surface flow measurements. Our methods result in a set of variables that allows flexible and independent control over multiple flow characteristics and performance metrics, similar in spirit to inverse design methods. The augmentations to the blade envelope workflow presented in this paper are demonstrated on the LS89 turbine blade, focusing on the control of loss, mass flow, and the isentropic Mach number distribution. Finally, we demonstrate how blade envelopes can be used to visualize invariant designs by producing a 3D render of the envelope using 3D modelling software.
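
One plausible way to picture the multi-objective extension, not necessarily the paper's exact formulation, is to stack the active directions of each objective and retain only perturbations orthogonal to all of them; the dimensions and randomly generated directions below are placeholders.

    import numpy as np

    d = 10
    W_active_loss = np.random.randn(d, 1)             # active direction(s) for loss
    W_active_mass = np.random.randn(d, 2)             # active directions for mass flow
    W_active_mach = np.random.randn(d, 2)             # active directions for the Mach distribution

    stacked = np.hstack([W_active_loss, W_active_mass, W_active_mach])
    _, _, Vt = np.linalg.svd(stacked.T)
    rank = np.linalg.matrix_rank(stacked.T)
    W_common_inactive = Vt[rank:].T                    # orthogonal to every active direction
    print(W_common_inactive.shape)                     # (d, d - 5) for these placeholder directions

    x_nominal = np.zeros(d)
    x_tolerance_sample = x_nominal + 0.1 * W_common_inactive @ np.random.randn(W_common_inactive.shape[1])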

Read more
