Featured Research

Data Analysis Statistics And Probability

Ensemble learning and iterative training (ELIT) machine learning: applications towards uncertainty quantification and automated experiment in atom-resolved microscopy

Deep learning has emerged as a technique of choice for rapid feature extraction across imaging disciplines, allowing rapid conversion of data streams into spatial or spatiotemporal arrays of features of interest. However, applications of deep learning in experimental domains are often limited by out-of-distribution drift between experiments, where a network trained for one set of imaging conditions becomes sub-optimal for different ones. This limitation is particularly stringent in automated experiment settings, where retraining or transfer learning becomes impractical due to the need for human intervention and the associated latencies. Here we explore the reproducibility of deep learning for feature extraction in atom-resolved electron microscopy and introduce workflows based on ensemble learning and iterative training that greatly improve feature detection. This approach both incorporates uncertainty quantification into the deep learning analysis and enables rapid automated experimental workflows, in which retraining the network to compensate for out-of-distribution drift caused by subtle changes in imaging conditions is replaced by human or programmatic selection of networks from the ensemble. The methodology can be further applied to machine learning workflows in other imaging areas, including optical and chemical imaging.
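
The core idea of ensemble-based uncertainty quantification can be sketched in a few lines. The snippet below is a toy stand-in, not the paper's networks: each "model" is a hypothetical noisy decision function playing the role of one trained network, and the spread of the ensemble's votes serves as the uncertainty estimate.

```python
import random
import statistics

random.seed(0)

def make_model(bias):
    # hypothetical per-model decision function (stands in for a trained CNN)
    def model(pixel):
        return 1.0 if pixel + bias > 0.5 else 0.0
    return model

# an "ensemble": the same architecture with slightly different decision biases
ensemble = [make_model(random.gauss(0.0, 0.05)) for _ in range(10)]

def predict_with_uncertainty(pixel):
    votes = [m(pixel) for m in ensemble]
    mean = statistics.mean(votes)       # ensemble prediction
    std = statistics.pstdev(votes)      # disagreement = uncertainty estimate
    return mean, std

clear_mean, clear_std = predict_with_uncertainty(0.9)   # far from threshold
amb_mean, amb_std = predict_with_uncertainty(0.5)       # near decision boundary
print(clear_mean, clear_std)   # full agreement, zero spread
print(amb_mean, amb_std)       # spread flags detections near the boundary
```

The same two numbers per pixel (mean and spread) are what allow an automated workflow to flag unreliable detections instead of silently trusting a single network.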

Equivalent Circuit Model Recognition of Electrochemical Impedance Spectroscopy via Machine Learning

Electrochemical impedance spectroscopy (EIS) is an effective method for studying electrochemical systems. Interpretation is the biggest challenge in this technique and requires reasonable modeling. However, the modeling of EIS is highly subjective: several different models may fit the same set of data. To overcome the uncertainty and tedium of human analysis, this research uses machine learning for EIS pattern recognition. Raw EIS data and their equivalent circuit models were collected from the literature, and a support vector machine (SVM) was used to analyze these data. As a result, we address the classification of EIS spectra and the recognition of their equivalent circuit models with accuracies of up to 78%. This study demonstrates the great potential of machine learning in electrochemical research.
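
The classification step can be illustrated with scikit-learn's SVM on synthetic data. Everything here is assumed for illustration: the two-dimensional feature vectors (imagined as quantities derived from an EIS spectrum, such as characteristic frequencies or arc depression) and the two circuit classes are hypothetical, not the paper's dataset.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a feature vector from an EIS
# spectrum; each label is an equivalent-circuit class
# (0 = single RC arc, 1 = two RC arcs).
X_class0 = rng.normal(loc=[1.0, 0.2], scale=0.1, size=(40, 2))
X_class1 = rng.normal(loc=[0.3, 0.8], scale=0.1, size=(40, 2))
X = np.vstack([X_class0, X_class1])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf")   # RBF-kernel support vector machine
clf.fit(X, y)

print(clf.score(X, y))    # training accuracy on well-separated classes
```

On real spectra the classes overlap far more, which is why the reported accuracy tops out near 78% rather than at the near-perfect score this toy separation gives.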

Error estimation in the method of quasi-optimal weights

We examine the problem of construction of confidence intervals within the basic single-parameter, single-iteration variation of the method of quasi-optimal weights. Two kinds of distortions of such intervals due to insufficiently large samples are examined, both allowing an analytical investigation. First, a criterion is developed for validity of the assumption of asymptotic normality together with a recipe for the corresponding corrections. Second, a method is derived to take into account the systematic shift of the confidence interval due to the non-linearity of the theoretical mean of the weight as a function of the parameter to be estimated. A numerical example illustrates the two corrections.
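
The regime the paper targets can be illustrated numerically. The sketch below is a generic Monte Carlo illustration, not the paper's analytical criterion: it shows that the sampling distribution of a simple estimator is visibly skewed at small sample sizes and approaches normality as the sample grows, which is exactly where normal-theory confidence intervals start to need corrections.

```python
import random
import statistics

random.seed(1)

def estimate(n):
    # toy estimator: sample mean of an exponential (skewed) population
    data = [random.expovariate(1.0) for _ in range(n)]
    return statistics.mean(data)

def skewness(values):
    # standardized third moment as a crude departure-from-normality measure
    m = statistics.mean(values)
    s = statistics.pstdev(values)
    return statistics.mean([((v - m) / s) ** 3 for v in values])

small = [estimate(5) for _ in range(2000)]    # sampling distribution, n = 5
large = [estimate(200) for _ in range(2000)]  # sampling distribution, n = 200

# Skewness shrinks as n grows: the estimator approaches normality, so
# normal-theory confidence intervals become trustworthy only for large n.
print(abs(skewness(small)), abs(skewness(large)))
```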

Establishing a common data base of ice experiments and using machine learning to understand and predict ice behavior

Machine learning and statistical tools are applied to identify how parameters, such as temperature, influence peak stress and ice behavior. To enable the analysis, a common, small-scale experimental database is established.
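
A minimal version of the kind of parameter-influence analysis such a database enables is a least-squares fit of peak stress against temperature. The numbers below are hypothetical, chosen only to illustrate the fit; they are not from the actual database.

```python
# Hypothetical measurements: test temperature (degrees C) vs. peak stress (MPa)
temps = [-20.0, -15.0, -10.0, -5.0, -2.0]
peak_stress = [8.1, 7.0, 5.8, 4.6, 3.9]

# Ordinary least-squares fit of a straight line, written out explicitly
n = len(temps)
mean_t = sum(temps) / n
mean_s = sum(peak_stress) / n
slope = sum((t - mean_t) * (s - mean_s) for t, s in zip(temps, peak_stress)) \
        / sum((t - mean_t) ** 2 for t in temps)
intercept = mean_s - slope * mean_t

# Negative slope: in this toy data, warmer ice fails at lower peak stress.
print(slope, intercept)
```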

Estimating Experimental Dispersion Curves from Steady-State Frequency Response Measurements

Dispersion curves characterize the frequency dependence of the phase and group velocities of propagating elastic waves. Many analytical and numerical techniques produce dispersion curves from physics-based models. However, it is often challenging to accurately model engineering structures with intricate geometric features and inhomogeneous material properties. For such cases, this paper proposes a novel method to estimate group velocities from experimental data-driven models. Experimental frequency response functions (FRFs) are used to develop data-driven models, which are then used to estimate dispersion curves. The advantage of this approach over traditionally used transient techniques stems from the need to conduct only steady-state experiments. In comparison, transient experiments often need a higher sampling rate for wave-propagation applications and are more susceptible to noise. The vector-fitting (VF) algorithm is adopted to develop data-driven models from experimental in-plane and out-of-plane FRFs of a one-dimensional structure. The quality of the corresponding data-driven estimates is evaluated using an analytical Timoshenko beam as a baseline. The data-driven model (using the out-of-plane FRFs) estimates the anti-symmetric (A0) group velocity with a maximum error of 4% over a 40 kHz frequency band. In contrast, group velocities estimated from transient experiments resulted in a maximum error of 6% over the same frequency band.
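
The step common to all of these approaches is computing the group velocity v_g = dω/dk from a dispersion relation. The sketch below differentiates an assumed analytic branch numerically; the Euler-Bernoulli flexural relation ω = c k² stands in here for the data-driven model (the paper's baseline is the richer Timoshenko beam), and the steel-like material values are illustrative.

```python
import math

# Illustrative steel-like beam: Young's modulus, second moment of area,
# density, cross-sectional area (all assumed values)
E, I, rho, A = 200e9, 8.33e-10, 7800.0, 1e-4
c = math.sqrt(E * I / (rho * A))   # Euler-Bernoulli flexural branch: omega = c k^2

def omega(k):
    return c * k * k

def group_velocity(k, dk=1e-6):
    # central-difference approximation of d(omega)/dk
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

def phase_velocity(k):
    return omega(k) / k

k = 50.0
# For flexural Euler-Bernoulli waves, v_g = 2 v_p holds exactly.
print(group_velocity(k), 2 * phase_velocity(k))
```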

Estimating physical properties from liquid crystal textures via machine learning and complexity-entropy methods

Imaging techniques are essential tools for probing a range of properties of different materials. Liquid crystals are often investigated via optical and image processing methods. Despite this, considerably less attention has been paid to extracting physical properties of liquid crystals directly from texture images of these materials. Here we present an approach that combines two physics-inspired image quantifiers (permutation entropy and statistical complexity) with machine learning techniques to extract physical properties of nematic and cholesteric liquid crystals directly from their texture images. We demonstrate the usefulness and accuracy of this approach in a series of applications involving simulated and experimental textures, in which physical properties of these materials (namely, average order parameter, sample temperature, and cholesteric pitch length) are predicted with significant precision. Finally, we believe this approach can be useful in more complex liquid crystal experiments as well as for probing physical properties of other materials investigated via imaging techniques.
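
The first of the two quantifiers, Bandt-Pompe permutation entropy, is simple to implement. The sketch below computes it for a 1D series (for images, the same ordinal-pattern counting is applied over 2D pixel neighborhoods, and the statistical complexity builds on the same pattern distribution).

```python
import math
import random
from collections import Counter

def permutation_entropy(series, order=3):
    """Normalized Bandt-Pompe permutation entropy of a 1D series, in [0, 1]."""
    counts = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern: the argsort of the window values
        pattern = tuple(sorted(range(order), key=lambda j: window[j]))
        counts[pattern] += 1
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)        # Shannon entropy of patterns
    return h / math.log(math.factorial(order))      # normalize by log(order!)

print(permutation_entropy(list(range(100))))   # fully ordered series: 0
random.seed(0)
noise = [random.random() for _ in range(5000)]
print(permutation_entropy(noise))              # white noise: close to 1
```

Ordered textures concentrate the probability on few ordinal patterns (low entropy), while disordered ones spread it out; this single number already tracks physical quantities such as the order parameter.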

Estimating quantities conserved by virtue of scale invariance in timeseries

In contrast to the symmetries of translation in space, rotation in space, and translation in time, the known laws of physics are not universally invariant under transformation of scale. However, the action can be invariant under change of scale in the special case of a scale free dynamical system that can be described in terms of a Lagrangian that itself scales inversely with time. Crucially, this means symmetries under change of scale can exist in dynamical systems under certain constraints. Our contribution lies in the derivation of a generalised scale invariant Lagrangian - in the form of a power series expansion - that satisfies these constraints. This generalised Lagrangian furnishes a normal form for dynamic causal models (i.e., state space models based upon differential equations) that can be used to distinguish scale invariance (scale symmetry) from scale freeness in empirical data. We establish face validity with an analysis of simulated data and then show how scale invariance can be identified - and how the associated conserved quantities can be estimated - in neuronal timeseries.
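
The constraint stated above can be made concrete in one line: if the Lagrangian scales inversely with time while the time differential scales with it, the two factors cancel and the action is unchanged.

```latex
S = \int L \, dt,
\qquad t \mapsto \lambda t, \quad dt \mapsto \lambda \, dt,
\quad L \mapsto \tfrac{1}{\lambda} L
\;\Longrightarrow\;
S \mapsto \int \tfrac{1}{\lambda} L \, (\lambda \, dt) = S .
```

By Noether's theorem, this invariance of the action under rescaling is what furnishes the conserved quantities the paper sets out to estimate from timeseries.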

Estimating the Mutual Information between two Discrete, Asymmetric Variables with Limited Samples

Determining the strength of non-linear statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables (the one with minimal entropy) is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain an estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between two discrete variables, without seeking to model the specific way in which the variables are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states (typically few) in which the large-entropy variable registers coincidences.
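
The baseline the abstract improves on is the naive plug-in estimate I(X;Y) = H(X) + H(Y) - H(X,Y), which is exactly the estimator that becomes badly biased when the joint distribution is undersampled. A minimal implementation:

```python
import math
from collections import Counter

def entropy(counts):
    # plug-in Shannon entropy (in bits) from a Counter of observations
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def mutual_information(pairs):
    """Naive plug-in estimate I(X;Y) = H(X) + H(Y) - H(X,Y), in bits."""
    xs = Counter(x for x, _ in pairs)
    ys = Counter(y for _, y in pairs)
    xy = Counter(pairs)
    return entropy(xs) + entropy(ys) - entropy(xy)

# perfectly coupled binary variables: one full bit of shared information
coupled = [(0, 0), (1, 1)] * 50
print(mutual_information(coupled))

# independent uniform binary variables: zero shared information
independent = [(x, y) for x in (0, 1) for y in (0, 1)] * 25
print(mutual_information(independent))
```

With few samples relative to the joint alphabet, this estimator systematically overestimates the information; the Bayesian estimator proposed in the paper targets precisely that regime.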

Estimation of roughness measurement bias originating from background subtraction

When measuring the roughness of rough surfaces, the limited sizes of scanned areas lead to its systematic underestimation. Levelling by polynomials and other filtering used in real-world processing of atomic force microscopy data increases this bias considerably. Here a framework is developed providing explicit expressions for the bias of squared mean square roughness in the case of levelling by fitting a model background function using linear least squares. The framework is then applied to polynomial levelling, for both one-dimensional and two-dimensional data processing, and basic models of surface autocorrelation function, Gaussian and exponential. Several other common scenarios are covered as well, including median levelling, intermediate Gaussian-exponential autocorrelation model and frequency space filtering. Application of the results to other quantities, such as Rq, Sq, Ra and Sa is discussed. The results are summarized in overview plots covering a range of autocorrelation functions and polynomial degrees, which allow graphical estimation of the bias.
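
The effect being quantified is easy to reproduce on synthetic data. In the toy sketch below, removing a fitted straight-line background from a finite random-walk profile (which has strong long-wavelength content) also removes genuine roughness, so the mean square roughness Rq comes out smaller than for the raw profile.

```python
import math
import random

random.seed(2)
n = 256
profile = []
h = 0.0
for _ in range(n):        # random-walk profile: strong long-wavelength roughness
    h += random.gauss(0.0, 1.0)
    profile.append(h)

def rq(z):
    # root-mean-square roughness about the mean
    m = sum(z) / len(z)
    return math.sqrt(sum((v - m) ** 2 for v in z) / len(z))

# degree-1 polynomial levelling via linear least squares
xs = list(range(n))
mx = sum(xs) / n
mz = sum(profile) / n
slope = sum((x - mx) * (z - mz) for x, z in zip(xs, profile)) \
        / sum((x - mx) ** 2 for x in xs)
levelled = [z - slope * (x - mx) - mz for x, z in zip(xs, profile)]

print(rq(profile), rq(levelled))   # levelled Rq is systematically smaller
```

Least-squares levelling is an orthogonal projection, so the levelled Rq can never exceed the raw one; the paper's contribution is the explicit analytical form of this bias for various backgrounds and autocorrelation models.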

Estimation of the Randomness of Continuous and Discrete Signals Using the Disentropy of the Autocorrelation

The amount of randomness in a signal generated by a physical or non-physical process can reveal important information about that process. For example, the presence of randomness in ECG signals may indicate a cardiac disease. On the other hand, the lack of randomness in a speech signal may indicate that the speaker is a machine. Hence, quantifying the amount of randomness in a signal is an important task in many different areas. In this direction, the present work proposes to use the disentropy of the autocorrelation function as a measure of randomness. Examples using noisy and chaotic signals are shown.
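
The first step of the proposed measure, the normalized autocorrelation function, can be sketched as follows (the disentropy itself is defined in the paper and is not reproduced here). A periodic signal retains large correlations at nonzero lags, while white noise does not, which is what the disentropy of the autocorrelation then condenses into a single randomness score.

```python
import math
import random

def autocorr(signal, lag):
    # biased normalized autocorrelation estimate at a given lag
    n = len(signal)
    m = sum(signal) / n
    var = sum((v - m) ** 2 for v in signal) / n
    cov = sum((signal[i] - m) * (signal[i + lag] - m)
              for i in range(n - lag)) / n
    return cov / var

random.seed(3)
periodic = [math.sin(2 * math.pi * i / 25) for i in range(1000)]
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]

print(autocorr(periodic, 25))   # close to 1: strong structure at the period
print(autocorr(noise, 25))      # close to 0: random signal
```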
