Featured Research

Data Analysis, Statistics and Probability

FPGA-accelerated machine learning inference as a service for particle physics computing

New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle physics for simulation, reconstruction, and analysis are naturally deployed on such platforms. We demonstrate that the acceleration of machine learning inference as a web service represents a heterogeneous computing solution for particle physics experiments that potentially requires minimal modification to the current computing model. As examples, we retrain the ResNet-50 convolutional neural network to demonstrate state-of-the-art performance for top quark jet tagging at the LHC and apply a ResNet-50 model with transfer learning for neutrino event classification. Using Project Brainwave by Microsoft to accelerate the ResNet-50 image classification model, we achieve average inference times of 60 (10) milliseconds with our experimental physics software framework using Brainwave as a cloud (edge or on-premises) service, representing an improvement by a factor of approximately 30 (175) in model inference latency over traditional CPU inference in current experimental hardware. A single FPGA service accessed by many CPUs achieves a throughput of 600--700 inferences per second using an image batch of one, comparable to large batch-size GPU throughput and significantly better than small batch-size GPU throughput. Deployed as an edge or cloud service for the particle physics computing model, coprocessor accelerators can have a higher duty cycle and are potentially much more cost-effective.
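
As a rough illustration of the inference-as-a-service pattern described above, the sketch below shows a CPU-side client that posts a preprocessed image to a remote accelerator endpoint and times the round trip. The endpoint URL, payload layout and helper names are hypothetical placeholders, not the Brainwave or experiment-framework interface.

    # Minimal sketch of "inference as a service": a CPU-side client sends a
    # preprocessed image to a remote FPGA/GPU inference endpoint and times the
    # round trip. The endpoint URL and payload layout are hypothetical, not the
    # actual Brainwave/experiment interface.
    import time
    import numpy as np
    import requests

    ENDPOINT = "http://accelerator.example.org/v1/models/resnet50:predict"  # hypothetical

    def classify_remote(image: np.ndarray) -> dict:
        """Send one 224x224x3 image to the remote ResNet-50 service."""
        payload = {"instances": [image.astype(np.float32).tolist()]}
        t0 = time.perf_counter()
        response = requests.post(ENDPOINT, json=payload, timeout=5.0)
        response.raise_for_status()
        latency_ms = 1e3 * (time.perf_counter() - t0)
        return {"latency_ms": latency_ms, "scores": response.json()}

    if __name__ == "__main__":
        dummy_image = np.random.rand(224, 224, 3)  # stand-in for a jet or neutrino "image"
        print(classify_remote(dummy_image)["latency_ms"], "ms round trip")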

Read more
Data Analysis, Statistics and Probability

Fast Algorithm for computing a matrix transform used to detect trends in noisy data

A recently discovered universal rank-based matrix method to extract trends from noisy time series is described in [1], but the formula for the output matrix elements, implemented there as open-access supplementary MATLAB code, is O(N^4), with N the matrix dimension. This can become prohibitively large for time series with hundreds of sample points or more. Based on recurrence relations, here we derive a much faster O(N^2) algorithm and provide code implementations in MATLAB and in open-source Julia. In some cases one has the output matrix and needs to solve an inverse problem to obtain the input matrix. A fast algorithm and code for this companion problem, also based on the above recurrence relations, are given. Finally, in the narrower but common domains of (i) trend detection and (ii) parameter estimation of a linear trend, users require not the individual matrix elements but simply their accumulated mean value. For this latter case we provide a yet faster O(N) heuristic approximation that relies on a series of rank-one matrices. These algorithms are illustrated on a time series of high-energy cosmic rays with N > 4×10^4. [1] Universal Rank-Order Transform to Extract Signals from Noisy Data, Glenn Ierley and Alex Kostinski, Phys. Rev. X 9, 031039 (2019).

Read more
Data Analysis, Statistics and Probability

Fast Entropy Estimation for Natural Sequences

It is well known that accurately estimating the Shannon entropy of symbolic sequences requires a large number of samples. When some aspects of the data are known, it is plausible to use this knowledge to compute the entropy more efficiently. A number of methods, relying on various assumptions, have been proposed for calculating entropy from small sample sizes. In this paper, we examine this problem and propose a method for estimating the Shannon entropy of a set of ranked symbolic natural events. Using a modified Zipf-Mandelbrot-Li law and a new rank-based coincidence counting method, we propose an efficient algorithm that enables the entropy to be estimated with surprising accuracy using only a small number of samples. The algorithm is tested on several natural sequences and shown to yield accurate results with very small amounts of data.
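
For context, the sketch below implements only the standard plug-in (maximum-likelihood) entropy estimator, the small-sample-biased baseline that rank-based coincidence counting is designed to improve upon; it is not the estimator proposed in the paper.

    # Baseline for comparison only: the naive plug-in (maximum-likelihood) Shannon
    # entropy estimator, which is known to be strongly biased for small samples.
    # This is NOT the rank-based coincidence method proposed in the paper.
    import math
    from collections import Counter

    def plugin_entropy(samples):
        """H_hat = -sum p_i log2 p_i, with p_i estimated from observed frequencies."""
        counts = Counter(samples)
        n = len(samples)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    if __name__ == "__main__":
        text = "the quick brown fox jumps over the lazy dog"
        print(f"plug-in entropy estimate: {plugin_entropy(text):.3f} bits/symbol")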

Read more
Data Analysis, Statistics and Probability

Fast Time Series Detrending with Applications to Heart Rate Variability Analysis

Here we discuss a new fast detrending method for the non-stationary RR time series used in Heart Rate Variability analysis. The described method is based on the diffusion equation, and we show numerically that it is equivalent to the widely used Smoothing Priors Approach (SPA) and Wavelet Smoothing Approach (WSA). The speed of the proposed method is comparable to that of the WSA method, and it is several orders of magnitude faster than the SPA method, which makes it suitable for the analysis of very long time series.
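
A minimal sketch of the idea of diffusion-based detrending is given below: the trend is estimated by evolving the series under the discrete heat equation with explicit Euler steps and then subtracted. The step count and step size are illustrative placeholders, not the calibrated parameters of the paper.

    # Illustrative sketch of diffusion-based detrending: the trend is obtained by
    # evolving the series under the discrete heat equation (explicit Euler steps),
    # then subtracted from the original signal. The number of steps and the step
    # size are placeholder values, not the paper's calibrated settings.
    import numpy as np

    def diffusion_detrend(rr: np.ndarray, n_steps: int = 500, dt: float = 0.4) -> np.ndarray:
        """Return the detrended series rr - trend; dt <= 0.5 keeps the scheme stable."""
        trend = rr.astype(float).copy()
        for _ in range(n_steps):
            laplacian = np.zeros_like(trend)
            laplacian[1:-1] = trend[2:] - 2.0 * trend[1:-1] + trend[:-2]
            trend += dt * laplacian          # one explicit diffusion step
        return rr - trend

    if __name__ == "__main__":
        t = np.linspace(0, 10, 2000)
        rr = 0.8 + 0.05 * t + 0.02 * np.random.randn(t.size)   # drifting RR intervals
        print(np.polyfit(t, diffusion_detrend(rr), 1)[0])       # residual slope, near 0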

Read more
Data Analysis, Statistics and Probability

Fast and interpretable classification of small X-ray diffraction datasets using data augmentation and deep neural networks

X-ray diffraction (XRD) data acquisition and analysis are among the most time-consuming steps in the development cycle of novel thin-film materials. We propose a machine-learning-enabled approach to predict crystallographic dimensionality and space group from a limited number of thin-film XRD patterns. We overcome the scarce-data problem intrinsic to novel materials development by coupling a supervised machine learning approach with a model-agnostic, physics-informed data augmentation strategy using simulated data from the Inorganic Crystal Structure Database (ICSD) and experimental data. As a test case, 115 thin-film metal halides spanning 3 dimensionalities and 7 space groups are synthesized and classified. After testing various algorithms, we develop and implement an all-convolutional neural network, with cross-validated accuracies for dimensionality and space-group classification of 93% and 89%, respectively. We propose average class activation maps, computed from a global average pooling layer, to allow high model interpretability by human experimentalists, elucidating the root causes of misclassification. Finally, we systematically evaluate the maximum XRD pattern step size (data acquisition rate) before loss of predictive accuracy occurs, and determine it to be 0.16°, which enables an XRD pattern to be obtained and classified in 5.5 minutes or less.
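
As a loose illustration of the architecture described above, the sketch below builds a small all-convolutional 1D classifier whose final layer is a 1x1 convolution followed by global average pooling (the layer from which class activation maps are computed). Layer widths, the pattern length and the number of classes are placeholders, not the authors' network.

    # Sketch of an all-convolutional 1D classifier with a global average pooling
    # head, in the spirit of the architecture described above. Layer sizes, the
    # pattern length and the number of classes are illustrative placeholders.
    import torch
    import torch.nn as nn

    class AllConvXRD(nn.Module):
        def __init__(self, n_classes: int = 7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=8, stride=2, padding=3), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            )
            # A 1x1 convolution maps feature channels to class scores; global average
            # pooling over the 2-theta axis yields one logit per class, which is also
            # what makes class activation maps straightforward to compute.
            self.classifier = nn.Conv1d(128, n_classes, kernel_size=1)

        def forward(self, x):                      # x: (batch, 1, n_points)
            class_maps = self.classifier(self.features(x))
            return class_maps.mean(dim=2)          # global average pooling -> logits

    if __name__ == "__main__":
        model = AllConvXRD(n_classes=7)
        pattern = torch.randn(4, 1, 2048)          # four simulated XRD patterns
        print(model(pattern).shape)                # torch.Size([4, 7])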

Read more
Data Analysis, Statistics and Probability

Fast and scalable likelihood maximization for Exponential Random Graph Models

Exponential Random Graph Models (ERGMs) have gained increasing popularity over the years. Rooted in statistical physics, the ERGM framework has been successfully employed for reconstructing networks, detecting statistically significant patterns in graphs, and counting networked configurations with given properties. From a technical point of view, the ERGM workflow is defined by two subsequent optimization steps: the first concerns the maximization of Shannon entropy and leads to the identification of the functional form of the ensemble probability distribution that is maximally non-committal with respect to the missing information; the second concerns the maximization of the likelihood function induced by this probability distribution and leads to its numerical determination. This second step translates into the resolution of a system of O(N) non-linear, coupled equations (with N being the total number of nodes of the network under analysis), a problem affected by three main issues, i.e. accuracy, speed and scalability. The present paper addresses these problems by comparing the performance of three algorithms (i.e. Newton's method, a quasi-Newton method and a recently proposed fixed-point recipe) in solving several ERGMs, defined by binary and weighted constraints in both a directed and an undirected fashion. While Newton's method performs best for relatively small networks, the fixed-point recipe is to be preferred when large configurations are considered, as it ensures convergence to the solution within seconds for networks with hundreds of thousands of nodes (e.g. the Internet, Bitcoin). We attach to the paper a Python implementation of the three aforementioned algorithms for all the ERGMs considered in the present work.
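
As an illustration of the fixed-point recipe in its simplest setting, the sketch below solves the likelihood equations of the binary undirected configuration model, where the conditions k_i = sum_{j!=i} x_i x_j / (1 + x_i x_j) are rewritten as x_i <- k_i / sum_{j!=i} x_j / (1 + x_i x_j) and iterated; it is not the optimized package attached to the paper.

    # Sketch of a fixed-point iteration for the simplest binary undirected ERGM
    # (the configuration model, which constrains the expected degree sequence).
    # The likelihood conditions k_i = sum_{j != i} x_i x_j / (1 + x_i x_j) are
    # rewritten as x_i <- k_i / sum_{j != i} x_j / (1 + x_i x_j) and iterated.
    # This illustrates the idea only, not the paper's optimized implementation.
    import numpy as np

    def ubcm_fixed_point(degrees: np.ndarray, n_iter: int = 1000, tol: float = 1e-10):
        k = degrees.astype(float)
        x = k / np.sqrt(k.sum())             # simple starting point
        for _ in range(n_iter):
            denom = x[None, :] / (1.0 + np.outer(x, x))
            np.fill_diagonal(denom, 0.0)      # exclude self-loops (j != i)
            x_new = k / denom.sum(axis=1)
            if np.max(np.abs(x_new - x)) < tol:
                return x_new
            x = x_new
        return x

    if __name__ == "__main__":
        k = np.array([1, 2, 2, 3], dtype=float)
        x = ubcm_fixed_point(k)
        p = np.outer(x, x) / (1.0 + np.outer(x, x))
        np.fill_diagonal(p, 0.0)
        print(p.sum(axis=1))                  # expected degrees, should approach k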

Read more
Data Analysis, Statistics and Probability

Field Formulation of Parzen Data Analysis

The Parzen window density is a well-known technique, associating Gaussian kernels with data points. It is a very useful tool in data exploration, with particular importance for clustering schemes and image analysis. This method is presented here within a formalism containing scalar fields, such as the density function and its potential, and their corresponding gradients. The potential is derived from the density through the dependence of the latter on the common scale parameter of all Gaussian kernels. The loci of extrema of the density and potential scalar fields are points of interest which obey a variation condition on a novel indicator function. They serve as focal points of clustering methods depending on maximization of the density, or minimization of the potential, accordingly. The mixed inter-dependencies of the different fields in d-dim data-space and 1-d scale-space are discussed. They lead to a Schrödinger equation in d-dim, and to a diffusion equation in (d+1)-dim.
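
The sketch below computes the two most basic objects of this formalism, the Parzen (Gaussian-kernel) density of common scale sigma and its gradient field; the associated potential and the indicator function introduced in the paper are not reproduced.

    # Sketch of the basic ingredients of the formalism: the Parzen window density
    # built from Gaussian kernels of common scale sigma, and its gradient field.
    # The associated potential and the paper's indicator function are omitted.
    import numpy as np

    def parzen_density_and_gradient(query: np.ndarray, data: np.ndarray, sigma: float):
        """query: (M, d) evaluation points, data: (N, d) samples. Returns (M,), (M, d)."""
        n, d = data.shape
        diff = query[:, None, :] - data[None, :, :]                  # (M, N, d)
        sq = np.sum(diff**2, axis=2)                                 # (M, N)
        kernels = np.exp(-sq / (2.0 * sigma**2))
        norm = n * (2.0 * np.pi * sigma**2) ** (d / 2.0)
        density = kernels.sum(axis=1) / norm                         # (M,)
        # Each kernel contributes -(query - x_i)/sigma^2 to the density gradient.
        gradient = -(kernels[:, :, None] * diff).sum(axis=1) / (norm * sigma**2)
        return density, gradient

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = rng.normal(size=(200, 2))
        rho, grad = parzen_density_and_gradient(np.zeros((1, 2)), data, sigma=0.5)
        print(rho, grad)    # gradient is small near the density maximum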

Read more
Data Analysis, Statistics and Probability

Field dynamics inference for local and causal interactions

Inference of fields defined in space and time from observational data is a core discipline in many scientific areas. This work approaches the problem in a Bayesian framework. The proposed method is based on statistically homogeneous random fields defined in space and time and demonstrates how to reconstruct the field together with its prior correlation structure from data. The prior model of the correlation structure is described in a non-parametric fashion and solely builds on fundamental physical assumptions such as space-time homogeneity, locality, and causality. These assumptions are sufficient to successfully infer the field and its prior correlation structure from noisy and incomplete data of a single realization of the process as demonstrated via multiple numerical examples.
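
As a much simplified toy version of the reconstruction problem (with the prior correlation structure assumed known rather than inferred, as it is in the paper), the sketch below applies a Wiener filter to noisy, incomplete data of a homogeneous Gaussian random field on a periodic 1-d grid.

    # Toy illustration of the linear data model d = R s + n: a field s with a
    # homogeneous (stationary) Gaussian prior on a periodic 1-d grid is
    # reconstructed from noisy, incomplete data via the Wiener-filter posterior
    # mean m = S R^T (R S R^T + N)^{-1} d. The paper's non-parametric inference
    # of the prior correlation structure itself is not attempted here.
    import numpy as np

    rng = np.random.default_rng(1)
    npix = 200
    # Homogeneous prior covariance: circulant matrix from a squared-exponential kernel.
    lag = np.minimum(np.arange(npix), npix - np.arange(npix))
    row = np.exp(-0.5 * (lag / 8.0) ** 2)
    S = np.array([np.roll(row, i) for i in range(npix)])

    # Draw a field realization and observe 60 random pixels with white noise.
    L = np.linalg.cholesky(S + 1e-8 * np.eye(npix))   # small jitter for stability
    s = L @ rng.standard_normal(npix)
    observed = np.sort(rng.choice(npix, size=60, replace=False))
    R = np.eye(npix)[observed]                        # masking response
    sigma_n = 0.2
    N = sigma_n**2 * np.eye(len(observed))
    d = R @ s + sigma_n * rng.standard_normal(len(observed))

    # Wiener filter (posterior mean of the linear Gaussian model).
    m = S @ R.T @ np.linalg.solve(R @ S @ R.T + N, d)
    print("rms reconstruction error:", np.sqrt(np.mean((m - s) ** 2)))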

Read more
Data Analysis, Statistics and Probability

Finding the origin of noise transients in LIGO data with machine learning

Quality improvement of interferometric data collected by gravitational-wave detectors such as Advanced LIGO and Virgo is mission critical for the success of gravitational-wave astrophysics. Gravitational-wave detectors are sensitive to a variety of disturbances of non-astrophysical origin with characteristic frequencies in the instrument band of sensitivity. Removing non-astrophysical artifacts that corrupt the data stream is crucial for increasing the number and statistical significance of gravitational-wave detections and enabling refined astrophysical interpretations of the data. Machine learning has proved to be a powerful tool for the analysis of massive quantities of complex data in astronomy and related fields of study. We present two machine learning methods, based on random forest and genetic programming algorithms, that can be used to determine the origin of non-astrophysical transients in the LIGO detectors. We use two classes of transients with known instrumental origin that were identified during the first observing run of Advanced LIGO to show that the algorithms can successfully identify the origin of non-astrophysical transients in real interferometric data and thus assist in the mitigation of instrumental and environmental disturbances in gravitational-wave searches. While the data sets described in this paper are specific to LIGO, and the exact procedures employed were unique to them, the random forest and genetic programming code bases, and the way they were applied as a dual machine learning approach, are fully portable to any instrument in which noise is believed to be generated through mechanical couplings whose source has not yet been discovered.
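
The sketch below illustrates only the random-forest half of such a dual approach: a scikit-learn classifier trained on per-transient feature vectors (peak frequency, signal-to-noise ratio, duration) with labels for known instrumental classes. The features and labels are synthetic placeholders, not LIGO data or the authors' code base.

    # Sketch of the random-forest half of the dual approach: a classifier trained
    # on per-transient feature vectors with labels for known instrumental transient
    # classes. Features and labels below are synthetic placeholders, not LIGO data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n_per_class = 500
    # Two toy transient classes with different feature distributions.
    class_a = rng.normal(loc=[60.0, 8.0, 0.2], scale=[5.0, 2.0, 0.05], size=(n_per_class, 3))
    class_b = rng.normal(loc=[500.0, 12.0, 1.0], scale=[50.0, 3.0, 0.3], size=(n_per_class, 3))
    X = np.vstack([class_a, class_b])                 # columns: peak freq, SNR, duration
    y = np.array([0] * n_per_class + [1] * n_per_class)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    print("feature importances:", clf.feature_importances_)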

Read more
Data Analysis, Statistics and Probability

Fitting a function to time-dependent ensemble averaged data

Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlations), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous-time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
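
For the special case of a linear fit function, the sketch below shows the flavor of the approach: weighted least squares point estimates whose parameter errors are computed with the full covariance matrix of the averaged data through a sandwich formula, rather than by treating time points as independent. It is only a toy version; the published WLS-ICE software handles arbitrary fit functions.

    # Sketch of the core idea for a linear fit y ~ X theta: weighted least squares
    # point estimates combined with parameter errors that use the full covariance C
    # of the ensemble-averaged data (a "sandwich" formula), instead of treating the
    # time points as independent. Toy linear-model version only.
    import numpy as np

    def wls_with_correlated_errors(X, y, C):
        """X: (n, p) design matrix, y: (n,) averaged data, C: (n, n) covariance of y."""
        W = np.diag(1.0 / np.diag(C))                 # weights from variances only
        XtWX_inv = np.linalg.inv(X.T @ W @ X)
        theta = XtWX_inv @ X.T @ W @ y                # WLS point estimate
        A = XtWX_inv @ X.T @ W
        cov_theta = A @ C @ A.T                       # errors including correlations
        return theta, cov_theta

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        t = np.linspace(0.1, 5.0, 50)
        X = np.column_stack([np.ones_like(t), t])     # fit <x^2(t)> = a + 2 D t
        C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]) / 1.0)   # correlated noise
        y = 0.05 + 2 * 0.3 * t + rng.multivariate_normal(np.zeros(t.size), C)
        theta, cov = wls_with_correlated_errors(X, y, C)
        print("D estimate:", theta[1] / 2, "+/-", np.sqrt(cov[1, 1]) / 2)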

Read more
