Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Matthew T. Harrison is active.

Publication


Featured research published by Matthew T. Harrison.


Proceedings of the National Academy of Sciences of the United States of America | 2013

Measurements of methane emissions at natural gas production sites in the United States

David T. Allen; Vincent M. Torres; James Thomas; David W. Sullivan; Matthew T. Harrison; Al Hendler; Scott C. Herndon; Charles E. Kolb; Matthew P. Fraser; A. Daniel Hill; Brian K. Lamb; Jennifer Lynne Miskimins; Robert F. Sawyer; John H. Seinfeld

Significance: This work reports direct measurements of methane emissions at 190 onshore natural gas sites in the United States. The measurements indicate that well completion emissions are lower than previously estimated; the data also show emissions from pneumatic controllers and equipment leaks are higher than Environmental Protection Agency (EPA) national emission projections. Estimates of total emissions are similar to the most recent EPA national inventory of methane emissions from natural gas production. These measurements will help inform policymakers, researchers, and industry, providing information about some of the sources of methane emissions from the production of natural gas, and will better inform and advance national and international scientific and policy discussions with respect to natural gas development and use.

Engineering estimates of methane emissions from natural gas production have led to varied projections of national emissions. This work reports direct measurements of methane emissions at 190 onshore natural gas sites in the United States (150 production sites, 27 well completion flowbacks, 9 well unloadings, and 4 workovers). For well completion flowbacks, which clear fractured wells of liquid to allow gas production, methane emissions ranged from 0.01 Mg to 17 Mg (mean = 1.7 Mg; 95% confidence bounds of 0.67–3.3 Mg), compared with an average of 81 Mg per event in the 2011 EPA national emission inventory from April 2013. Emission factors for pneumatic pumps and controllers as well as equipment leaks were both comparable to and higher than estimates in the national inventory. Overall, if emission factors from this work for completion flowbacks, equipment leaks, and pneumatic pumps and controllers are assumed to be representative of national populations and are used to estimate national emissions, total annual emissions from these source categories are calculated to be 957 Gg of methane (with sampling and measurement uncertainties estimated at ±200 Gg). The estimate for comparable source categories in the EPA national inventory is ∼1,200 Gg. Additional measurements of unloadings and workovers are needed to produce national emission estimates for these source categories. The 957 Gg in emissions for completion flowbacks, pneumatics, and equipment leaks, coupled with EPA national inventory estimates for other categories, leads to an estimated 2,300 Gg of methane emissions from natural gas production (0.42% of gross gas production).
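The bottom-up accounting in this abstract can be checked directly. A minimal Python sketch using only the figures quoted above; the implied-gross-production line assumes the quoted 0.42% refers to the 2,300 Gg total.

```python
# Back-of-the-envelope check of the national totals quoted in the abstract.
# Every input below is a figure from the text above; nothing is new data.

measured = 957         # Gg/yr: completion flowbacks + pneumatics + equipment leaks
uncertainty = 200      # Gg/yr: quoted sampling/measurement uncertainty (+/-)
epa_comparable = 1200  # Gg/yr: EPA inventory, same source categories (approximate)
total = 2300           # Gg/yr: measured categories plus EPA estimates for the rest

print(f"EPA-derived remainder for other categories: {total - measured} Gg/yr")
print(f"Measured vs EPA, comparable categories: {measured} vs ~{epa_comparable} Gg/yr")

# 2,300 Gg/yr is quoted as 0.42% of gross gas production, so the implied
# gross production (expressed as methane) is:
print(f"Implied gross production: {total / 0.0042:,.0f} Gg/yr")
```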


Journal of Neurophysiology | 2012

Conditional modeling and the jitter method of spike resampling

Asohan Amarasingham; Matthew T. Harrison; Nicholas G. Hatsopoulos; Stuart Geman

The existence and role of fine-temporal structure in the spiking activity of central neurons are the subject of an enduring debate among physiologists. To a large extent, the problem is a statistical one: what inferences can be drawn from neurons monitored in the absence of full control over their presynaptic environments? In principle, properly crafted resampling methods can still produce statistically correct hypothesis tests. We focus on the approach to resampling known as jitter. We review a wide range of jitter techniques, illustrated by both simulation experiments and selected analyses of spike data from motor cortical neurons. We rely on an intuitive and rigorous statistical framework known as conditional modeling to reveal otherwise hidden assumptions and to support precise conclusions. Among other applications, we review statistical tests for exploring any proposed limit on the rate of change of spiking probabilities, exact tests for the significance of repeated fine-temporal patterns of spikes, and the construction of acceptance bands for testing any purported relationship between sensory or motor variables and synchrony or other fine-temporal events.
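As a concrete illustration of the simplest member of this family, here is a minimal Python sketch of interval jitter wrapped in a Monte Carlo hypothesis test. The window width `delta`, the pairwise statistic `stat`, and the spike-time arrays are placeholders; the paper reviews many refinements beyond this basic version.

```python
import numpy as np

def interval_jitter(spike_times, delta, rng=None):
    """Interval jitter: redraw each spike uniformly within its own window
    [k*delta, (k+1)*delta), preserving the spike count in every window.
    Under a null hypothesis in which firing probabilities are constant
    within windows of width delta, jittered trains are exchangeable with
    the original train."""
    rng = np.random.default_rng() if rng is None else rng
    spike_times = np.asarray(spike_times, dtype=float)
    bins = np.floor(spike_times / delta)  # window index of each spike
    return np.sort(bins * delta + rng.uniform(0.0, delta, size=spike_times.shape))

def jitter_pvalue(stat, train_a, train_b, delta, n_resamples=999, rng=None):
    """Monte Carlo test: jitter one train and compare a synchrony statistic
    against its jitter null distribution (one-sided, with the add-one
    correction that keeps the p-value valid)."""
    rng = np.random.default_rng() if rng is None else rng
    observed = stat(train_a, train_b)
    null = [stat(interval_jitter(train_a, delta, rng), train_b)
            for _ in range(n_resamples)]
    return (1 + sum(s >= observed for s in null)) / (n_resamples + 1)
```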


The Journal of Neuroscience | 2006

Spike Count Reliability and the Poisson Hypothesis

Asohan Amarasingham; Ting-Li Chen; Stuart Geman; Matthew T. Harrison; David L. Sheinberg

The variability of cortical activity in response to repeated presentations of a stimulus has been an area of controversy in the ongoing debate regarding the evidence for fine temporal structure in nervous system activity. We present a new statistical technique for assessing the significance of observed variability in the neural spike counts with respect to a minimal Poisson hypothesis, which avoids the conventional but troubling assumption that the spiking process is identically distributed across trials. We apply the method to recordings of inferotemporal cortical neurons of primates presented with complex visual stimuli. For these data, the minimal Poisson hypothesis is rejected: the neuronal responses are too reliable to be fit by a typical firing-rate model, even allowing for sudden, time-varying, and trial-dependent rate changes after stimulus onset. The statistical evidence favors a tightly regulated stimulus response in these neurons close to stimulus onset, though not at longer latencies.
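One way to see how such a test can avoid the identically-distributed-trials assumption: any firing-rate model with trial-varying rates makes the spike count a mixture of Poissons, and every such mixture satisfies P(N = k) <= e^(-k) k^k / k!. Below is a hedged Python sketch of a bound-based reliability test built on that fact; the paper's actual procedure is more refined, so treat this as an illustration of the logic only.

```python
import math
import numpy as np
from scipy import stats

def poisson_mixture_bound(k):
    """For ANY mixture of Poissons, P(N = k) <= e^{-k} k^k / k!, because
    each Poisson component attains at most that value at k (maximize the
    Poisson pmf at k over its rate). The bound is vacuous at k = 0."""
    if k == 0:
        return 1.0
    return math.exp(-k + k * math.log(k) - math.lgamma(k + 1))

def reliability_test(counts, k=None):
    """One-sided binomial test: do spike counts hit the value k more often
    than any Poisson mixture allows? A small p-value rejects every
    firing-rate model, however variable across trials. In practice k
    should be fixed a priori or selection should be corrected for;
    defaulting to the modal count here is for illustration only."""
    counts = np.asarray(counts, dtype=int)
    k = int(np.bincount(counts).argmax()) if k is None else k
    hits = int((counts == k).sum())
    p0 = poisson_mixture_bound(k)
    return stats.binomtest(hits, counts.size, p0, alternative="greater").pvalue
```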


Neural Computation | 2009

A rate and history-preserving resampling algorithm for neural spike trains

Matthew T. Harrison; Stuart Geman

Resampling methods are popular tools for exploring the statistical structure of neural spike trains. In many applications, it is desirable to have resamples that preserve certain non-Poisson properties, like refractory periods and bursting, and that are also robust to trial-to-trial variability. Pattern jitter is a resampling technique that accomplishes this by preserving the recent spiking history of all spikes and constraining resampled spikes to remain close to their original positions. The resampled spike times are maximally random up to these constraints. Dynamic programming is used to create an efficient resampling algorithm.
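A simplified Python sketch of the constrained-uniform resampling idea with dynamic programming: each spike stays within a window around its original (integer) time, and a hard refractory gap stands in for the paper's full recent-history constraint. The window half-width, the refractory period, and feasibility of the original train are assumptions of the sketch.

```python
import numpy as np

def jitter_with_refractory(spikes, half_width, refractory, rng=None):
    """Draw a spike train uniformly from all trains in which spike i stays
    within +/- half_width time steps of its original position, spikes stay
    ordered, and consecutive spikes are at least `refractory` steps apart.
    Assumes the original train itself satisfies the refractory constraint.
    A backward counting pass plus a forward sampling pass makes the draw
    exact in O(n * half_width^2) time."""
    rng = np.random.default_rng() if rng is None else rng
    windows = [np.arange(s - half_width, s + half_width + 1) for s in spikes]
    # Backward pass: counts[i][j] = number of valid placements of spikes
    # i..end given that spike i sits at windows[i][j].
    counts = [np.ones(len(w)) for w in windows]
    for i in range(len(spikes) - 2, -1, -1):
        for j, t in enumerate(windows[i]):
            ok = windows[i + 1] >= t + refractory
            counts[i][j] = counts[i + 1][ok].sum()
    # Forward pass: sample each spike given the previously sampled spike.
    out, prev = [], -np.inf
    for i, w in enumerate(windows):
        weights = np.where(w >= prev + refractory, counts[i], 0.0)
        j = rng.choice(len(w), p=weights / weights.sum())
        prev = w[j]
        out.append(prev)
    return np.array(out)
```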


Journal of the American Statistical Association | 2018

Mixture Models With a Prior on the Number of Components

Jeffrey W. Miller; Matthew T. Harrison

A natural Bayesian approach for mixture models with an unknown number of components is to take the usual finite mixture model with symmetric Dirichlet weights, and put a prior on the number of components—that is, to use a mixture of finite mixtures (MFM). The most commonly used method of inference for MFMs is reversible jump Markov chain Monte Carlo, but it can be nontrivial to design good reversible jump moves, especially in high-dimensional spaces. Meanwhile, there are samplers for Dirichlet process mixture (DPM) models that are relatively simple and are easily adapted to new applications. It turns out that, in fact, many of the essential properties of DPMs are also exhibited by MFMs—an exchangeable partition distribution, restaurant process, random measure representation, and stick-breaking representation—and crucially, the MFM analogues are simple enough that they can be used much like the corresponding DPM properties. Consequently, many of the powerful methods developed for inference in DPMs can be directly applied to MFMs as well; this simplifies the implementation of MFMs and can substantially improve mixing. We illustrate with real and simulated data, including high-dimensional gene expression data used to discriminate cancer subtypes.
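The restaurant-process analogue mentioned in the abstract is concrete enough to sketch. Below is a minimal Python implementation of the MFM partition coefficient V_n(t) and the resulting seating probabilities; the Dirichlet parameter `gamma`, the series truncation `k_max`, and the Poisson(1) prior on K - 1 are illustrative choices, not the paper's only options.

```python
import numpy as np
from scipy.special import gammaln, logsumexp
from scipy.stats import poisson

def log_Vn(n, t, gamma, log_pk, k_max=500):
    """log V_n(t) = log sum_k p(K=k) * k_(t) / (gamma*k)^(n), where k_(t)
    is a falling factorial and x^(n) a rising factorial; the infinite
    series is truncated at k_max."""
    ks = np.arange(max(t, 1), k_max + 1)  # terms with k < t vanish
    log_falling = gammaln(ks + 1) - gammaln(ks - t + 1)
    log_rising = gammaln(gamma * ks + n) - gammaln(gamma * ks)
    return logsumexp(log_pk(ks) + log_falling - log_rising)

# Illustrative prior on the number of components: K - 1 ~ Poisson(1).
log_pk = lambda k: poisson.logpmf(k - 1, 1.0)

def seating_probs(cluster_sizes, gamma, log_pk):
    """MFM analogue of the Chinese restaurant process: the next observation
    joins an existing cluster c with weight |c| + gamma, or opens a new
    cluster with weight gamma * V_n(t+1)/V_n(t), where n counts
    observations after seating and t is the current number of clusters."""
    n = sum(cluster_sizes) + 1
    t = len(cluster_sizes)
    new = gamma * np.exp(log_Vn(n, t + 1, gamma, log_pk)
                         - log_Vn(n, t, gamma, log_pk))
    w = np.array([s + gamma for s in cluster_sizes] + [new])
    return w / w.sum()
```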


Annals of Statistics | 2013

Exact sampling and counting for fixed-margin matrices

Jeffrey W. Miller; Matthew T. Harrison

The uniform distribution on matrices with specified row and column sums is often a natural choice of null model when testing for structure in two-way tables (binary or nonnegative integer). Due to the difficulty of sampling from this distribution, many approximate methods have been developed. We will show that by exploiting certain symmetries, exact sampling and counting is in fact possible in many nontrivial real-world cases. We illustrate with real datasets including ecological co-occurrence matrices and contingency tables.
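The symmetry being exploited is that columns with equal remaining sums are interchangeable. The toy memoized counter below uses this by sorting the remaining column sums at every step so that equivalent states share cache entries; the paper's algorithm is far more efficient, so treat this as an illustration of the idea only.

```python
from functools import lru_cache
from itertools import combinations

def count_binary_matrices(row_sums, col_sums):
    """Count 0/1 matrices with the given margins by filling rows one at a
    time. Sorting the remaining column sums collapses symmetric states,
    which keeps the memo cache small for modest tables."""
    if sum(row_sums) != sum(col_sums):
        return 0
    m = len(col_sums)

    @lru_cache(maxsize=None)
    def count(i, cols):
        if i == len(row_sums):
            return 1 if all(c == 0 for c in cols) else 0
        total = 0
        available = [j for j in range(m) if cols[j] > 0]
        # place row i's ones in any set of columns with remaining capacity
        for ones in combinations(available, row_sums[i]):
            new = list(cols)
            for j in ones:
                new[j] -= 1
            total += count(i + 1, tuple(sorted(new, reverse=True)))
        return total

    return count(0, tuple(sorted(col_sums, reverse=True)))

# Example: all margins equal to 2 in a 3x3 table. The 6 matrices are the
# complements of the 3x3 permutation matrices.
print(count_binary_matrices((2, 2, 2), (2, 2, 2)))  # -> 6
```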


Proceedings of the National Academy of Sciences of the United States of America | 2015

Ambiguity and nonidentifiability in the statistical analysis of neural codes

Asohan Amarasingham; Stuart Geman; Matthew T. Harrison

Significance: Among the most important open questions in neurophysiology are those regarding the nature of the code that neurons use to transmit information. Experimental approaches to such questions are challenging because the spike outputs of a neuronal subpopulation are influenced by a vast array of factors, ranging from microscopic to macroscopic scales, but only a small fraction of these is measured. Inevitably, there is variability from trial to trial in the recorded data. We show that a prominent conceptual approach to modeling spike-train variability can be ill-posed, confusing the interpretation of results bearing on neural codes. We argue for more careful definitions and more explicit statements of physiological assumptions.

Many experimental studies of neural coding rely on a statistical interpretation of the theoretical notion of the rate at which a neuron fires spikes. For example, neuroscientists often ask, “Does a population of neurons exhibit more synchronous spiking than one would expect from the covariability of their instantaneous firing rates?” For another example, “How much of a neuron’s observed spiking variability is caused by the variability of its instantaneous firing rate, and how much is caused by spike timing variability?” However, a neuron’s theoretical firing rate is not necessarily well-defined. Consequently, neuroscientific questions involving the theoretical firing rate do not have a meaning in isolation but can only be interpreted in light of additional statistical modeling choices. Ignoring this ambiguity can lead to inconsistent reasoning or wayward conclusions. We illustrate these issues with examples drawn from the neural-coding literature.


Progress in Brain Research | 2001

Representations based on neuronal interactions in motor cortex

Nicholas G. Hatsopoulos; Matthew T. Harrison; John P. Donoghue

Publisher Summary: This chapter describes how large groups of cortical neurons are simultaneously active when a stimulus is perceived or a motor act is planned and executed. The fact that all units recorded with an immovable, chronically implanted array in motor cortex modulated their activity during execution of a simple reaching movement attests to the idea that a very large number of neurons participate in even the simplest behaviors. Population decoding strategies, such as the population vector algorithm, assume that neurons are noisy but independent encoders. These decoding algorithms demonstrate that pooling the noisy signals from many neurons can reduce the detrimental effects of noise and predict the direction of movement quite reliably. Simultaneous recordings from multiple neurons show, however, that neurons are not independent encoders of movement direction: they exhibit statistical dependencies on multiple time scales. Incorporating these correlations into statistical coding and decoding schemes shows that they are not necessarily detrimental to decoding movement direction but can improve predictive power.
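For reference, the population vector algorithm mentioned above is easy to state in code. A minimal Python sketch with simulated cosine-tuned neurons; the baseline rate, tuning depth, and noise level are arbitrary illustrative choices.

```python
import numpy as np

def population_vector(rate_modulations, preferred_directions):
    """Population vector algorithm: each neuron votes for its preferred
    direction, weighted by its baseline-subtracted firing rate; the vector
    sum predicts movement direction. Note the independence assumption the
    chapter questions: correlations between neurons are ignored."""
    weights = np.asarray(rate_modulations)[:, None]  # (n_neurons, 1)
    dirs = np.asarray(preferred_directions)          # (n_neurons, 2) unit vectors
    pv = (weights * dirs).sum(axis=0)
    return np.arctan2(pv[1], pv[0])                  # predicted direction, radians

# Demo: 100 cosine-tuned neurons, true movement direction 45 degrees.
rng = np.random.default_rng(0)
pd_angles = rng.uniform(0, 2 * np.pi, 100)
dirs = np.column_stack([np.cos(pd_angles), np.sin(pd_angles)])
rates = 10 + 8 * np.cos(pd_angles - np.pi / 4) + rng.normal(0, 2, 100)
print(np.degrees(population_vector(rates - 10, dirs)))  # close to 45
```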


Neural Computation | 2015

Spatiotemporal conditional inference and hypothesis tests for neural ensemble spiking precision

Matthew T. Harrison; Asohan Amarasingham; Wilson Truccolo

The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatiotemporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons that is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatiotemporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis-testing adjustments and design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peristimulus time histogram or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable to other areas of neurostatistical analysis.
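The flavor of these constrained resampling schemes can be conveyed with a toy example. Below is a hedged MCMC sketch (not the paper's exact algorithms) that locally shuffles spike identities across trials while preserving the pooled PSTH exactly: swapping the times of two spikes from different trials never changes how many spikes occur at each time point.

```python
import numpy as np

def psth_preserving_jitter(trials, times, half_width, n_sweeps=100, rng=None):
    """Metropolis sketch: propose swapping the times of two spikes from
    different trials; accept only if both spikes remain within
    +/- half_width of their ORIGINAL times. Swaps preserve the pooled
    spike-time histogram and every per-trial spike count, and symmetric
    proposals make the uniform distribution on the constraint set
    stationary (mixing/connectivity is not guaranteed in general)."""
    rng = np.random.default_rng() if rng is None else rng
    trials = np.asarray(trials)
    t = np.array(times, dtype=float)  # current times, one entry per spike
    orig = t.copy()
    n = t.size
    for _ in range(n_sweeps * n):
        i, j = rng.integers(0, n, size=2)
        if trials[i] == trials[j]:
            continue  # a within-trial swap changes nothing
        if (abs(t[j] - orig[i]) <= half_width and
                abs(t[i] - orig[j]) <= half_width):
            t[i], t[j] = t[j], t[i]
    return t
```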


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2014

Adaptive Offset Correction for Intracortical Brain–Computer Interfaces

Mark L. Homer; János A Perge; Michael J. Black; Matthew T. Harrison; Sydney S. Cash; Leigh R. Hochberg

Intracortical brain-computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called the multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.
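A schematic of the offset-correction idea, not MOCA itself: track a penalized running estimate of each channel's baseline shift from decoder innovations and subtract it before the Kalman update. The forgetting factor, the penalty, and the observation matrix `H` are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """L1-style shrinkage: small estimated shifts are treated as zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

class OffsetCorrectedObservations:
    """Sketch of adaptive offset correction around a fixed linear decoder.
    H maps the decoded state to expected neural activity (from
    calibration); the offset is an exponentially weighted mean of recent
    innovations, shrunk toward zero as a simple stand-in for the penalized
    maximum-likelihood step described in the abstract."""
    def __init__(self, H, forget=0.99, lam=0.1):
        self.H = np.asarray(H)
        self.forget = forget   # forgetting factor: ~seconds of memory
        self.lam = lam         # penalty strength
        self.offset = np.zeros(self.H.shape[0])

    def step(self, y, x_pred):
        """y: observed activity; x_pred: Kalman-predicted state.
        Returns offset-corrected observations for the filter update."""
        innovation = y - self.H @ x_pred - self.offset
        raw = self.forget * self.offset + (1 - self.forget) * innovation
        self.offset = soft_threshold(raw, self.lam * (1 - self.forget))
        return y - self.offset
```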

Collaboration


Dive into Matthew T. Harrison's collaborations.

Top Co-Authors

Ioannis Kontoyiannis
Athens University of Economics and Business

David T. Allen
University of Texas at Austin

David W. Sullivan
University of Texas at Austin