
Publication


Featured research published by Valérie Ventura.


Neural Computation | 2002

The time-rescaling theorem and its application to neural spike train data analysis

Emery N. Brown; Riccardo Barbieri; Valérie Ventura; Robert E. Kass; Loren M. Frank

Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model's validity prior to using it to make inferences about a particular neural system. Assessing goodness of fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as peristimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the supplementary eye field of a macaque monkey and a comparison of temporal and spatial smoothers, inhomogeneous Poisson, inhomogeneous gamma, and inhomogeneous inverse Gaussian models of rat hippocampal place cell spiking activity. To help make the logic behind the time-rescaling theorem more accessible to researchers in neuroscience, we present a proof using only elementary probability theory arguments. We also show how the theorem may be used to simulate a general point process model of a spike train. Our paradigm makes it possible to compare parametric and histogram-based neural spike train models directly. These results suggest that the time-rescaling theorem can be a valuable tool for neural spike train data analysis.
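The goodness-of-fit check the abstract describes can be sketched in a few lines: integrate the estimated intensity to get Λ(t), rescale the spike times, and compare the transformed inter-spike intervals to their theoretical uniform distribution with a Kolmogorov-Smirnov statistic. This is a minimal single-trial sketch with illustrative function names, using a trapezoidal approximation of the integral; it is not the authors' implementation.

```python
import numpy as np

def rescale_spike_times(spike_times, intensity, t_grid):
    """Rescale spike times through the integrated intensity Lambda(t).

    By the time-rescaling theorem, under a correctly specified model the
    rescaled inter-spike intervals are i.i.d. Exponential(1), so after a
    further transform they are Uniform(0, 1).
    """
    # Trapezoidal cumulative integral of the intensity on the grid.
    cum = np.concatenate([[0.0], np.cumsum(np.diff(t_grid) *
                                           0.5 * (intensity[1:] + intensity[:-1]))])
    Lam = np.interp(spike_times, t_grid, cum)       # Lambda at each spike time
    taus = np.diff(np.concatenate([[0.0], Lam]))    # rescaled inter-spike intervals
    return 1.0 - np.exp(-taus)                      # Uniform(0, 1) under H0

def ks_statistic(u):
    """Approximate Kolmogorov-Smirnov distance of u from Uniform(0, 1),
    using midpoint plotting positions."""
    u = np.sort(u)
    n = len(u)
    grid = (np.arange(1, n + 1) - 0.5) / n
    return np.max(np.abs(u - grid))
```

Plotting the sorted rescaled values against the uniform quantiles (a KS plot with confidence bands) is the usual visual companion to this statistic.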


Neural Computation | 2001

A Spike-Train Probability Model

Robert E. Kass; Valérie Ventura

Poisson processes usually provide adequate descriptions of the irregularity in neuron spike times after pooling the data across large numbers of trials, as is done in constructing the peristimulus time histogram. When probabilities are needed to describe the behavior of neurons within individual trials, however, Poisson process models are often inadequate. In principle, an explicit formula gives the probability density of a single spike train in great generality, but without additional assumptions, the firing-rate intensity function appearing in that formula cannot be estimated. We propose a simple solution to this problem, which is to assume that the time at which a neuron fires is determined probabilistically by, and only by, two quantities: the experimental clock time and the elapsed time since the previous spike. We show that this model can be fitted with standard methods and software and that it may be used successfully to fit neuronal data.
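The probability density alluded to above can be evaluated directly once the intensity is allowed to depend on the clock time t and the elapsed time u since the previous spike: log L = Σ_k log λ(t_k, u_k) − ∫ λ(t, u(t)) dt. A minimal sketch of that computation for a user-supplied conditional intensity follows; the Riemann discretization and function names are illustrative, not the paper's implementation, and the elapsed time before the first spike is measured from the trial start.

```python
import numpy as np

def spike_train_loglik(spike_times, cond_intensity, T, dt=0.001):
    """Log-likelihood of a spike train on [0, T] under a conditional
    intensity lambda(t, u), with u the time since the previous spike:
        log L = sum_k log lambda(t_k, u_k) - integral_0^T lambda(t, u(t)) dt
    """
    spike_times = np.asarray(spike_times, dtype=float)
    prev = np.concatenate([[0.0], spike_times[:-1]])
    point_term = np.sum(np.log(cond_intensity(spike_times, spike_times - prev)))
    # Riemann approximation of the integral term on a fine grid.
    t = np.arange(0.0, T, dt)
    last = np.searchsorted(spike_times, t, side="right") - 1
    u = t - np.where(last >= 0, spike_times[last], 0.0)  # time since last spike
    integral = np.sum(cond_intensity(t, u)) * dt
    return point_term - integral
```

With a constant intensity this reduces to the familiar homogeneous Poisson log-likelihood n log λ − λT, which is a convenient sanity check.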


Journal of the American Statistical Association | 2000

Asymptotic Distribution of P Values in Composite Null Models

James M. Robins; Aad van der Vaart; Valérie Ventura

Abstract We investigate the compatibility of a null model H0 with the data by calculating a p value; that is, the probability, under H0, that a given test statistic T exceeds its observed value. When the null model consists of a single distribution, the p value is readily obtained, and it has a uniform distribution under H0. On the other hand, when the null model depends on an unknown nuisance parameter θ, one must somehow get rid of θ (e.g., by estimating it) to calculate a p value. Various proposals have been suggested to “remove” θ, each yielding a different candidate p value. But unlike the simple case, these p values typically are not uniformly distributed under the null model. In this article we investigate their asymptotic distribution under H0. We show that when the asymptotic mean of the test statistic T depends on θ, the posterior predictive p value of Guttman and Rubin and the plug-in p value are conservative (i.e., their asymptotic distributions are more concentrated around 1/2 than a uniform), with the posterior predictive p value being the more conservative. In contrast, the partial posterior predictive and conditional predictive p values of Bayarri and Berger are asymptotically uniform. Furthermore, we show that the discrepancy p value of Meng and Gelman and colleagues can be conservative, even when the discrepancy measure has mean 0 under the null model. We also describe ways to modify the conservative p values to make their distributions asymptotically uniform.
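The conservativeness of the plug-in p value is easy to see in simulation. Below is a hedged sketch for a toy composite null of our own choosing (x_i i.i.d. N(θ, 1) with θ unknown, statistic T = max x_i), not one of the article's examples: plugging in the MLE θ̂ = x̄ yields p values whose distribution is more concentrated around 1/2 than a uniform.

```python
import numpy as np
from math import erf, sqrt

def plugin_pvalue(x):
    """Plug-in p value for T = max(x) under H0: x_i i.i.d. N(theta, 1).

    The unknown theta is replaced by its MLE, the sample mean, and the
    p value is computed from the exact CDF of the maximum of n i.i.d.
    standard normals: P(T >= t) = 1 - Phi(t - theta)^n.
    """
    x = np.asarray(x, dtype=float)
    phi = 0.5 * (1.0 + erf((x.max() - x.mean()) / sqrt(2.0)))  # standard normal CDF
    return 1.0 - phi ** len(x)
```

Simulating many null datasets and comparing the variance of these p values with the uniform variance 1/12 exhibits the concentration the article proves asymptotically; by its results, the partial posterior predictive p value would restore asymptotic uniformity in this setting.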


Journal of Climate | 2004

Controlling the Proportion of Falsely Rejected Hypotheses when Conducting Multiple Tests with Climatological Data

Valérie Ventura; Christopher J. Paciorek; James S. Risbey

Abstract The analysis of climatological data often involves statistical significance testing at many locations. While the field significance approach determines if a field as a whole is significant, a multiple testing procedure determines which particular tests are significant. Many such procedures are available, most of which control, for every test, the probability of detecting significance that does not really exist. The aim of this paper is to introduce the novel “false discovery rate” approach, which controls the false rejections in a more meaningful way. Specifically, it controls a priori the expected proportion of falsely rejected tests out of all rejected tests; additionally, the test results are more easily interpretable. The paper also investigates the best way to apply a false discovery rate (FDR) approach to spatially correlated data, which are common in climatology. The most straightforward method for controlling the FDR makes an assumption of independence between tests, while other FDR-contr...
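The independence-based FDR procedure the paper starts from, Benjamini and Hochberg's step-up rule, is short enough to state in code. A minimal sketch with illustrative names: sort the p values, find the largest rank k with p_(k) ≤ kq/m, and reject the k smallest.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Rejects the hypotheses whose p values are at or below the largest
    sorted p value p_(k) satisfying p_(k) <= k * q / m. Under
    independence this controls the expected proportion of falsely
    rejected tests among all rejected tests (the FDR) at level q.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # k*q/m for k = 1..m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])          # largest passing rank
        reject[order[:k + 1]] = True
    return reject
```

Note the step-up character: every test ranked at or below k is rejected, even those whose p values individually exceed their own threshold, which is what makes the procedure less conservative than per-test control.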


Journal of Climate | 2002

Multiple Indices of Northern Hemisphere Cyclone Activity, Winters 1949–99

Christopher J. Paciorek; James S. Risbey; Valérie Ventura; Richard D. Rosen

The National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis is used to estimate time trends of, and analyze the relationships among, six indices of cyclone activity or forcing for the winters of 1949–99, over the region 20°–70°N. The indices are Eady growth rate and temperature variance, both at 500 hPa; surface meridional temperature gradient; the 95th percentile of near-surface wind speed; and counts of cyclones and intense cyclones. With multiple indices, one can examine different aspects of storm activity and forcing and assess the robustness of the results to various definitions of a cyclone index. Results are reported both as averages over broad spatial regions and at the resolution of the NCEP–NCAR reanalysis grid, for which the false discovery rate methodology is used to assess statistical significance. The Eady growth rate, temperature variance, and extreme wind indices are reasonably well correlated over the two major storm track regions of the Northern Hemisphere as well as over northern North America and Eurasia, but weakly correlated elsewhere. These indices show moderately strong correlations with each of the two cyclone count indices over much of the storm tracks when the count indices are offset 7.5° to the north. Regional averages over the Atlantic, the Pacific, and Eurasia show either no long-term change or a decrease in the total number of cyclones; however, all regions show an increase in intense cyclones. The Eady growth rate, temperature variance, and wind indices generally increase in these regions. On a finer spatial scale, these three indices increase significantly over the storm tracks and parts of Eurasia. The intense cyclone count index also increases locally, but insignificantly, over the storm tracks. The wind and intense cyclone indices suggest an increase in impacts from cyclones, primarily over the oceans.


Neural Computation | 2008

Spike train decoding without spike sorting

Valérie Ventura

We propose a novel paradigm for spike train decoding, which avoids entirely spike sorting based on waveform measurements. This paradigm directly uses the spike train collected at recording electrodes from thresholding the bandpassed voltage signal. Our approach is a paradigm, not an algorithm, since it can be used with any of the current decoding algorithms, such as population vector or likelihood-based algorithms. Based on analytical results and an extensive simulation study, we show that our paradigm is comparable to, and sometimes more efficient than, the traditional approach based on well-isolated neurons and that it remains efficient even when all electrodes are severely corrupted by noise, a situation that would render spike sorting particularly difficult. Our paradigm will also save time and computational effort, both of which are crucially important for successful operation of real-time brain-machine interfaces. Indeed, in place of the lengthy spike-sorting task of the traditional approach, it involves an exact expectation-maximization (EM) algorithm that is fast enough that it could also be left to run during decoding to capture potential slow changes in the states of the neurons.
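The core idea can be illustrated with a toy maximum-likelihood decoder of our own construction (not the article's algorithm): because an electrode's threshold crossings pool the spikes of the neurons it records, and a sum of independent Poisson counts is again Poisson, the electrode count itself follows a Poisson law with the summed rate, so decoding can condition on electrode counts with no spike identities at all.

```python
import numpy as np

def decode_unsorted(counts, electrode_rates, stimuli, dt):
    """Maximum-likelihood decoding straight from unsorted threshold crossings.

    counts:          (n_electrodes,) spike counts in a window of length dt.
    electrode_rates: (n_stimuli, n_electrodes) expected electrode firing
                     rates per candidate stimulus (each the sum of the
                     rates of the neurons recorded on that electrode).
    Returns the stimulus maximizing the Poisson log-likelihood
        sum_e [ n_e * log(lambda_e * dt) - lambda_e * dt ].
    """
    lam = electrode_rates * dt
    loglik = counts @ np.log(lam).T - lam.sum(axis=1)
    return stimuli[np.argmax(loglik)]
```

Any decoder that consumes per-unit counts (population vector, Kalman filter) can be fed electrode counts the same way, which is what makes the proposal a paradigm rather than a single algorithm.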


Journal of Neural Engineering | 2014

To sort or not to sort: the impact of spike-sorting on neural decoding performance.

Sonia Todorova; Patrick T. Sadtler; Aaron P. Batista; Steven M. Chase; Valérie Ventura

OBJECTIVE Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. APPROACH We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. MAIN RESULTS Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrodes' voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior.
SIGNIFICANCE Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.


Neural Computation | 2006

Spike Count Correlation Increases with Length of Time Interval in the Presence of Trial-to-Trial Variation

Robert E. Kass; Valérie Ventura

It has been observed that spike count correlation between two simultaneously recorded neurons often increases with the length of time interval examined. Under simple assumptions that are roughly consistent with much experimental data, we show that this phenomenon may be explained as being due to excess trial-to-trial variation. The resulting formula for the correlation is able to predict the observed correlation of two neurons recorded from primary visual cortex as a function of interval length.
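The mechanism is easy to reproduce in simulation. The sketch below uses a simple shared-gain model of our own (not the paper's data or its exact assumptions): two neurons are conditionally independent Poisson given a trial-specific multiplicative gain, and the count correlation grows with the window length T.

```python
import numpy as np

def count_correlation(T, rate1=10.0, rate2=10.0, gain_sd=0.2,
                      n_trials=20000, rng=0):
    """Spike count correlation over a window of length T when a shared
    trial-to-trial gain multiplies both neurons' firing rates.

    Counts are conditionally independent Poisson given the gain, so all
    correlation comes from the excess trial-to-trial variation; it grows
    with T because the shared-gain variance scales as T^2 while the
    Poisson variance scales as T.
    """
    rng = np.random.default_rng(rng)
    g = 1.0 + gain_sd * rng.standard_normal(n_trials)
    g = np.clip(g, 0.05, None)            # keep rates positive
    n1 = rng.poisson(g * rate1 * T)
    n2 = rng.poisson(g * rate2 * T)
    return np.corrcoef(n1, n2)[0, 1]
```

Under this model the population correlation is s²λ₁λ₂T² / √((λ₁T + s²λ₁²T²)(λ₂T + s²λ₂²T²)), which for equal rates simplifies to s²λT/(1 + s²λT) and increases monotonically toward 1 with T.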


Neural Computation | 2004

Testing for and estimating latency effects for Poisson and non-Poisson spike trains

Valérie Ventura

Determining the variations in response latency of one or several neurons to a stimulus is of interest in different contexts. Two common problems concern correlating latency with a particular behavior, for example, the reaction time to a stimulus, and adjusting tools for detecting synchronization between two neurons. We use two such problems to illustrate the latency testing and estimation methods developed in this article. Our test for latencies is a formal statistical test that produces a p-value. It is applicable for Poisson and non-Poisson spike trains via use of the bootstrap. Our estimation method is model free, it is fast and easy to implement, and its performance compares favorably to other methods currently available.
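A generic version of such a bootstrap test is sketched below. This is our illustration of the bootstrap machinery on summary statistics, not the article's specific procedure, which works with full spike trains: to test for a latency shift between two conditions, pool the per-trial first-spike times and build the null distribution of the difference in means by resampling.

```python
import numpy as np

def bootstrap_latency_shift(t_a, t_b, n_boot=2000, rng=0):
    """Two-sample bootstrap test for a latency shift between conditions.

    H0: no shift, so per-trial first-spike times in the two conditions
    come from a common distribution. The null distribution of the
    difference in means is approximated by resampling from the pooled
    sample; returns a two-sided bootstrap p-value.
    """
    rng = np.random.default_rng(rng)
    t_a, t_b = np.asarray(t_a, float), np.asarray(t_b, float)
    obs = t_b.mean() - t_a.mean()
    pooled = np.concatenate([t_a, t_b])
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        ra = rng.choice(pooled, size=len(t_a), replace=True)
        rb = rng.choice(pooled, size=len(t_b), replace=True)
        diffs[b] = rb.mean() - ra.mean()
    return (np.abs(diffs) >= abs(obs)).mean()
```

Because the bootstrap makes no parametric assumption about the spiking process, the same recipe applies whether or not the spike trains are Poisson, which is the point the abstract emphasizes.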


Proceedings of the National Academy of Sciences of the United States of America | 2012

Accurately estimating neuronal correlation requires a new spike-sorting paradigm

Valérie Ventura; Richard C. Gerkin

Neurophysiology is increasingly focused on identifying coincident activity among neurons. Strong inferences about neural computation are made from the results of such studies, so it is important that these results be accurate. However, the preliminary step in the analysis of such data, the assignment of spike waveforms to individual neurons (“spike-sorting”), makes a critical assumption that undermines the analysis: that spikes, and hence neurons, are independent. We show that this assumption guarantees that coincident spiking estimates such as correlation coefficients are biased. We also show how to eliminate this bias. Our solution involves sorting spikes jointly, which contrasts with the current practice of sorting spikes independently of other spikes. This new “ensemble sorting” yields unbiased estimates of coincident spiking, and permits more data to be analyzed with confidence, improving the quality and quantity of neurophysiological inferences. These results should be of interest outside the context of neuronal correlation studies. Indeed, simultaneous recording of many neurons has become the rule rather than the exception in experiments, so it is essential to spike sort correctly if we are to make valid inferences about any properties of, and relationships between, neurons.

Collaboration


Top Co-Authors

Robert E. Kass, Carnegie Mellon University
Giuseppe Vinci, Carnegie Mellon University
Sagi Perel, Carnegie Mellon University
Sonia Todorova, Carnegie Mellon University
Carl R. Olson, Carnegie Mellon University
Emery N. Brown, Massachusetts Institute of Technology