
Publication


Featured research published by Sohan Seth.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2010

Neuronal functional connectivity dynamics in cortex: An MSC-based analysis

Lin Li; Il Park; Sohan Seth; Justin C. Sanchez; Jose C. Principe

The activation of neural ensembles in the cortex is correlated with behavioral states, so changes in neuronal functional connectivity patterns are expected. In this paper, we investigate this dynamic nature of functional connectivity in the cortex. Because of the time scale of behavior, a method that remains robust with limited sample sizes is desirable. In light of this, we use mean square contingency (MSC) as a pairwise measure of neural dependency to quantify cortical functional connectivity. Simulation results show that MSC is more robust than cross-correlation when the sample size is small. In tests on monkey neural data, our approach is more effective at detecting the dynamics of functional connectivity associated with transitions between rest and movement states.
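For binary (spike / no-spike) bins, the mean square contingency of two spike sequences reduces to the squared phi coefficient of their 2x2 contingency table. A minimal sketch of that estimator (the binning and names here are illustrative, not taken from the paper):

```python
import numpy as np

def mean_square_contingency(x, y):
    """Squared phi coefficient of the 2x2 contingency table of two
    binary sequences (1 = spike in bin, 0 = no spike)."""
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    n = len(x)
    a = np.sum(x & y)        # both spike
    b = np.sum(x & ~y)       # only x spikes
    c = np.sum(~x & y)       # only y spikes
    d = n - a - b - c        # neither spikes
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0
    phi = (a * d - b * c) / np.sqrt(denom)
    return phi ** 2

# Identical binary trains give maximal contingency.
x = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(mean_square_contingency(x, x))   # 1.0
```

Because it is built from counts in four cells, the estimate is usable even with the short windows that behavioral time scales impose.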


International Conference on Acoustics, Speech, and Signal Processing | 2008

Compressed signal reconstruction using the correntropy induced metric

Sohan Seth; Jose C. Principe

Recovering a sparse signal from an insufficient number of measurements has become a popular area of research under the name of compressed sensing or compressive sampling. The reconstruction algorithm of compressed sensing tries to find the sparsest vector (minimum ℓ0-norm) satisfying a series of linear constraints. However, ℓ0-norm minimization, being an NP-hard problem, is replaced by ℓ1-norm minimization at the cost of a higher number of measurements in the sampling process. In this paper we propose to minimize an approximation of the ℓ0-norm to reduce the required number of measurements. We use the recently introduced correntropy induced metric (CIM) as an approximation of the ℓ0-norm, which is also a novel application of CIM. We show that by reducing the kernel size appropriately we can, in theory, approximate the ℓ0-norm with arbitrary accuracy.
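The key property is easy to see numerically: with a Gaussian kernel, the CIM-based cost sums terms 1 - exp(-x_i^2 / 2σ²), and as the kernel size σ shrinks each term approaches an indicator of x_i ≠ 0, i.e. the ℓ0 count. A small sketch of that limit (names and constants are illustrative):

```python
import numpy as np

def cim_l0(x, sigma):
    """CIM-based sparseness cost with a Gaussian kernel of width sigma
    (squared CIM distance to the origin, up to a constant factor).
    As sigma -> 0 it tends to the number of nonzero entries of x."""
    x = np.asarray(x, float)
    return np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))

x = np.array([0.0, 3.0, 0.0, -0.5, 0.0, 1e-2])
print(np.count_nonzero(x))       # 3
print(cim_l0(x, sigma=1.0))      # loose approximation
print(cim_l0(x, sigma=1e-3))    # approximately 3
```

In practice the kernel size is decreased gradually, since a very small σ makes the cost nearly flat away from the axes and hard to optimize.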


International Symposium on Neural Networks | 2011

Evaluating dependence in spike train metric spaces

Sohan Seth; Austin J. Brockmeier; John S. Choi; Mulugeta Semework; Joseph T. Francis; Jose C. Principe

Assessing dependence between two sets of spike trains, or between a set of input stimuli and the corresponding generated spike trains, is crucial in many neuroscientific applications, such as analyzing functional connectivity among neural assemblies and neural coding. Dependence between two random variables is traditionally assessed in terms of mutual information. However, although well explored in the context of real- or vector-valued random variables, estimating mutual information remains a challenging issue when the random variables exist in more exotic spaces, such as the space of spike trains. In the statistical literature, on the other hand, the concept of dependence between two random variables has been presented in many other ways, e.g. using copulas, or using measures of association such as Spearman's ρ and Kendall's τ. Although these methods are usually applied on the real line, their simplicity, both in terms of understanding and estimation, makes them worth investigating in the context of spike train dependence. In this paper, we generalize the concept of association to abstract metric spaces. This new approach is an attractive alternative to mutual information, since it can be easily estimated from realizations without binning or clustering. It also provides an intuitive understanding of what dependence implies in the context of realizations. We show that this new methodology effectively captures dependence between sets of stimuli and spike trains. Moreover, the estimator has desirable small-sample characteristics, and it often outperforms an existing similar metric-based approach.
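A rank-based association of this kind needs only pairwise distances, which is why it transfers to arbitrary metric spaces such as spike trains. The following is a rough sketch of the idea (my own simplification, not the paper's exact estimator): for each sample, find its nearest neighbour in the first space and check, by rank, how close the corresponding partner is in the second space.

```python
import numpy as np

def metric_association(dx, dy):
    """Rank-based association between paired samples, given pairwise
    distance matrices dx, dy (one per metric space). Returns a value
    near 1 under strong dependence and near 0.5 under independence.
    A sketch in the spirit of the paper, not its exact estimator."""
    n = len(dx)
    scores = []
    for i in range(n):
        row = dx[i].copy()
        row[i] = np.inf
        j = int(np.argmin(row))            # nearest neighbour of i in space X
        others = np.delete(dy[i], i)
        rank = np.sum(others >= dy[i, j])  # how near y_j is to y_i, by rank
        scores.append(rank / (n - 1))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y = x + 0.01 * rng.normal(size=(200, 1))   # strongly dependent on x
z = rng.normal(size=(200, 1))              # independent of x
dist = lambda a: np.abs(a - a.T)           # pairwise distances on the line
print(metric_association(dist(x), dist(y)))  # close to 1
print(metric_association(dist(x), dist(z)))  # around 0.5
```

Note that no binning or clustering appears anywhere: only distances and their ranks are used, so the same code would run with a spike-train metric in place of `dist`.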


International Workshop on Machine Learning for Signal Processing | 2011

An adaptive decoder from spike trains to micro-stimulation using kernel least-mean-squares (KLMS)

L Li; Il Park; Sohan Seth; John S. Choi; Joseph T. Francis; Justin C. Sanchez; Jose C. Principe

This paper proposes a nonlinear adaptive decoder for somatosensory micro-stimulation based on the kernel least mean square (KLMS) algorithm applied directly on the space of spike trains. Instead of using a binned representation of spike trains, we transform the vector of spike times into a function in a reproducing kernel Hilbert space (RKHS), where the inner product of two spike time vectors is defined by a nonlinear cross-intensity kernel. This representation encapsulates the statistical description of the point process that generates the spike trains, and bypasses the dimensionality-resolution trade-off of binned spike representations. We compare our method with two other methods based on binned data, the GLM and KLMS, in reconstructing biphasic micro-stimulation. The results indicate that the KLMS based on the spike train RKHS detects the timing, shape, and amplitude of the biphasic stimulation with the best accuracy.
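The KLMS update itself is simple: predict with a kernel expansion over past inputs, then append the current input as a new center whose coefficient is proportional to the prediction error. A sketch on ordinary vectors with a Gaussian kernel (the paper's contribution is to replace this kernel with a cross-intensity kernel defined directly on spike trains; names and constants below are illustrative):

```python
import numpy as np

def gauss_kernel(a, b, sigma):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Kernel least-mean-squares: an online filter whose output is a
    growing kernel expansion over past inputs."""
    def __init__(self, step=0.5, sigma=0.3):
        self.step, self.sigma = step, sigma
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(c * gauss_kernel(x, u, self.sigma)
                   for c, u in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)            # prediction error
        self.centers.append(np.asarray(x, float))
        self.coeffs.append(self.step * e)  # new unit, scaled by the error
        return e

# Learn a simple nonlinear map online; the error should shrink.
rng = np.random.default_rng(1)
klms = KLMS()
errors = []
for _ in range(500):
    x = rng.uniform(-1.0, 1.0, size=1)
    errors.append(abs(klms.update(x, np.sin(3.0 * x[0]))))
print(np.mean(errors[:50]), np.mean(errors[-50:]))
```

Because only kernel evaluations touch the inputs, swapping the Gaussian kernel for a spike-train kernel changes nothing else in the algorithm.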


Signal Processing | 2011

A test of independence based on a generalized correlation function

Murali Rao; Sohan Seth; Jian-Wu Xu; Yunmei Chen; Hemant D. Tagare; Jose C. Principe

In this paper, we propose a novel test of independence based on the concept of correntropy. We explore correntropy from a statistical perspective and discuss its properties in the context of testing independence. We introduce the novel concept of parametric correntropy and design a test of independence based on it. We further discuss how the proposed test relaxes the assumption of Gaussianity. Finally, we discuss some computational issues related to the proposed method and compare it with state-of-the-art techniques.
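Correntropy itself has a one-line sample estimator; with a Gaussian kernel it acts like a similarity measure that incorporates higher-order moments of the difference and saturates for large deviations. A minimal sketch of the plain estimator (the paper additionally builds a parametric family on top of it for the independence test; that part is omitted here):

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample estimator of correntropy V(X, Y) = E[k(X - Y)] with a
    Gaussian kernel; equals 1 for identical signals and decreases as
    the signals diverge, saturating for large deviations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean(np.exp(-(x - y) ** 2 / (2 * sigma ** 2)))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
print(correntropy(x, x))                                  # exactly 1.0
print(correntropy(x, x + 0.1 * rng.normal(size=5000)))    # slightly lower
print(correntropy(x, rng.normal(size=5000)))              # much lower
```

The kernel size sigma controls which scale of deviations the measure is sensitive to, which is one of the computational issues the paper discusses.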


International Symposium on Neural Networks | 2013

Mixture kernel least mean square

Rosha Pokharel; Sohan Seth; Jose C. Principe

Instead of using a single kernel, several approaches to using multiple kernels have recently been proposed in the kernel learning literature, one of which is multiple kernel learning (MKL). In this paper, we propose an alternative to MKL for selecting the appropriate kernel from a pool of predefined kernels, for a family of online kernel filters called kernel adaptive filters (KAF). The need for an alternative arises because, in a sequential learning method where the hypothesis is updated at every incoming sample, MKL would provide a new kernel, and thus a new hypothesis in the new reproducing kernel Hilbert space (RKHS) associated with that kernel. This does not fit well in the KAF framework, as learning a hypothesis in a fixed RKHS is the core of the KAF algorithms. Hence, we introduce an adaptive learning method to address the kernel selection problem for KAF, based on a competitive mixture of models. We propose the mixture kernel least mean square (MxKLMS) adaptive filtering algorithm, in which kernel least mean square (KLMS) filters learned with different kernels act in parallel at each input instance and are competitively combined so that the filter with the best kernel is an expert for each input regime. The competition among these experts is created by performance-based gating that chooses the appropriate expert locally. Therefore, the individual filter parameters as well as the weights for combining these filters are learned simultaneously in an online fashion. The results obtained suggest that the model not only selects the best kernel, but also significantly improves the prediction accuracy.
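The competitive combination can be sketched with any pool of online filters. Below, a few minimal Gaussian-kernel LMS filters with different kernel sizes run in parallel, and multiplicative, performance-based gate weights concentrate on the best-matched kernel. The gating rule here (an exponentially weighted forecaster) is an illustrative stand-in, not necessarily the paper's exact one:

```python
import numpy as np

class KLMSExpert:
    """Minimal scalar kernel-LMS filter with a Gaussian kernel."""
    def __init__(self, sigma, step=0.5):
        self.sigma, self.step = sigma, step
        self.centers, self.coeffs = [], []
    def predict(self, x):
        return sum(c * np.exp(-(x - u) ** 2 / (2 * self.sigma ** 2))
                   for c, u in zip(self.coeffs, self.centers))
    def update(self, x, d):
        self.coeffs.append(self.step * (d - self.predict(x)))
        self.centers.append(x)

class MxKLMS:
    """Experts predict in parallel; the output is a gate-weighted sum,
    and the gate is updated multiplicatively from each expert's own
    squared error, rewarding locally accurate kernels."""
    def __init__(self, sigmas, eta=2.0):
        self.experts = [KLMSExpert(s) for s in sigmas]
        self.eta = eta
        self.w = np.ones(len(sigmas)) / len(sigmas)
    def step(self, x, d):
        preds = np.array([ex.predict(x) for ex in self.experts])
        out = float(self.w @ preds)
        self.w *= np.exp(-self.eta * (preds - d) ** 2)  # reward accuracy
        self.w /= self.w.sum()
        for ex in self.experts:
            ex.update(x, d)
        return out

# Kernel too narrow, well matched, and too wide: the gate should pick
# out the middle one on this target.
rng = np.random.default_rng(0)
mix = MxKLMS(sigmas=[0.01, 0.3, 5.0])
for _ in range(300):
    x = rng.uniform(-1.0, 1.0)
    mix.step(x, np.sin(3.0 * x))
print(np.round(mix.w, 3))   # weight concentrates on the mid-sized kernel
```

Renormalizing the gate at every step keeps the weights well-scaled, and both the expert coefficients and the gate evolve online, as in the paper's setting.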


IEEE Signal Processing Magazine | 2013

Kernel Methods on Spike Train Space for Neuroscience: A Tutorial

Il Memming Park; Sohan Seth; António R. C. Paiva; Lin Li; Jose C. Principe

Over the last decade, several positive-definite kernels have been proposed to treat spike trains as objects in Hilbert space. However, for the most part, such attempts still remain a mere curiosity for both computational neuroscientists and signal processing experts. This tutorial illustrates why kernel methods can, and have already started to, change the way spike trains are analyzed and processed. The presentation incorporates simple mathematical analogies and convincing practical examples in an attempt to show the yet unexplored potential of positive definite functions to quantify point processes. It also provides a detailed overview of the current state of the art and future challenges with the hope of engaging the readers in active participation.
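One of the simplest positive-definite kernels of this kind is the memoryless cross-intensity (mCI) kernel, which sums a Laplacian interaction over every pair of spike times from the two trains. A sketch (the kernel family is from this literature; the time constants and example trains are illustrative):

```python
import numpy as np

def mci_kernel(s, t, tau=0.01):
    """Memoryless cross-intensity (mCI) spike-train kernel: a sum of
    Laplacian interactions exp(-|t_i - s_j| / tau) over all pairs of
    spike times from the two trains."""
    s, t = np.asarray(s, float), np.asarray(t, float)
    if len(s) == 0 or len(t) == 0:
        return 0.0
    return float(np.sum(np.exp(-np.abs(s[:, None] - t[None, :]) / tau)))

a = [0.10, 0.25, 0.40]   # spike times in seconds
b = [0.11, 0.26, 0.41]   # nearly the same train
c = [0.70, 0.85]         # a very different train
print(mci_kernel(a, b, tau=0.02))   # large: similar trains
print(mci_kernel(a, c, tau=0.02))   # near zero: dissimilar trains
```

Once such a kernel is in hand, the standard kernel-methods toolbox (regression, PCA, adaptive filtering) applies to spike trains unchanged, which is the tutorial's central point.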


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

An Association Framework to Analyze Dependence Structure in Time Series

Bilal H. Fadlallah; Austin J. Brockmeier; Sohan Seth; Lin Li; Andreas Keil; Jose C. Principe

The purpose of this paper is twofold: first, to propose a modification to the generalized measure of association (GMA) framework that reduces the effect of temporal structure in time series; second, to assess the reliability of using association methods to capture dependence between pairs of EEG channels using their time series or envelopes. To achieve the first goal, the GMA algorithm was updated to minimize the effect of the correlation inherent in the time structure. The reliability of the modified scheme was then assessed on both synthetic and real data. Synthetic data were generated from a Clayton copula, for which null hypotheses of uncorrelatedness were constructed for the signal. The signal was processed such that the envelope emulated important characteristics of experimental EEG data. Results show that the modified GMA procedure can capture pairwise dependence between generated signals as well as their envelopes with good statistical power. Furthermore, applying GMA and Kendall's τ to quantify dependence using the extracted envelopes of processed EEG data concords with previous findings using the signal itself.


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2013

Adaptive Inverse Control of Neural Spatiotemporal Spike Patterns With a Reproducing Kernel Hilbert Space (RKHS) Framework

Lin Li; Il Memming Park; Austin J. Brockmeier; Badong Chen; Sohan Seth; Joseph T. Francis; Justin C. Sanchez; Jose C. Principe

The precise control of spiking in a population of neurons via applied electrical stimulation is a challenge due to the sparseness of spiking responses and neural system plasticity. We pose neural stimulation as a system control problem where the system input is a multidimensional time-varying signal representing the stimulation, and the output is a set of spike trains; the goal is to drive the output such that the elicited population spiking activity is as close as possible to some desired activity, where closeness is defined by a cost function. If the neural system can be described by a time-invariant (homogeneous) model, then offline procedures can be used to derive the control procedure; however, for arbitrary neural systems this is not tractable. Furthermore, standard control methodologies are not suited to directly operate on spike trains that represent both the target and elicited system response. In this paper, we propose a multiple-input multiple-output (MIMO) adaptive inverse control scheme that operates on spike trains in a reproducing kernel Hilbert space (RKHS). The control scheme uses an inverse controller to approximate the inverse of the neural circuit. The proposed control system takes advantage of the precise timing of the neural events by using a Schoenberg kernel defined directly in the space of spike trains. The Schoenberg kernel maps the spike train to an RKHS and allows a linear algorithm to control the nonlinear neural system without the danger of converging to local minima. During operation, the adaptation of the controller minimizes a difference, defined in the spike train RKHS, between the system and the target response, and keeps the inverse controller close to the inverse of the current neural circuit, which enables adaptation to neural perturbations. The results on a realistic synthetic neural circuit show that the inverse controller based on the Schoenberg kernel outperforms the decoding accuracy of other models based on the conventional rate representation of neural signals (i.e., the spikernel and the generalized linear model). Moreover, after a significant perturbation of the neuron circuit, the control scheme can successfully drive the elicited responses close to the original target responses.


Neural Computation | 2012

Strictly positive-definite spike train kernels for point-process divergences

Il Park; Sohan Seth; Murali Rao; Jose C. Principe

Exploratory tools that are sensitive to arbitrary statistical variations in spike train observations open up the possibility of novel neuroscientific discoveries. Developing such tools, however, is difficult due to the lack of Euclidean structure of the spike train space, and an experimenter usually prefers simpler tools that capture only limited statistical features of the spike train, such as mean spike count or mean firing rate. We explore strictly positive-definite kernels on the space of spike trains to offer both a structural representation of this space and a platform for developing statistical measures that explore features beyond count or rate. We apply these kernels to construct measures of divergence between two point processes and use them for hypothesis testing, that is, to observe if two sets of spike trains originate from the same underlying probability law. Although there exist positive-definite spike train kernels in the literature, we establish that these kernels are not strictly definite and thus do not induce measures of divergence. We discuss the properties of both of these existing nonstrict kernels and the novel strict kernels in terms of their computational complexity, choice of free parameters, and performance on both synthetic and real data through kernel principal component analysis and hypothesis testing.
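Given any positive-definite spike-train kernel, a divergence between two point processes can be estimated as the (biased) maximum mean discrepancy between the two samples of spike trains. A sketch with a Schoenberg-style kernel exp(-d²/σ) built on a simple pairwise-interaction distance; the specific distance, kernel widths, and the Poisson test data are illustrative choices, not the paper's exact constructions:

```python
import numpy as np

def mci(a, b, tau=0.02):
    """Sum of Laplacian interactions between all spike-time pairs."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if len(a) == 0 or len(b) == 0:
        return 0.0
    return float(np.sum(np.exp(-np.abs(a[:, None] - b[None, :]) / tau)))

def schoenberg_kernel(s, t, sigma=50.0):
    """exp(-d(s,t)^2 / sigma) with the mCI-induced squared distance."""
    d2 = mci(s, s) + mci(t, t) - 2.0 * mci(s, t)
    return np.exp(-max(d2, 0.0) / sigma)

def mmd2(X, Y, kernel):
    """Biased squared maximum mean discrepancy between two samples."""
    kxx = np.mean([[kernel(a, b) for b in X] for a in X])
    kyy = np.mean([[kernel(a, b) for b in Y] for a in Y])
    kxy = np.mean([[kernel(a, b) for b in Y] for a in X])
    return kxx + kyy - 2.0 * kxy

def poisson_train(rate, T, rng):
    """Homogeneous Poisson spike train on [0, T]."""
    return np.sort(rng.uniform(0.0, T, size=rng.poisson(rate * T)))

rng = np.random.default_rng(0)
slow1 = [poisson_train(5.0, 1.0, rng) for _ in range(15)]
slow2 = [poisson_train(5.0, 1.0, rng) for _ in range(15)]
fast = [poisson_train(40.0, 1.0, rng) for _ in range(15)]

print(mmd2(slow1, slow2, schoenberg_kernel))  # near zero: same law
print(mmd2(slow1, fast, schoenberg_kernel))   # clearly larger
```

For a hypothesis test, the observed MMD would then be compared against a null distribution obtained by permuting the pooled trains; strictness of the kernel is what guarantees the divergence vanishes only when the two laws coincide.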

Collaboration

Dive into Sohan Seth's collaborations.

Top Co-Authors

Il Park, University of Florida
Il Memming Park, University of Texas at Austin
Joseph T. Francis, SUNY Downstate Medical Center
Lin Li, University of Florida