Evan Kriminger
University of Florida
Publications
Featured research published by Evan Kriminger.
Neural Computation | 2014
Austin J. Brockmeier; John S. Choi; Evan Kriminger; Joseph T. Francis; Jose C. Principe
In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus—exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.
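The metric optimization described above hinges on centered alignment between a kernel built from the weighted multineuron metric and an ideal kernel derived from the stimulus labels. Below is a minimal numpy sketch of that objective; the toy data, the Gaussian kernel construction, and the fixed weighting w are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def centered_alignment(K, L):
    """Centered kernel alignment between two Gram matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n                # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

def multineuron_kernel(D_list, w, sigma=1.0):
    """Combine per-neuron distance matrices with weights w into one Gaussian kernel."""
    D2 = sum(wi * Di**2 for wi, Di in zip(w, D_list))  # weighted squared multineuron metric
    return np.exp(-D2 / (2 * sigma**2))

# Toy example: two neurons with random symmetric distance matrices, binary stimulus labels.
rng = np.random.default_rng(0)
n = 40
D_list = []
for _ in range(2):
    D = np.abs(rng.normal(size=(n, n)))
    D = (D + D.T) / 2
    np.fill_diagonal(D, 0.0)
    D_list.append(D)
y = rng.integers(0, 2, size=n)
L = (y[:, None] == y[None, :]).astype(float)           # ideal label kernel
print(centered_alignment(multineuron_kernel(D_list, w=[0.7, 0.3]), L))
```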
International Symposium on Power Line Communications and Its Applications | 2011
Evan Kriminger; Haniph A. Latchman
This paper analyzes the optimum constant contention window (CW) for the HomePlug 1.0 and AV CSMA/CA MAC. A discrete-time, homogeneous Markov chain, with the state specified by both the backoff counter (BC) and deferral counter (DC), is used to model a single node contending for transmission. The structure of the Markov chain admits a generalized expression for the stationary probability mass function (pmf) associated with each state. The recursively defined state pmfs can be analytically reduced to a single expression relating the probability p of the node finding the medium idle, the maximum window size W, the maximum deferral counter size, and the number of nodes n. Optimizing the MAC efficiency provides a target value for p, which can be attained with the proper selection of W and the maximum deferral counter size. It is shown that an optimal contention window size can be chosen based on a linear relationship with the number of nodes.
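The linear scaling of the optimal window with the number of nodes can be illustrated with a much simpler slotted-contention model than the BC/DC Markov chain analyzed in the paper. The sketch below assumes each node transmits in a slot with probability tau ≈ 2/(W+1) and maximizes the single-slot success probability; the model and the search grid are illustrative assumptions.

```python
import numpy as np

def success_probability(tau, n):
    """Probability that exactly one of n nodes transmits in a slot,
    given each transmits independently with probability tau."""
    return n * tau * (1 - tau) ** (n - 1)

def optimal_window(n, W_grid=np.arange(4, 513)):
    """Pick the constant contention window maximizing single-slot success,
    using the common approximation tau ~= 2 / (W + 1)."""
    tau = 2.0 / (W_grid + 1.0)
    return W_grid[np.argmax(success_probability(tau, n))]

for n in (5, 10, 20, 40):
    print(n, optimal_window(n))   # the optimal W grows roughly linearly with n
```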
International Symposium on Neural Networks | 2012
Evan Kriminger; Jose C. Principe; Choudur Lakshminarayan
The class imbalance problem is pervasive in machine learning. To accurately classify the minority class, current methods rely on sampling schemes to close the gap between classes, or on the application of error costs to create algorithms which favor the minority class. Since the sampling schemes and costs must be specified, these methods are highly dependent on the class distributions present in the training set. This makes them difficult to apply in settings where the level of imbalance changes, such as in online streaming data. Often they cannot handle multi-class problems. We present a novel single-class algorithm called Class Conditional Nearest Neighbor Distribution (CCNND), which mitigates the effects of class imbalance through local geometric structure in the data. Our algorithm can be applied seamlessly to problems with any level of imbalance or number of classes, and new examples are simply added to the training set. We show that it performs as well as or better than top sampling and cost-weighting methods on four imbalanced datasets from the UCI Machine Learning Repository, and then apply it to streaming data from the oil and gas industry alongside a modified nearest neighbor algorithm. Our algorithm's competitive performance relative to the state-of-the-art, coupled with its extremely simple implementation and automatic adjustment for minority classes, demonstrates that it is worth further study.
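One plausible reading of the class-conditional nearest-neighbor idea is sketched below: each class is scored by how typical the test point's nearest-neighbor distance is relative to that class's own within-class nearest-neighbor distances, so no global sampling scheme or cost needs to be specified. The function name and the leave-one-out scoring rule are assumptions, not necessarily the exact CCNND formulation.

```python
import numpy as np

def ccnnd_predict(X_train, y_train, x, k=1):
    """Nearest-neighbor-distance classifier in the spirit of CCNND (illustrative):
    score each class by how typical x's distance to that class is, relative to
    the class's own within-class nearest-neighbor distances."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        # distance from x to its k-th nearest neighbor in class c
        d_x = np.sort(np.linalg.norm(Xc - x, axis=1))[k - 1]
        # within-class k-NN distances (leave-one-out)
        D = np.linalg.norm(Xc[:, None] - Xc[None, :], axis=2)
        np.fill_diagonal(D, np.inf)
        d_within = np.sort(D, axis=1)[:, k - 1]
        # fraction of class-c points whose own k-NN distance exceeds d_x:
        # higher means x looks typical for class c, regardless of class size
        scores[c] = np.mean(d_within >= d_x)
    return max(scores, key=scores.get)
```

Because each class is scored against its own distance distribution, the rule does not need retuning when the level of imbalance changes, which matches the streaming setting described in the abstract.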
IEEE Transactions on Computational Intelligence and AI in Games | 2016
Matthew Emigh; Evan Kriminger; Austin J. Brockmeier; Jose C. Principe; Panos M. Pardalos
Reinforcement learning (RL) has had mixed success when applied to games. Large state spaces and the curse of dimensionality have limited the ability for RL techniques to learn to play complex games in a reasonable length of time. We discuss a modification of Q-learning to use nearest neighbor states to exploit previous experience in the early stages of learning. A weighting on the state features is learned using metric learning techniques, such that neighboring states represent similar game situations. Our method is tested on the arcade game Frogger, and it is shown that some of the effects of the curse of dimensionality can be mitigated.
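A minimal sketch of the nearest-neighbor value estimate is given below: Q-values for an unvisited state are interpolated from stored experience under a diagonal feature weighting. The inverse-distance weighting and the fixed weight vector w are illustrative assumptions; the paper learns the weighting with metric-learning techniques.

```python
import numpy as np

def nn_q_estimate(state, memory_states, memory_q, w, k=5):
    """Estimate Q-values for an unvisited state from its k nearest stored states,
    under a diagonal feature weighting w. memory_states: (N, d) visited states,
    memory_q: (N, num_actions) stored Q-values."""
    d = np.sqrt(((memory_states - state) ** 2 * w).sum(axis=1))  # weighted distance
    idx = np.argsort(d)[:k]
    wts = 1.0 / (d[idx] + 1e-8)                                  # closer states count more
    return (memory_q[idx] * wts[:, None]).sum(axis=0) / wts.sum()
```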
IEEE Journal of Oceanic Engineering | 2015
Evan Kriminger; J. Tory Cobb; Jose C. Principe
Automatic target recognition in sidescan sonar imagery is vital to many applications, particularly sea mine detection and classification. We expand upon the traditional offline supervised classification approach with an active learning method to automatically label new objects that are not present in the training set. This is facilitated by the option of sending difficult samples to an outlier bin, from which models can be built for new objects. The decisions of the classifier are improved by a novel active learning approach, called model trees (MT), which builds an ensemble of hypotheses about the classification decisions that grows proportionally to the amount of uncertainty the system has about the samples. Our system outperforms standard active learning methods, and is shown to correctly identify new objects much more accurately than a pure clustering approach, on a simulated sidescan sonar data set.
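The sketch below illustrates the general routing logic of such a system: confident samples are labeled automatically, uncertain ones trigger a query, and samples that no known class explains well go to the outlier bin. The function and thresholds are illustrative stand-ins and do not reproduce the model-trees (MT) ensemble from the paper.

```python
import numpy as np

def route_sample(posteriors, query_threshold=0.6, outlier_threshold=0.3):
    """Illustrative routing rule: accept a confident label, query an operator when
    uncertain, or send the sample to an outlier bin when no known class fits."""
    p_max = posteriors.max()
    if p_max >= query_threshold:
        return "accept", int(posteriors.argmax())
    if p_max <= outlier_threshold:
        return "outlier_bin", None       # candidate for building a new object model
    return "query_label", None           # active learning: ask for the true label

print(route_sample(np.array([0.90, 0.05, 0.05])))              # confident -> accept
print(route_sample(np.array([0.45, 0.40, 0.15])))              # uncertain -> query
print(route_sample(np.array([0.22, 0.20, 0.20, 0.19, 0.19])))  # unfamiliar -> outlier bin
```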
International IEEE/EMBS Conference on Neural Engineering | 2011
Austin J. Brockmeier; Evan Kriminger; Justin C. Sanchez; Jose C. Principe
Visualizing the collective modulation of multiple neurons during a known behavioral task is useful for exploratory analysis, but handling the large dimensionality of neural recordings is challenging. We further investigate the use of static dimensionality reduction techniques on neural firing rate data during an arm movement task. This lower-dimensional representation of the data is able to capture the neural states corresponding to different portions of the behavioral task. A simulation using a dynamical model lends credence to the ability of the technique to generate a representation that preserves the underlying dynamics of the model. This technique is a straightforward way to extract a useful visualization for neural recordings during brain-machine interface tasks. Meaningful visualization confirms underlying structure in the data, which can be captured with parametric modeling.
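As one concrete instance of a static dimensionality reduction on firing-rate data, the sketch below projects binned firing rates onto their top principal components for visualization. PCA and the Poisson toy data are assumptions for illustration; the paper evaluates such techniques more broadly.

```python
import numpy as np

def pca_project(rates, n_components=3):
    """Project binned firing rates (time_bins x neurons) onto the top principal
    components, a simple static dimensionality reduction for visualization."""
    X = rates - rates.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

rng = np.random.default_rng(1)
rates = rng.poisson(5.0, size=(200, 60)).astype(float)  # 200 time bins, 60 neurons
low_d = pca_project(rates)
print(low_d.shape)  # (200, 3): a trajectory that can be plotted against the behavior
```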
International Workshop on Machine Learning for Signal Processing | 2015
Matthew Emigh; Evan Kriminger; Jose C. Principe
In reinforcement learning, exploration is typically conducted by taking occasional random actions. The literature lacks an exploration method driven by uncertainty, in which exploratory actions explicitly seek to improve the learning process in a sequential decision problem. In this paper, we propose a framework called Divergence-to-Go, a model-based method that uses a dynamic-programming-style recursion to quantify the uncertainty associated with each state-action pair. Information-theoretic estimators of uncertainty allow our method to function even in large, continuous spaces. The performance is demonstrated on maze and mountain-car tasks.
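A tabular caricature of the recursion is sketched below: the uncertainty assigned to a state-action pair is its local uncertainty plus the discounted uncertainty reachable from the expected next state, iterated to a fixed point as in value iteration. The arrays and the max-backup are assumed forms; the paper's information-theoretic estimators for continuous spaces are not reproduced here.

```python
import numpy as np

def divergence_to_go(local_uncertainty, transitions, gamma=0.9, iters=100):
    """Tabular sketch of a divergence-to-go style recursion (assumed form):
    uncertainty of (s, a) = local uncertainty + discounted uncertainty of the
    most uncertain action in the expected next state.
    local_uncertainty: (S, A) array; transitions: (S, A, S) probabilities."""
    S, A = local_uncertainty.shape
    dtg = np.zeros((S, A))
    for _ in range(iters):
        next_val = dtg.max(axis=1)                  # exploration is greedy w.r.t. uncertainty
        dtg = local_uncertainty + gamma * transitions @ next_val
    return dtg
```

Exploratory actions would then pick argmax over dtg[s], steering the agent toward regions whose long-run uncertainty is highest.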
International Symposium on Neural Networks | 2015
Matthew Emigh; Evan Kriminger; Jose C. Principe
Linear discriminant analysis seeks to find a one-dimensional projection of a dataset to alleviate the problems associated with classifying high-dimensional data. The earliest methods, based on second-order statistics, often fail on multimodal datasets. Information-theoretic criteria do not suffer in such cases, and allow for projections to spaces higher than one dimension and with multiple classes. These approaches are based on maximizing mutual information between the projected data and the labels. However, mutual information is computationally demanding and vulnerable to datasets with class imbalance. In this paper we propose an information-theoretic criterion for learning discriminants based on the Euclidean distance divergence between classes. This objective more directly seeks projections which separate classes and performs well in the midst of class imbalance. We demonstrate its effectiveness on real datasets, and provide extensions to the multi-class and multi-dimension cases.
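The divergence in question can be estimated from samples with Gaussian information potentials. The sketch below computes the Euclidean distance between Parzen estimates of two projected classes and, as a stand-in for the paper's optimization, searches random one-dimensional projections for the largest divergence; the kernel width and the random-search procedure are illustrative assumptions.

```python
import numpy as np

def euclidean_divergence(z1, z2, sigma=0.5):
    """Euclidean distance between Parzen-estimated densities of 1-D samples:
    D = V(z1) + V(z2) - 2*V(z1, z2), with Gaussian information potentials."""
    def ip(a, b):
        diff = a[:, None] - b[None, :]
        return np.mean(np.exp(-diff**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2))
    return ip(z1, z1) + ip(z2, z2) - 2 * ip(z1, z2)

def best_random_projection(X1, X2, n_dirs=200, seed=0):
    """Toy search for a 1-D projection maximizing the class divergence
    (the paper optimizes the projection directly; random search is a stand-in)."""
    rng = np.random.default_rng(seed)
    best_w, best_d = None, -np.inf
    for _ in range(n_dirs):
        w = rng.normal(size=X1.shape[1])
        w /= np.linalg.norm(w)
        d = euclidean_divergence(X1 @ w, X2 @ w)
        if d > best_d:
            best_w, best_d = w, d
    return best_w, best_d
```

Because the divergence is computed per pair of class sample sets rather than through a pooled mutual-information estimate, a small minority class does not get swamped by the majority, which reflects the imbalance robustness claimed above.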
International Conference on Acoustics, Speech, and Signal Processing | 2011
Evan Kriminger; Jose C. Principe; Choudur Lakshminarayan
Many practical data streams are typically composed of several states known as regimes. In this paper, we invoke phase space reconstruction methods from non-linear time series and dynamical systems for regime detection. However, the data collected from sensors is typically noisy, does not have constant amplitude, and is sometimes plagued by shifts in the mean. All these aspects make modeling even more difficult. We propose a representation of the time series in the phase space with a modified embedding, which is invariant to translation and scale. The features we use for regime detection are based on comparing trajectory segments in the modified embedding space with cross-correntropy, which is a generalized correlation function. We apply our algorithm to non-linear oscillations, and compare its performance with the standard time delay embedding.
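One simple way to realize a translation- and scale-invariant embedding is to standardize each delay vector, and cross-correntropy between trajectory segments is the average Gaussian kernel of their pointwise differences. The sketch below implements that reading; the exact normalization and parameters used in the paper may differ.

```python
import numpy as np

def normalized_embedding(x, dim=3, tau=2):
    """Time-delay embedding with each delay vector standardized, one way to obtain
    invariance to translation (mean shifts) and scale (amplitude changes)."""
    N = len(x) - (dim - 1) * tau
    E = np.stack([x[i:i + N] for i in range(0, dim * tau, tau)], axis=1)  # (N, dim)
    E = E - E.mean(axis=1, keepdims=True)
    return E / (E.std(axis=1, keepdims=True) + 1e-12)

def cross_correntropy(a, b, sigma=1.0):
    """Cross-correntropy between two equal-shape trajectory segments: the average
    Gaussian kernel evaluated on their pointwise differences."""
    d = a - b
    return np.mean(np.exp(-d**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma))
```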
Archive | 2011
Choudur Lakshminarayan; Alexander Singh Alvarado; Jose C. Principe; Evan Kriminger