Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Deniz Erdogmus is active.

Publication


Featured research published by Deniz Erdogmus.


IEEE Transactions on Signal Processing | 2002

An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems

Deniz Erdogmus; Jose C. Principe

The paper investigates error-entropy minimization in adaptive system training. We prove the equivalence between minimization of the error's Renyi (1970) entropy of order α and minimization of a Csiszar (1981) distance measure between the densities of the desired and system outputs. A nonparametric estimator for Renyi's entropy is presented, and it is shown that the global minimum of this estimator coincides with that of the actual entropy. The performance of the error-entropy-minimization criterion is compared with mean-square-error minimization in the short-term prediction of a chaotic time series and in nonlinear system identification.
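
As a rough illustration of the quadratic (α = 2) special case of this criterion, the sketch below trains a linear model by ascending the Parzen-window "information potential" of the error, whose maximization is equivalent to minimizing the quadratic Renyi entropy. The toy data, kernel width, and step-size rule are our own assumptions, not taken from the paper.

```python
import numpy as np

def information_potential_grad(e, X, sigma=1.0):
    """Gradient (w.r.t. linear weights w, with e = d - X @ w) of the
    quadratic information potential V(e) = (1/N^2) sum_ij G_sigma(e_i - e_j).
    Maximizing V is equivalent to minimizing the quadratic Renyi entropy
    H_2(e) = -log V(e)."""
    diff = e[:, None] - e[None, :]                 # pairwise error differences
    G = np.exp(-diff**2 / (2 * sigma**2))          # Gaussian kernel values
    pair = (G * diff / sigma**2)[:, :, None] * (X[:, None, :] - X[None, :, :])
    return pair.sum(axis=(0, 1)) / len(e)**2

# toy supervised identification problem (data and constants are ad hoc)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = X @ w_true + 0.05 * rng.normal(size=200)

w = np.zeros(3)
for _ in range(500):                               # normalized ascent on V
    g = information_potential_grad(d - X @ w, X)
    w += 0.02 * g / (np.linalg.norm(g) + 1e-12)
```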


IEEE Transactions on Neural Networks | 2002

Generalized information potential criterion for adaptive system training

Deniz Erdogmus; Jose C. Principe

We have previously proposed Renyi's quadratic error entropy as an alternative cost function for supervised adaptive system training. An entropy criterion dictates the minimization of the average information content of the error signal rather than merely its energy. In this paper, we propose a generalization of the error-entropy criterion that enables the use of any order of Renyi's entropy and any suitable kernel function in density estimation. It is shown that the proposed entropy estimator preserves the global minimum of the actual entropy. The equivalence between global optimization by convolution smoothing and the convolution by the kernel in Parzen windowing is also discussed. Simulation results for time-series prediction and classification experimentally demonstrate the theoretical concepts.
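
For concreteness, the order-α entropy estimator at the core of this criterion can be written out; the reconstruction below follows standard information-theoretic-learning notation (Parzen kernel κ_σ, N error samples) rather than quoting the paper.

```latex
\hat{V}_\alpha(e) = \frac{1}{N}\sum_{i=1}^{N}
  \left[\frac{1}{N}\sum_{j=1}^{N} \kappa_\sigma\!\left(e_i - e_j\right)\right]^{\alpha-1},
\qquad
\hat{H}_\alpha(e) = \frac{1}{1-\alpha}\,\log \hat{V}_\alpha(e).
```

For α > 1, minimizing the entropy estimate Ĥ_α is equivalent to maximizing the information potential V̂_α; α = 2 with a Gaussian kernel recovers the quadratic criterion of the previous paper.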


Journal of Neural Engineering | 2011

Optimizing the P300-based brain–computer interface: current status, limitations and future directions

Joseph N. Mak; Y Arbel; J W Minett; Lynn M. McCane; B Yuksel; D Ryan; David E. Thompson; Luigi Bianchi; Deniz Erdogmus

This paper summarizes the presentations and discussions at a workshop held during the Fourth International BCI Meeting, charged with reviewing and evaluating the current state, limitations and future development of P300-based brain-computer interface (P300-BCI) systems. We reviewed such issues as potential users, recording methods, stimulus presentation paradigms, feature extraction and classification algorithms, and applications. A summary of the discussions and the panel's recommendations for each of these aspects is presented.


IEEE Signal Processing Letters | 2001

Blind source separation using Renyi's mutual information

Kenneth E. Hild; Deniz Erdogmus; Jose C. Principe

A blind source separation algorithm is proposed that is based on minimizing Renyi's mutual information by means of nonparametric probability density function (PDF) estimation. The two-stage process consists of spatial whitening and a series of Givens rotations, and produces a cost function consisting only of marginal entropies. This formulation avoids the problems of PDF inaccuracy due to truncation of series expansions and of estimating joint PDFs in high-dimensional spaces given the typical paucity of data. Simulations illustrate the superior efficiency, in terms of data length, of the proposed method compared to fast independent component analysis (FastICA), Comon's (1994) minimum mutual information, and Bell and Sejnowski's (1995) Infomax.
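
A minimal sketch of the two-stage procedure for the two-source case, assuming a quadratic Renyi entropy estimate of the marginals and a brute-force scan over the single Givens angle (the paper adapts the rotations rather than scanning; the mixing matrix, sources, and kernel width here are invented):

```python
import numpy as np

def renyi_quadratic_entropy(x, sigma=0.3):
    """Parzen-window estimate of quadratic Renyi entropy,
    H_2(x) = -log (1/N^2) sum_ij G_{sigma*sqrt(2)}(x_i - x_j);
    the kernel's normalization constant is dropped (constant offset)."""
    d = x[:, None] - x[None, :]
    return -np.log(np.mean(np.exp(-d**2 / (4 * sigma**2))))

rng = np.random.default_rng(1)
N = 300
S = np.vstack([np.sign(rng.normal(size=N)),     # two independent sources
               rng.uniform(-1, 1, size=N)])
A = np.array([[0.8, 0.6], [-0.3, 1.0]])         # unknown mixing matrix
X = A @ S

# stage 1: spatial whitening
Xc = X - X.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(np.cov(Xc))
Z = (vecs @ np.diag(vals**-0.5) @ vecs.T) @ Xc

# stage 2: one Givens rotation; the joint entropy is invariant under
# rotations, so minimizing the sum of marginal entropies minimizes the
# mutual information between the separated outputs
def cost(theta):
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return sum(renyi_quadratic_entropy(y) for y in R @ Z)

theta_best = min(np.linspace(0, np.pi / 2, 90), key=cost)
```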


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Feature extraction using information-theoretic learning

Kenneth E. Hild; Deniz Erdogmus; Kari Torkkola; Jose C. Principe

A classification system typically consists of both a feature extractor (preprocessor) and a classifier. These two components can be trained either independently or simultaneously. The former option has an implementation advantage, since the extractor need only be trained once for use with any classifier, whereas the latter has an advantage, since it can be used to minimize classification error directly. Certain criteria, such as minimum classification error, are better suited for simultaneous training, whereas other criteria, such as mutual information, are amenable to training the feature extractor either independently or simultaneously. Herein, an information-theoretic criterion is introduced and evaluated for training the extractor independently of the classifier. The proposed method uses nonparametric estimation of Renyi's entropy to train the extractor by maximizing an approximation of the mutual information between the class labels and the output of the feature extractor. The evaluations show that the proposed method, even though it uses independent training, performs at least as well as three feature extraction methods that train the extractor and classifier simultaneously.
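
One common information-theoretic-learning instantiation of such a criterion is the Euclidean-distance quadratic mutual information between a projection and the discrete labels; the sketch below evaluates it for a single linear output and picks the best of a set of random directions. The random-candidate search is a crude stand-in for the paper's gradient-based extractor training, and the data and kernel width are our own assumptions.

```python
import numpy as np

def qmi_ed(y, c, sigma=0.4):
    """Euclidean-distance quadratic mutual information between a 1-D
    projection y and discrete labels c, via Parzen windowing:
    I_ED = V_J + V_M - 2 V_C."""
    K = np.exp(-(y[:, None] - y[None, :])**2 / (4 * sigma**2))
    N = len(y)
    classes, counts = np.unique(c, return_counts=True)
    P = counts / N
    V_J = sum(K[np.ix_(c == k, c == k)].sum() for k in classes) / N**2
    V_M = (P**2).sum() * K.sum() / N**2
    V_C = sum(p * K[c == k].sum() for k, p in zip(classes, P)) / N**2
    return V_J + V_M - 2 * V_C

# toy 2-class data; choose the projection direction maximizing the
# label/feature mutual information estimate
rng = np.random.default_rng(2)
X0 = rng.normal([-1, 0], 0.5, size=(100, 2))
X1 = rng.normal([+1, 0], 0.5, size=(100, 2))
X = np.vstack([X0, X1])
c = np.repeat([0, 1], 100)
dirs = rng.normal(size=(50, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
w = max(dirs, key=lambda d: qmi_ed(X @ d, c))
```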


IEEE Computer | 2013

The Future of Human-in-the-Loop Cyber-Physical Systems

Gunar Schirner; Deniz Erdogmus; Kaushik R. Chowdhury; Taskin Padir

A prototyping platform and a design framework for rapid exploration of a novel human-in-the-loop application serve as an accelerator for new research into a broad class of systems that augment human interaction with the physical world.


Journal of Neural Engineering | 2006

A comparison of optimal MIMO linear and nonlinear models for brain–machine interfaces

S-P Kim; Justin C. Sanchez; Yadunandana N. Rao; Deniz Erdogmus; Jose M. Carmena; Mikhail A. Lebedev; Miguel A. L. Nicolelis; Jose C. Principe

The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear models (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically against the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for one or both of the datasets.
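
The Wiener-filter baseline in such comparisons amounts to ordinary least squares from time-embedded spike counts to kinematics. A minimal sketch on synthetic data follows; the bin counts, tap length, dimensions, and linear "ground truth" are invented, so the fit is trivially consistent here.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_units, L = 2000, 20, 10          # time bins, neurons, embedding taps
rates = rng.poisson(2.0, size=(T, n_units)).astype(float)  # binned spikes

def embed(R):
    """Stack the most recent L bins of every unit into one regressor row."""
    return np.hstack([R[L - 1 - k : len(R) - k] for k in range(L)])

# synthetic hand velocity as an (unknown) linear readout of recent spikes
H_true = 0.05 * rng.normal(size=(n_units * L, 2))
Z = embed(rates)                       # shape (T - L + 1, n_units * L)
kin = Z @ H_true + 0.1 * rng.normal(size=(len(Z), 2))

# Wiener solution: least squares from embedded spike counts to kinematics
# (a bias column would normally be appended as well)
W_hat, *_ = np.linalg.lstsq(Z, kin, rcond=None)
pred = Z @ W_hat
```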


IEEE Transactions on Neural Networks | 2004

Feature selection in MLPs and SVMs based on maximum output information

Vikas Sindhwani; Subrata Rakshit; Deniz Erdogmus; Jose C. Principe; Partha Niyogi

This paper presents feature selection algorithms for multilayer perceptrons (MLPs) and multiclass support vector machines (SVMs), using the mutual information between class labels and classifier outputs as an objective function. This objective function involves inexpensive computation of information measures only on discrete variables; provides immunity to prior class probabilities; and brackets the probability of error of the classifier. The maximum output information (MOI) algorithms employ this function for feature subset selection by greedy elimination and directed search. The output of the MOI algorithms is a feature subset of user-defined size together with an associated trained classifier (MLP/SVM). These algorithms compare favorably with a number of other methods in terms of performance on various artificial and real-world data sets.
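
A bare-bones sketch of the greedy-elimination variant, using logistic regression as a stand-in classifier (the paper uses MLPs and SVMs) and a plug-in estimate of the discrete mutual information between classifier outputs and labels; the toy data are our own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def discrete_mi(a, b):
    """Mutual information (nats) between two discrete label arrays,
    estimated from joint counts."""
    ua, a_inv = np.unique(a, return_inverse=True)
    ub, b_inv = np.unique(b, return_inverse=True)
    joint = np.zeros((len(ua), len(ub)))
    np.add.at(joint, (a_inv, b_inv), 1.0)
    p = joint / joint.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum()

def moi_backward_elimination(X, y, n_keep):
    """Greedily drop the feature whose removal costs the least output
    mutual information, until n_keep features remain."""
    feats = list(range(X.shape[1]))
    while len(feats) > n_keep:
        def mi_without(f):
            cols = [g for g in feats if g != f]
            clf = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            return discrete_mi(clf.predict(X[:, cols]), y)
        feats.remove(max(feats, key=mi_without))
    return feats

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # only features 0 and 2 matter
print(moi_backward_elimination(X, y, n_keep=2))  # typically [0, 2]
```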


IEEE Signal Processing Letters | 2003

Online entropy manipulation: stochastic information gradient

Deniz Erdogmus; Kenneth E. Hild; Jose C. Principe

Entropy has found significant applications in numerous signal processing problems, including independent component analysis and blind deconvolution. In general, entropy estimators require O(N²) operations, N being the number of samples. For practical online entropy manipulation, it is desirable to determine a stochastic gradient for entropy with O(N) complexity. In this paper, we propose a stochastic estimator of Shannon's entropy. We determine the corresponding stochastic gradient and investigate its performance. The proposed stochastic gradient for Shannon's entropy can be used in online adaptation problems where the optimization of an entropy-based cost function is necessary.
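
A sketch of how such an O(L)-per-sample update might look for a linear filter with a Gaussian kernel, pairing the newest error with a short window of past errors; the window length, step size, and toy identification setup are invented.

```python
import numpy as np

def sig_update(w, x_win, d_win, mu=0.1, sigma=0.5):
    """One minimum-error-entropy step using a stochastic information
    gradient: an O(L) instantaneous estimate of the gradient of
    Shannon's entropy, using the newest sample against L older ones."""
    e = d_win - x_win @ w                 # errors over the window (L+1 values)
    u = e[-1] - e[:-1]                    # newest error minus the L older ones
    k = np.exp(-u**2 / (2 * sigma**2))    # Gaussian kernel values
    kp = -u / sigma**2 * k                # kernel derivative
    # with de_k/dw = -x_k:
    # dH/dw = -sum_i k'(u_i) (x_{k-i} - x_k) / sum_i k(u_i)
    grad_H = -(kp @ (x_win[:-1] - x_win[-1])) / (k.sum() + 1e-12)
    return w - mu * grad_H                # descend the entropy estimate

# toy online system identification (setup and constants are invented)
rng = np.random.default_rng(5)
w_true = np.array([0.7, -0.4, 0.2])
X = rng.normal(size=(2000, 3))
d = X @ w_true + 0.02 * rng.normal(size=2000)
w, L = np.zeros(3), 8
for k in range(L, 2000):
    w = sig_update(w, X[k - L : k + 1], d[k - L : k + 1])
```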


Natural Computing | 2005

Vector quantization using information theoretic concepts

Tue Lehn-Schiøler; Anant Hegde; Deniz Erdogmus; Jose C. Principe

The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms such as the Kohonen self-organizing map (SOM) and the Linde-Buzo-Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field, an algorithm as efficient as those mentioned above can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on the minimization of a well-defined cost function. It is also shown how the potential-field approach can be linked to information theory through the Parzen density estimator. In light of information theory, it becomes clear that minimizing the free energy of the system is equivalent to minimizing a divergence measure between the distribution of the data and the distribution of the processing elements; hence, the algorithm can be seen as a density-matching method.
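
A crude sketch of that force picture on synthetic 2-D data: each processing element is attracted by the Parzen potential of the data and repelled by the other elements. The paper's full algorithm additionally weights these terms according to the divergence being minimized and typically anneals the kernel size; both are omitted here, and all constants are ad hoc.

```python
import numpy as np

def vq_forces(codes, data, sigma=0.3):
    """Net force on each processing element: attraction toward the data
    (cross information potential, via Parzen windowing) plus repulsion
    between the elements themselves."""
    def potential_grad(a, b):
        # gradient, w.r.t. each point a_i, of its average Gaussian
        # potential (1/|b|) sum_j G(a_i - b_j) with the point set b
        d = a[:, None, :] - b[None, :, :]
        g = np.exp(-(d**2).sum(-1) / (4 * sigma**2))
        return -(g[:, :, None] * d).sum(1) / (2 * sigma**2 * len(b))
    return potential_grad(codes, data) - potential_grad(codes, codes)

rng = np.random.default_rng(6)
data = rng.normal(size=(400, 2))             # toy 2-D data cloud
codes = rng.normal(scale=0.1, size=(16, 2))  # processing elements
for _ in range(300):                         # let the elements drift
    codes += 0.2 * vq_forces(codes, data)
```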

Collaboration


Dive into Deniz Erdogmus's collaborations.

Top Co-Authors

Murat Akcakaya

University of Pittsburgh
