Theodoros Tsiligkaridis
University of Michigan
Publications
Featured research published by Theodoros Tsiligkaridis.
IEEE Transactions on Signal Processing | 2013
Theodoros Tsiligkaridis; Alfred O. Hero
This paper presents a new method for estimating high dimensional covariance matrices. The method, permuted rank-penalized least-squares (PRLS), is based on a Kronecker product series expansion of the true covariance matrix. Assuming an i.i.d. Gaussian random sample, we establish high dimensional rates of convergence to the true covariance as both the number of samples and the number of variables go to infinity. For covariance matrices of low separation rank, our results establish that PRLS has significantly faster convergence than the standard sample covariance matrix (SCM) estimator. The convergence rate captures a fundamental tradeoff between estimation error and approximation error, thus providing a scalable covariance estimation framework in terms of separation rank, similar to low rank approximation of covariance matrices. The MSE convergence rates generalize the high dimensional rates recently obtained for the ML Flip-flop algorithm for Kronecker product covariance estimation. We show that a class of block Toeplitz covariance matrices is well approximated by low separation rank, and we give bounds on the minimal separation rank r that ensures a given level of bias. Simulations are presented to validate the theoretical bounds. As a real world application, we illustrate the utility of the proposed Kronecker covariance estimator for spatio-temporal linear least squares prediction of multivariate wind speed measurements.
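The Kronecker product series expansion underlying PRLS can be illustrated with a short sketch. The snippet below is a rough illustration rather than the paper's estimator: it builds a separation-rank-r approximation of a sample covariance matrix via the Van Loan-Pitsianis rearrangement and a hard-truncated SVD, whereas PRLS applies a rank penalty (singular-value thresholding); the function and variable names are illustrative.

```python
import numpy as np

def rearrange(sigma, p, q):
    """Rearrange a (p*q) x (p*q) matrix into a p^2 x q^2 matrix whose
    rank-r truncation corresponds to a separation-rank-r Kronecker sum."""
    blocks = sigma.reshape(p, q, p, q)          # blocks[i, k, j, l] = Sigma[i*q+k, j*q+l]
    return blocks.transpose(0, 2, 1, 3).reshape(p * p, q * q)

def kronecker_series_approx(scm, p, q, r):
    """Separation-rank-r approximation sum_{m=1}^{r} A_m kron B_m of the
    sample covariance, from a truncated SVD of the rearranged SCM."""
    R = rearrange(scm, p, q)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    approx = np.zeros_like(scm)
    for m in range(r):
        A_m = (np.sqrt(s[m]) * U[:, m]).reshape(p, p)
        B_m = (np.sqrt(s[m]) * Vt[m, :]).reshape(q, q)
        approx += np.kron(A_m, B_m)
    return approx

# toy usage: spatio-temporal data with p temporal and q spatial variables
p, q, n = 4, 5, 200
X = np.random.randn(n, p * q)
scm = X.T @ X / n
sigma_hat = kronecker_series_approx(scm, p, q, r=2)
```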
IEEE Transactions on Information Theory | 2014
Theodoros Tsiligkaridis; Brian M. Sadler; Alfred O. Hero
We consider the problem of 20 questions with noise for multiple players under the minimum entropy criterion in the setting of stochastic search, with application to target localization. Each player yields a noisy response to a binary query governed by a certain error probability. First, we propose a sequential policy for constructing questions that queries each player in sequence and refines the posterior of the target location. Second, we consider a joint policy that asks all players questions in parallel at each time instant and characterize the structure of the optimal policy for constructing the sequence of questions. This generalizes the single player probabilistic bisection method for stochastic search problems. Third, we prove an equivalence between the two schemes showing that, despite the fact that the sequential scheme has access to a more refined filtration, the joint scheme performs just as well on average. Fourth, we establish convergence rates of the mean-square error and derive error exponents. Finally, we obtain an extension to the case of unknown error probabilities. This framework provides a mathematical model for incorporating a human in the loop for active machine learning systems.
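The single-player probabilistic bisection method that this work generalizes admits a compact sketch. The snippet below is a minimal illustration under simplifying assumptions (a scalar target on [0, 1], a discretized posterior, and a known error probability); the function names and parameters are illustrative, not the paper's policy.

```python
import numpy as np

def probabilistic_bisection(respond, eps, n_queries=50, grid_size=1000):
    """Single-player probabilistic bisection on [0, 1].

    respond(q): noisy answer to "is the target located above q?",
                flipped with probability eps.
    eps:        known response error probability, 0 <= eps < 0.5.
    """
    grid = np.linspace(0.0, 1.0, grid_size)
    post = np.full(grid_size, 1.0 / grid_size)      # uniform prior on the grid
    for _ in range(n_queries):
        cdf = np.cumsum(post)
        q = grid[np.searchsorted(cdf, 0.5)]         # query at the posterior median
        ans = respond(q)                            # noisy binary response
        above = grid > q
        like = np.where(above == ans, 1.0 - eps, eps)
        post = post * like                          # Bayes update
        post /= post.sum()
    return grid[np.argmax(post)]                    # point estimate of the target

# toy usage: target at 0.3, responses flipped with probability 0.2
rng = np.random.default_rng(0)
target, eps = 0.3, 0.2
noisy = lambda q: (target > q) ^ (rng.random() < eps)
print(probabilistic_bisection(noisy, eps))
```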
IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing | 2013
Kristjan H. Greenewald; Theodoros Tsiligkaridis; Alfred O. Hero
In this paper we consider the use of the space vs. time Kronecker product decomposition in the estimation of covariance matrices for spatio-temporal data. This decomposition imposes lower dimensional structure on the estimated covariance matrix, thus reducing the number of samples required for estimation. To allow a smooth tradeoff between the reduction in the number of parameters (to reduce estimation variance) and the accuracy of the covariance approximation (affecting estimation bias), we introduce a diagonally loaded modification of the sum-of-Kronecker-products representation in [1]. We derive an asymptotic Cramér-Rao bound (CRB) on the minimum attainable mean squared predictor coefficient estimation error for unbiased estimators of Kronecker structured covariance matrices. We illustrate the accuracy of the diagonally loaded Kronecker sum decomposition by applying it to the prediction of human activity video.
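As a rough illustration of how a diagonally loaded structured covariance estimate might be used for prediction, the sketch below adds a scaled identity to a structured (e.g., low separation rank Kronecker) covariance estimate and reads linear least-squares predictor coefficients off its block partition. The loading parameter lam and the helper names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def diagonally_load(sigma_struct, lam):
    """Add a diagonal loading term lam * I to a structured covariance
    estimate; lam >= 0 trades approximation bias for better conditioning."""
    return sigma_struct + lam * np.eye(sigma_struct.shape[0])

def lls_predictor(sigma, n_future):
    """Linear least-squares predictor coefficients W (x_future ~ W @ x_past),
    read off the block partition of the joint covariance."""
    k = sigma.shape[0] - n_future
    S_pp = sigma[:k, :k]          # past/past block
    S_fp = sigma[k:, :k]          # future/past block
    return np.linalg.solve(S_pp, S_fp.T).T    # S_fp @ inv(S_pp), S_pp symmetric

# toy usage: load a structured estimate and form a predictor for 3 future variables
sigma_struct = np.eye(12) + 0.5 * np.ones((12, 12)) / 12
W = lls_predictor(diagonally_load(sigma_struct, lam=0.1), n_future=3)
```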
IEEE Transactions on Signal Processing | 2015
Theodoros Tsiligkaridis; Brian M. Sadler; Alfred O. Hero
We consider the problem of decentralized 20 questions with noise for multiple players/agents under the minimum entropy criterion in the setting of stochastic search over a parameter space, with application to target localization. We propose decentralized extensions of the active query-based stochastic search strategy that combines elements from the 20 questions approach and social learning. We prove convergence to correct consensus on the value of the parameter. This framework provides a flexible and tractable mathematical model for decentralized parameter estimation systems based on active querying. We illustrate the effectiveness and robustness of the proposed decentralized collaborative 20 questions algorithm for random network topologies with information sharing.
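A heavily simplified sketch of the decentralized idea is given below: each agent performs a local probabilistic-bisection Bayes update and then mixes log-posteriors with its neighbors through a doubly stochastic weight matrix, in the spirit of social learning. This is an illustrative stand-in, not the paper's query policy; the mixing rule, names, and parameters are assumptions.

```python
import numpy as np

def decentralized_bisection(responders, eps, W, n_rounds=50, grid_size=500):
    """Simplified decentralized sketch: local Bayes updates + consensus mixing.

    responders: list of callables; responders[i](q) answers "target > q?"
                with error probability eps[i].
    W:          n_agents x n_agents doubly stochastic mixing matrix.
    """
    n_agents = len(responders)
    grid = np.linspace(0.0, 1.0, grid_size)
    post = np.full((n_agents, grid_size), 1.0 / grid_size)
    for _ in range(n_rounds):
        for i in range(n_agents):
            q = grid[np.searchsorted(np.cumsum(post[i]), 0.5)]   # local median query
            ans = responders[i](q)
            post[i] *= np.where((grid > q) == ans, 1.0 - eps[i], eps[i])
        # consensus step: geometric mixing of neighbors' posteriors
        log_post = W @ np.log(post + 1e-300)
        log_post -= log_post.max(axis=1, keepdims=True)          # numerical stability
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
    return grid[np.argmax(post.mean(axis=0))]

# toy usage: 3 agents on a complete network, target at 0.7
rng = np.random.default_rng(2)
eps = [0.1, 0.2, 0.3]
resp = [lambda q, e=e: (0.7 > q) ^ (rng.random() < e) for e in eps]
W = np.full((3, 3), 1.0 / 3.0)
print(decentralized_bisection(resp, eps, W))
```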
International Conference on Acoustics, Speech, and Signal Processing | 2013
Theodoros Tsiligkaridis; Brian M. Sadler; Alfred O. Hero
We consider the problem of 20 questions with noise for collaborative players under the minimum entropy criterion [1] in the setting of stochastic search, with application to target localization. First, assuming conditionally independent collaborators, we characterize the structure of the optimal policy for constructing the sequence of questions. This generalizes the single player probabilistic bisection method [1, 2] for stochastic search problems. Second, we prove a separation theorem showing that optimal joint queries achieve the same performance as a greedy sequential scheme. Third, we establish convergence rates of the mean-square error (MSE). Fourth, we derive upper bounds on the MSE of the sequential scheme. This framework provides a mathematical model for incorporating a human in the loop for active machine learning systems.
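Complementing the single-player bisection sketch above, the snippet below illustrates the joint-query Bayes update for conditionally independent collaborators: all players answer the same posterior-median question in parallel, and the posterior absorbs the product of their likelihoods. The shared-query choice and the names are simplifications for illustration, not the optimal joint policy characterized in the paper.

```python
import numpy as np

def joint_query_update(post, grid, q, answers, eps):
    """One Bayes update for a joint query: all players answer "target > q?"
    in parallel; conditional independence lets the posterior absorb the
    product of the per-player likelihoods."""
    for ans, e in zip(answers, eps):
        post = post * np.where((grid > q) == ans, 1.0 - e, e)
    return post / post.sum()

# toy usage: three players with different error probabilities
rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 1000)
post = np.full(grid.size, 1.0 / grid.size)
target, eps = 0.62, [0.1, 0.2, 0.3]
for _ in range(40):
    q = grid[np.searchsorted(np.cumsum(post), 0.5)]              # posterior median
    answers = [(target > q) ^ (rng.random() < e) for e in eps]
    post = joint_query_update(post, grid, q, answers, eps)
print(grid[np.argmax(post)])
```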
International Conference on Acoustics, Speech, and Signal Processing | 2012
Theodoros Tsiligkaridis; Alfred O. Hero
We introduce a sparse covariance estimation method for the high dimensional setting when the covariance matrix decomposes as a Kronecker product, i.e., Σ0 = A0 ⊗ B0, and the observations are Gaussian. We propose an ℓ1-penalized maximum-likelihood approach to solve this problem. The dual formulation motivates an iterative algorithm (penalized flip-flop, FFP) based on a block coordinate-descent approach. Although the ℓ1-penalized log-likelihood (objective) function is in general non-convex and non-smooth, we show that FFP converges to a local maximum under relatively mild assumptions. For the fixed dimension case, large-sample statistical consistency is proved and a rate of convergence bound is derived. Simulations show that FFP outperforms its non-penalized counterpart and the naive Glasso algorithm for sparse Kronecker-decomposable covariance matrices.
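A simplified flip-flop style sketch is given below: the two Kronecker factors are updated alternately, with an ℓ1 (graphical lasso) penalty applied at each factor update via scikit-learn. This is an illustrative stand-in for FFP under assumed conventions (matrix-valued samples, penalty imposed through graphical_lasso on each intermediate factor), not the paper's exact algorithm.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def penalized_flip_flop(X, p, q, alpha=0.1, n_iter=10):
    """Alternating (flip-flop style) estimation of the Kronecker factors of
    a covariance matrix from n samples arranged as p x q matrices, with an
    l1 penalty applied to each factor update via graphical lasso.

    X: array of shape (n, p, q); A (p x p) is the row-covariance factor and
       B (q x q) the column-covariance factor (up to the usual scale ambiguity).
    """
    n = X.shape[0]
    A, B = np.eye(p), np.eye(q)
    for _ in range(n_iter):
        B_inv = np.linalg.inv(B)
        # A update: average of X_i B^{-1} X_i^T (standard flip-flop step), then sparsify
        A_raw = sum(x @ B_inv @ x.T for x in X) / (n * q)
        A, _ = graphical_lasso(A_raw, alpha=alpha)
        A_inv = np.linalg.inv(A)
        B_raw = sum(x.T @ A_inv @ x for x in X) / (n * p)
        B, _ = graphical_lasso(B_raw, alpha=alpha)
    return A, B      # the full covariance is A kron B up to vectorization convention

# toy usage: matrix-valued samples with identity factors
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 4, 6))
A_hat, B_hat = penalized_flip_flop(X, p=4, q=6, alpha=0.05)
```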
IEEE Signal Processing Workshop on Statistical Signal Processing | 2012
Theodoros Tsiligkaridis; Alfred O. Hero; Shuheng Zhou
We consider high-dimensional estimation of a (possibly sparse) Kronecker-decomposable covariance matrix given i.i.d. Gaussian samples. We propose a sparse covariance estimation algorithm, Kronecker Graphical Lasso (KGlasso), for the high dimensional setting that takes advantage of structure and sparsity. Convergence and limit point characterization of this iterative algorithm are established. Compared to standard Glasso, KGlasso has low computational complexity as the dimension of the covariance matrix increases. We derive a tight MSE convergence rate for KGlasso and show it strictly outperforms standard Glasso and FF. Simulations validate these results and show that KGlasso outperforms the maximum-likelihood solution (FF) in the high-dimensional small-sample regime.
International Symposium on Information Theory | 2013
Theodoros Tsiligkaridis; Alfred O. Hero
This paper presents a new method for estimating high dimensional covariance matrices. Our method, permuted rank-penalized least-squares (PRLS), is based on Kronecker product series expansions of the true covariance matrix. Assuming an i.i.d. Gaussian random sample, we establish high dimensional rates of convergence to the true covariance as both the number of samples and the number of variables go to infinity. For covariance matrices of low separation rank, our results establish that PRLS has significantly faster convergence than the standard sample covariance matrix (SCM) estimator. In addition, this framework allows one to tradeoff estimation error for approximation error, thus providing a scalable covariance estimation framework in terms of separation rank, an analog to low rank approximation of covariance matrices [1]. The MSE convergence rates generalize the high dimensional rates recently obtained for the ML Flip-flop algorithm [2], [3].
IEEE Transactions on Signal Processing | 2013
Theodoros Tsiligkaridis; Alfred O. Hero; Shuheng Zhou
IEEE Transactions on Audio, Speech, and Language Processing | 2013
Theodoros Tsiligkaridis; Etienne Marcheret; Vaibhava Goel