Publication


Featured research published by Christopher J. Rozell.


Neural Computation | 2008

Sparse coding via thresholding and local competition in neural circuits

Christopher J. Rozell; Don H. Johnson; Richard G. Baraniuk; Bruno A. Olshausen

While evidence indicates that neural systems may employ sparse approximations to represent sensed stimuli, the mechanisms underlying this ability are not understood. We describe a locally competitive algorithm (LCA) that solves a family of sparse coding problems, each minimizing a weighted combination of mean-squared error and a coefficient cost function. LCAs are designed to be implemented in a dynamical system composed of many neuron-like elements operating in parallel. These algorithms use thresholding functions to induce local (usually one-way) inhibitory competitions between nodes to produce sparse representations. LCAs produce coefficients with sparsity levels comparable to the most popular centralized sparse coding algorithms while being readily suited for neural implementation. Additionally, LCA coefficients for video sequences demonstrate inertial properties that are both qualitatively and quantitatively more regular (i.e., smoother and more predictable) than the coefficients produced by greedy algorithms.
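No implementation accompanies the abstract; as a minimal sketch of the dynamics it describes (assuming a soft-thresholding activation, which corresponds to an ℓ1 coefficient cost, and arbitrary toy parameters), an LCA can be simulated with a simple Euler discretization:

```python
import numpy as np

def soft_threshold(u, lam):
    """Thresholding function that induces sparsity and local competition."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam=0.1, tau=0.01, dt=0.001, n_steps=3000):
    """Euler simulation of the LCA dynamical system.

    Each node integrates its driving input Phi.T @ y and is inhibited by
    active neighbors through the Gram matrix Phi.T @ Phi - I.
    """
    n_atoms = Phi.shape[1]
    b = Phi.T @ y                      # feedforward driving input
    G = Phi.T @ Phi - np.eye(n_atoms)  # lateral inhibition weights
    u = np.zeros(n_atoms)              # internal (membrane-like) states
    for _ in range(n_steps):
        a = soft_threshold(u, lam)     # sparse output coefficients
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)

# Toy usage: recover a 5-sparse code from a random unit-norm dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0
a_hat = lca(Phi, Phi @ a_true)
print("active coefficients:", np.count_nonzero(a_hat))
```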


IEEE Journal of Selected Topics in Signal Processing | 2011

Learning Sparse Codes for Hyperspectral Imagery

Adam S. Charles; Bruno A. Olshausen; Christopher J. Rozell

The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well-matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: 1) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials; 2) this learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution; and 3) this learned dictionary improves the performance of a supervised classification algorithm, both in terms of the classifier complexity and generalization from very small training sets.
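The paper's modified learning approach is not reproduced here; the sketch below shows the generic alternation such methods build on (sparse coding via ISTA, then a gradient step on the dictionary), with every parameter value chosen arbitrarily for illustration:

```python
import numpy as np

def ista(Phi, y, lam, n_iter=100):
    """Sparse coding step: approximately minimize (1/2)||y - Phi a||^2 + lam*||a||_1."""
    L = np.linalg.norm(Phi, 2) ** 2    # Lipschitz constant of the gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        a -= Phi.T @ (Phi @ a - y) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a

def learn_dictionary(Y, n_atoms, lam=0.1, lr=0.5, n_epochs=10, seed=0):
    """Alternate sparse coding and dictionary updates over the training pixels.

    Y holds one spectrum per column; atoms are renormalized after each update.
    """
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((Y.shape[0], n_atoms))
    Phi /= np.linalg.norm(Phi, axis=0)
    for _ in range(n_epochs):
        for y in Y.T:
            a = ista(Phi, y, lam)
            Phi += lr * np.outer(y - Phi @ a, a)   # reduce reconstruction error
            Phi /= np.maximum(np.linalg.norm(Phi, axis=0), 1e-12)
    return Phi
```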


Conference on Information Sciences and Systems | 2011

Sparsity penalties in dynamical system estimation

Adam S. Charles; M. Salman Asif; Justin K. Romberg; Christopher J. Rozell

In this work we address the problem of state estimation in dynamical systems using recent developments in compressive sensing and sparse approximation. We formulate the traditional Kalman filter as a one-step update optimization procedure which leads us to a more unified framework, useful for incorporating sparsity constraints. We introduce three combinations of two sparsity conditions (sparsity in the state and sparsity in the innovations) and write recursive optimization programs to estimate the state for each model. This paper is meant as an overview of different methods for incorporating sparsity into the dynamic model, a presentation of algorithms that unify the support and coefficient estimation, and a demonstration that these suboptimal schemes can actually show some performance improvements (either in estimation error or convergence time) over standard optimal methods that use an impoverished model.
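As a hedged illustration of the one-step-update view (the notation and solver here are mine, not the paper's recursive programs): adding an ℓ1 penalty on the state turns the Kalman update into a small ℓ1-regularized least-squares problem.

```python
import numpy as np

def sparse_kalman_step(y_t, x_prev, A, C, kappa=1.0, lam=0.1, n_iter=300):
    """One-step sparse state update (illustrative only):

        min_x ||y_t - C x||^2 + kappa*||x - A x_prev||^2 + lam*||x||_1

    The two quadratic terms are stacked into a single least-squares residual
    and the l1-regularized problem is solved with plain ISTA iterations.
    """
    n = A.shape[0]
    M = np.vstack([C, np.sqrt(kappa) * np.eye(n)])
    z = np.concatenate([y_t, np.sqrt(kappa) * (A @ x_prev)])
    L = np.linalg.norm(M, 2) ** 2
    x = A @ x_prev                                   # warm start at the prediction
    for _ in range(n_iter):
        x = x - M.T @ (M @ x - z) / L                # gradient step on the residual
        x = np.sign(x) * np.maximum(np.abs(x) - lam / (2 * L), 0.0)
    return x
```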


IEEE Transactions on Neural Networks | 2012

Convergence and Rate Analysis of Neural Networks for Sparse Approximation

Aurele Balavoine; Justin K. Romberg; Christopher J. Rozell

We present an analysis of the Locally Competitive Algorithm (LCA), a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the convergence properties of the LCA had not previously been analyzed, and earlier results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
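A toy empirical companion to these results (not a substitute for the proofs, and with arbitrary constants): discretize the LCA ODE, estimate the fixed point with a long run, then watch the distance to it shrink; the logged values should decay geometrically, i.e., look linear on a log scale.

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.standard_normal((50, 100))
Phi /= np.linalg.norm(Phi, axis=0)
y = Phi @ np.where(rng.random(100) < 0.05, 1.0, 0.0)   # sparse generating code
lam, step = 0.1, 0.05                                  # arbitrary constants
b, G = Phi.T @ y, Phi.T @ Phi - np.eye(100)
thr = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def run(n_steps, log_every=None):
    u = np.zeros(100)
    for k in range(1, n_steps + 1):
        u += step * (b - u - G @ thr(u))               # Euler step of the LCA ODE
        if log_every and k % log_every == 0:
            print(k, np.linalg.norm(u - u_star))
    return u

u_star = run(20000)            # long run: approximate the fixed point u*
run(4000, log_every=500)       # short run: ||u(t) - u*|| decays exponentially
```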


IEEE Transactions on Signal Processing | 2011

Concentration of Measure for Block Diagonal Matrices With Applications to Compressive Signal Processing

Jae Young Park; Han Lun Yap; Christopher J. Rozell; Michael B. Wakin

Theoretical analysis of randomized, compressive operators often depends on a concentration of measure inequality for the operator in question. Typically, such inequalities quantify the likelihood that a random matrix will preserve the norm of a signal after multiplication. Concentration of measure results are well established for unstructured compressive matrices, populated with independent and identically distributed (i.i.d.) random entries. Many real-world acquisition systems, however, are subject to architectural constraints that make such matrices impractical. In this paper we derive concentration of measure bounds for two types of block diagonal compressive matrices, one in which the blocks along the main diagonal are random and independent, and one in which the blocks are random but equal. For both types of matrices, we show that the likelihood of norm preservation depends on certain properties of the signal being measured, but that for the best case signals, both types of block diagonal matrices can offer concentration performance on par with their unstructured, i.i.d. counterparts. We support our theoretical results with illustrative simulations as well as analytical and empirical investigations of several signal classes that are highly amenable to measurement using block diagonal matrices. We also discuss applications of these results in ensuring stable embeddings for various signal families and in establishing performance guarantees for solving various signal processing tasks (such as detection and classification) directly in the compressed domain.
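The signal dependence is easy to see numerically (toy dimensions and my own construction, not the paper's bounds): compare how tightly the ratio of squared norms concentrates for a signal whose energy is spread across blocks versus one whose energy sits in a single block.

```python
import numpy as np

rng = np.random.default_rng(2)
J, m, n = 16, 8, 32                 # J diagonal blocks, each m x n

def block_diag_gaussian():
    """Independent Gaussian blocks, scaled so E||Ax||^2 = ||x||^2."""
    A = np.zeros((J * m, J * n))
    for j in range(J):
        A[j*m:(j+1)*m, j*n:(j+1)*n] = rng.standard_normal((m, n)) / np.sqrt(m)
    return A

def spread_of_norm(x, trials=1000):
    """Empirical std of ||Ax||^2 / ||x||^2 over random draws of A."""
    r = [np.linalg.norm(block_diag_gaussian() @ x) ** 2 / (x @ x)
         for _ in range(trials)]
    return np.std(r)

x_spread = rng.standard_normal(J * n)     # energy spread over all blocks
x_single = np.zeros(J * n)
x_single[:n] = rng.standard_normal(n)     # energy confined to one block

print("spread signal:      ", spread_of_norm(x_spread))   # tight concentration
print("single-block signal:", spread_of_norm(x_single))   # noticeably looser
```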


Conference on Information Sciences and Systems | 2011

The Restricted Isometry Property for block diagonal matrices

Han Lun Yap; Armin Eftekhari; Michael B. Wakin; Christopher J. Rozell

In compressive sensing (CS), the Restricted Isometry Property (RIP) is a powerful condition on measurement operators which ensures robust recovery of sparse vectors is possible from noisy, undersampled measurements via computationally tractable algorithms. Early papers in CS showed that Gaussian random matrices satisfy the RIP with high probability, but such matrices are usually undesirable in practical applications due to storage limitations, computational considerations, or the mismatch of such matrices with certain measurement architectures. To alleviate some or all of these difficulties, recent research efforts have focused on structured random matrices. In this paper, we study block diagonal measurement matrices where each block on the main diagonal is itself a Gaussian random matrix. The main result of this paper shows that such matrices can indeed satisfy the RIP but that the requisite number of measurements depends on the coherence of the basis in which the signals are sparse. In the best case—for signals that are sparse in the frequency domain—these matrices perform nearly as well as dense Gaussian random matrices despite having many fewer nonzero entries.
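A small numerical illustration of the coherence dependence (a toy check under my own construction, not the paper's result): signals sparse in the DFT basis spread their energy over all blocks, so a block diagonal Gaussian matrix preserves their norms much more uniformly than it does for signals sparse in the canonical basis.

```python
import numpy as np

rng = np.random.default_rng(3)
J, m, n, S = 16, 8, 32, 5
N = J * n

def block_diag_gaussian():
    A = np.zeros((J * m, N))
    for j in range(J):
        A[j*m:(j+1)*m, j*n:(j+1)*n] = rng.standard_normal((m, n)) / np.sqrt(m)
    return A

F = np.fft.fft(np.eye(N)) / np.sqrt(N)     # unitary DFT: maximally incoherent
                                           # with the canonical basis

def norm_range(basis, trials=500):
    """Min/max of ||Ax||^2 / ||x||^2 over random S-sparse signals in `basis`."""
    ratios = []
    for _ in range(trials):
        coeffs = np.zeros(N, dtype=complex)
        coeffs[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
        x = basis @ coeffs
        ratios.append(np.linalg.norm(block_diag_gaussian() @ x) ** 2
                      / np.linalg.norm(x) ** 2)
    return min(ratios), max(ratios)

print("identity-sparse:", norm_range(np.eye(N)))   # wide range: RIP needs more rows
print("DFT-sparse:     ", norm_range(F))           # narrow range: near-dense behavior
```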


Proceedings of SPIE | 2007

Modeling sensor networks with fusion frames

Peter G. Casazza; Gitta Kutyniok; Shidong Li; Christopher J. Rozell

This article presents the new notion of fusion frames. Fusion frames provide an extensive framework not only for modeling sensor networks, but also for improving robustness and developing efficient, feasible reconstruction algorithms. A fusion frame can be regarded as a set of redundant subspaces, each containing a spanning set of local frame vectors, where the subspaces must satisfy special overlapping properties. We present the main aspects of fusion frame theory with a particular focus on the design of sensor networks, and discuss new results on the construction of Parseval fusion frames.
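A minimal numerical sketch of the reconstruction idea (unit weights and random subspaces, purely illustrative): each sensor observes only the projection of the signal onto its subspace, and the fusion frame operator, the sum of the subspace projectors, inverts the collection of local measurements.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, d = 10, 6, 4                        # ambient dim, subspaces, subspace dim

# One random d-dimensional subspace per sensor, with unit fusion frame weights.
projectors = []
for _ in range(k):
    Q, _ = np.linalg.qr(rng.standard_normal((n, d)))
    projectors.append(Q @ Q.T)            # orthogonal projector onto the subspace

S = sum(projectors)                       # fusion frame operator S = sum_j P_j
x = rng.standard_normal(n)
local = [P @ x for P in projectors]       # what each sensor observes: P_j x

# If the subspaces jointly span R^n, S is invertible and x is recovered exactly.
x_hat = np.linalg.solve(S, sum(local))
print("exact recovery:", np.allclose(x, x_hat))
```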


PLOS Computational Biology | 2013

Visual Nonclassical Receptive Field Effects Emerge from Sparse Coding in a Dynamical System

Mengchen Zhu; Christopher J. Rozell

Extensive electrophysiology studies have shown that many V1 simple cells have nonlinear response properties to stimuli within their classical receptive field (CRF) and receive contextual influence from stimuli outside the CRF that modulates the cell's response. Models seeking to explain these non-classical receptive field (nCRF) effects in terms of circuit mechanisms, input-output descriptions, or individual visual tasks provide limited insight into the functional significance of these response properties, because they do not connect the full range of nCRF effects to optimal sensory coding strategies. The (population) sparse coding hypothesis conjectures an optimal sensory coding approach where a neural population uses as few active units as possible to represent a stimulus. We demonstrate that a wide variety of nCRF effects are emergent properties of a single sparse coding model implemented in a neurally plausible network structure (requiring no parameter tuning to produce different effects). Specifically, we replicate a wide variety of nCRF electrophysiology experiments (e.g., end-stopping, surround suppression, contrast invariance of orientation tuning, cross-orientation suppression, etc.) on a dynamical system implementing sparse coding, showing that this model produces individual units that reproduce the canonical nCRF effects. Furthermore, when the population diversity of an nCRF effect has also been reported in the literature, we show that this model produces many of the same population characteristics. These results show that the sparse coding hypothesis, when coupled with a biophysically plausible implementation, can provide a unified high-level functional interpretation to many response properties that have generally been viewed through distinct mechanistic or phenomenological models.


International Journal of Neural Systems | 2014

Optimal sparse approximation with integrate and fire neurons

Samuel A. Shapero; Mengchen Zhu; Jennifer O. Hasler; Christopher J. Rozell

Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g., in V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded spiking neural network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially on ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
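A loose rate-coded sketch of the idea (all constants illustrative; the paper's network, neuron model, and parameters differ): integrate-and-fire units are driven by the dictionary's analysis of the input, inhibit one another through the Gram matrix on each spike, and their firing rates play the role of the LCA coefficients.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sparse nonnegative code measured through a unit-norm dictionary.
Phi = rng.standard_normal((32, 64))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(64)
a_true[rng.choice(64, 3, replace=False)] = 1.0
y = Phi @ a_true

b = Phi.T @ y                    # constant driving current per neuron
G = Phi.T @ Phi - np.eye(64)     # inhibitory connection weights
lam = 0.1                        # plays the role of the LCA threshold

dt, tau, T = 1e-3, 0.05, 50.0    # illustrative time constants, not the paper's
v = np.zeros(64)                 # membrane potentials (threshold = 1, reset = 0)
counts = np.zeros(64)            # spike counts
post = np.zeros(64)              # exponentially filtered spike trains

for _ in range(int(T / dt)):
    post -= dt * post / tau                      # synaptic decay
    v += dt * (b - lam - G @ post)               # integrate current minus inhibition
    fired = v >= 1.0
    v[fired] = 0.0                               # fire and reset
    counts[fired] += 1
    post[fired] += 1.0 / tau                     # unit-area synaptic kick

rates = counts / T                               # rate-coded coefficients
print("true support:    ", sorted(np.flatnonzero(a_true)))
print("highest rates at:", sorted(np.argsort(rates)[-3:]))
```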


International Conference on Image Processing | 2007

Locally Competitive Algorithms for Sparse Approximation

Christopher J. Rozell; Don H. Johnson; Richard G. Baraniuk; Bruno A. Olshausen

Practical sparse approximation algorithms (particularly greedy algorithms) suffer two significant drawbacks: they are difficult to implement in hardware, and they are inefficient for time-varying stimuli (e.g., video) because they produce erratic temporal coefficient sequences. We present a class of locally competitive algorithms (LCAs) that correspond to a collection of sparse approximation principles minimizing a weighted combination of reconstruction MSE and a coefficient cost function. These systems use thresholding functions to induce local nonlinear competitions in a dynamical system. Simple analog hardware can implement the required nonlinearities and competitions. We show that our LCAs are stable under normal operating conditions and can produce sparsity levels comparable to existing methods. Additionally, these LCAs can produce coefficients for video sequences that are more regular (i.e., smoother and more predictable) than the coefficients produced by greedy algorithms.

Collaboration


Dive into Christopher J. Rozell's collaboration.

Top Co-Authors

Adam S. Charles (Georgia Institute of Technology)

Han Lun Yap (Georgia Institute of Technology)

Abigail Anne Kressner (Georgia Institute of Technology)

Justin K. Romberg (Georgia Institute of Technology)

Aurele Balavoine (Georgia Institute of Technology)

Mengchen Zhu (Georgia Institute of Technology)