Publication


Featured research published by Adam S. Charles.


IEEE Journal of Selected Topics in Signal Processing | 2011

Learning Sparse Codes for Hyperspectral Imagery

Adam S. Charles; Bruno A. Olshausen; Christopher J. Rozell

The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well-matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: 1) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials; 2) this learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution, and 3) this learned dictionary improves the performance of a supervised classification algorithm, both in terms of the classifier complexity and generalization from very small training sets.
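The sparse coding model described in this abstract can be sketched numerically. The snippet below infers a sparse code for a single synthetic "pixel" against a fixed random dictionary using iterative soft thresholding (ISTA); the random dictionary and all sizes are illustrative stand-ins, not the learned HSI dictionary from the paper.

```python
import numpy as np

def ista(y, D, lam, n_iter=300):
    """Infer a sparse code a with y ~= D @ a via iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the data term
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - y) / L          # gradient step on 0.5*||y - D a||^2
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms (random stand-in, not learned)
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.0, -0.5, 0.8]         # a "pixel" mixing three materials
y = D @ a_true
a_hat = ista(y, D, lam=0.01)
print(np.flatnonzero(np.abs(a_hat) > 0.1))     # indices of the recovered active atoms
```

The same inference step is the inner loop of dictionary learning: alternate sparse coding (as above) with a dictionary update on a batch of pixels.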


Conference on Information Sciences and Systems | 2011

Sparsity penalties in dynamical system estimation

Adam S. Charles; M. Salman Asif; Justin K. Romberg; Christopher J. Rozell

In this work we address the problem of state estimation in dynamical systems using recent developments in compressive sensing and sparse approximation. We formulate the traditional Kalman filter as a one-step update optimization procedure which leads us to a more unified framework, useful for incorporating sparsity constraints. We introduce three combinations of two sparsity conditions (sparsity in the state and sparsity in the innovations) and write recursive optimization programs to estimate the state for each model. This paper is meant as an overview of different methods for incorporating sparsity into the dynamic model, a presentation of algorithms that unify the support and coefficient estimation, and a demonstration that these suboptimal schemes can actually show some performance improvements (either in estimation error or convergence time) over standard optimal methods that use an impoverished model.
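The one-step-update view of the Kalman filter makes it easy to swap penalties. As a hedged sketch (not the authors' algorithms), the snippet below solves one of the combinations mentioned, an ℓ2 data-fidelity term with an ℓ1 penalty on the innovation, using ISTA; the dynamics, dimensions, and regularization weight are all illustrative.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_innovation_update(y, C, A, x_prev, lam=0.05, n_iter=300):
    """One Kalman-style update with an l1 penalty on the innovation v = x - A x_prev:
       minimize 0.5*||y - C(A x_prev + v)||^2 + lam*||v||_1 over v."""
    r = y - C @ (A @ x_prev)            # residual after propagating the previous state
    L = np.linalg.norm(C, ord=2) ** 2   # step size from the data term's Lipschitz constant
    v = np.zeros(A.shape[0])
    for _ in range(n_iter):
        v = soft(v - C.T @ (C @ v - r) / L, lam / L)
    return A @ x_prev + v               # predicted state plus sparse correction

rng = np.random.default_rng(1)
n, m = 50, 20
A = np.eye(n)                           # toy dynamics: state carried over unchanged
C = rng.standard_normal((m, n)) / np.sqrt(m)
x_prev = np.zeros(n); x_prev[5] = 1.0
v_true = np.zeros(n); v_true[17] = 2.0  # a single sparse innovation
y = C @ (A @ x_prev + v_true)
x_hat = sparse_innovation_update(y, C, A, x_prev)
print(int(np.argmax(np.abs(x_hat - A @ x_prev))))  # index of the detected innovation
```

Replacing the ℓ1 term with a weighted ℓ2 term recovers the standard Kalman update as a special case of the same optimization template.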


Neural Computation | 2014

Short-term memory capacity in networks via the restricted isometry property

Adam S. Charles; Han Lun Yap; Christopher J. Rozell

Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.


Neural Computation | 2012

A common network architecture efficiently implements a variety of sparsity-based inference problems

Adam S. Charles; Pierre J. Garrigues; Christopher J. Rozell

The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the locally competitive algorithm (LCA). Among the cost functions we examine are approximate ℓp norms (0 ≤ p ≤ 1), modified ℓ1 norms, block-ℓ1 norms, and reweighted ℓ1 algorithms. Of particular interest is that we show significantly increased performance in reweighted ℓ1 algorithms by inferring all parameters jointly in a dynamical system rather than using an iterative approach native to digital computational architectures.
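A minimal simulation of the LCA dynamics for the plain ℓ1 cost, one member of the family of penalties discussed above. The soft-threshold activation and lateral-inhibition weights follow the standard LCA formulation; the dictionary and problem sizes are toy choices for illustration.

```python
import numpy as np

def lca(y, Phi, lam, tau=0.01, dt=0.001, n_steps=5000):
    """Simulate the locally competitive algorithm (LCA) for l1 sparse coding.
       Each node holds an internal state u; active nodes a = T_lam(u) inhibit the rest."""
    b = Phi.T @ y                            # feedforward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    u = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft-threshold activation
        u += (dt / tau) * (b - u - G @ a)    # leaky integration with competition
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(2)
Phi = rng.standard_normal((32, 64))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm dictionary
a_true = np.zeros(64); a_true[[7, 30]] = [1.5, -1.0]
y = Phi @ a_true
a_hat = lca(y, Phi, lam=0.05)
print(np.flatnonzero(np.abs(a_hat) > 0.2))   # active nodes at convergence
```

Swapping the soft-threshold activation for other thresholding functions is what lets the same architecture implement the other penalties in the family.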


International Conference on Acoustics, Speech, and Signal Processing | 2011

Estimation and dynamic updating of time-varying signals with sparse variations

M. Salman Asif; Adam S. Charles; Justin K. Romberg; Christopher J. Rozell

This paper presents an algorithm for an ℓ1-regularized Kalman filter. Given observations of a discrete-time linear dynamical system with sparse errors in the state evolution, we estimate the state sequence by solving an optimization algorithm that balances fidelity to the measurements (measured by the standard ℓ2 norm) against the sparsity of the innovations (measured using the ℓ1 norm). We also derive an efficient algorithm for updating the estimate as the system evolves. This dynamic updating algorithm uses a homotopy scheme that tracks the solution as new measurements are slowly worked into the system and old measurements are slowly removed. The effective cost of adding new measurements is a number of low-rank updates to the solution of a linear system of equations that is roughly proportional to the joint sparsity of all the innovations in the time interval of interest.


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2012

Low Power Sparse Approximation on Reconfigurable Analog Hardware

Samuel A. Shapero; Adam S. Charles; Christopher J. Rozell; Paul E. Hasler

Compressed sensing is an important application in signal and image processing which requires solving nonlinear optimization problems. A Hopfield-Network-like analog system is proposed as a solution, using the locally competitive algorithm (LCA) to solve an overcomplete l1 sparse approximation problem. A scalable system architecture using sub-threshold currents is described, including vector matrix multipliers (VMMs) and a nonlinear thresholder. A 4 × 6 nonlinear system is implemented on the RASP 2.9v chip, a field programmable analog array with directly programmable floating gate elements, allowing highly accurate VMMs. The circuit successfully reproduced the outputs of a digital optimization program, converging to within 4.8% rms, and an objective value only 1.3% higher on average. The active circuit consumed 29 μA of current at 2.4 V, and converges on solutions in 240 μs. A smaller 2 × 3 system is also implemented. Extrapolating the scaling trends to a N=1000 node system, the analog LCA compares favorably with state-of-the-art digital solutions, using a small fraction of the power to arrive at solutions ten times faster. Finally, we provide simulations of large scale systems to show the behavior of the system scaled to nontrivial problem sizes.


IEEE Geoscience and Remote Sensing Letters | 2014

Spectral Superresolution of Hyperspectral Imagery Using Reweighted ℓ1 Spatial Filtering

Adam S. Charles; Christopher J. Rozell

Sparsity-based models have enabled significant advances in many image processing tasks. Hyperspectral imagery (HSI) in particular has benefited from these approaches due to the significant low-dimensional structure in both spatial and spectral dimensions. Specifically, previous work has shown that sparsity models can be used for spectral superresolution, where spectral signatures with HSI-level resolution are recovered from measurements with multispectral-level resolution (i.e., an order of magnitude fewer spectral bands). In this letter, we expand on those results by introducing a new inference approach known as reweighted l1 spatial filtering (RWL1-SF). RWL1-SF incorporates a more sophisticated signal model that allows for variations in the SNR at each pixel as well as spatial dependences between neighboring pixels. The results demonstrate that the proposed approach leverages signal structure beyond simple sparsity to achieve significant improvements in spectral superresolution.
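The reweighting idea at the core of RWL1-SF can be sketched in a few lines (without the spatial coupling between neighboring pixels, which is the paper's contribution): alternate a weighted ℓ1 solve with a weight update that concentrates the penalty on coefficients believed to be zero. Everything below is an illustrative stand-in.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(y, D, w, lam, n_iter=300):
    """Solve min 0.5*||y - D a||^2 + lam * sum_i w_i |a_i| by ISTA
       with a per-coefficient threshold lam * w_i."""
    L = np.linalg.norm(D, ord=2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft(a - D.T @ (D @ a - y) / L, lam * w / L)
    return a

def reweighted_l1(y, D, lam=0.02, eps=0.1, n_rounds=4):
    """Outer reweighting loop: small coefficients get large weights, pushing them to zero."""
    w = np.ones(D.shape[1])
    for _ in range(n_rounds):
        a = weighted_lasso(y, D, w, lam)
        w = 1.0 / (np.abs(a) + eps)   # shift the penalty away from confident coefficients
    return a

rng = np.random.default_rng(3)
D = rng.standard_normal((40, 96))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(96); a_true[[10, 55, 80]] = [1.0, 0.7, -1.2]
y = D @ a_true
a_hat = reweighted_l1(y, D)
print(np.flatnonzero(np.abs(a_hat) > 0.1))   # recovered support
```

In the spatially coupled variant, each pixel's weight update would also pool the coefficient estimates of its neighbors rather than using its own estimate alone.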


International Conference on Acoustics, Speech, and Signal Processing | 2013

Dynamic filtering of sparse signals using reweighted ℓ1

Adam S. Charles; Christopher J. Rozell

Accurate estimation of undersampled time-varying signals improves as stronger signal models provide more information to aid the estimator. In classic Kalman filter-type algorithms, dynamic models of signal evolution are highly leveraged, but there is little exploitation of structure within a signal at a given time. In contrast, standard sparse approximation schemes (e.g., ℓ1 minimization) utilize strong structural models for a single signal, but do not admit obvious ways to incorporate dynamic models for data streams. In this work we introduce a causal estimation algorithm to estimate time-varying sparse signals. This algorithm is based on a hierarchical probabilistic model that uses reweighted ℓ1 minimization as its core computation, and propagates second-order statistics through time similar to classic Kalman filtering. The resulting algorithm achieves very good performance, and appears to be particularly robust to errors in the dynamic signal model.


Asilomar Conference on Signals, Systems and Computers | 2010

Sparse coding for spectral signatures in hyperspectral images

Adam S. Charles; Bruno A. Olshausen; Christopher J. Rozell

The growing use of hyperspectral imagery leads us to seek automated algorithms for extracting useful information about the scene. Recent work in sparse approximation has shown that unsupervised learning techniques can use example data to determine an efficient dictionary with few a priori assumptions. We apply this model to sample hyperspectral data and show that these techniques learn a dictionary that: 1) contains a meaningful spectral decomposition for hyperspectral imagery, 2) admits representations that are useful in determining properties and classifying materials in the scene, and 3) forms local approximations to the nonlinear manifold structure present in the actual data.


IEEE Signal Processing Workshop on Statistical Signal Processing | 2012

The restricted isometry property for echo state networks with applications to sequence memory capacity

Han Lun Yap; Adam S. Charles; Christopher J. Rozell

The ability of networked systems (including artificial or biological neuronal networks) to perform complex data processing tasks relies in part on their ability to encode signals from the recent past in the current network state. Here we use Compressed Sensing tools to study the ability of a particular network architecture (Echo State Networks) to stably store long input sequences. In particular, we show that such networks satisfy the Restricted Isometry Property when the input sequences are compressible in certain bases and when the number of nodes scale linearly with the sparsity of the input sequence and logarithmically with its dimension. Thus, the memory capacity of these networks depends on the input sequence statistics, and can (sometimes greatly) exceed the number of nodes in the network. Furthermore, input sequences can be robustly recovered from the instantaneous network state using a tractable optimization program (also implementable in a network architecture).
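The storage-and-recovery setup can be illustrated with a small simulation: a linear recurrent network integrates a sparse input sequence into its state, and the sequence is recovered from the instantaneous state by ℓ1 minimization. The orthogonal, slightly contractive connectivity below is only a stand-in for the RIP-satisfying networks analyzed in the paper, and all sizes are illustrative.

```python
import numpy as np

def memory_matrix(W, z, T):
    """Columns W^(T-1-k) z: the imprint input s_k leaves on the state after T steps."""
    cols, v = [], z.copy()
    for _ in range(T):
        cols.append(v.copy())
        v = W @ v
    return np.stack(cols[::-1], axis=1)      # column k corresponds to input time k

rng = np.random.default_rng(4)
n, T = 50, 100                               # 50 nodes storing a length-100 sequence
U, _, Vt = np.linalg.svd(rng.standard_normal((n, n)))
W = 0.99 * U @ Vt                            # orthogonal, slightly contractive weights
z = rng.standard_normal(n) / np.sqrt(n)      # feedforward input vector

A = memory_matrix(W, z, T)
s_true = np.zeros(T); s_true[[12, 47, 83]] = [1.0, -0.8, 0.6]  # sparse input sequence
x = A @ s_true                               # instantaneous network state

# Recover the sequence from the state by l1 minimization (ISTA)
L = np.linalg.norm(A, ord=2) ** 2
s = np.zeros(T)
for _ in range(3000):
    g = s - A.T @ (A @ s - x) / L
    s = np.sign(g) * np.maximum(np.abs(g) - 1e-3 / L, 0.0)
print(np.flatnonzero(np.abs(s) > 0.1))       # recovered input times
```

Note that the recovered sequence is twice as long as the number of nodes, the "capacity exceeding network size" regime the analysis addresses.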

Collaboration

Top co-authors of Adam S. Charles:

Christopher J. Rozell (Georgia Institute of Technology)
Han Lun Yap (Georgia Institute of Technology)
Nicholas P. Bertrand (Georgia Institute of Technology)
Dong Yin (University of California)
John Lee (Georgia Institute of Technology)