Eva L. Dyer
Rice University
Publications
Featured research published by Eva L. Dyer.
International Test Conference | 2010
Mehrdad Majzoobi; Eva L. Dyer; Ahmed Elnably; Farinaz Koushanfar
This paper introduces a set of novel techniques for rapid post-silicon characterization of FPGA timing variability. The existing built-in self-test (BIST) methods work by incrementing the clock frequency until timing failures occur within the combinational circuit-under-test (CUT). A standing challenge for industrial adoption of post-silicon device profiling by this method is the time required for the characterization process. To perform rapid and accurate delay characterization, we introduce a number of techniques to rapidly scan the CUTs while changing the clock frequency using off-chip and on-chip clock synthesis modules. We next find a compact parametric representation of the CUT timing failure probability. Using this representation, the minimum number of frequency samples is determined to accurately estimate the delay for each CUT within the 2D FPGA array. After that, we exploit the spatial correlation of the delays across the FPGA die to measure a small subset of CUT delays from an array of CUTs and recover the remaining entries with high accuracy. Our implementation and evaluations on a Xilinx Virtex 5 FPGA demonstrate that the combination of the new techniques reduces the characterization time by at least three orders of magnitude while simultaneously reducing storage requirements.
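The abstract does not reproduce the parametric form of the failure-probability model, so the sketch below simply assumes a Gaussian delay: the CUT fails whenever its delay exceeds the clock period, and a delay estimate is read off by fitting that curve to a handful of hypothetical failure-rate samples with SciPy. All numbers are placeholders, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Hypothetical failure rates measured for one CUT at a few clock periods (ns).
periods = np.array([1.80, 1.85, 1.90, 1.95, 2.00, 2.05])
fail_rate = np.array([0.98, 0.90, 0.60, 0.25, 0.05, 0.01])

def failure_prob(t, mu, sigma):
    # Assumed model: Gaussian delay with mean mu and spread sigma; the CUT
    # fails whenever the delay exceeds the clock period t.
    return 0.5 * erfc((t - mu) / (np.sqrt(2) * sigma))

(mu, sigma), _ = curve_fit(failure_prob, periods, fail_rate, p0=[1.9, 0.05])
print(f"estimated delay ~ {mu:.3f} ns, spread ~ {sigma:.3f} ns")
```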
arXiv: Quantitative Methods | 2017
Eva L. Dyer; William Gray Roncal; Judy A. Prasad; Hugo L. Fernandes; Doga Gursoy; Vincent De Andrade; Kamel Fezzaa; Xianghui Xiao; Joshua T. Vogelstein; Chris Jacobsen; Konrad P. Körding; Narayanan Kasthuri
Methods for resolving the three-dimensional (3D) microstructure of the brain typically start by thinly slicing and staining the brain, followed by imaging numerous individual sections with visible light photons or electrons. In contrast, X-rays can be used to image thick samples, providing a rapid approach for producing large 3D brain maps without sectioning. Here we demonstrate the use of synchrotron X-ray microtomography (µCT) for producing mesoscale (∼1 µm³ resolution) brain maps from millimeter-scale volumes of mouse brain. We introduce a pipeline for µCT-based brain mapping that develops and integrates methods for sample preparation, imaging, and automated segmentation of cells, blood vessels, and myelinated axons, in addition to statistical analyses of these brain structures. Our results demonstrate that X-ray tomography achieves rapid quantification of large brain volumes, complementing other brain mapping and connectomics efforts.
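The published segmentation pipeline is considerably more involved than what fits here; as a minimal stand-in for the cell-detection step, a threshold-and-label pass over a reconstructed sub-volume might look like the sketch below. The array contents, threshold, and density definition are placeholders, not values from the paper.

```python
import numpy as np
from scipy import ndimage

# Toy stand-in for a reconstructed µCT sub-volume (cells contrast against background).
volume = np.random.rand(64, 64, 64)

# Simple threshold-and-label segmentation; the real pipeline combines
# classifiers and morphological filtering rather than a single cutoff.
mask = volume > 0.995                      # hypothetical intensity cutoff
labels, n_cells = ndimage.label(mask)      # 3D connected components
sizes = ndimage.sum(mask, labels, index=range(1, n_cells + 1))
density = n_cells / volume.size            # detected objects per voxel
print(n_cells, density)
```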
Scientific Reports | 2018
Xiaogang Yang; Vincent De Andrade; William Scullin; Eva L. Dyer; Narayanan Kasthuri; Francesco De Carlo; Doga Gursoy
Synchrotron-based X-ray tomography offers the potential for rapid large-scale reconstructions of the interiors of materials and biological tissue at fine resolution. However, for radiation-sensitive samples, there remain fundamental trade-offs between damaging samples during longer acquisition times and reducing signals with shorter acquisition times. We present a deep convolutional neural network (CNN) method that increases the acquired X-ray tomographic signal by at least a factor of 10 during low-dose fast acquisition by improving the quality of recorded projections. Short-exposure-time projections enhanced with CNNs show signal-to-noise ratios similar to long-exposure-time projections. They also show lower noise and more structural information than low-dose short-exposure acquisitions post-processed by other techniques. We evaluated this approach using simulated samples and further validated it with experimental data from radiation-sensitive mouse brains acquired in a tomographic setting with transmission X-ray microscopy. We demonstrate that automated algorithms can reliably trace brain structures in low-dose datasets enhanced with CNN. This method can be applied to other tomographic or scanning-based X-ray imaging techniques and has great potential for studying faster dynamics in specimens.
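The paper's network architecture and training details are not reproduced here; the sketch below is a generic residual convolutional denoiser trained on paired short-/long-exposure projections in PyTorch, illustrating only the supervised setup the abstract describes. Layer sizes, learning rate, and the random tensors standing in for projection pairs are all assumptions.

```python
import torch
import torch.nn as nn

class ProjectionDenoiser(nn.Module):
    """Minimal residual CNN mapping noisy short-exposure projections toward
    long-exposure-quality targets (an illustrative stand-in, not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # predict a residual correction to the noisy input

model = ProjectionDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder tensors; real pairs would be registered short/long exposure projections.
noisy = torch.rand(8, 1, 128, 128)
clean = torch.rand(8, 1, 128, 128)
for _ in range(5):                 # a few illustrative gradient steps
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
```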
International Conference on Acoustics, Speech, and Signal Processing | 2013
Eva L. Dyer; Christoph Studer; Richard G. Baraniuk
Unions of subspaces have recently been shown to provide a compact nonlinear signal model for collections of high-dimensional data, such as large collections of images or videos. In this paper, we introduce a novel data-driven algorithm for learning unions of subspaces directly from a collection of data; our approach is based upon forming minimum ℓ2-norm (least-squares) representations of a signal with respect to other signals in the collection. The resulting representations are then used as feature vectors to cluster the data in accordance with each signal's subspace membership. We demonstrate that the proposed least-squares approach leads to improved classification performance when compared to state-of-the-art subspace clustering methods in both synthetic and real-world experiments. This study provides evidence that using least-squares methods to form data-driven representations of collections of data provides significant advantages over current methods that rely upon sparse representations.
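A minimal sketch of the least-squares representation idea follows: each signal is regressed onto the remaining signals (here with a small ridge term for numerical stability), and the coefficient magnitudes define an affinity matrix that is handed to an off-the-shelf spectral clustering routine. The regularization, normalization, and clustering details in the paper differ; the toy data are hypothetical.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def least_squares_affinity(X, lam=1e-2):
    """For each column x_i of X (signals as columns), solve a ridge-regularized
    least-squares representation over the remaining columns, then symmetrize
    the coefficient magnitudes into an affinity matrix."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        A = X[:, idx]
        # min_c ||x_i - A c||_2^2 + lam ||c||_2^2  (closed form)
        c = np.linalg.solve(A.T @ A + lam * np.eye(n - 1), A.T @ X[:, i])
        C[i, idx] = c
    return np.abs(C) + np.abs(C).T

# Toy data: two 2-dimensional subspaces in R^20, 30 points each.
rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30)) for _ in range(2)])
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(least_squares_affinity(X))
```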
International IEEE/EMBS Conference on Neural Engineering | 2013
Eva L. Dyer; Christoph Studer; Jacob T. Robinson; Richard G. Baraniuk
In a variety of neural data analysis problems, "neural events," such as action potentials (APs) or post-synaptic potentials (PSPs), must be recovered from noisy and possibly corrupted measurements. For instance, in calcium imaging, an AP or group of APs generates a stereotyped calcium signal with a quick rise time and slow decay. In this work, we develop a general-purpose method for: (i) learning a template waveform that signifies the presence of a neural event and (ii) neural event recovery to determine the times at which such events occur. Our approach is based upon solving a sparse signal separation problem to separate the neural signal of interest from any noise and other corruptions that arise due to baseline drift, measurement noise, and breathing/motion artifacts. For both synthetic and real measured data, we demonstrate that our approach accurately learns the underlying template waveform and detects neural events, even in the presence of significant noise and corruption. The method's robustness, simplicity, and computational efficiency make it amenable for use in the analysis of data arising in large-scale studies of both time-varying calcium imaging and whole-cell electrophysiology.
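The paper formulates template learning and event recovery as a sparse signal-separation problem; the sketch below substitutes a much simpler matched-filter pipeline (threshold-triggered template averaging followed by correlation peak-picking) purely to illustrate the two steps. The thresholds and window sizes are hypothetical and the method shown is not the one from the paper.

```python
import numpy as np

def learn_template(trace, threshold, half_width=25):
    """Estimate an event template by averaging windows around upward threshold crossings."""
    crossings = np.flatnonzero((trace[1:] > threshold) & (trace[:-1] <= threshold))
    snippets = [trace[c - half_width:c + half_width]
                for c in crossings if half_width <= c < len(trace) - half_width]
    return np.mean(snippets, axis=0)

def detect_events(trace, template, score_threshold):
    """Correlate the trace with the zero-mean, unit-norm template and report local peaks."""
    t = template - template.mean()
    t /= np.linalg.norm(t)
    score = np.correlate(trace - trace.mean(), t, mode="same")
    peaks = np.flatnonzero((score[1:-1] > score[:-2]) &
                           (score[1:-1] > score[2:]) &
                           (score[1:-1] > score_threshold)) + 1
    return peaks
```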
International Conference on Latent Variable Analysis and Signal Separation | 2010
Eva L. Dyer; Marco F. Duarte; Don H. Johnson; Richard G. Baraniuk
Two-photon calcium imaging is an emerging experimental technique that enables the study of information processing within neural circuits in vivo. While the spatial resolution of this technique permits the calcium activity of individual cells within the field of view to be monitored, inferring the precise times at which a neuron emits a spike is challenging because spikes are hidden within noisy observations of the neuron's calcium activity. To tackle this problem, we introduce the use of sparse approximation methods for recovering spikes from the time-varying calcium activity of neurons. We derive sufficient conditions for exact recovery of spikes with respect to (i) the decay rate of the spike-evoked calcium event and (ii) the maximum firing rate of the cell under test. We find, both in theory and in practice, that standard sparse recovery methods are not sufficient to recover spikes from noisy calcium signals when the firing rate of the cell is high, suggesting that in order to guarantee exact recovery of spike times, additional constraints must be incorporated into the recovery procedure. Hence, we introduce an iterative framework for structured sparse approximation that is capable of achieving superior performance over standard sparse recovery methods by taking into account knowledge that spikes are non-negative and also separated in time. We demonstrate the utility of our approach on simulated calcium signals in various amounts of additive Gaussian noise and under different degrees of model mismatch.
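As a rough illustration of structured sparse spike recovery, the sketch below builds a dictionary of spike-evoked calcium transients (single-exponential decay) and runs a greedy non-negative pursuit with a minimum-separation constraint on the selected spike times. It is only a simplified stand-in for the iterative framework and guarantees described in the abstract; the decay constant, spike count, and separation are placeholders.

```python
import numpy as np

def calcium_dictionary(n, tau):
    """Dictionary whose k-th column is a unit-norm calcium transient evoked by a spike at time k."""
    t = np.arange(n)[:, None] - np.arange(n)[None, :]
    D = np.where(t >= 0, np.exp(-t / tau), 0.0)
    return D / np.linalg.norm(D, axis=0)

def recover_spikes(y, D, n_spikes, min_sep):
    """Greedy non-negative pursuit with a minimum-separation (refractory) constraint."""
    residual, support = y.copy(), []
    for _ in range(n_spikes):
        corr = D.T @ residual
        for s in support:                           # enforce temporal separation
            corr[max(0, s - min_sep):s + min_sep] = -np.inf
        k = int(np.argmax(corr))
        if corr[k] <= 0:
            break
        support.append(k)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coeffs = np.clip(coeffs, 0, None)           # non-negative spike amplitudes
        residual = y - D[:, support] @ coeffs
    return sorted(support)
```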
bioRxiv | 2018
Theodore J. LaGrow; Michael G. Moore; Judy A. Prasad; Alexis Webber; Mark A. Davenport; Eva L. Dyer
Robust methods for characterizing the cellular architecture (cytoarchitecture) of the brain are needed to differentiate brain areas, identify neurological diseases, and model architectural differences across species. Current methods for mapping the cytoarchitecture and, in particular, identifying laminar (layer) divisions in tissue samples require the expertise of trained neuroanatomists to manually annotate the various regions of interest and cells within an image. However, as neuroanatomical datasets grow in volume, manual annotations become inefficient, impractical, and risk biasing results. In this paper, we propose an automated framework for cellular detection and density estimation that enables the detection of laminar divisions within retinal and neocortical histology datasets. Our approach for layer detection uses total variation minimization to find a small number of change points in the density that signify the beginning and end of each layer. We apply these methods to micron-scale histology images from a variety of cortical areas of the mouse brain and retina, as well as synthetic datasets. Our results demonstrate the feasibility of using automation to reveal the cytoarchitecture of neurological samples in high-resolution images.
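A minimal stand-in for the density-based layer detection step: total-variation denoising of a synthetic depth-density profile, followed by reading layer boundaries off the large jumps that remain. The paper poses this as a TV-regularized change-point problem; here an off-the-shelf TV denoiser (scikit-image) and an ad hoc jump threshold are used instead, and the density profile is simulated.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Hypothetical 1D cell-density profile along the cortical depth axis
# (counts per bin), piecewise constant across four layers.
rng = np.random.default_rng(1)
density = np.concatenate([rng.poisson(lam, 100) for lam in (5, 20, 12, 30)]).astype(float)

# TV denoising yields a nearly piecewise-constant estimate; boundaries are
# taken where the remaining jumps are large relative to the biggest jump.
smooth = denoise_tv_chambolle(density, weight=5.0)
jumps = np.abs(np.diff(smooth))
boundaries = np.flatnonzero(jumps > 0.5 * jumps.max())
print(boundaries)   # indices clustered near the true boundaries (about 100, 200, 300)
```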
PLOS ONE | 2018
Timothy J. Lee; Aditi Kumar; Aishwarya H. Balwani; Derrick Brittain; Sam Kinn; Craig A. Tovey; Eva L. Dyer; Nuno Maçarico da Costa; R. Clay Reid; Craig R. Forest; Daniel J. Bumbarger
Serial section transmission electron microscopy (ssTEM) is the most promising tool for investigating the three-dimensional anatomy of the brain with nanometer resolution. Yet as the field progresses to larger volumes of brain tissue, new methods for high-yield, low-cost, and high-throughput serial sectioning are required. Here, we introduce LASSO (Loop-based Automated Serial Sectioning Operation), in which serial sections are processed in "batches." Batches are quantized groups of individual sections that, in LASSO, are cut with a diamond knife, picked up from an attached waterboat, and placed onto microfabricated TEM substrates using rapid, accurate, and repeatable robotic tools. Additionally, we introduce mathematical models for ssTEM with respect to yield, throughput, and cost to assess ssTEM scalability. To validate the method experimentally, we processed 729 serial sections of human brain tissue (~40 nm x 1 mm x 1 mm). Section yield was 727/729 (99.7%). Sections were placed accurately and repeatably (x-direction: -20 ± 110 μm (1 s.d.), y-direction: 60 ± 150 μm (1 s.d.)) with a mean cycle time of 43 s ± 12 s (1 s.d.). High-magnification (2.5 nm/px) TEM imaging was conducted to measure the image quality. We report no significant distortion, information loss, or substrate-derived artifacts in the TEM images. Quantitatively, the edge spread function across vesicle edges and image contrast were comparable, suggesting that LASSO does not negatively affect image quality. In total, LASSO compares favorably with traditional serial sectioning methods with respect to throughput, yield, and cost for large-scale experiments, and represents a flexible, scalable, and accessible technology platform to enable the next generation of neuroanatomical studies.
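The abstract mentions yield, throughput, and cost models for batch sectioning. The snippet below is only a back-of-the-envelope version using the reported per-section yield and cycle time plus made-up cost parameters; it is not the model published in the paper.

```python
# Illustrative scaling estimate for batch serial sectioning.
def sectioning_estimate(n_sections, yield_per_section=0.997,
                        cycle_time_s=43.0, cost_per_substrate=5.0,
                        sections_per_substrate=20):
    # cost_per_substrate and sections_per_substrate are hypothetical values
    expected_good = n_sections * yield_per_section
    total_hours = n_sections * cycle_time_s / 3600.0
    substrates = -(-n_sections // sections_per_substrate)   # ceiling division
    cost = substrates * cost_per_substrate
    return expected_good, total_hours, cost

print(sectioning_estimate(729))   # roughly 727 good sections, about 8.7 h of cutting
```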
BMC Neuroscience | 2012
Eva L. Dyer; Ueli Rutishauser; Richard G. Baraniuk
Degeneracy is a ubiquitous feature of computation and coding in biological systems. Degenerate codes—codes in which multiple code words have the same meaning or interpretation—arise in a wide range of biological processes, from the many-to-one mapping of codons to amino acids to the numerous instances of degenerate coding in the nervous system, both in the periphery and in the cortex. There are a number of reasons why neural systems might seek some degree of degeneracy; by enabling a number of inputs (stimuli) to be mapped to the same output code, the system is endowed with a certain level of robustness to noise and to cell death. Furthermore, degenerate codes provide invariance to certain types of variability in the raw sensory input. This is particularly useful in object recognition, for instance, where despite photometric variability, the representation of an object under different illumination conditions should be equivalent at some level of processing even if the neural representations for these states are slightly different. In tandem with nature's tendency to produce degeneracy and invariance in the brain's representation of sensory inputs, sparsity also appears to play an important role in neural coding; a sparse code is one that requires a small number of active neurons relative to the size of the population. Sparse codes are also associated with a high degree of specificity: each cell in the population only responds to a limited number of inputs. Sparse coding has been observed in the visual cortex of macaques, the mushroom body of locusts, and the auditory cortex of grasshoppers. Experimental studies suggest that neural systems might seek to strike an appropriate balance between the degeneracy and the sparsity of the neural code. Here, we describe a framework in which one can trade off between these two objectives in a natural way. In contrast to models for global sparse coding such as the locally competitive algorithm (LCA) [1], which find a population code that captures the primary features of the stimulus while minimizing the number of active neurons in the population, our goal is to find a representation of the sensory input that minimizes the number of 'groups' that must be active to encode the input. The idea is that when the excitatory cells are grouped in a meaningful way, the activation of a small set of groups might have a particular meaning to the organism. If we view the activation of these 'functional groups' as a high-level or coarse-scale representation of the stimulus, the resulting code is stable to a wide range of perturbations in the input space. This two-level representation (the coarse and the fine) provides a robust and degenerate mapping of the input space that is also sparse. After motivating the utility of group sparse coding in neural systems, we show how one can implement these sparse coding networks by coupling a collection of winner-take-all (WTA) networks that were first introduced for state-dependent computation in [2]. WTAs are particularly attractive as they represent neurally plausible microcircuits that have been hypothesized to serve as a computational primitive in complex cortical networks. An interesting property of these microcircuits is the fact that a single inhibitory unit delivers a common inhibitory signal to all of the excitatory units within the WTA.
This architecture is in stark contrast to the highly specified point-to-point inhibition structure required to faithfully implement an LCA, i.e., the inhibition between every pair of cells can be different. To solve sparse coding problems with collections of WTAs, we adapt the analytical approach in [1] and show how one can couple a collection of WTAs to descend an energy function that promotes grouped sparse representations. In addition to building invariances into the code, we demonstrate that there are a number of computational advantages to employing this type of grouped network model for sparse coding. First, we demonstrate that we require fewer "long-range" connections to produce group sparse codes with collections of WTAs than are required for a global LCA. Second, the lateral and recurrent inhibition in the network modulates the threshold of excitatory units collectively: this means that fewer interneurons are required to produce a representation, the number of long-range messages that must be sent is reduced, and the network converges to a solution faster than the global LCA.
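The coupled-WTA network is described as descending an energy function that promotes group-sparse codes. As an abstract stand-in (an optimization sketch, not a circuit model), the snippet below minimizes a group-sparse objective of that flavor with a proximal-gradient (ISTA-style) iteration; the dictionary, group structure, and penalty weight are all hypothetical.

```python
import numpy as np

def group_sparse_code(x, D, groups, lam=0.1, n_iter=200):
    """ISTA-style descent on 0.5*||x - D a||^2 + lam * sum_g ||a_g||_2,
    i.e. a group-sparse code; an abstract stand-in for the coupled-WTA dynamics."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - step * grad
        for g in groups:                            # group soft-threshold (proximal step)
            norm_g = np.linalg.norm(a[g])
            a[g] = 0.0 if norm_g == 0 else a[g] * max(0.0, 1 - step * lam / norm_g)
    return a

# Toy example: 4 groups of 5 units encoding a 30-dimensional input.
rng = np.random.default_rng(2)
D = rng.standard_normal((30, 20))
groups = [np.arange(i, i + 5) for i in range(0, 20, 5)]
code = group_sparse_code(rng.standard_normal(30), D, groups)
```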
Design Automation Conference | 2011
Eva L. Dyer; Mehrdad Majzoobi; Farinaz Koushanfar
Accurate characterization of spatial variation is essential for statistical performance analysis and modeling, post-silicon tuning, and yield analysis. Existing approaches for spatial modeling either assume that: (i) non-stationarities arise due to a smoothly varying trend component or that (ii) the process is stationary within regions associated with a predefined grid. While such assumptions may hold when profiling certain classes of variations, a number of recent modeling studies suggest that non-stationarities arise from both shifts in the process mean as well as fluctuations in the variance of the process. In order to provide a compact model for non-stationary process variations, we introduce a new hybrid spatial modeling framework that models the spatially varying random field as a union of non-overlapping rectangular regions where the process is assumed to be locally stationary within each region. To estimate the parameters in our hybrid spatial model, we develop a host of techniques to both estimate the change points in the random field and find an appropriate partitioning of the chip into disjoint regions where the field is locally stationary. We verify our models and results on measurements collected from 65 nm FPGAs.
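A toy version of the change-point estimation step follows: a single split point along one row of delay measurements is chosen by a Gaussian log-likelihood criterion that responds to shifts in both the mean and the variance. Applying such splits recursively along rows and columns would yield a rectangular partition into locally stationary regions, though the estimator in the paper differs; the simulated measurements and segment sizes below are placeholders.

```python
import numpy as np

def change_point(signal, min_seg=8):
    """Single change-point estimate sensitive to shifts in both mean and variance,
    via a Gaussian log-likelihood split criterion."""
    n = len(signal)
    best_k, best_gain = None, 0.0

    def nll(x):  # negative log-likelihood of a Gaussian fit (up to constants)
        return 0.5 * len(x) * np.log(np.var(x) + 1e-12)

    total = nll(signal)
    for k in range(min_seg, n - min_seg):
        gain = total - nll(signal[:k]) - nll(signal[k:])
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k

# Hypothetical delay measurements along one row of the die: both the mean and
# the spread shift partway across, mimicking a non-stationary process.
rng = np.random.default_rng(3)
row = np.concatenate([rng.normal(1.00, 0.02, 60), rng.normal(1.08, 0.06, 60)])
print(change_point(row))   # close to index 60
```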