Vida Ravanmehr
University of Arizona
Publications
Featured research published by Vida Ravanmehr.
IEEE/ACM Transactions on Computational Biology and Bioinformatics | 2012
Bane Vasic; Vida Ravanmehr; Anantha Raman Krishnan
We introduce a class of finite-system models of gene regulatory networks exhibiting behavior of the cell cycle. The network is an extension of a Boolean network model. The system spontaneously cycles through a finite set of internal states, tracking the increase of an external factor such as cell mass, and also exhibits checkpoints at which errors in gene expression levels due to cellular noise are automatically corrected. We present a 7-gene network based on Projective Geometry codes which can correct, at any given time, one gene expression error. The topology of the network is highly symmetric and requires only simple Boolean functions that can be synthesized using genes of various organisms. The attractor structure of the Boolean network contains a single cycle attractor, and the network is the smallest nontrivial one with such high robustness. The methodology allows construction of artificial gene regulatory networks with a number of phases larger than in the natural cell cycle.
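A minimal sketch of the checkpoint idea, assuming an invented 3-gene, 4-state cycle attractor rather than the paper's 7-gene projective-geometry network: a noisy state is first snapped back to the nearest attractor state (the error-correction step), then advanced one phase along the cycle.

```python
# Toy illustration of a cycle attractor with built-in error correction.
# The 4-state cycle below is hypothetical, not the paper's construction.
CYCLE = [(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def correct(state):
    """Checkpoint: snap a noisy state back to the closest cycle state."""
    return min(CYCLE, key=lambda s: hamming(s, state))

def step(state):
    """Correct a single gene-expression error, then advance one phase."""
    s = correct(state)
    return CYCLE[(CYCLE.index(s) + 1) % len(CYCLE)]

noisy = (1, 0, 1)   # one flipped gene away from (0, 0, 1)
print(step(noisy))  # -> (0, 1, 0): the error is absorbed before the update
```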
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2012
Vida Ravanmehr; Ludovic Danjean; Bane Vasic; David Declercq
We consider the Interval-Passing Algorithm (IPA), an iterative algorithm for the reconstruction of non-negative sparse real-valued signals from noise-free measurements. We first generalize the IPA by relaxing the original constraint that the measurement matrix be binary; the new algorithm operates on any non-negative sparse measurement matrix. We compare the performance of the generalized IPA with that of reconstruction algorithms based on 1) linear programming and 2) verification decoding. We then identify signals not recoverable by the IPA on a given measurement matrix and show that these signals are related to stopping sets, the configurations responsible for failures of iterative decoding algorithms on the binary erasure channel (BEC). Contrary to the results for iterative decoding on the BEC, the smallest stopping set of a measurement matrix is not the smallest configuration on which the IPA fails. We analyze the recovery of sparse signals supported on subsets of stopping sets via the IPA and provide sufficient conditions for exact recovery. Reconstruction performance of the IPA using the IEEE 802.16e LDPC codes as measurement matrices is given to show the effect of stopping sets on the performance of the IPA.
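A compact sketch of the interval mechanics described above, for a binary measurement matrix: each variable keeps a running interval [lo, hi], and each measurement tightens the intervals of its neighbors. This simplified variant (non-extrinsic updates, toy matrix) shows the mechanics only; it does not reproduce the paper's exact message schedule.

```python
import numpy as np

def ipa(A, y, iters=50):
    """Interval-Passing sketch: y = A @ x, x >= 0, A binary."""
    m, n = A.shape
    lo = np.zeros(n)
    # x_v <= y_c for every measurement c touching v, since x is non-negative
    hi = np.array([y[A[:, v] > 0].min() for v in range(n)])
    for _ in range(iters):
        for c in range(m):
            vs = np.flatnonzero(A[c])
            for v in vs:
                others = vs[vs != v]
                hi[v] = min(hi[v], y[c] - lo[others].sum())
                lo[v] = max(lo[v], y[c] - hi[others].sum(), 0.0)
        if np.allclose(lo, hi):   # all intervals collapsed: signal recovered
            break
    return lo, hi

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
x = np.array([0.0, 0.0, 3.0, 0.0])
print(ipa(A, A @ x))   # both bounds converge to x
```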
information theory and applications | 2015
Bane Vasic; Predrag Ivanis; Srdan Brkic; Vida Ravanmehr
In this paper we present our recent results on an iterative Gallager B decoder made of unreliable logic gates. We show evidence that the probabilistic behavior of a decoder built from unreliable components can be exploited to our advantage, leading to improved performance and reduced hardware redundancy. We provide examples of such decoder behavior and explain the phenomenon using iterative decoding dynamics. Iterative decoding can be viewed as a recursive procedure for minimizing the Bethe free energy, and randomness in a message update may help the decoder escape local minima. The decoder operates in a stochastic fashion, but the random perturbations require no additional hardware, as they are built into the faulty hardware itself.
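The sketch below, a simplified Gallager-B decoder on a toy (7,4) Hamming code, injects gate failures as random flips of check-node outputs with probability p. It only illustrates where such perturbations enter the message-passing loop; it is not the paper's decoder model or its analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])  # (7,4) Hamming code, for illustration

def gallager_b(y, H, p=0.0, iters=20):
    """Gallager-B decoding; every check output is flipped w.p. p (faulty gate)."""
    m, n = H.shape
    edges = list(zip(*np.nonzero(H)))
    vc = {(c, v): y[v] for (c, v) in edges}        # variable-to-check messages
    for _ in range(iters):
        # check node: XOR of the other incoming bits, then a possible gate fault
        cv = {}
        for (c, v) in edges:
            val = sum(vc[(c, u)] for u in np.flatnonzero(H[c]) if u != v) % 2
            cv[(c, v)] = int(val) ^ int(rng.random() < p)
        # variable node: flip the channel bit only if all other checks disagree
        for (c, v) in edges:
            others = [cv[(d, v)] for d in np.flatnonzero(H[:, v]) if d != c]
            vc[(c, v)] = 1 - y[v] if others and all(o != y[v] for o in others) else y[v]
        # tentative decision: flip where a strict majority of checks disagree
        xhat = np.array(y)
        for v in range(n):
            incoming = [cv[(c, v)] for c in np.flatnonzero(H[:, v])]
            if sum(b != y[v] for b in incoming) * 2 > len(incoming):
                xhat[v] = 1 - y[v]
        if not (H @ xhat % 2).any():
            break                                   # all checks satisfied
    return xhat

received = np.array([0, 0, 1, 0, 0, 0, 0])          # one error, all-zero codeword
print(gallager_b(received, H, p=0.05))              # typically recovers all zeros
```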
applied sciences in biomedical and communication technologies | 2011
Vida Ravanmehr; Ludovic Danjean; David Declercq; Bane Vasic
We consider iterative reconstruction for the Compressed Sensing (CS) problem over the reals. Iterative reconstruction admits an interpretation as a channel-coding problem and guarantees perfect reconstruction for properly chosen measurement matrices and sufficiently sparse error vectors. In this paper, we give a summary of reconstruction algorithms for compressed sensing and examine how iterative reconstruction performs on quasi-cyclic low-density parity-check (QC-LDPC) measurement matrices.
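For reference, a QC-LDPC parity-check (or measurement) matrix is assembled from circulant permutation blocks. A tiny constructor is sketched below; the shift values are made up, and the codes used in the paper are not reproduced.

```python
import numpy as np

def circulant(p, shift):
    """p x p identity cyclically shifted by `shift` columns."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_ldpc(shifts, p):
    """Assemble a QC matrix from a grid of shifts; -1 marks an all-zero block."""
    return np.block([[circulant(p, s) if s >= 0 else np.zeros((p, p), dtype=int)
                      for s in row] for row in shifts])

# Hypothetical 2 x 4 base matrix with circulant size p = 5
H = qc_ldpc([[0, 1, 2, -1],
             [1, 2, -1, 0]], p=5)
print(H.shape)            # (10, 20)
print(H.sum(axis=0)[:6])  # column weights: 2 in doubly covered block columns
```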
telecommunications forum | 2011
Ludovic Danjean; Vida Ravanmehr; David Declercq; Bane Vasic
In this paper we give an overview of current results in iterative reconstruction of sparse signals using parity-check matrices of low-density parity-check (LDPC) codes as measurement matrices in compressed sensing. We provide a detailed explanation of two iterative reconstruction algorithms, the Interval-Passing (IP) algorithm and the verification algorithm, and then compare their performance using parity-check matrices of quasi-cyclic low-density parity-check (QC-LDPC) codes with different column weights and rates.
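Of the two algorithms, the verification algorithm is the simpler to sketch: a zero measurement verifies all of its (non-negative) neighbors as zero, and a measurement with a single unverified neighbor reveals that neighbor's value, after which verified values are peeled off. The toy code below, assuming non-negative signals and a binary matrix, illustrates just these two rules; real verification decoding has further subtleties (e.g., false verification on pathological real-valued inputs) that are ignored here.

```python
import numpy as np

def verification_decode(A, y, max_iters=50):
    """Verification-decoding sketch for y = A @ x with x >= 0, A binary."""
    m, n = A.shape
    x = np.full(n, np.nan)              # NaN marks an unverified entry
    r = y.astype(float).copy()          # residual measurements
    for _ in range(max_iters):
        progress = False
        for c in range(m):
            unk = [v for v in np.flatnonzero(A[c]) if np.isnan(x[v])]
            if not unk:
                continue
            if r[c] == 0:               # zero rule: all unknown neighbors are 0
                vals = {v: 0.0 for v in unk}
            elif len(unk) == 1:         # degree-one rule: residual is the value
                vals = {unk[0]: r[c]}
            else:
                continue
            for v, val in vals.items():
                x[v] = val
                r[np.flatnonzero(A[:, v])] -= val   # peel v from its measurements
            progress = True
        if not progress:
            break
    return x

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
x = np.array([0.0, 0.0, 3.0, 0.0])
print(verification_decode(A, A @ x))    # -> [0. 0. 3. 0.]
```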
international symposium on information theory | 2014
Vida Ravanmehr; David Declercq; Bane Vasic
In this paper, we propose a new approach to constructing a class of check-hybrid generalized low-density parity-check (GLDPC) codes that are free of small trapping sets. The approach is based on converting selected checks of an LDPC code involved in a trapping set into super checks corresponding to a shorter error-correcting component code. We pursue two goals in constructing the check-hybrid GLDPC codes. First, super checks are introduced based on knowledge of the trapping sets of the global LDPC code; we show that by converting only some single checks to super checks, the decoder corrects the errors on a trapping set and hence eliminates it. Second, the number of super checks required to eliminate certain trapping sets is minimized to reduce the rate loss. We give an algorithm for finding a set of critical checks in a trapping set of an LDPC code and then provide upper bounds on the minimum number of critical checks needed to eliminate certain trapping sets in the parity-check matrix of an LDPC code. A possible fixed set for a class of check-hybrid codes is also given.
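A structural sketch of the conversion itself: a single parity check (one row of H) is replaced by several rows of a component code's parity-check matrix placed on the same variable neighborhood. The helper below uses a Hamming component purely to illustrate the shapes involved; the paper's construction selects critical checks from trapping sets and uses stronger component codes.

```python
import numpy as np

def to_super_check(H, c, H_comp):
    """Replace single check c of H by the super check defined by H_comp.

    H_comp must have as many columns as check c has neighbors; its rows
    become new parity checks acting on exactly that neighborhood.
    """
    support = np.flatnonzero(H[c])
    assert H_comp.shape[1] == len(support)
    rows = np.zeros((H_comp.shape[0], H.shape[1]), dtype=int)
    rows[:, support] = H_comp
    return np.vstack([np.delete(H, c, axis=0), rows])

# Toy global code with one degree-7 check (row 0), upgraded to a super check
H = np.zeros((3, 9), dtype=int)
H[0, :7] = 1
H[1, [0, 7]] = 1
H[2, [1, 8]] = 1
H_hamming = np.array([[1, 1, 1, 0, 1, 0, 0],   # (7,4) Hamming H, illustrative only
                      [1, 1, 0, 1, 0, 1, 0],
                      [1, 0, 1, 1, 0, 0, 1]])
print(to_super_check(H, 0, H_hamming).shape)   # (5, 9): one check became three
```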
Bioinformatics | 2018
Vida Ravanmehr; Minji Kim; Zhiying Wang; Olgica Milenkovic
Motivation: Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and they result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose ChIPWig, a lossless and lossy compression framework designed specifically for ChIP-seq Wig data. ChIPWig enables random access and summary-statistics lookups, and it is based on the asymptotic theory of optimal point-density design for nonuniform quantizers.
Results: We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced file sizes to merely 6% of the original, a 6-fold improvement in compression rate over bigWig. The lossy mode further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. Compression and decompression speeds are on the order of 0.2 sec/MB on general-purpose computers.
Availability and implementation: The source code and binaries, implemented in C++, are freely available for download at https://github.com/vidarmehr/ChIPWig-v2
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
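The lossy mode rests on nonuniform quantization of coverage values. As a rough illustration of what a nonuniform quantizer does on skewed, coverage-like data, here is a minimal Lloyd-Max style iteration; ChIPWig's actual quantizer is designed from optimal point-density theory and is not reproduced here.

```python
import numpy as np

def lloyd_max(samples, levels=8, iters=100):
    """Nonuniform scalar quantizer: alternate nearest-cell / centroid steps."""
    reps = np.quantile(samples, np.linspace(0, 1, levels))  # initial codebook
    for _ in range(iters):
        edges = (reps[:-1] + reps[1:]) / 2          # decision boundaries
        idx = np.searchsorted(edges, samples)       # assign to nearest rep
        reps = np.array([samples[idx == k].mean() if np.any(idx == k) else reps[k]
                         for k in range(levels)])   # move reps to centroids
    return reps

rng = np.random.default_rng(0)
coverage = rng.gamma(shape=0.5, scale=20.0, size=10_000)  # skewed, coverage-like
codebook = lloyd_max(coverage, levels=8)
edges = (codebook[:-1] + codebook[1:]) / 2
quantized = codebook[np.searchsorted(edges, coverage)]    # 3-bit lossy values
print(np.round(codebook, 2))  # reps crowd where the data density is highest
```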
transactions on emerging telecommunications technologies | 2016
Vida Ravanmehr; Mehrdad Khatami; David Declercq; Bane Vasic
In this paper, we propose a new approach to constructing a class of check-hybrid generalized low-density parity-check (CH-GLDPC) codes that are free of small trapping sets. The approach is based on converting selected check nodes involved in a trapping set into super checks corresponding to a 2-error-correcting component code. We pursue two main goals in constructing the check-hybrid codes. First, based on knowledge of the trapping sets of the global LDPC code, single parity checks are replaced by super checks to disable the trapping sets; we show that by converting specified single check nodes in a trapping set, denoted critical checks, to super checks, the parallel bit-flipping (PBF) decoder corrects the errors on the trapping set and hence eliminates it. Second, the rate loss caused by introducing super checks is minimized by finding the minimum number of such critical checks. We present an algorithm for finding critical checks in a trapping set of a column-weight-3 LDPC code and provide upper bounds on the minimum number of critical checks needed for the decoder to correct all error patterns on elementary trapping sets. Moreover, we provide a fixed set for a class of the constructed check-hybrid codes and study the guaranteed error-correction capability of CH-GLDPC codes: a CH-GLDPC code in which each variable node is connected to 2 super checks corresponding to a 2-error-correcting component code corrects up to 5 errors. The results are also extended to column-weight-4 LDPC codes. Finally, we investigate the elimination of trapping sets of a column-weight-3 LDPC code under the Gallager B decoding algorithm and generalize the results obtained for the PBF decoder to the Gallager B decoder.
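For reference, the parallel bit-flipping (PBF) decoder mentioned above in a minimal form: in each round, every bit whose unsatisfied checks form a strict majority of its checks is flipped, with all flips applied in parallel. The column-weight-2 cycle code is invented for the demo; thresholds and tie rules vary across PBF variants.

```python
import numpy as np

def pbf(y, H, iters=20):
    """Parallel bit flipping: flip bits with a strict majority of unsatisfied checks."""
    x = y.copy()
    for _ in range(iters):
        syndrome = H @ x % 2                      # 1 marks an unsatisfied check
        if not syndrome.any():
            break
        unsat = H.T @ syndrome                    # unsatisfied checks per bit
        deg = H.sum(axis=0)                       # number of checks per bit
        x = np.where(2 * unsat > deg, 1 - x, x)   # all flips happen at once
    return x

n = 6                                             # toy column-weight-2 cycle code
H = np.zeros((n, n), dtype=int)
for i in range(n):
    H[i, i] = H[i, (i + 1) % n] = 1
received = np.zeros(n, dtype=int)
received[3] = 1                                   # single error on all-zero codeword
print(pbf(received, H))                           # -> all zeros
```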
international symposium on information theory | 2014
Mehrdad Khatami; Vida Ravanmehr; Bane Vasic
Two-dimensional magnetic recording (TDMR) is a new paradigm in data storage that envisions densities up to 10 Tb/in² by drastically reducing the bit-to-grain ratio. To reach this goal, aggressive write (shingled writing) and read processes are used in TDMR. Kavcic et al. proposed a simple magnetic grain model, the granular tiling model, which captures the essence of the read/write process in TDMR. Capacity bounds for this model indicate that densities of 0.6 user bits per grain are possible; however, previous attempts have not come close to the channel capacity. In this paper, we provide a truly two-dimensional detection scheme for the granular tiling model based on generalized belief propagation (GBP). We formulate the detection problem on a factor graph and employ GBP to compute marginal a posteriori probabilities on the constructed graph. Simulation results show substantial improvements in detection. A lower bound on the symmetric information rate (SIR) is also derived for this model based on the GBP detector.
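The quantity the GBP detector approximates is the per-symbol marginal a posteriori probability on a factor graph. The brute-force toy below, with made-up pairwise factors and channel likelihoods over three binary symbols, computes those marginals exactly; GBP's value is doing this approximately on two-dimensional graphs far too large to enumerate.

```python
import itertools
import numpy as np

def pairwise(a, b):                 # hypothetical grain-coupling factor
    return 2.0 if a == b else 1.0

def likelihood(a, obs):             # hypothetical read-back channel
    return 0.9 if a == obs else 0.1

y = [0, 1, 1]                       # toy observed symbols
joint = {}
for x in itertools.product([0, 1], repeat=3):
    p = np.prod([likelihood(x[i], y[i]) for i in range(3)])
    p *= pairwise(x[0], x[1]) * pairwise(x[1], x[2])
    joint[x] = p
Z = sum(joint.values())             # partition function
for i in range(3):                  # marginal a posteriori probabilities
    p1 = sum(p for x, p in joint.items() if x[i] == 1) / Z
    print(f"P(x{i} = 1 | y) = {p1:.3f}")
```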
international symposium on turbo codes and iterative information processing | 2012
Vida Ravanmehr; Ludovic Danjean; Bane Vasic; David Declercq
This paper considers an iterative algorithm, the Interval-Passing Algorithm (IPA), used to reconstruct non-negative real signals from binary measurement matrices in compressed sensing (CS). The failures of the algorithm on stopping sets, which are also the non-decodable configurations in iterative decoding of LDPC codes over the binary erasure channel (BEC), show a connection between iterative reconstruction in CS and iterative decoding of LDPC codes over the BEC. In this paper, a stopping-set-based approach is used to analyze the recovery performance of the IPA. We show that a smallest stopping set is not necessarily a smallest configuration on which the IPA fails, and we provide sufficient conditions under which the IPA recovers a sparse signal whose non-zero values lie on a subset of a stopping set. Reconstruction performance of the IPA using IEEE 802.16e LDPC measurement matrices is provided to show the effect of stopping sets on the performance of the IPA.
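To make the stopping-set notion concrete: a stopping set is a set S of columns of the measurement matrix such that no row intersects S in exactly one position. A brute-force search over a small made-up binary matrix is sketched below; for real LDPC matrices, finding small stopping sets requires far more sophisticated methods.

```python
import itertools
import numpy as np

def is_stopping_set(A, S):
    """True if every row meeting columns S meets them at least twice."""
    counts = A[:, list(S)].sum(axis=1)
    return not np.any(counts == 1)

A = np.array([[1, 1, 0, 1, 0],      # small matrix invented for the demo
              [1, 0, 1, 0, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1]])
for size in range(1, A.shape[1] + 1):
    hits = [S for S in itertools.combinations(range(A.shape[1]), size)
            if is_stopping_set(A, S)]
    if hits:
        print("smallest stopping set(s):", hits)  # -> [(0, 1, 2)]
        break
```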