Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haris Vikalo is active.

Publications


Featured research published by Haris Vikalo.


IEEE Transactions on Wireless Communications | 2004

Iterative decoding for MIMO channels via modified sphere decoding

Haris Vikalo; Babak Hassibi

In recent years, soft iterative decoding techniques have been shown to greatly improve the bit error rate performance of various communication systems. For multiantenna systems employing space-time codes, however, it is not clear what is the best way to obtain the soft information required of the iterative scheme with low complexity. In this paper, we propose a modification of the Fincke-Pohst (sphere decoding) algorithm to estimate the maximum a posteriori probability of the received symbol sequence. The new algorithm solves a nonlinear integer least squares problem and, over a wide range of rates and signal-to-noise ratios, has polynomial-time complexity. Performance of the algorithm, combined with convolutional, turbo, and low-density parity check codes, is demonstrated on several multiantenna channels. The results for systems that employ space-time modulation schemes seem to indicate that the best performing schemes are those that support the highest mutual information between the transmitted and received signals, rather than the best diversity gain.
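A minimal sketch of the quantity involved: for a linear MIMO model y = H s + w with BPSK symbols, the soft information an iterative decoder needs is a per-bit log-likelihood ratio. The brute-force enumeration below (function and variable names are illustrative, not from the paper) makes that max-log LLR explicit; the paper's modified sphere decoder is a way to obtain it without exhaustive enumeration.

```python
# Illustrative only: brute-force max-log LLRs for a small MIMO system with
# BPSK symbols, y = H s + w. The modified sphere decoder of the paper obtains
# this soft information without enumerating all candidates; here we enumerate
# exhaustively just to make the quantity being approximated explicit.
import itertools
import numpy as np

def max_log_llrs(y, H, sigma2, prior_llrs=None):
    """Per-bit max-log LLRs for BPSK symbols s_i in {+1, -1}."""
    n = H.shape[1]
    if prior_llrs is None:
        prior_llrs = np.zeros(n)
    best_pos = np.full(n, -np.inf)   # best metric among vectors with s_i = +1
    best_neg = np.full(n, -np.inf)   # best metric among vectors with s_i = -1
    for s in itertools.product([1.0, -1.0], repeat=n):
        s = np.array(s)
        # log-likelihood plus log-prior (up to additive constants)
        metric = -np.linalg.norm(y - H @ s) ** 2 / (2 * sigma2) \
                 + 0.5 * prior_llrs @ s
        best_pos = np.where(s > 0, np.maximum(best_pos, metric), best_pos)
        best_neg = np.where(s < 0, np.maximum(best_neg, metric), best_neg)
    return best_pos - best_neg

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
s_true = rng.choice([1.0, -1.0], size=4)
y = H @ s_true + 0.3 * rng.standard_normal(4)
print(max_log_llrs(y, H, sigma2=0.09))
```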


IEEE Transactions on Signal Processing | 2005

On the sphere-decoding algorithm II. Generalizations, second-order statistics, and applications to communications

Haris Vikalo; Babak Hassibi

In Part I, we found a closed-form expression for the expected complexity of the sphere-decoding algorithm, both for the infinite and finite lattice. We continue the discussion in this paper by generalizing the results to the complex version of the problem and using the expected complexity expressions to determine situations where sphere decoding is practically feasible. In particular, we consider applications of sphere decoding to detection in multiantenna systems. We show that, for a wide range of signal-to-noise ratios (SNRs), rates, and numbers of antennas, the expected complexity is polynomial, in fact, often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can, in fact, be implemented in real time, a result with many practical implications. To provide complexity information beyond the mean, we derive a closed-form expression for the variance of the complexity of the sphere-decoding algorithm in a finite lattice. Furthermore, we consider the expected complexity of sphere decoding for channels with memory, where the lattice-generating matrix has a special Toeplitz structure. Results indicate that the expected complexity in this case is also polynomial over a wide range of SNRs, rates, data blocks, and channel impulse response lengths.


IEEE Journal of Selected Topics in Signal Processing | 2008

Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays

Farzad Parvaresh; Haris Vikalo; Sidhant Misra; Babak Hassibi

Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and, thus, a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and can also recover signals with less sparsity.
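A short sketch of the recovery setup being described, under the assumption of a noiseless linear model: each of the m spots mixes several probes, so the measurements are y = A x with a sparse 0/1 mixing matrix A and a sparse differential-expression vector x. The recovery routine here is generic orthogonal matching pursuit, not the paper's algorithm, and all names are illustrative.

```python
# Illustrative sketch of sparse recovery in the compressed-microarray setup:
# y = A x, where A is a sparse 0/1 mixing matrix (which probes sit in which
# spot) and x is the sparse differential-expression vector. Uses generic
# orthogonal matching pursuit, NOT the paper's lower-complexity algorithm.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y ~ A x by orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_s
    return x_hat

rng = np.random.default_rng(1)
n_targets, n_spots, sparsity = 60, 20, 3
A = (rng.random((n_spots, n_targets)) < 0.2).astype(float)  # sparse 0/1 mixing
x = np.zeros(n_targets)
x[rng.choice(n_targets, sparsity, replace=False)] = rng.standard_normal(sparsity)
y = A @ x
print(np.linalg.norm(omp(A, y, sparsity) - x))   # small recovery error
```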


IEEE Transactions on Wireless Communications | 2006

Rate maximization in multi-antenna broadcast channels with linear preprocessing

Mihailo Stojnic; Haris Vikalo; Babak Hassibi

The sum rate capacity of the multi-antenna broadcast channel has recently been computed. However, the search for efficient practical schemes that achieve it is still ongoing. In this paper, we focus on schemes with linear preprocessing of the transmitted data. We propose two criteria for the precoding matrix design: one maximizing the sum rate and the other maximizing the minimum rate among all users. The latter problem is shown to be quasiconvex and is solved exactly via a bisection method. In addition to precoding, we employ a signal scaling scheme that minimizes the average bit-error-rate (BER). The signal scaling scheme is posed as a convex optimization problem, and thus can be solved exactly via efficient interior-point methods. In terms of the achievable sum rate, the proposed technique significantly outperforms traditional channel inversion methods, while having comparable (in fact, often superior) BER performance.
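A toy illustration of the bisection idea for a quasiconvex max-min problem: maximize the common rate t subject to a total power budget, where the feasibility check assumes independent per-user effective gains (a stand-in for the paper's actual precoding matrix design, not a reproduction of it). All names and numbers below are illustrative.

```python
# Toy max-min rate via bisection: find the largest t such that every user
# can reach rate >= t under a total power constraint. The per-user gains
# stand in for effective channels after some fixed linear precoder; this is
# only meant to show why a quasiconvex max-min problem yields to bisection.
import numpy as np

def max_min_rate(gains, total_power, tol=1e-6):
    """Largest common rate t (bits/channel use) feasible for all users."""
    def feasible(t):
        # power needed so that log2(1 + p_k * g_k) >= t for every user k
        needed = (2.0 ** t - 1.0) / gains
        return needed.sum() <= total_power
    lo, hi = 0.0, 30.0                     # bracket on the common rate
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

gains = np.array([2.0, 0.7, 1.3, 0.4])     # effective channel gains per user
print(max_min_rate(gains, total_power=10.0))
```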


Conference on Decision and Control | 2010

Greedy sensor selection: Leveraging submodularity

Manohar Shamaiah; Siddhartha Banerjee; Haris Vikalo

We consider the problem of sensor selection in resource constrained sensor networks. The fusion center selects a subset of k sensors from an available pool of m sensors according to the maximum a posteriori or the maximum likelihood rule. We cast the sensor selection problem as the maximization of a submodular function over uniform matroids, and demonstrate that a greedy sensor selection algorithm achieves performance within (1 − 1/e) of the optimal solution. The greedy algorithm has a complexity of O(n³mk), where n is the dimension of the measurement space. The complexity of the algorithm is further reduced to O(n²mk) by exploiting certain structural features of the problem. An application to the sensor selection in linear dynamical systems where the fusion center employs Kalman filtering for state estimation is considered. Simulation results demonstrate the superior performance of the greedy sensor selection algorithm over competing techniques based on convex relaxation.
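A minimal sketch of the greedy step, assuming a linear-Gaussian measurement model: choose the sensor whose rank-one update most increases the log-determinant of the posterior information matrix, a monotone submodular objective. The naive version below recomputes determinants at every step; the O(n²mk) variant in the paper exploits structure that this sketch omits.

```python
# Minimal greedy sensor selection for a linear-Gaussian model: pick k of m
# sensors maximizing log det of the posterior information matrix (monotone
# submodular), so greedy selection is within (1 - 1/e) of optimal. Naive
# recomputation each step; the paper's faster variant is not reproduced here.
import numpy as np

def greedy_select(H, sigma2, prior_info, k):
    """H: (m, n) stack of candidate sensor rows; returns chosen indices."""
    m, n = H.shape
    chosen, info = [], prior_info.copy()
    for _ in range(k):
        best_gain, best_j = -np.inf, None
        for j in range(m):
            if j in chosen:
                continue
            h = H[j:j + 1].T                     # (n, 1) measurement vector
            cand = info + (h @ h.T) / sigma2     # rank-one information update
            gain = np.linalg.slogdet(cand)[1] - np.linalg.slogdet(info)[1]
            if gain > best_gain:
                best_gain, best_j = gain, j
        chosen.append(best_j)
        h = H[best_j:best_j + 1].T
        info = info + (h @ h.T) / sigma2
    return chosen

rng = np.random.default_rng(2)
H = rng.standard_normal((12, 3))                 # 12 candidate sensors, 3 states
print(greedy_select(H, sigma2=0.5, prior_info=np.eye(3), k=4))
```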


International Conference on Acoustics, Speech, and Signal Processing | 2002

On the expected complexity of integer least-squares problems

Babak Hassibi; Haris Vikalo

The problem of finding the least-squares solution to a system of linear equations where the unknown vector is comprised of integers, but the coefficient matrix and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary, but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore in this paper, rather than dwell on the worst-case complexity of the integer-least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the “sphere decoding” algorithm of Fincke and Pohst we find a closed-form expression for the expected complexity and show that for a wide range of noise variances the expected complexity is polynomial, in fact often sub-cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can in fact be implemented in real time, a result with many practical implications.
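A compact sketch of a Fincke-Pohst-style depth-first search for the integer least-squares problem min_s ||y − H s||² over a small finite alphabet. A visited-node counter makes the "complexity" whose expectation the paper analyzes concrete; refinements such as radius updating schedules and column reordering are omitted, and all names are illustrative.

```python
# Sketch of a Fincke-Pohst-style sphere decoder with a node counter.
# After the QR factorization H = Q R, ||y - H s||^2 = ||Q^T y - R s||^2,
# so symbols are fixed from the last coordinate upward, pruning branches
# whose partial cost already exceeds the best cost found within the radius.
import numpy as np

def sphere_decode(y, H, alphabet, radius):
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = {"cost": radius ** 2, "s": None, "nodes": 0}

    def search(level, s_partial, cost):
        best["nodes"] += 1
        if cost > best["cost"]:
            return                                   # prune this branch
        if level < 0:                                # full vector reached
            best["cost"], best["s"] = cost, s_partial.copy()
            return
        for a in alphabet:
            s_partial[level] = a
            fixed = s_partial[level:]                # symbols fixed so far
            inc = (z[level] - R[level, level:] @ fixed) ** 2
            search(level - 1, s_partial, cost + inc)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"], best["nodes"]

rng = np.random.default_rng(3)
H = rng.standard_normal((4, 4))
s_true = rng.choice([-1.0, 1.0], size=4)
y = H @ s_true + 0.2 * rng.standard_normal(4)
s_hat, nodes = sphere_decode(y, H, alphabet=(-1.0, 1.0), radius=3.0)
print(s_hat, nodes)       # ML estimate within the radius, and nodes visited
```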


Journal of Applied Physics | 2007

On noise processes and limits of performance in biosensors

Arjang Hassibi; Haris Vikalo; Ali Hajimiri

In this paper, we present a comprehensive stochastic model describing the measurement uncertainty, output signal, and limits of detection of affinity-based biosensors. The biochemical events within the biosensor platform are modeled by a Markov stochastic process, describing both the probabilistic mass transfer and the interactions of analytes with the capturing probes. To generalize this model and incorporate the detection process, we add noisy signal transduction and amplification stages to the Markov model. Using this approach, we are able to evaluate not only the output signal and the statistics of its fluctuation but also the noise contributions of each stage within the biosensor platform. Furthermore, we apply our formulations to define the signal-to-noise ratio, noise figure, and detection dynamic range of affinity-based biosensors. Motivated by the platforms encountered in practice, we construct the noise model of a number of widely used systems. The results of this study show that our formulations predict the behavioral characteristics of affinity-based biosensors, which indicates the validity of the model.


EURASIP Journal on Advances in Signal Processing | 2002

Maximum-likelihood sequence detection of multiple antenna systems over dispersive channels via sphere decoding

Haris Vikalo; Babak Hassibi

Multiple antenna systems are capable of providing high data rate transmissions over wireless channels. When the channels are dispersive, the signal at each receive antenna is a combination of both the current and past symbols sent from all transmit antennas corrupted by noise. The optimal receiver is a maximum-likelihood sequence detector and is often considered to be practically infeasible due to high computational complexity (exponential in the number of antennas and the channel memory). Therefore, in practice, one often settles for a less complex suboptimal receiver structure, typically with an equalizer meant to suppress both the intersymbol and interuser interference, followed by the decoder. We propose sphere decoding for sequence detection in multiple-antenna communication systems over dispersive channels. Sphere decoding provides the maximum-likelihood estimate with computational complexity comparable to the standard space-time decision-feedback equalizing (DFE) algorithms. The performance and complexity of sphere decoding are compared with those of the DFE algorithm by means of simulations.
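A small sketch of why sphere decoding applies here: stacking a block of received samples from a dispersive MIMO channel with matrix-valued impulse-response taps gives a linear model y = H_blk s + w with a block-Toeplitz H_blk, so maximum-likelihood sequence detection again reduces to an integer least-squares problem. Dimensions and names below are illustrative only.

```python
# Build the block-Toeplitz channel matrix of a dispersive MIMO channel:
# taps h[0..L-1] are (n_rx, n_tx) matrices, and a length-T symbol block
# stacks into y = H_blk s + w, an integer least-squares detection problem.
import numpy as np

def block_toeplitz(taps, block_len):
    """taps: list of (n_rx, n_tx) matrices h[0], ..., h[L-1]."""
    n_rx, n_tx = taps[0].shape
    L = len(taps)
    H = np.zeros((block_len * n_rx, block_len * n_tx))
    for t in range(block_len):             # receive-time index within block
        for l in range(L):                 # channel memory tap
            if t - l < 0:
                continue                   # ignore symbols before the block
            rows = slice(t * n_rx, (t + 1) * n_rx)
            cols = slice((t - l) * n_tx, (t - l + 1) * n_tx)
            H[rows, cols] = taps[l]
    return H

rng = np.random.default_rng(4)
taps = [rng.standard_normal((2, 2)) for _ in range(3)]   # L = 3 taps, 2x2 MIMO
H_blk = block_toeplitz(taps, block_len=5)
print(H_blk.shape)                                        # (10, 10)
```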


International Symposium on Low Power Electronics and Design | 2000

Energy efficient design of portable wireless systems

Tajana Simunic; Haris Vikalo; Peter W. Glynn; G. De Micheli

Portable wireless systems require long battery lifetime while still delivering high performance. The major contribution of this work is combining new power management (PM) and power control (PC) algorithms to trade off performance for power consumption at the system level in portable devices. First we present the formulation for the solution of the PM policy optimization based on renewal theory. Next we present the formulation for power control (PC) of the wireless link that enables us to obtain further energy savings when the system is active. Finally, we discuss the measurements obtained for a set of PM and PC algorithms implemented for the WLAN card on a laptop. The PM policy we developed based on our renewal model consumes three times less power than the default PM policy for the WLAN card, while still delivering high performance. Power control saves an additional 53% in energy at the same bit error rate. With both power control and power management algorithms in place, we observe, on average, a factor of six in power savings.
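A quick arithmetic check on how the reported figures compose, under the assumption that power control saves 53% of the energy remaining after power management: a 3x reduction followed by a 53% saving corresponds to an overall reduction of about 3 / (1 − 0.53) ≈ 6.4, consistent with the reported factor of six.

```python
# Composition of the reported savings, under the assumption stated above.
pm_factor = 3.0                 # power management alone: 3x less power
pc_saving = 0.53                # power control saves 53% of what remains
overall = pm_factor / (1.0 - pc_saving)
print(round(overall, 2))        # ~6.38, i.e. roughly a factor of six
```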


IEEE Transactions on Signal Processing | 2004

On the capacity of frequency-selective channels in training-based transmission schemes

Haris Vikalo; Babak Hassibi; Bertrand M. Hochwald

Communication systems transmitting over frequency-selective channels generally employ an equalizer to recover the transmitted sequence corrupted by intersymbol interference (ISI). Most practical systems use a training sequence to learn the channel impulse response and thereby design the equalizer. An important issue is determining the optimal amount of training: too little training, and the channel is not learned properly; too much training, and there is not enough time available to transmit data before the channel changes and must be learned anew. We use an information-theoretic approach to find the optimal parameters in training-based transmission schemes for channels described by a block-fading model. The optimal length of the training interval is found by maximizing a lower bound on the training-based channel capacity. When the transmitter is capable of providing two distinct transmission power levels (one for training and one for data transmission), the optimal length of the training interval is shown to be equal to the length of the channel. Further, we show that at high SNR, training-based schemes achieve the capacity of block-fading frequency-selective channels, whereas at low SNR, they are highly suboptimal.

Collaboration


Dive into Haris Vikalo's collaborations.

Top Co-Authors

Babak Hassibi, California Institute of Technology
Arjang Hassibi, University of Texas at Austin
Manohar Shamaiah, University of Texas at Austin
Abolfazl Hashemi, University of Texas at Austin
Shreepriya Das, University of Texas at Austin
Sriram Vishwanath, University of Texas at Austin
Xiaohu Shen, University of Texas at Austin
Somsubhra Barik, University of Texas at Austin