Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ulugbek S. Kamilov is active.

Publication


Featured research published by Ulugbek S. Kamilov.


IEEE Transactions on Signal Processing | 2012

Message-Passing De-Quantization With Applications to Compressed Sensing

Ulugbek S. Kamilov; Vivek K Goyal; Sundeep Rangan

Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal-sometimes greatly so. This paper develops message-passing de-quantization (MPDQ) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular. The algorithm is based on generalized approximate message passing (GAMP), a recently-developed Gaussian approximation of loopy belief propagation for estimation with linear transforms and nonlinear componentwise-separable output channels. For MPDQ, scalar quantization of measurements is incorporated into the output channel formalism, leading to the first tractable and effective method for high-dimensional estimation problems involving non-regular scalar quantization. The algorithm is computationally simple and can incorporate arbitrary separable priors on the input vector including sparsity-inducing priors that arise in the context of compressed sensing. Moreover, under the assumption of a Gaussian measurement matrix with i.i.d. entries, the asymptotic error performance of MPDQ can be accurately predicted and tracked through a simple set of scalar state evolution equations. We additionally use state evolution to design MSE-optimal scalar quantizers for MPDQ signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers. In particular, our results show that non-regular quantization can greatly improve rate-distortion performance in some problems with oversampling or with undersampling combined with a sparsity-inducing prior.
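As a toy illustration of the regular scalar quantization that the abstract contrasts with non-regular designs (this is not the paper's MPDQ method, just a standard baseline), the sketch below quantizes Gaussian samples with a uniform mid-rise quantizer and compares the empirical distortion against the familiar high-rate approximation step²/12:

```python
import numpy as np

# Regular (uniform mid-rise) scalar quantizer: partition the real line into
# equal cells of width `step` and map each sample to its cell midpoint.
def uniform_quantize(y, step):
    return step * (np.floor(y / step) + 0.5)

rng = np.random.default_rng(0)
y = rng.standard_normal(100000)
q = uniform_quantize(y, step=0.5)
mse = np.mean((y - q) ** 2)
print(mse)  # close to the high-rate approximation step**2 / 12
```

A non-regular quantizer, by contrast, maps disjoint cells to the same output index, which is what makes reconstruction nontrivial and motivates the message-passing approach.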


Neural Information Processing Systems | 2014

Approximate Message Passing With Consistent Parameter Estimation and Applications to Sparse Learning

Ulugbek S. Kamilov; Sundeep Rangan; Alyson K. Fletcher; Michael Unser

We consider the estimation of an independent and identically distributed (i.i.d.) (possibly non-Gaussian) vector x ∈ Rn from measurements y ∈ Rm obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. A novel method, called adaptive generalized approximate message passing (adaptive GAMP), is presented. It enables the joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector x. We prove that, for large i.i.d. Gaussian transform matrices, the asymptotic componentwise behavior of the adaptive GAMP is predicted by a simple set of scalar state evolution equations. In addition, we show that the adaptive GAMP yields asymptotically consistent parameter estimates, when a certain maximum-likelihood estimation can be performed in each step. This implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values. Remarkably, this result applies to essentially arbitrary parametrizations of the unknown distributions, including nonlinear and non-Gaussian ones. The adaptive GAMP methodology thus provides a systematic, general and computationally efficient method applicable to a large range of linear-nonlinear models with provable guarantees.
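To give a concrete feel for the scalar state-evolution recursions mentioned above, here is a hypothetical Monte-Carlo sketch for the simplest setting: AMP with a soft-thresholding denoiser and a Bernoulli-Gaussian prior. All parameter values (undersampling ratio delta, sparsity rho, noise sigma, threshold multiplier alpha) are illustrative, not taken from the paper:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding denoiser used inside the AMP recursion."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def state_evolution(delta=0.5, rho=0.1, sigma=0.01, alpha=1.5,
                    n_iter=15, n_mc=200000, seed=0):
    """Monte-Carlo scalar state evolution:
    tau_{t+1}^2 = sigma^2 + (1/delta) * E[(soft(X + tau_t*Z, alpha*tau_t) - X)^2]."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_mc) * (rng.random(n_mc) < rho)  # Bernoulli-Gaussian X
    z = rng.standard_normal(n_mc)                             # Z ~ N(0, 1)
    tau2 = sigma**2 + np.mean(x**2) / delta                   # initialization at t = 0
    taus = [np.sqrt(tau2)]
    for _ in range(n_iter):
        tau = np.sqrt(tau2)
        mse = np.mean((soft(x + tau * z, alpha * tau) - x) ** 2)
        tau2 = sigma**2 + mse / delta
        taus.append(np.sqrt(tau2))
    return taus

taus = state_evolution()
print(taus[0], taus[-1])  # the effective noise level shrinks across iterations
```

Tracking a single scalar tau_t in place of the full high-dimensional iteration is what makes the asymptotic analysis, and the consistency proof for adaptive GAMP, tractable.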


arXiv: Optics | 2015

Learning approach to optical tomography

Ulugbek S. Kamilov; Ioannis N. Papadopoulos; Morteza H. Shoreh; Alexandre Goy; Cédric Vonesch; Michael Unser; Demetri Psaltis

Optical tomography has been widely investigated for biomedical imaging applications. In recent years, optical tomography has been combined with digital holography and has been employed to produce high-quality images of phase objects such as cells. In this paper we describe a method for imaging 3D phase objects in a tomographic configuration implemented by training an artificial neural network to reproduce the complex amplitude of the experimentally measured scattered light. The network is designed such that the voxel values of the refractive index of the 3D object are the variables that are adapted during the training process. We demonstrate the method experimentally by forming images of the 3D refractive index distribution of HeLa cells.


IEEE Signal Processing Letters | 2012

One-Bit Measurements With Adaptive Thresholds

Ulugbek S. Kamilov; Aurélien Bourquard; Arash Amini; Michael Unser

We introduce a new method for adaptive one-bit quantization of linear measurements and propose an algorithm for the recovery of signals based on generalized approximate message passing (GAMP). Our method exploits the prior statistical information on the signal for estimating the minimum-mean-squared error solution from one-bit measurements. Our approach allows the one-bit quantizer to use thresholds on the real line. Given the previous measurements, each new threshold is selected so as to partition the consistent region along its centroid computed by GAMP. We demonstrate that the proposed adaptive-quantization scheme with GAMP reconstruction greatly improves the performance of signal and image recovery from one-bit measurements.
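A toy scalar analogue of the threshold-adaptation idea (illustrative only, not the paper's GAMP-based method): if each new threshold is placed at the midpoint of the region still consistent with all previous bits, every one-bit measurement halves the uncertainty, exactly like bisection.

```python
# Adaptive one-bit measurement of a bounded scalar x in [lo, hi]:
# each threshold sits at the midpoint of the current consistent interval.
def adaptive_one_bit(x, lo=-1.0, hi=1.0, n_bits=20):
    for _ in range(n_bits):
        tau = 0.5 * (lo + hi)            # adaptive threshold
        if x >= tau:                     # one-bit measurement sign(x - tau)
            lo = tau
        else:
            hi = tau
    return 0.5 * (lo + hi)               # centroid of the consistent region

x_true = 0.3141
x_hat = adaptive_one_bit(x_true)
print(abs(x_hat - x_true))               # error is at most 2**-n_bits here
```

In the vector setting of the paper, the consistent region is no longer an interval, which is why its centroid is computed approximately via GAMP rather than in closed form.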


IEEE Signal Processing Letters | 2012

Wavelet Shrinkage With Consistent Cycle Spinning Generalizes Total Variation Denoising

Ulugbek S. Kamilov; Emrah Bostan; Michael Unser

We introduce a new wavelet-based method for the implementation of Total-Variation-type denoising. The data term is least-squares, while the regularization term is gradient-based. The particularity of our method is to exploit a link between the discrete gradient and wavelet shrinkage with cycle spinning, which we express by using redundant wavelets. The redundancy of the representation gives us the freedom to enforce additional constraints (e.g., normalization) on the solution to the denoising problem. We perform optimization in an augmented-Lagrangian framework, which decouples the difficult n-dimensional constrained-optimization problem into a sequence of n easier scalar unconstrained problems that we solve efficiently via traditional wavelet shrinkage. Our method can handle arbitrary gradient-based regularizers. In particular, it can be made to adhere to the popular principle of least total variation. It can also be used as a maximum a posteriori estimator for a variety of priors. We illustrate the performance of our method for image denoising and for the statistical estimation of sparse stochastic processes.
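The scalar unconstrained problems referred to above take the form min_u 0.5*(u - v)**2 + t*|u|, whose closed-form solution is classical soft-thresholding (shrinkage). A minimal sketch:

```python
import numpy as np

# Closed-form solution of the scalar subproblem min_u 0.5*(u - v)**2 + t*|u|,
# i.e. the classical soft-thresholding (shrinkage) operator.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.7, 1.5])
print(soft_threshold(v, 0.5))  # entries within the threshold are set to zero
```

The augmented-Lagrangian splitting reduces the coupled n-dimensional problem to independent applications of this operator on redundant wavelet coefficients.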


International Symposium on Information Theory | 2015

Inference for Generalized Linear Models via alternating directions and Bethe Free Energy minimization

Sundeep Rangan; Alyson K. Fletcher; Philip Schniter; Ulugbek S. Kamilov

Generalized Linear Models (GLMs), where a random vector x is observed through a noisy, possibly nonlinear, function of a linear transform z = Ax, arise in a range of applications in nonlinear filtering and regression. Approximate Message Passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models. AMP methods are computationally simple, general, and admit precise analyses with testable conditions for optimality for large i.i.d. transforms A. However, the algorithms can easily diverge for general transforms. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system limit approximation of the Bethe Free Energy (LSL-BFE). The proposed method uses a double-loop procedure, where the outer loop successively linearizes the LSL-BFE and the inner loop minimizes the linearized LSL-BFE using the Alternating Direction Method of Multipliers (ADMM). The proposed method, called ADMM-GAMP, is similar in structure to the original GAMP method, but with an additional least-squares minimization. It is shown that for strictly convex, smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations are also presented that demonstrate the robustness of the method for non-convex penalties as well.


IEEE Transactions on Image Processing | 2013

Sparse Stochastic Processes and Discretization of Linear Inverse Problems

Emrah Bostan; Ulugbek S. Kamilov; Masih Nilchian; Michael Unser

We present a novel statistically-based discretization paradigm and derive a class of maximum a posteriori (MAP) estimators for solving ill-conditioned linear inverse problems. We are guided by the theory of sparse stochastic processes, which specifies continuous-domain signals as solutions of linear stochastic differential equations. Accordingly, we show that the class of admissible priors for the discretized version of the signal is confined to the family of infinitely divisible distributions. Our estimators not only cover the well-studied methods of Tikhonov and l1-type regularizations as particular cases, but also open the door to a broader class of sparsity-promoting regularization schemes that are typically nonconvex. We provide an algorithm that handles the corresponding nonconvex problems and illustrate the use of our formalism by applying it to deconvolution, magnetic resonance imaging, and X-ray tomographic reconstruction problems. Finally, we compare the performance of estimators associated with models of increasing sparsity.


IEEE Signal Processing Letters | 2016

Learning Optimal Nonlinearities for Iterative Thresholding Algorithms

Ulugbek S. Kamilov; Hassan Mansour

The iterative shrinkage/thresholding algorithm (ISTA) is a well-studied method for finding sparse solutions to ill-posed inverse problems. In this letter, we present a data-driven scheme for learning optimal thresholding functions for ISTA. The proposed scheme is obtained by relating iterations of ISTA to layers of a simple feedforward neural network and developing a corresponding error backpropagation algorithm for fine-tuning the thresholding functions. Simulations on sparse statistical signals illustrate potential gains in estimation quality due to the proposed data-adaptive ISTA.
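For reference, here is a minimal sketch of the standard (non-learned) ISTA iteration that the letter starts from, applied to a synthetic sparse-recovery problem. The dimensions, regularization weight, and sparsity pattern are illustrative choices, not from the letter:

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Standard ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - A.T @ (A @ x - y) / L             # gradient step on the data term
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # thresholding step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)  # random sensing matrix
x0 = np.zeros(200)
x0[[5, 50, 120]] = [1.0, -2.0, 1.5]               # 3-sparse ground truth
y = A @ x0
x_hat = ista(A, y, lam=0.05)
print(np.linalg.norm(x_hat - x0))                 # small reconstruction error
```

The letter's contribution is to replace the fixed soft-thresholding step with a nonlinearity trained by backpropagation through the unrolled iterations.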


International Symposium on Information Theory | 2011

Optimal quantization for compressive sensing under message passing reconstruction

Ulugbek S. Kamilov; Vivek K Goyal; Sundeep Rangan

We consider the optimal quantization of compressive sensing measurements along with estimation from quantized samples using generalized approximate message passing (GAMP). GAMP is an iterative reconstruction scheme inspired by the belief propagation algorithm on bipartite graphs which generalizes approximate message passing (AMP) for arbitrary measurement channels. Its asymptotic error performance can be accurately predicted and tracked through the state evolution formalism. We utilize these results to design mean-square optimal scalar quantizers for GAMP signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers.
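As context for quantizer design, here is a sketch of the classical Lloyd-Max recursion for an MSE-optimal scalar quantizer of a standard Gaussian source. The paper's design additionally accounts for the downstream GAMP reconstruction; this shows only the conventional baseline, with illustrative parameters:

```python
import numpy as np

def lloyd_max(n_levels=4, n_iter=50, n_mc=500000, seed=0):
    """Lloyd-Max design of an MSE-optimal scalar quantizer for N(0, 1) samples."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_mc)
    centers = np.linspace(-2.0, 2.0, n_levels)        # initial codebook
    for _ in range(n_iter):
        edges = 0.5 * (centers[:-1] + centers[1:])    # nearest-neighbor cell boundaries
        idx = np.searchsorted(edges, x)
        centers = np.array([x[idx == k].mean() for k in range(n_levels)])  # centroids
    edges = 0.5 * (centers[:-1] + centers[1:])
    mse = np.mean((x - centers[np.searchsorted(edges, x)]) ** 2)
    return centers, mse

centers, mse = lloyd_max()
print(centers, mse)  # for 4 levels the distortion is roughly 0.117
```

The key point of the paper is that the quantizer minimizing this measurement-domain distortion is generally not the one minimizing the final reconstruction error, which state evolution makes it possible to optimize directly.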


IEEE Transactions on Signal Processing | 2013

MMSE Estimation of Sparse Lévy Processes

Ulugbek S. Kamilov; Pedram Pad; Arash Amini; Michael Unser

We investigate a stochastic signal-processing framework for signals with sparse derivatives, where the samples of a Lévy process are corrupted by noise. The proposed signal model covers the well-known Brownian motion and piecewise-constant Poisson process; moreover, the Lévy family also contains other interesting members exhibiting heavy-tail statistics that fulfill the requirements of compressibility. We characterize the maximum-a-posteriori probability (MAP) and minimum mean-square error (MMSE) estimators for such signals. Interestingly, some of the MAP estimators for the Lévy model coincide with popular signal-denoising algorithms (e.g., total-variation (TV) regularization). We propose a novel non-iterative implementation of the MMSE estimator based on the belief-propagation (BP) algorithm performed in the Fourier domain. Our algorithm takes advantage of the fact that the joint statistics of general Lévy processes are much easier to describe by their characteristic function, as the probability densities do not always admit closed-form expressions. We then use our new estimator as a benchmark to compare the performance of existing algorithms for the optimal recovery of gradient-sparse signals.

Collaboration


Dive into Ulugbek S. Kamilov's collaborations.

Top Co-Authors

Michael Unser, École Polytechnique Fédérale de Lausanne
Petros T. Boufounos, Mitsubishi Electric Research Laboratories
Dehong Liu, Mitsubishi Electric Research Laboratories
Hassan Mansour, Mitsubishi Electric Research Laboratories
Emrah Bostan, École Polytechnique Fédérale de Lausanne
Demetri Psaltis, École Polytechnique Fédérale de Lausanne
Alexandre Goy, École Polytechnique Fédérale de Lausanne
Ioannis N. Papadopoulos, École Polytechnique Fédérale de Lausanne