Pavel Kisilev
Technion – Israel Institute of Technology
Publication
Featured research published by Pavel Kisilev.
International Conference on Image Processing | 2001
Pavel Kisilev; Michael Zibulevsky; Yehoshua Y. Zeevi
The blind source separation problem is concerned with the extraction of the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. It was recently discovered that exploiting the sparsity of the sources in their representation over some signal dictionary dramatically improves the quality of separation. This is especially useful in image processing problems, wherein signals possess strong spatial sparsity. We use multiscale transforms, such as wavelets or wavelet packets, to decompose signals into sets of local features with various degrees of sparsity, and we exploit this intrinsic property to select the best (most sparse) subsets of features for further separation. Experiments with 1D signals and images demonstrate a significant improvement in separation quality.
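The subset-selection idea can be illustrated with a short sketch: decompose each mixture with a wavelet transform, score every detail band by a sparsity measure, and keep the sparsest band for the subsequent separation step. This is a minimal illustration, not the authors' code; the L1/L2 score, the db4 wavelet, and the function names are assumptions.

import numpy as np
import pywt

def sparsest_subband(mixtures, wavelet="db4", level=4):
    """mixtures: array of shape (n_mixtures, n_samples)."""
    # Decompose every mixture; coeffs[i] is [cA_L, cD_L, ..., cD_1].
    coeffs = [pywt.wavedec(x, wavelet, level=level) for x in mixtures]
    best, best_score = None, np.inf
    for band in range(1, level + 1):                  # detail bands only
        c = np.vstack([ci[band] for ci in coeffs])
        # L1/L2 ratio: smaller means sparser (energy concentrated in few coefficients).
        score = np.abs(c).sum() / (np.sqrt((c ** 2).sum()) + 1e-12)
        if score < best_score:
            best, best_score = c, score
    return best      # (n_mixtures, n_band_coeffs): input to any BSS estimator

# Usage: two synthetic sparse sources mixed by a random matrix.
rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 4096))                       # sparse-ish sources
A = rng.standard_normal((2, 2))                       # unknown mixing matrix
X = A @ S                                             # observed mixtures
Y = sparsest_subband(X)                               # sparsest features, fed to the separation step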
International Conference on Image Processing | 2001
Pavel Kisilev; Michael Zibulevsky; Yehoshua Y. Zeevi
A classical technique for reconstruction of emission tomography (ET) images from measured projections is based on maximum likelihood (ML) estimation, achieved with the expectation maximization (EM) algorithm. We incorporate wavelet transform (WT) and total variation (TV) based penalties into the ML framework, and compare the performance of the EM algorithm with that of the previously proposed conjugate barrier (CB) algorithm. Using the WT- and TV-based penalties allows one to embed regularization procedures into the iterative process. In the case of the WT-based penalty, the penalty is applied to a subset of wavelet coefficients at a desired resolution. It appears that the CB algorithm substantially outperforms the EM algorithm in penalized reconstruction. The properties of the optimization algorithms, along with the WT- and TV-based regularization, are demonstrated on reconstructions of a synthetic brain phantom, and the quality of reconstruction is compared with that of standard methods.
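For reference, the unpenalized ML-EM update that both the EM and CB variants build on has a simple multiplicative form. The sketch below is a generic baseline under stated assumptions (toy system matrix, Poisson counts); it does not include the WT/TV penalties or the conjugate barrier step discussed in the paper.

import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """A: (n_bins, n_pixels) system matrix; y: (n_bins,) measured counts."""
    x = np.ones(A.shape[1])              # non-negative initial image
    sens = A.sum(axis=0) + eps           # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)        # measured over predicted projections
        x = x * (A.T @ ratio) / sens     # multiplicative EM update
    return x

# Toy usage with a random system matrix and Poisson-distributed counts.
rng = np.random.default_rng(1)
A = rng.random((64, 32))
x_true = rng.random(32)
y = rng.poisson((A @ x_true) * 50)
x_hat = mlem(A, y)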
International Conference of the IEEE Engineering in Medicine and Biology Society | 2000
Pavel Kisilev; Matthew W. Jacobson; Yehoshua Y. Zeevi
A classic technique for reconstruction of Positron Emission Tomography (PET) images from measured projections is based on maximum likelihood (ML) parameter estimation with the Expectation Maximization (EM) algorithm. The authors incorporate the wavelet transform (WT) into the ML framework and obtain a new iterative algorithm that combines the local and multiresolution properties of the WT with the structure of the EM algorithm. Using the WT allows one to embed regularization procedures (filtering) into the iterative process, by imposing a new set of parameters on a subset of wavelet coefficients with a desired resolution. Properties of the proposed algorithm are demonstrated on reconstructions of a synthetic brain phantom.
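The idea of embedding filtering into the iterative process can be sketched as alternating one EM update with a wavelet-domain shrinkage of the current estimate. This is a hedged illustration only: the soft-threshold rule, the db2 wavelet, and the threshold value below are placeholders, not the regularization actually proposed in the paper.

import numpy as np
import pywt

def wavelet_shrink(x, wavelet="db2", level=3, thresh=0.05):
    """Soft-threshold the detail coefficients of x and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def em_with_wavelet_filter(A, y, n_iter=30, eps=1e-12):
    """Alternate one EM update with a wavelet-domain filtering of the estimate."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x + eps))) / sens    # EM step
        x = np.maximum(wavelet_shrink(x), 0.0)        # filtering step, keep non-negativity
    return x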
Optical Science and Technology, SPIE's 48th Annual Meeting | 2003
Pavel Kisilev; Michael Zibulevsky; Yehoshua Y. Zeevi
It was previously shown that sparse representations can improve and simplify the estimation of an unknown mixing matrix of a set of images, and thereby improve the quality of separation of the source images. Here we propose a multiscale approach to the problem of blind separation of images from a set of their mixtures. We take advantage of the properties of multiscale transforms, such as wavelet packets, and decompose signals and images into sets of local features. The resulting partial representations, organized on a tree data structure, exhibit various degrees of sparsity. We show how the separation error is affected by the sparsity of the decomposition coefficients, and by the misfit between the prior, formulated according to a probabilistic model of the coefficient distribution, and the actual distribution of the coefficients. Our error estimator, based on a Taylor expansion of the quasi-log-likelihood function, is used to select the best subsets of coefficients, which are in turn used for further separation. The performance of the proposed method is assessed by separation of noise-free and noisy data. Experiments with simulated and real signals and images demonstrate a significant improvement in separation quality over previously reported results.
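As a generic stand-in for the subset-selection and estimation stage (not the paper's quasi-log-likelihood criterion), the sketch below shows how, once a sparse set of coefficients is chosen, the columns of a 2-by-2 mixing matrix appear as dominant orientations in the coefficient scatter plot and can be read off from a weighted angle histogram. All names and thresholds are illustrative.

import numpy as np

def mixing_directions(Y, n_bins=180):
    """Y: (2, n_coeffs) array of sparse coefficients of the two mixtures."""
    angles = np.arctan2(Y[1], Y[0]) % np.pi            # fold antipodal points to [0, pi)
    weights = np.hypot(Y[0], Y[1])                     # weight points by magnitude
    hist, edges = np.histogram(angles, bins=n_bins, range=(0.0, np.pi), weights=weights)
    centers = 0.5 * (edges[:-1] + edges[1:])
    order = np.argsort(hist)[::-1]                     # bins sorted by weight, descending
    first = centers[order[0]]
    # Second direction: strongest bin that is not part of the first peak.
    second = next((centers[i] for i in order[1:]
                   if abs(centers[i] - first) > np.deg2rad(10)), centers[order[1]])
    A_hat = np.column_stack([[np.cos(first), np.sin(first)],
                             [np.cos(second), np.sin(second)]])
    return A_hat                                       # unmix with np.linalg.pinv(A_hat) @ mixtures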
Visual Communications and Image Processing | 2003
Pavel Kisilev; Michael Zibulevsky; Yehoshua Y. Zeevi
Proceedings of SPIE | 2001
Pavel Kisilev; Michael Zibulevsky; Yehoshua Y. Zeevi; Barak A. Pearlmutter
The blind source separation problem is concerned with extracting the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. It was recently discovered that exploiting the sparsity of the source representation in some signal dictionary dramatically improves the quality of separation. In this work we use the ability of multiscale transforms, such as wavelets or wavelet packets, to decompose signals into sets of local features with various degrees of sparsity, and we exploit this intrinsic property to select the best (most sparse) subsets of features for further separation. Experiments with simulated signals, musical sounds, and images demonstrate a significant improvement in separation quality.
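Once the mixing matrix has been estimated from the sparsest features, the sources are recovered in the original domain. A minimal sketch, assuming a square, well-conditioned mixing matrix; the toy matrix below stands in for the estimate.

import numpy as np

def separate(mixtures, A_hat):
    """mixtures: (n_mixtures, n_samples); A_hat: estimated mixing matrix."""
    # With as many mixtures as sources, unmixing reduces to a (pseudo)inverse.
    return np.linalg.pinv(A_hat) @ mixtures

# Usage with a known toy mixing matrix standing in for the estimate.
rng = np.random.default_rng(2)
S = rng.laplace(size=(2, 1000))                  # sparse toy sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])
S_hat = separate(A @ S, A)                       # recovered up to scale and permutation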
International Conference of the IEEE Engineering in Medicine and Biology Society | 1999
Pavel Kisilev; Yehoshua Y. Zeevi; Hillel Pratt
An algorithm is proposed for the estimation and reconstruction of evoked potentials (EP) masked by EEG activity and additional sources of clutter and noise. The local Karhunen-Loeve (LKL) basis, derived from the noisy EP signal, is used for optimal signal representation. The vector space of the noisy signal is decomposed by the LKL transform into two complementary orthogonal subspaces. The EP signal is estimated by modifying the signal-subspace components with a Wiener gain function.
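A minimal sketch of the subspace idea, assuming the KL basis is taken from the eigenvectors of the empirical covariance of windowed segments and that the noise is white with known variance; the Wiener weighting below is a generic choice, not necessarily the exact gain function used in the paper.

import numpy as np

def klt_wiener(noisy, frame=64, noise_var=1.0):
    """noisy: 1-D noisy evoked-potential record; noise_var: assumed white-noise power."""
    n_frames = len(noisy) // frame
    Xf = noisy[: n_frames * frame].reshape(n_frames, frame)
    R = (Xf.T @ Xf) / n_frames                    # empirical frame covariance
    w, V = np.linalg.eigh(R)                      # local KL basis in the columns of V
    C = Xf @ V                                    # KL-domain components of every frame
    signal_var = np.maximum(w - noise_var, 0.0)   # crude estimate of per-component signal power
    gain = signal_var / (signal_var + noise_var)  # Wiener-like gain
    return ((C * gain) @ V.T).reshape(-1)         # filtered record, trimmed to whole frames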
Mediterranean Electrotechnical Conference | 1998
Pavel Kisilev; Yehoshua Y. Zeevi; Hillel Pratt
An algorithm is proposed for the estimation and reconstruction of event-related signals corrupted by colored noise (e.g. evoked potentials masked by EEG activity and additional sources of clutter and noise). The local Karhunen-Loeve (LKL) basis, derived from the local autocorrelation function of the noisy signal, is used for optimal signal representation in the minimum mean squared error (MMSE) sense. The vector space of the noisy signal is decomposed by the LKL transform into two complementary orthogonal subspaces, i.e. the signal-plus-noise subspace and the noise-only subspace. The event-related signal is estimated from the signal-plus-noise subspace by modifying the corresponding LKL components with a Wiener-like gain function.
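For the colored-noise case, a hedged sketch: the noise autocorrelation is estimated from a signal-free (pre-stimulus) stretch of the record, projected onto the local KL basis, and used to set a per-component gain. The window length, the pre-stimulus assumption, and the spectral-subtraction-style gain are illustrative choices, not the paper's exact estimator.

import numpy as np
from scipy.linalg import toeplitz

def noise_acf(pre_stimulus, frame=64):
    """Biased sample autocorrelation (lags 0..frame-1) of a noise-only record."""
    x = pre_stimulus - pre_stimulus.mean()
    full = np.correlate(x, x, mode="full")
    return full[len(x) - 1 : len(x) - 1 + frame] / len(x)

def klt_colored(noisy, pre_stimulus, frame=64):
    n_frames = len(noisy) // frame
    Xf = noisy[: n_frames * frame].reshape(n_frames, frame)
    R_noisy = (Xf.T @ Xf) / n_frames                         # noisy-signal local autocorrelation
    w, V = np.linalg.eigh(R_noisy)                           # local KL basis
    R_noise = toeplitz(noise_acf(pre_stimulus, frame))       # colored-noise covariance (Toeplitz)
    noise_var = np.einsum("ij,jk,ki->i", V.T, R_noise, V)    # noise power in each KL component
    gain = np.maximum(w - noise_var, 0.0) / np.maximum(w, 1e-12)
    return ((Xf @ V * gain) @ V.T).reshape(-1)               # filtered record, time domain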
Neural Computation | 2001
Michael Zibulevsky; Barak A. Pearlmutter; Pau Bofill; Pavel Kisilev
Journal of Machine Learning Research | 2003
Pavel Kisilev; Michael Zibulevsky; Yehoshua Y. Zeevi