Florian Luisier
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Florian Luisier.
IEEE Transactions on Image Processing | 2007
Florian Luisier; Thierry Blu; Michael Unser
This paper introduces a new approach to orthonormal wavelet image denoising. Instead of postulating a statistical model for the wavelet coefficients, we directly parametrize the denoising process as a sum of elementary nonlinear processes with unknown weights. We then minimize an estimate of the mean square error between the clean image and the denoised one. The key point is that we have at our disposal a very accurate, statistically unbiased MSE estimate, Stein's unbiased risk estimate (SURE), that depends on the noisy image alone, not on the clean one. Like the MSE, this estimate is quadratic in the unknown weights, and its minimization amounts to solving a linear system of equations. The existence of this a priori estimate makes it unnecessary to devise a specific statistical model for the wavelet coefficients. Instead, and contrary to the custom in the literature, these coefficients are no longer considered random. We describe an interscale orthonormal wavelet thresholding algorithm based on this new approach and show its near-optimal performance, in terms of both quality and CPU requirements, by comparing it with three state-of-the-art nonredundant denoising algorithms on a large set of test images. An interesting fallout of this study is the development of a new, group-delay-based parent-child prediction in a wavelet dyadic tree.
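To make the SURE-LET mechanics concrete, here is a minimal Python sketch of the core optimization for a single orthonormal-wavelet subband. The two-term thresholding basis and the scale constant are illustrative assumptions, not the paper's exact interscale parametrization:

```python
import numpy as np

def sure_let_subband(w, sigma):
    """Minimal SURE-LET sketch for one orthonormal-wavelet subband.

    Denoiser: theta(w) = a1*w + a2*w*exp(-w^2/(12*sigma^2)); the weights
    a1, a2 minimize Stein's unbiased risk estimate, which is quadratic
    in them. The two-term basis and the constant 12 are illustrative
    choices, not the paper's exact parametrization.
    """
    w = np.asarray(w, dtype=float).ravel()
    g = np.exp(-w**2 / (12.0 * sigma**2))
    F = np.stack([w, w * g])                  # elementary processes F_k(w)
    # analytic derivatives dF_k/dw, needed for the SURE divergence term
    dF = np.stack([np.ones_like(w), g * (1.0 - w**2 / (6.0 * sigma**2))])
    M = F @ F.T                               # Gram matrix <F_k, F_l>
    c = F @ w - sigma**2 * dF.sum(axis=1)     # <F_k, w> - sigma^2 * div F_k
    a = np.linalg.solve(M, c)                 # quadratic SURE -> 2x2 solve
    return a @ F                              # denoised coefficients
```

Because the risk estimate is quadratic in the weights, the whole optimization collapses to this one small linear solve, which is what explains the low CPU cost the abstract reports.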
IEEE Transactions on Image Processing | 2007
Thierry Blu; Florian Luisier
We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error, Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using SURE removes the need to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain (typically an undecimated discrete wavelet transform, but we also address nonorthonormal transforms), this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a "tight" frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle: that the denoising process can be expressed as a linear combination of elementary denoising processes, a linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations, which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has huge potential.
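The image-domain risk estimate itself is easy to state. The sketch below evaluates SURE for an arbitrary black-box denoiser; the divergence term is approximated by a Monte-Carlo finite difference, a convenient stand-in for the analytic expressions the paper derives for LET denoisers:

```python
import numpy as np

def image_domain_sure(y, denoise, sigma, eps=1e-2, rng=None):
    """Evaluates SURE = ||F(y)-y||^2/N - sigma^2 + 2*sigma^2/N * div F(y)
    in the image domain for an arbitrary denoiser F. The divergence is
    approximated numerically here; the paper computes it analytically
    for LET denoisers.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = y.size
    f = denoise(y)
    b = rng.standard_normal(y.shape)          # random probe direction
    div = b.ravel() @ (denoise(y + eps * b) - f).ravel() / eps
    return np.sum((f - y) ** 2) / n - sigma**2 + 2.0 * sigma**2 * div / n
```

Sweeping a threshold and keeping the value that minimizes this quantity reproduces, up to Monte-Carlo error, the image-domain optimization the abstract describes.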
IEEE Transactions on Image Processing | 2011
Florian Luisier; Thierry Blu; Michael Unser
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
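The Poisson half of the risk estimate rests on a simple identity. The sketch below shows a pure-Poisson PURE for a pointwise estimator; the paper's mixed Poisson-Gaussian version adds a Stein-type correction for the Gaussian component, omitted here for brevity:

```python
import numpy as np

def pure_pointwise(y, f):
    """Pure-Poisson unbiased risk estimate for a pointwise estimator f,
    up to the constant ||x||^2/N that does not depend on f. Relies on
    the Poisson identity E[x*g(y)] = E[y*g(y-1)] for y ~ Poisson(x),
    which lets the unknown intensity x be replaced by the counts y.
    """
    n = y.size
    return (np.sum(f(y) ** 2) - 2.0 * np.sum(y * f(y - 1.0))) / n
```

As with SURE, choosing f as a linear expansion of thresholds makes this estimate quadratic in the weights, so the optimization again reduces to a linear system.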
Signal Processing | 2010
Florian Luisier; Cédric Vonesch; Thierry Blu; Michael Unser
We present a fast algorithm for image restoration in the presence of Poisson noise. Our approach is based on (1) the minimization of an unbiased estimate of the MSE for Poisson noise, (2) a linear parametrization of the denoising process and (3) the preservation of Poisson statistics across scales within the Haar DWT. The minimization of the MSE estimate is performed independently in each wavelet subband, but this is equivalent to a global image-domain MSE minimization, thanks to the orthogonality of Haar wavelets. This is an important difference from standard Poisson noise-removal methods, in particular those that rely on a non-linear preprocessing of the data to stabilize the variance. Our non-redundant interscale wavelet thresholding outperforms standard variance-stabilizing schemes, even when the latter are applied in a translation-invariant setting (cycle-spinning). It also achieves a quality similar to a state-of-the-art multiscale method that was specially developed for Poisson data. Considering that the computational complexity of our method is orders of magnitude lower, it is a very competitive alternative. The proposed approach is particularly promising in the context of low signal intensities and/or large data sets. This is illustrated experimentally with the denoising of low-count fluorescence micrographs of a biological sample.
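The scale-preservation property in point (3) is easy to see in code: in the unnormalized Haar transform the coarse channel is built from pairwise sums, and sums of independent Poisson variables are again Poisson. A minimal 1-D sketch (the paper works in 2-D with interscale thresholding):

```python
import numpy as np

def unnormalized_haar(y):
    """One level of the unnormalized 1-D Haar DWT: the coarse channel is
    built from pairwise sums, so independent Poisson pixels stay Poisson
    across scales, which is what lets the same risk estimate be applied
    at every level. Expects an even-length array.
    """
    y = np.asarray(y, dtype=float)
    s = y[0::2] + y[1::2]       # coarse channel, still Poisson-distributed
    d = y[0::2] - y[1::2]       # detail channel, to be thresholded
    return s, d

def inverse_unnormalized_haar(s, d):
    """Exact inverse of the forward step above."""
    y = np.empty(2 * s.size)
    y[0::2] = (s + d) / 2.0
    y[1::2] = (s - d) / 2.0
    return y
```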
IEEE Transactions on Image Processing | 2008
Florian Luisier; Thierry Blu
We propose a vector/matrix extension of our denoising algorithm initially developed for grayscale images, in order to efficiently process multichannel (e.g., color) images. This work follows our recently published SURE-LET approach where the denoising algorithm is parameterized as a linear expansion of thresholds (LET) and optimized using Stein's unbiased risk estimate (SURE). The proposed wavelet thresholding function is pointwise and depends on the coefficients of same location in the other channels, as well as on their parents in the coarser wavelet subband. A nonredundant, orthonormal wavelet transform is first applied to the noisy data, followed by the (subband-dependent) vector-valued thresholding of individual multichannel wavelet coefficients, which are finally brought back to the image domain by inverse wavelet transform. Extensive comparisons with the state-of-the-art multiresolution image denoising algorithms indicate that despite being nonredundant, our algorithm matches the quality of the best redundant approaches, while maintaining a high computational efficiency and a low CPU/memory consumption. An online Java demo illustrates these assertions.
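The idea of thresholding coefficient vectors jointly across channels can be illustrated as follows; the shrinkage function here is a hypothetical Mahalanobis-driven factor, not the paper's SURE-optimized function:

```python
import numpy as np

def joint_channel_shrink(W, C):
    """Hypothetical pointwise vector shrinkage across channels: each
    coefficient vector (one entry per channel, same spatial location)
    is scaled by a factor driven by its Mahalanobis norm under the
    interchannel noise covariance C. This mimics the joint-thresholding
    idea only; it is not the paper's SURE-optimized function.

    W: (channels, n) wavelet coefficients of one subband.
    """
    Cinv = np.linalg.inv(C)
    m = np.einsum('cn,cd,dn->n', W, Cinv, W)        # w^T C^{-1} w per location
    shrink = 1.0 - np.exp(-m / (2.0 * W.shape[0]))  # keep large, kill small
    return W * shrink
```

The point of the joint factor is that an edge present in all three color channels survives thresholding even when each individual channel's coefficient is weak.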
Computer Vision and Pattern Recognition | 2014
Shuang Wu; Sravanthi Bondugula; Florian Luisier; Xiaodan Zhuang; Pradeep Natarajan
Current state-of-the-art systems for visual content analysis require large training sets for each class of interest, and performance degrades rapidly with fewer examples. In this paper, we present a general framework for the zero-shot learning problem of performing high-level event detection with no training exemplars, using only textual descriptions. This task goes beyond the traditional zero-shot framework of adapting a given set of classes with training data to unseen classes. We leverage video and image collections with free-form text descriptions from widely available web sources to learn a large bank of concepts, in addition to using several off-the-shelf concept detectors, speech, and video text for representing videos. We utilize natural language processing technologies to generate event description features. The extracted features are then projected to a common high-dimensional space using text expansion, and similarity is computed in this space. We present extensive experimental results on the large TRECVID MED [26] corpus to demonstrate our approach. Our results show that the proposed concept detection methods significantly outperform current attribute classifiers such as Classemes [34], ObjectBank [21], and SUN attributes [28]. Further, we find that fusion, both within as well as between modalities, is crucial for optimal performance.
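Stripped of text expansion and fusion, the final matching step reduces to a similarity search in the shared concept space. A bare-bones sketch (array shapes and names are illustrative):

```python
import numpy as np

def zero_shot_event_scores(event_vec, video_mat):
    """Bare-bones zero-shot matching: the event's textual description and
    each video are mapped to vectors over a shared concept vocabulary and
    videos are ranked by cosine similarity. Text expansion and
    cross-modality fusion from the paper are omitted.

    event_vec: (d,) concept weights from the event description
    video_mat: (n, d) per-video concept detector scores
    """
    e = event_vec / (np.linalg.norm(event_vec) + 1e-12)
    V = video_mat / (np.linalg.norm(video_mat, axis=1, keepdims=True) + 1e-12)
    return V @ e                                     # one score per video
```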
IEEE Transactions on Circuits and Systems for Video Technology | 2010
Florian Luisier; Thierry Blu; Michael Unser
We propose an efficient orthonormal wavelet-domain video denoising algorithm based on an appropriate integration of motion compensation into an adapted version of our recently devised Stein's unbiased risk estimate linear expansion of thresholds (SURE-LET) approach. To take full advantage of the strong spatio-temporal correlations of neighboring frames, a global motion compensation followed by a selective block-matching is first applied to adjacent frames, which increases their temporal correlations without distorting the interframe noise statistics. Then, a multiframe interscale wavelet thresholding is performed to denoise the current central frame. Our simulations on standard grayscale video sequences at various noise levels demonstrate the efficiency of the proposed solution in reducing additive white Gaussian noise. Our results are competitive with most state-of-the-art redundant wavelet-based techniques, at a lighter computational load. By using a cycle-spinning strategy, our algorithm is in fact able to outperform these methods.
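The block-matching stage can be illustrated with a toy exhaustive search; the paper combines this with global motion compensation and a selective matching criterion, both omitted here:

```python
import numpy as np

def block_match(ref, cur, bs=8, search=4):
    """Toy exhaustive block matching between two frames. For each block
    of the current frame, the best-matching (minimum-SSD) block in a
    small search window of the reference frame is copied out, yielding a
    motion-compensated frame that can be stacked with the current one
    for multiframe wavelet thresholding.
    """
    H, W = cur.shape
    out = np.zeros_like(cur)
    for i in range(0, H - bs + 1, bs):
        for j in range(0, W - bs + 1, bs):
            blk = cur[i:i+bs, j:j+bs]
            best, best_ssd = (i, j), np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    r, c = i + di, j + dj
                    if 0 <= r <= H - bs and 0 <= c <= W - bs:
                        ssd = np.sum((ref[r:r+bs, c:c+bs] - blk) ** 2)
                        if ssd < best_ssd:
                            best_ssd, best = ssd, (r, c)
            out[i:i+bs, j:j+bs] = ref[best[0]:best[0]+bs, best[1]:best[1]+bs]
    return out
```

Because matching only rearranges reference-frame pixels, the interframe noise stays white and Gaussian, which is the property the SURE machinery needs.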
IEEE Transactions on Image Processing | 2013
Feng Xue; Florian Luisier; Thierry Blu
In this paper, we propose a novel deconvolution algorithm based on the minimization of a regularized Stein's unbiased risk estimate (SURE), which is a good estimate of the mean squared error. We linearly parametrize the deconvolution process by using multiple Wiener filters as elementary functions, followed by undecimated Haar-wavelet thresholding. Due to the quadratic nature of SURE and the linear parametrization, the deconvolution problem finally boils down to solving a linear system of equations, which is very fast and exact. The linear coefficients, i.e., the solution of the linear system of equations, constitute the best approximation of the optimal processing on the Wiener-Haar-threshold basis that we consider. In addition, the proposed multi-Wiener SURE-LET approach is applicable for both periodic and symmetric boundary conditions, and can thus be used in various practical scenarios. The very competitive (both in computation time and quality) results show that the proposed algorithm, which can be interpreted as a kind of nonlinear Wiener processing, can be used as a basic tool for building more sophisticated deconvolution algorithms.
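The first stage of the method, a bank of Wiener-type filters serving as the LET basis, is straightforward to sketch in the Fourier domain; the regularization weights here are illustrative, and the subsequent Haar thresholding plus SURE-driven combination are left out:

```python
import numpy as np

def multi_wiener_bank(y, h, lambdas=(1e-4, 1e-3, 1e-2)):
    """Applies several regularized-inverse (Wiener-type) filters to the
    blurred image, one per regularization weight. The paper then
    thresholds undecimated Haar coefficients of each output and finds
    the best linear combination by minimizing SURE; only the filter
    bank is built here.

    y: blurred noisy image
    h: blur kernel, same shape as y and centered at (0, 0), so that
       np.fft.fft2(h) is its frequency response.
    """
    Y, H = np.fft.fft2(y), np.fft.fft2(h)
    outs = []
    for lam in lambdas:
        G = np.conj(H) / (np.abs(H) ** 2 + lam)   # regularized inverse
        outs.append(np.real(np.fft.ifft2(G * Y)))
    return outs
```

Each weight trades deblurring sharpness against noise amplification; letting SURE pick the combination sidesteps the usual manual tuning of a single regularization parameter.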
International Symposium on Biomedical Imaging | 2008
Saskia Delpretti; Florian Luisier; Sathish Ramani; Thierry Blu; Michael Unser
Due to the random nature of photon emission and the various internal noise sources of the detectors, real time-lapse fluorescence microscopy images are usually modeled as the sum of a Poisson process plus some Gaussian white noise. In this paper, we propose an adaptation of our SURE-LET denoising strategy to take advantage of the potentially strong similarities between adjacent frames of the observed image sequence. To stabilize the noise variance, we first apply the generalized Anscombe transform using suitable parameters automatically estimated from the observed data. With the proposed algorithm, we show that, in a reasonable computation time, real time-lapse fluorescence microscopy images can be denoised with higher quality than with conventional algorithms.
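The variance-stabilization step uses the generalized Anscombe transform, whose closed form is standard; the parameters alpha (detector gain), sigma, and mu are exactly the quantities the paper estimates automatically from the data:

```python
import numpy as np

def generalized_anscombe(y, alpha, sigma, mu=0.0):
    """Generalized Anscombe transform: maps y = alpha*Poisson + Gaussian
    data to approximately unit-variance Gaussian, after which any
    Gaussian denoiser (SURE-LET in the paper) can be applied. alpha is
    the detector gain; sigma and mu are the Gaussian noise std and mean.
    """
    arg = alpha * y + 0.375 * alpha**2 + sigma**2 - alpha * mu
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))
```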
IEEE Transactions on Image Processing | 2012
Florian Luisier; Thierry Blu; Patrick J. Wolfe
In this paper, we derive an unbiased expression for the expected mean-squared error associated with continuously differentiable estimators of the noncentrality parameter of a chi-square random variable. We then consider the task of denoising squared-magnitude magnetic resonance (MR) image data, which are well modeled as independent noncentral chi-square random variables on two degrees of freedom. We consider two broad classes of linearly parameterized shrinkage estimators that can be optimized using our risk estimate, one in the general context of undecimated filterbank transforms, and the other in the specific case of the unnormalized Haar wavelet transform. The resultant algorithms are computationally tractable and improve upon most state-of-the-art methods for both simulated and actual MR image data.
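The data model is worth making explicit: the squared magnitude of a complex measurement with independent Gaussian noise in each channel is a (scaled) noncentral chi-square variable on two degrees of freedom. A small simulation sketch:

```python
import numpy as np

def simulate_squared_magnitude(a, sigma, rng=None):
    """Illustrates the paper's data model: with true complex signal a and
    i.i.d. Gaussian noise of std sigma in the real and imaginary
    channels, the squared magnitude is sigma^2 times a noncentral
    chi-square variable on 2 degrees of freedom, with noncentrality
    |a|^2 / sigma^2. This squared magnitude is what the denoiser sees.
    """
    rng = np.random.default_rng() if rng is None else rng
    re = np.real(a) + sigma * rng.standard_normal(np.shape(a))
    im = np.imag(a) + sigma * rng.standard_normal(np.shape(a))
    return re**2 + im**2
```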