Onur G. Guleryuz
LG Electronics
Publications
Featured research published by Onur G. Guleryuz.
IEEE Signal Processing Letters | 1996
Zixiang Xiong; Onur G. Guleryuz; Michael T. Orchard
Since Shapiro (see ibid., vol. 41, no. 12, p. 3445, 1993) published his work on embedded zerotree wavelet (EZW) image coding, there has been increased research activity in image coding centered around wavelets. We first point out that the wavelet transform is just one member of a family of linear transformations, and that the discrete cosine transform (DCT) can also be coupled with an embedded zerotree quantizer. We then present such an image coder that outperforms any other DCT-based coder published in the literature, including that of the Joint Photographic Experts Group (JPEG). Moreover, our DCT-based embedded image coder gives higher peak signal-to-noise ratios (PSNR) than the quoted results of Shapiro's EZW coder.
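The observation that a blockwise DCT can feed an embedded zerotree quantizer rests on regrouping same-frequency coefficients from all blocks into subband-like images. The sketch below (Python/NumPy with SciPy's DCT; the 8x8 block size and the dense loops are illustrative simplifications, and the zerotree quantizer itself is omitted) shows only that reorganization step:

    import numpy as np
    from scipy.fft import dctn

    def blockwise_dct_to_subbands(img, b=8):
        """Compute a blockwise DCT and regroup coefficient (u, v) of every
        block into one "subband" image, producing a wavelet-like layout
        that an embedded zerotree coder can scan."""
        H, W = img.shape
        assert H % b == 0 and W % b == 0
        out = np.empty((H, W), dtype=float)
        for i in range(0, H, b):
            for j in range(0, W, b):
                block = dctn(img[i:i + b, j:j + b], norm='ortho')
                for u in range(b):
                    for v in range(b):
                        out[u * (H // b) + i // b, v * (W // b) + j // b] = block[u, v]
        return out

A zerotree quantizer such as EZW can then traverse this layout exactly as it would a wavelet decomposition, coding parent-child relations across the reorganized "subbands".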
IEEE Transactions on Image Processing | 2006
Onur G. Guleryuz
We study the robust estimation of missing regions in images and video using adaptive, sparse reconstructions. Our primary application is on missing regions of pixels containing textures, edges, and other image features that are not readily handled by prevalent estimation and recovery algorithms. We assume that we are given a linear transform that is expected to provide sparse decompositions over missing regions such that a portion of the transform coefficients over missing regions are zero or close to zero. We adaptively determine these small magnitude coefficients through thresholding, establish sparsity constraints, and estimate missing regions in images using information surrounding these regions. Unlike prevalent algorithms, our approach does not necessitate any complex preconditioning, segmentation, or edge detection steps, and it can be written as a sequence of denoising operations. We show that the region types we can effectively estimate in a mean-squared error sense are those for which the given transform provides a close approximation using sparse nonlinear approximants. We show the nature of the constructed estimators and how these estimators relate to the utilized transform and its sparsity over regions of interest. The developed estimation framework is general, and can readily be applied to other nonstationary signals with a suitable choice of linear transforms. Part I discusses fundamental issues, and Part II is devoted to adaptive algorithms with extensive simulation examples that demonstrate the power of the proposed techniques.
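In symbols (the notation here is chosen for illustration and is not taken verbatim from the paper), the core recovery step can be written as follows: let y be the image with its observed pixels, G the given linear transform, and V the set of insignificant coefficient indices identified by thresholding. The missing pixels are then estimated by keeping the observed pixels fixed while pushing the coefficients indexed by V toward zero,

    \hat{x} \;=\; \arg\min_{x} \;\| S_V\, G x \|_2^2
    \quad \text{subject to} \quad x_i = y_i \ \text{for every observed pixel } i,

where S_V selects the coefficients in V. Re-estimating V from the current reconstruction at each pass is what makes the procedure adaptive, and each pass takes the form of a denoising operation.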
IEEE Transactions on Information Theory | 2002
Albert Cohen; Ingrid Daubechies; Onur G. Guleryuz; Michael T. Orchard
This paper provides a mathematical analysis of transform compression in its relationship to linear and nonlinear approximation theory. Contrasting linear and nonlinear approximation spaces, we show that there are interesting classes of functions/random processes which are much more compactly represented by wavelet-based nonlinear approximation. These classes include locally smooth signals that have singularities, and provide a model for many signals encountered in practice, in particular for images. However, we also show that nonlinear approximation results do not always translate to efficient compression strategies in a rate-distortion sense. Based on this observation, we construct compression techniques and formulate the family of functions/stochastic processes for which they provide efficient descriptions in a rate-distortion sense. We show that this family invariably leads to Besov spaces, yielding a natural relationship among Besov smoothness, linear/nonlinear approximation order, and compression performance in a rate-distortion sense. The designed compression techniques show similarities to modern high-performance transform codecs, allowing us to establish relevant rate-distortion estimates and identify performance limits.
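For reference, the two approximation modes contrasted above can be written compactly (generic notation, not the paper's): with an orthonormal basis \{\psi_k\}, linear approximation retains the first K coefficients in a fixed order, while nonlinear approximation retains the K coefficients that are largest in magnitude,

    f_K^{\mathrm{lin}} = \sum_{k=1}^{K} \langle f, \psi_k \rangle\, \psi_k,
    \qquad
    f_K^{\mathrm{nl}} = \sum_{k \in \Lambda_K(f)} \langle f, \psi_k \rangle\, \psi_k,

where \Lambda_K(f) indexes the K largest coefficients of f. For piecewise-smooth signals with isolated singularities, the nonlinear error decays much faster in K for wavelet bases than the linear error does, and the paper examines when (and when not) this advantage carries over to rate-distortion performance.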
IEEE Transactions on Image Processing | 2006
Onur G. Guleryuz
We combine the main ideas introduced in Part I with adaptive techniques to arrive at a powerful algorithm that estimates missing data in nonstationary signals. The proposed approach operates automatically based on a chosen linear transform that is expected to provide sparse decompositions over missing regions such that a portion of the transform coefficients over missing regions are zero or close to zero. Unlike prevalent algorithms, our method does not necessitate any complex preconditioning, segmentation, or edge detection steps, and it can be written as a progression of denoising operations. We show that constructing estimates based on nonlinear approximants is fundamentally a nonconvex problem and we propose a progressive algorithm that is designed to deal with this issue directly. The algorithm is applied to images through an extensive set of simulation examples, primarily on missing regions containing textures, edges, and other image features that are not readily handled by established estimation and recovery methods. We discuss the properties required of good transforms, and in conjunction, show the types of regions over which well-known transforms provide good predictors. We further discuss extensions of the algorithm where the utilized transforms are also chosen adaptively, where unpredictable signal components in the progressions are identified and not predicted, and where the prediction scenario is more general.
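A minimal sketch of the kind of progression described above is given below (Python/NumPy; the dense-transform interface, threshold schedule, and iteration counts are illustrative assumptions, not the paper's tuned algorithm): starting from a simple initial fill, the missing pixels are repeatedly re-estimated by hard-thresholding in the transform domain, with the threshold lowered in stages so that stronger structure is committed to first.

    import numpy as np

    def recover_missing(y, mask, T, Tmax=100.0, Tmin=10.0, layers=10, iters=20):
        """Progressive recovery of missing pixels by iterated hard-thresholding.
        y    : observed image (values where mask == 0 are arbitrary)
        mask : 1 where pixels are known, 0 where they are missing
        T    : orthonormal transform as a dense matrix acting on vectorized images
        """
        x = y.copy().astype(float)
        x[mask == 0] = y[mask == 1].mean()           # simple initial fill
        for thr in np.linspace(Tmax, Tmin, layers):  # progressively lower threshold
            for _ in range(iters):
                c = T @ x.ravel()
                c[np.abs(c) < thr] = 0.0             # hard-threshold (denoise)
                z = (T.T @ c).reshape(x.shape)       # back to the pixel domain
                x = np.where(mask == 1, y, z)        # keep known pixels untouched
        return x

In practice the paper works with overcomplete, locally applied transforms (e.g., shifted DCTs) rather than one dense matrix, but each pass has the same denoise-then-reimpose structure.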
IEEE Transactions on Image Processing | 2007
Onur G. Guleryuz
We consider the scenario where additive, independent, and identically distributed (i.i.d.) noise in an image is removed using an overcomplete set of linear transforms and thresholding. Rather than the standard approach, where one obtains the denoised signal by ad hoc averaging of the denoised estimates provided by denoising with each of the transforms, we formulate the optimal combination as a conditional linear estimation problem and solve it for optimal estimates. Our approach is independent of the utilized transforms and the thresholding scheme, and as we illustrate using oracle-based denoisers, it extends established work by exploiting a separate degree of freedom that is, in general, not reachable using previous techniques. Our derivation of the optimal estimates specifically relies on the assumption that the utilized transforms provide sparse decompositions. At the same time, our work is robust as it does not require any assumptions about image statistics beyond sparsity. Unlike existing work, which tries to devise ever more sophisticated transforms and thresholding algorithms to deal with the myriad types of image singularities, our work uses basic tools to obtain very high performance on singularities by taking better advantage of the sparsity that surrounds them. With well-established transforms, we obtain results that are competitive with state-of-the-art methods.
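To make the contrast with plain averaging concrete, here is a small oracle-based sketch (Python/NumPy; the windowed least-squares fit and the use of a clean reference are illustrative simplifications of the paper's conditional linear estimator): K transform-plus-threshold denoisers each produce an estimate, and instead of averaging them uniformly, a weight vector is fit over each local window against the clean image.

    import numpy as np

    def oracle_combined_estimate(estimates, clean, win=8):
        """Combine K denoised estimates with weights fit by least squares
        against the clean image over local windows (an oracle illustration
        of the headroom beyond uniform averaging).
        estimates : array of shape (K, H, W)
        clean     : clean reference image of shape (H, W)
        """
        K, H, W = estimates.shape
        combined = np.zeros((H, W))
        for i in range(0, H, win):
            for j in range(0, W, win):
                E = estimates[:, i:i+win, j:j+win].reshape(K, -1)  # K x npix
                t = clean[i:i+win, j:j+win].ravel()                # target
                w, *_ = np.linalg.lstsq(E.T, t, rcond=None)        # K weights
                combined[i:i+win, j:j+win] = (w @ E).reshape(
                    estimates[0, i:i+win, j:j+win].shape)
        return combined

Uniform averaging corresponds to fixing the weights at 1/K everywhere; the paper derives non-oracle weights from the sparsity of the utilized transforms, which is the separate degree of freedom referred to above.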
Asilomar Conference on Signals, Systems and Computers | 2003
Onur G. Guleryuz
We consider the familiar scenario where independent and identically distributed (i.i.d.) noise in an image is removed using a set of overcomplete linear transforms and thresholding. Rather than the standard approach where one obtains the denoised signal by ad hoc averaging of the denoised estimates (corresponding to each transform), we formulate the optimal combination as a linear estimation problem for each pixel and solve it for optimal estimates. Our approach is independent of the utilized transforms and the thresholding scheme, and extends established work by exploiting a separate degree of freedom that is, in general, not reachable using previous techniques. Surprisingly, our derivation of the optimal estimates does not require explicit image statistics but relies solely on the assumption that the utilized transforms provide sparse decompositions. Yet it can be seen that our adaptive estimates utilize implicit conditional statistics, and they make the biggest impact around edges and singularities where standard sparsity assumptions fail.
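In equation form (notation chosen here for illustration), the per-pixel combination replaces the uniform average with

    \hat{x}(n) \;=\; \sum_{k=1}^{K} w_k(n)\, \hat{x}_k(n),
    \qquad \sum_{k=1}^{K} w_k(n) = 1,

where \hat{x}_k(n) is the estimate of pixel n produced with the k-th transform and the weights w_k(n) are chosen to minimize the expected squared error at that pixel; plain averaging is the special case w_k(n) = 1/K for all n.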
International Conference on Image Processing | 2008
Osman Gokhan Sezer; Oztan Harmanci; Onur G. Guleryuz
We propose a block-based transform optimization and associated image compression technique that exploits regularity along directional image singularities. Unlike established work, directionality comes about as a byproduct of the proposed optimization rather than a built-in constraint. Our work classifies image blocks and uses transforms that are optimal for each class, thereby decomposing image information into classification and transform coefficient information. The transforms are optimized using a set of training images. Our algebraic framework extends straightforwardly to non-block transforms, allowing us to also design sparse lapped transforms that exploit geometric regularity. We use an EZW/SPIHT-like entropy coder to encode the transform coefficients to show that our block and lapped designs have competitive rate-distortion performance. Our work can be seen as nonlinear-approximation-optimized transform coding of images subject to structural constraints on transform basis functions.
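A much-simplified sketch of the classify-then-transform idea follows (Python/NumPy; it substitutes a per-class KLT computed from training blocks for the paper's sparsity-optimized transforms, and uses plain k-means as a stand-in classifier):

    import numpy as np

    def train_class_transforms(blocks, num_classes=4, iters=10, seed=0):
        """Cluster training blocks and fit one orthonormal transform (KLT)
        per class; a stand-in for the paper's sparsity-optimized designs.
        blocks : array of shape (N, d) holding vectorized training blocks."""
        rng = np.random.default_rng(seed)
        N, d = blocks.shape
        centers = blocks[rng.choice(N, num_classes, replace=False)].copy()
        for _ in range(iters):                       # plain k-means
            labels = np.argmin(
                ((blocks[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for c in range(num_classes):
                if np.any(labels == c):
                    centers[c] = blocks[labels == c].mean(axis=0)
        transforms = {}
        for c in range(num_classes):                 # per-class KLT
            members = blocks[labels == c]
            if len(members) == 0:
                continue                             # empty class: no transform
            _, _, Vt = np.linalg.svd(members - centers[c], full_matrices=False)
            transforms[c] = Vt                       # rows are basis vectors
        return centers, transforms

Encoding a block then amounts to transmitting its class index plus the entropy-coded transform coefficients, which is the classification-plus-coefficients decomposition described above.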
Data Compression Conference | 2002
Onur G. Guleryuz
We propose an algorithm for image recovery where completely lost blocks in an image/video-frame are recovered using spatial information surrounding these blocks. Our primary application is on lost regions of pixels containing textures, edges and other image features that are not readily handled by prevalent recovery and error concealment algorithms. The proposed algorithm is based on the iterative application of a generic denoising algorithm and it does not necessitate any complex preconditioning, segmentation, or edge detection steps. Utilizing locally sparse linear transforms and overcomplete denoising, we obtain good PSNR performance in the recovery of such regions. In addition to results on image recovery, the paper provides further insights into the usefulness of popular transforms like wavelets, wavelet packets, discrete cosine transform (DCT) and complex wavelets in providing sparse image representations.
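One ingredient referred to above, overcomplete denoising with locally sparse transforms, can be sketched as translation-averaged blockwise DCT thresholding (Python/SciPy; the block size, threshold, and set of shifts are assumptions made here for brevity):

    import numpy as np
    from scipy.fft import dctn, idctn

    def overcomplete_dct_denoise(img, thr, b=8):
        """Denoise by hard-thresholding the DCT of every shifted block tiling
        and averaging the resulting estimates: a simple overcomplete denoiser."""
        H, W = img.shape
        acc = np.zeros((H, W), dtype=float)
        cnt = np.zeros((H, W), dtype=float)
        for di in range(b):                  # all b*b translations of the tiling
            for dj in range(b):
                for i in range(di, H - b + 1, b):
                    for j in range(dj, W - b + 1, b):
                        c = dctn(img[i:i+b, j:j+b], norm='ortho')
                        c[np.abs(c) < thr] = 0.0
                        acc[i:i+b, j:j+b] += idctn(c, norm='ortho')
                        cnt[i:i+b, j:j+b] += 1.0
        return acc / np.maximum(cnt, 1.0)

In the recovery setting, one pass of such a denoiser is applied to the current fill of the lost block, the known surrounding pixels are re-imposed, and the pass is repeated.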
ITCom 2001: International Symposium on the Convergence of IT and Communications | 2001
Viresh Ratnakar; Onur G. Guleryuz
We present standards-compliant visible watermarking schemes for digital images and video in DCT-based compressed formats. The watermarked data is in the same compressed format as the original and can be viewed with standard tools and applications. Moreover, for most of the schemes presented, the watermarked data has exactly the same compressed size as the original. The watermark can be inserted and removed using a key for applications requiring content protection. The watermark application and removal algorithms are very efficient and exploit some features of compressed data formats (such as JPEG and MPEG) which allow most of the work to be done in the compressed domain.
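As a toy illustration of keyed insertion and removal in the quantized-DCT domain (this is not one of the paper's schemes; the choice of DC coefficients and the pseudorandom pattern are assumptions made here for brevity), a key seeds an integer perturbation over the watermark region, and the same key regenerates the perturbation so that it can be subtracted exactly:

    import numpy as np

    def apply_visible_mark(dc_coeffs, region_mask, key, strength=4, remove=False):
        """Add (or exactly remove) a keyed integer perturbation to the quantized
        DC coefficients of the blocks covered by region_mask. Because the
        perturbation is integer-valued and regenerated from the key, removal
        restores the original coefficients bit-exactly."""
        rng = np.random.default_rng(key)
        pattern = rng.integers(1, strength + 1, size=dc_coeffs.shape)
        pattern = pattern * region_mask              # zero outside the mark
        return dc_coeffs - pattern if remove else dc_coeffs + pattern

An actual scheme must additionally keep the entropy-coded size unchanged and shape the perturbation so the mark is clearly visible yet removable, which is where the JPEG/MPEG-specific details of the paper come in.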
International Conference on Image Processing | 2005
Onur G. Guleryuz
We propose a high-performance, nonlinear loop filter that reduces quantization noise over video frames composed of locally uniform regions (smooth, high-frequency, texture, etc.) separated by singularities. Unlike earlier work, the designed filter is not limited to block-based transform coders and provides a robust solution for intra-coded as well as differentially encoded frames. Our formulation, based on sparse decompositions, allows us to take advantage of the spatial dependencies inherent in video frames while incorporating temporal dependencies through the information provided by previously decoded frames. The proposed filter yields 10% rate improvements at typical distortions, in combination with significant visual quality improvements, especially around singularities.
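A minimal sketch of such a loop filter (Python; it reuses an overcomplete thresholding denoiser like the one sketched earlier, ties the threshold to the quantizer step, and treats a motion-compensated previously decoded frame as a crude temporal term; all of these choices are assumptions made here, not the paper's construction):

    import numpy as np

    def loop_filter(decoded, prev_mc, q_step, denoise, alpha=0.25):
        """Reduce quantization noise in a decoded frame.
        decoded : current decoded frame
        prev_mc : motion-compensated previously decoded frame
        denoise : any overcomplete thresholding denoiser, called as
                  denoise(frame, thr), with thr tied to the quantizer step
        """
        spatial = denoise(decoded, q_step)                 # spatial sparsity term
        return (1.0 - alpha) * spatial + alpha * prev_mc   # crude temporal blend

The fixed blend weight is only a placeholder for the way the paper's formulation incorporates the information carried by previously decoded frames.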