Murat Kunt
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Murat Kunt.
Proceedings of the IEEE | 1985
Murat Kunt; A. Ikonomopoulos; Michel Kocher
The digital representation of an image requires a very large number of bits. The goal of image coding is to reduce this number as much as possible and to reconstruct a faithful duplicate of the original picture. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio, starting at 1 with the first digital pictures in the early 1960s, reached a saturation level of around 10:1 a couple of years ago. This certainly does not mean that the upper bound given by the entropy of the source has also been reached. First, this entropy is not known and depends heavily on the model used for the source, i.e., the digital image. Second, information theory does not take into account what the human eye sees and how it sees. Recent progress in the study of the brain mechanisms of vision has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway, combined with the separate processing of contours and textures, has led to a new class of coding methods capable of achieving compression ratios as high as 70:1. Image quality, of course, remains an important problem to be investigated. This class of methods, which we call second generation, is the subject of this paper. Two groups can be formed within this class: methods that use local operators and combine their outputs in a suitable way, and methods that use contour-texture descriptions. Four methods, two from each group, are described in detail. They are applied to the same set of original pictures to allow a fair comparison of the quality of the decoded pictures. If more effort is devoted to this subject, a compression ratio of 100:1 is within reach.
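As a toy illustration of the contour-texture idea behind these second-generation methods, the sketch below splits an image into a contour map (the strong edges the eye is most sensitive to) and a coarsely represented texture remainder. It is not any of the four coders described in the paper; the operators and the threshold are illustrative only.

```python
# Toy contour/texture split in the spirit of second-generation coding:
# contours are located with a gradient operator and kept precisely,
# while the remaining texture is represented much more coarsely.
import numpy as np
from scipy import ndimage

def contour_texture_split(img, edge_thresh=0.2):
    """Return a binary contour map and a coarse texture component."""
    img = img.astype(float) / img.max()
    gx = ndimage.sobel(img, axis=0)           # vertical gradient
    gy = ndimage.sobel(img, axis=1)           # horizontal gradient
    contours = np.hypot(gx, gy) > edge_thresh
    # Away from contours the signal is treated as texture and may be
    # represented very coarsely (here: plain Gaussian smoothing).
    texture = ndimage.gaussian_filter(img, sigma=2.0)
    return contours, texture
```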
IEEE Transactions on Circuits and Systems | 1987
Murat Kunt; Michel Benard; Riccardo Leonardi
The digital representation of an image requires a very large number of bits. The goal of image coding is to reduce this number as much as possible and to reconstruct a faithful duplicate of the original picture. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau of about 10:1 several years ago. Recent progress in the study of the brain mechanisms of vision and of scene analysis has opened new vistas in picture coding. The concept of directional sensitivity of neurones in the visual cortex, combined with the separate processing of contours and textures, has led to a new class of coding methods, called second generation, capable of achieving compression ratios as high as 100:1. In this paper, recent results on object-based coding methods are reported, exhibiting improvements over previous second-generation methods.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998
Fabrice Moscheni; Sushil K. Bhattacharjee; Murat Kunt
This paper proposes a technique for spatio-temporal segmentation to identify the objects present in the scene represented in a video sequence. This technique processes two consecutive frames at a time. A region-merging approach is used to identify the objects in the scene. Starting from an oversegmentation of the current frame, the objects are formed by iteratively merging regions together. Regions are merged based on their mutual spatio-temporal similarity. We propose a modified Kolmogorov-Smirnov test for estimating the temporal similarity. The region-merging process is based on a weighted, directed graph. Two complementary graph-based clustering rules are proposed, namely, the strong rule and the weak rule. These rules take advantage of the natural structures present in the graph. Experimental results on different types of scenes demonstrate the ability of the proposed technique to automatically partition the scene into its constituent objects.
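A minimal sketch of the temporal-similarity test is given below, assuming each region is summarised by samples of a motion-related feature (for instance, displaced frame differences). The paper proposes a modified Kolmogorov-Smirnov test; the standard two-sample test from SciPy stands in for it here, purely for illustration.

```python
# Region-merging criterion sketch: two regions whose temporal feature
# samples pass a two-sample Kolmogorov-Smirnov test are candidates for
# being merged into the same object.
import numpy as np
from scipy.stats import ks_2samp

def temporal_similarity(samples_a, samples_b):
    """Larger p-value -> compatible distributions -> merge candidates."""
    statistic, p_value = ks_2samp(samples_a, samples_b)
    return p_value

rng = np.random.default_rng(0)
region_a = rng.normal(1.0, 0.3, 200)    # similar motion statistics...
region_b = rng.normal(1.05, 0.3, 200)   # ...likely the same object
region_c = rng.normal(3.0, 0.5, 200)    # clearly different motion
print(temporal_similarity(region_a, region_b))   # high  -> merge
print(temporal_similarity(region_a, region_c))   # ~zero -> keep apart
```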
IEEE Transactions on Circuits and Systems for Video Technology | 1998
Roberto Castagno; Touradj Ebrahimi; Murat Kunt
We present a scheme for interactive video segmentation. A key feature of the system is the distinction between two levels of segmentation, namely region segmentation and object segmentation. Regions are homogeneous areas of the image, which are extracted automatically by the computer. Semantically meaningful objects are obtained through user interaction, by grouping regions according to the specific application. This split relieves the computer of ill-posed semantic problems and gives the method greater flexibility. The extraction of regions is based on the multidimensional analysis of several image features by a spatially constrained fuzzy C-means algorithm. The local reliability of the different features is taken into account in order to adaptively weight the contribution of each feature to the segmentation process. Results on the extraction of regions as well as on the tracking of spatiotemporal objects are presented.
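The sketch below shows one common way to realise a spatially constrained fuzzy C-means on per-pixel features: the membership maps are smoothed over a local window at each iteration so that isolated pixels join their surrounding region. The paper's exact constraint and feature weighting may differ; this is an illustration of the clustering idea only.

```python
# Spatially constrained fuzzy C-means sketch on per-pixel features.
import numpy as np
from scipy import ndimage

def spatial_fcm(features, n_clusters=3, m=2.0, n_iter=30):
    """features: (H, W, D) array of per-pixel feature vectors."""
    H, W, D = features.shape
    X = features.reshape(-1, D)
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(n_clusters), size=H * W)  # memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared distances of every pixel to every cluster center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))       # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
        # Spatial constraint: average each membership map over a 3x3
        # neighbourhood so labels become spatially coherent.
        U = U.reshape(H, W, n_clusters)
        for k in range(n_clusters):
            U[:, :, k] = ndimage.uniform_filter(U[:, :, k], size=3)
        U = U.reshape(-1, n_clusters)
        U /= U.sum(axis=1, keepdims=True)
    return U.reshape(H, W, n_clusters), centers
```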
IEEE Transactions on Image Processing | 2001
Julien Reichel; Gloria Menegaz; Marcus J. Nadenau; Murat Kunt
The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One possible implementation of the DWT is the lifting scheme (LS). Because perfect reconstruction is guaranteed by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. It is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise, which is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input, and it accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality. This leads to a better understanding of the impact of the IWT when applied to lossy image compression.
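The sketch below shows one level of the LeGall 5/3 integer wavelet transform realised with lifting, using periodic boundary extension for brevity (codecs typically use symmetric extension). The floor-rounding inside each lifting step is precisely the "additive noise" the paper models, yet the round trip is lossless because the inverse repeats the same rounded steps in reverse order.

```python
# One level of the LeGall 5/3 integer wavelet transform via lifting.
# Assumes an even-length signal and periodic extension (np.roll).
import numpy as np

def iwt53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd sample minus rounded even-neighbour mean.
    odd -= (even + np.roll(even, -1)) // 2
    # Update step: approximation = even sample plus rounded detail term.
    even += (odd + np.roll(odd, 1) + 2) // 4
    return even, odd                      # lowpass, highpass subbands

def iwt53_inverse(even, odd):
    even = even - (odd + np.roll(odd, 1) + 2) // 4   # undo update
    odd = odd + (even + np.roll(even, -1)) // 2      # undo predict
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.default_rng(1).integers(0, 256, size=64)
lo, hi = iwt53_forward(x)
assert np.array_equal(iwt53_inverse(lo, hi), x)      # lossless round trip
```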
Proceedings of the IEEE | 1995
Olivier Egger; Wei Li; Murat Kunt
A morphological subband decomposition with perfect reconstruction is proposed, achieving critical subsampling. Images reconstructed with this decomposition do not suffer from any ringing effect. In order to avoid poor texture representation by the morphological filters, an adaptive subband decomposition is introduced: it chooses linear filters on textured regions and morphological filters elsewhere. A simple and efficient texture-detection criterion is proposed and applied to the adaptive decomposition. Comparisons with other coding techniques such as JPEG and linear subband coding show that the proposed scheme performs significantly better, both in terms of PSNR and of visual quality.
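The following toy sketch illustrates only the filtering idea: a morphological open-close "lowpass" never overshoots the local signal range, which is why morphological schemes avoid ringing around edges. The paper's critically subsampled, perfectly reconstructing analysis/synthesis pair is considerably more involved than this.

```python
# Morphological "lowpass" via grey-level open-close: unlike a linear
# lowpass, it cannot overshoot around a step edge, hence no ringing.
import numpy as np
from scipy import ndimage

def morph_lowpass(img, size=3):
    opened = ndimage.grey_opening(img, size=(size, size))
    return ndimage.grey_closing(opened, size=(size, size))

img = np.zeros((32, 32))
img[:, 16:] = 255                          # an ideal step edge
low = morph_lowpass(img)
detail = img - low                         # highpass residual
assert low.min() >= 0 and low.max() <= 255 # no over/undershoot
```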
Computers and Biomedical Research | 1983
Adriaan Ligtenberg; Murat Kunt
The success of automated patient monitoring depends primarily on the ability to detect normal as well as bizarre QRS complexes. In this paper, a new robust single-lead QRS-detection algorithm is presented. The QRS detector can be separated into five blocks, namely a noise filter, a differentiator, an energy collector, a minimal-distance classifier, and a minimax trimmer. Each block accounts for certain characteristic features of QRS complexes (steepness, duration, etc.). The time delay introduced by this algorithm for making a decision is less than one second, allowing real-time applications. The performance of the QRS detector is analyzed and results are presented.
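A minimal sketch of such a five-block pipeline is shown below, assuming a single-lead ECG sampled at fs Hz. The classifier and minimax trimmer are collapsed into a fixed threshold with a refractory period here; the paper's versions adapt to the signal, and the band edges and constants below are illustrative.

```python
# QRS-detection pipeline sketch: filter -> differentiate -> collect
# energy -> threshold with a refractory period.
import numpy as np
from scipy.signal import butter, filtfilt

def detect_qrs(ecg, fs=250):
    # 1) Noise filter: bandpass keeps the QRS energy band (~5-25 Hz).
    b, a = butter(2, [5 / (fs / 2), 25 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2) Differentiator: QRS complexes have the steepest slopes.
    slope = np.diff(filtered, prepend=filtered[0])
    # 3) Energy collector: square and integrate over ~100 ms.
    win = int(0.1 * fs)
    energy = np.convolve(slope ** 2, np.ones(win) / win, mode="same")
    # 4)+5) Decision: fixed threshold plus a 250 ms refractory period.
    thresh = 0.3 * energy.max()
    peaks, last = [], -fs
    for i, e in enumerate(energy):
        if e > thresh and i - last > int(0.25 * fs):
            peaks.append(i)
            last = i
    return peaks                 # sample indices of detected beats
```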
Proceedings of the IEEE | 1999
Olivier Egger; Pascal Fleury; Touradj Ebrahimi; Murat Kunt
Digital images have become an important source of information in the modern world of communication systems. In their raw form, digital images require a tremendous amount of memory. Many research efforts have been devoted to the problem of image compression over the last two decades. Two compression categories must be distinguished: lossless and lossy. Lossless compression is achieved if no distortion is introduced in the coded image. Applications requiring this type of compression include medical imaging and satellite photography. For applications such as video telephony or multimedia, some loss of information is usually tolerated in exchange for a high compression ratio. In this two-part paper, the major building blocks of image coding schemes are overviewed. Part I covers still image coding, and Part II covers motion picture sequences. In this first part, still image coding schemes are classified into predictive, block transform, and multiresolution approaches. Predictive methods are suited to lossless and low-compression applications. Transform-based coding schemes achieve higher compression ratios for lossy compression but suffer from blocking artifacts at high compression ratios. Multiresolution approaches are suited for lossy as well as for lossless compression; at high lossy compression ratios, the typical artifact visible in the reconstructed images is the ringing effect. New applications in a multimedia environment have driven the need for new functionalities in image coding schemes. For that purpose, second-generation coding techniques segment the image into semantically meaningful parts. Parts of these methods have therefore been adapted to work on arbitrarily shaped regions. In order to add further functionality, such as progressive transmission of the information, specific quantization algorithms must be defined. A final step in the compression scheme is the codeword assignment. Finally, coding results are presented that compare state-of-the-art techniques for lossy and lossless compression. The different artifacts of each technique are highlighted and discussed, and the possibility of progressive transmission is illustrated.
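To make the block-transform category concrete, the sketch below applies an 8x8 DCT to each block, quantizes the coefficients uniformly, and inverts the transform; at coarse quantization, the independent treatment of adjacent blocks is what produces the blocking artifacts mentioned above. The block size and step are illustrative, and the image dimensions are assumed divisible by the block size.

```python
# Block-transform coding sketch: per-block DCT, uniform quantization,
# inverse DCT. Coarse steps make the block boundaries visible.
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_codec(img, block=8, step=40.0):
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, H, block):
        for j in range(0, W, block):
            coeffs = dctn(img[i:i+block, j:j+block], norm="ortho")
            coeffs = step * np.round(coeffs / step)   # uniform quantizer
            out[i:i+block, j:j+block] = idctn(coeffs, norm="ortho")
    return out
```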
IEEE Transactions on Signal Processing | 2004
Pascal Frossard; Pierre Vandergheynst; R.M. Figueras i Ventura; Murat Kunt
This paper proposes a rate-distortion-optimal a posteriori quantization scheme for matching pursuit (MP) coefficients. The a posteriori quantization applies to an MP expansion that has been generated offline and cannot benefit from any feedback loop to the encoder to compensate for the quantization noise. The redundancy of the MP dictionary provides an indicator of the relative importance of coefficients and atom indices and, consequently, of the quantization error. It is used to define a universal upper bound on the decay of the coefficients, sorted in decreasing order of magnitude. A new quantization scheme is then derived in which this bound serves as an Oracle for the design of an optimal a posteriori quantizer; it turns the entropy-constrained quantization problem for the exponentially distributed coefficients into a simple uniform quantization problem. Using simulations with random dictionaries, we show that the proposed exponentially upper bounded quantization (EUQ) clearly outperforms classical schemes. Building on the ideal Oracle-based approach, a suboptimal adaptive scheme is then designed that approximates the EUQ but still outperforms competing quantization methods in terms of rate-distortion characteristics. Finally, the proposed quantization method is studied in the context of image coding. It performs similarly to state-of-the-art coding methods (and even better at low rates) while providing a progressive stream that is very easy to transcode and adapt to changing rate constraints.
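The sketch below captures only the core idea: since the sorted MP coefficient magnitudes decay no slower than an exponential bound known to both coder and decoder, the k-th coefficient can be quantized uniformly within a range scaled to that bound. The bound parameters alpha and r are illustrative stand-ins, not the paper's derived constants.

```python
# Exponentially-bounded quantization sketch for matching pursuit
# coefficients: quantizer range shrinks with the coefficient's rank.
import numpy as np

def euq_quantize(coeffs, alpha, r, levels=16):
    """Quantize each coefficient against the bound alpha * r**rank."""
    coeffs = np.asarray(coeffs, dtype=float)
    order = np.argsort(-np.abs(coeffs))      # decreasing magnitude
    q = np.zeros_like(coeffs)
    for rank, idx in enumerate(order):
        bound = alpha * r ** rank            # shared coder/decoder bound
        step = 2 * bound / levels            # uniform step inside bound
        q[idx] = step * np.round(coeffs[idx] / step)
    return q
```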
Signal Processing | 1985
A. Ikonomopoulos; Murat Kunt
A new image coding technique is presented, derived from the decomposition of an image into a low-frequency component and many high-frequency directional components. The directional filters and their properties are introduced; then the implementation of the directional decomposition and the selection of the information to be coded are described. The combination of transform-domain coding of the low-frequency component and spatial-domain coding of the directional components led to acceptable results, with compression ratios higher than 30:1.
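A toy version of such a decomposition is sketched below: a radial split into a low-frequency component plus angular sectors of the high frequencies, built with ideal masks in the FFT domain. The paper's directional filters are smoother; ideal masks are used here only to keep the sketch short, and since the masks partition the frequency plane, the components sum back to the original image.

```python
# Directional decomposition sketch: one lowpass component plus n_dirs
# high-frequency angular sectors, via ideal masks in the FFT domain.
import numpy as np

def directional_decompose(img, n_dirs=8, radius=0.1):
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(H)),
                         np.fft.fftshift(np.fft.fftfreq(W)),
                         indexing="ij")
    rad = np.hypot(fx, fy)
    ang = np.mod(np.arctan2(fy, fx), np.pi)   # orientation in [0, pi)
    comps = [np.fft.ifft2(np.fft.ifftshift(F * (rad <= radius))).real]
    for k in range(n_dirs):                    # angular highpass sectors
        sector = ((rad > radius)
                  & (ang >= k * np.pi / n_dirs)
                  & (ang < (k + 1) * np.pi / n_dirs))
        comps.append(np.fft.ifft2(np.fft.ifftshift(F * sector)).real)
    return comps            # masks partition the plane: sum(comps) == img
```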