Peter De Neve
Ghent University
Publications
Featured research published by Peter De Neve.
Journal of Electronic Imaging | 1999
Steven Van Assche; Koen Denecker; Peter De Neve; Wilfried Philips; Ignace Lemahieu
In the prepress industry, color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the prepress industry, mostly only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images are compressed independently. However, higher compression ratios can be achieved by exploiting intercolor redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: intercolor error prediction and a Karhunen-Loeve transform-based technique, which are both linear color decorrelation techniques, and interframe CALIC, which uses a nonlinear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. Surprisingly, the nonlinear interframe CALIC predictor does not yield better results.
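A minimal sketch of the linear decorrelation idea behind the Karhunen-Loeve transform-based technique, written in NumPy. It illustrates the principle only and is not the coder compared in the paper:

```python
import numpy as np

def klt_decorrelate(image):
    """Decorrelate color channels with a Karhunen-Loeve transform.

    image: (H, W, C) array; returns the transformed coefficients plus the
    basis and channel means needed to invert the transform (in floating point).
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # Eigen-decomposition of the inter-channel covariance matrix.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort basis vectors by decreasing variance.
    order = np.argsort(eigvals)[::-1]
    basis = eigvecs[:, order]
    coeffs = centered @ basis
    return coeffs.reshape(h, w, c), basis, mean

def klt_inverse(coeffs, basis, mean):
    h, w, c = coeffs.shape
    pixels = coeffs.reshape(-1, c) @ basis.T + mean
    return pixels.reshape(h, w, c)

# Example: most of the energy ends up in the first transformed channel,
# so a spatial coder spends fewer bits on the remaining channels.
rgb = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3))
coeffs, basis, mean = klt_decorrelate(rgb)
print(coeffs.var(axis=(0, 1)))  # decreasing per-channel variance
```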
Multimedia Systems | 2006
Robbie De Sutter; Sam Lerouge; Peter De Neve; Christian Timmerer; Hermann Hellwagner; Rik Van de Walle
More and more data are structured, stored, and sent over a network using the Extensible Markup Language (XML). There are, however, concerns that the verbosity of XML may restrain further adoption of the language, especially when exchanging XML-based data over heterogeneous networks and when it is used within constrained (mobile) devices. Therefore, alternative (binary) serialization formats of the XML data become relevant in order to reduce this overhead. However, using binary-encoded XML should not introduce interoperability issues with existing applications nor add additional complexity to new applications. On top of that, it should have a clear cost reduction over the current plain-text serialization format. A first technology is developed within the ISO/IEC Moving Picture Experts Group, namely the Binary MPEG Format for XML. It provides good compression efficiency, the ability to (partially) update existing XML trees, and facilitates random access into, and manipulation of, the binary-encoded bit stream. Another technique is based on the Abstract Syntax Notation One specification with the Packed Encoding Rules created by the ITU-T. This paper evaluates both techniques as alternative XML serialization formats and introduces a solution for the interoperability concerns. This solution and the alternative serialization formats are validated against two real-life use cases in terms of processing speed and cost reduction. The efficiency of the alternative serialization formats is compared to a classic plain-text compression technique, in particular ZIP compression.
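Neither the BiM nor the ASN.1 PER encoder is reproduced here; the sketch below only illustrates the kind of baseline the comparison uses, measuring a plain-text XML document against classic ZIP-style (zlib) compression. The XML fragment is a hypothetical stand-in for the metadata in the paper's use cases:

```python
import zlib

# Hypothetical XML fragment standing in for the structured metadata
# exchanged in the paper's use cases.
xml = (
    "<Description>"
    + "".join(
        f"<Segment id='{i}'><Start>{i * 40}</Start><Duration>40</Duration></Segment>"
        for i in range(200)
    )
    + "</Description>"
).encode("utf-8")

compressed = zlib.compress(xml, level=9)
print(f"plain text      : {len(xml)} bytes")
print(f"zlib (ZIP-style): {len(compressed)} bytes "
      f"({100 * len(compressed) / len(xml):.1f}% of original)")
```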
electronic imaging | 1998
Boris Rogge; Ignace Lemahieu; Wilfried Philips; Koen Denecker; Peter De Neve; Steven Van Assche
On the Internet, the transmission time of large images is still an important issue. In order to reduce transmission time, this paper introduces an efficient method to send 8-bit greyscale images across the Internet. The method allows progressive transmission up to lossless reconstruction. It also allows the user to select a region of interest. This method is particularly useful when image quality and transmission speed are both desired properties. The method uses TCP/IP as a transport protocol.
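As an illustration of progressive-to-lossless delivery (not the paper's method, whose protocol details are not given here), a sketch that sends the bit planes of an 8-bit greyscale image from most to least significant, so the receiver can refine its reconstruction as data arrives:

```python
import numpy as np

def bitplanes_msb_first(image):
    """Yield the 8 bit planes of an 8-bit greyscale image, MSB first."""
    for bit in range(7, -1, -1):
        yield ((image >> bit) & 1).astype(np.uint8)

def reconstruct(planes_received):
    """Rebuild an approximation from however many planes have arrived."""
    image = np.zeros_like(planes_received[0], dtype=np.uint8)
    for i, plane in enumerate(planes_received):
        image |= plane << (7 - i)
    return image

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

received = []
for plane in bitplanes_msb_first(original):
    received.append(plane)  # in practice each plane would be sent over a TCP/IP socket
    approx = reconstruct(received)
    err = int(np.max(np.abs(original.astype(int) - approx.astype(int))))
    print(len(received), "planes -> max error", err)
# After all 8 planes the reconstruction is lossless (max error 0).
```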
electronic imaging | 1998
Koen Denecker; Peter De Neve; Ignace Lemahieu
Recently, new applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. State-of-the-art lossless bilevel image compression schemes like JBIG achieve only moderate compression ratios because they do not fully take into account the special image characteristics. In this paper, we present an improvement on the context modeling scheme by adapting the context template to the periodic structure of the classical halftone image. This is a non-trivial problem for which we propose a fast, close-to-optimal context template selection scheme based on the sorted autocorrelation function of a part of the image. We have experimented with classical halftones of different resolutions and sizes, screened under different angles, as well as with stochastic halftones. For classical halftones, the global improvement with respect to JBIG in its best mode is about 30% to 50%; binary tree modeling increases this by another 5% to 10%. For stochastic halftones, the autocorrelation-based template gives no improvement, though an exhaustive search technique shows that even bigger improvements are feasible using the context modeling technique; introducing binary tree modeling increases the compression ratio by about 10%.
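A simplified sketch of the autocorrelation-driven idea: rank the causal neighbour offsets of a halftone sample by autocorrelation strength and keep the strongest ones as the context template. This is my own illustration, not the paper's selection scheme or its JBIG integration:

```python
import numpy as np

def context_template(halftone, template_size=10, max_offset=8):
    """Select a context template for a bilevel halftone.

    An offset (dy, dx) refers to the neighbour dy rows above and dx columns
    to the left of the current pixel; causal offsets are ranked by the
    autocorrelation of the image sample and the strongest ones are kept."""
    x = halftone.astype(np.float64)
    x -= x.mean()
    # Circular autocorrelation via FFT; good enough for ranking offsets.
    f = np.fft.fft2(x)
    acf = np.real(np.fft.ifft2(f * np.conj(f)))
    acf /= acf[0, 0]

    candidates = []
    for dy in range(0, max_offset + 1):
        for dx in range(-max_offset, max_offset + 1):
            causal = dy > 0 or (dy == 0 and dx > 0)  # strictly before the current pixel
            if causal:
                candidates.append((acf[dy % acf.shape[0], dx % acf.shape[1]], dy, dx))
    candidates.sort(reverse=True)
    return [(dy, dx) for _, dy, dx in candidates[:template_size]]

# A toy periodic "classical halftone": the selected offsets align with its period.
period = 4
y, x = np.mgrid[0:64, 0:64]
halftone = (((y % period) < 2) ^ ((x % period) < 2)).astype(np.uint8)
print(context_template(halftone))
```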
human vision and electronic imaging conference | 1999
Peter De Neve; Koen Denecker; Ignace Lemahieu
In today's digital prepress workflow, images are most often stored in the CMYK color representation. In the lossy compression of CMYK color images, most techniques either do not take the tonal correlation between the color channels into account or are not able to perform a proper color decorrelation in four dimensions. In a first stage a compression method has been developed that takes this type of redundancy into account. The basic idea is to divide the image into blocks. The color information in those blocks is then transformed from the original CMYK color space into a decorrelated color space. In this new color space not all components are of the same magnitude, so here the gain for compression purposes becomes clear. After the color transformation step, any regular compression scheme meant to reduce the spatial redundancy can be applied. In this paper a more advanced approach for the quantization procedure in the compression algorithm is presented. The proposed scheme tries to control the quantization parameters differently for each block and color component. Therefore the influence on the CIELAB ΔE measure is investigated when making a small shift in the four main directions of the decorrelated color space.
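A sketch of the probing step, under a deliberately crude CMYK-to-CIELAB conversion (a naive CMYK-to-sRGB chain standing in for a real press characterization): estimate, per block, how far a small shift along each decorrelated axis moves the mean colour in Lab, as a guide for choosing per-component quantizer step sizes:

```python
import numpy as np

def cmyk_to_lab(cmyk):
    """Crude CMYK -> CIELAB chain (naive CMYK->sRGB, D65 white point).

    A stand-in for a proper press characterization, only to make the probe runnable."""
    c, m, y, k = cmyk
    rgb = np.array([(1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)])
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ lin / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e(lab1, lab2):
    return float(np.linalg.norm(lab1 - lab2))  # CIE76 colour difference

def axis_sensitivities(block_cmyk, step=0.01):
    """Probe how a small shift along each decorrelated axis moves the block's
    mean colour in CIELAB; larger values call for finer quantization."""
    pixels = block_cmyk.reshape(-1, 4)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    _, basis = np.linalg.eigh(cov)  # columns are the decorrelated directions
    ref_lab = cmyk_to_lab(mean)
    return [delta_e(cmyk_to_lab(np.clip(mean + step * basis[:, i], 0, 1)), ref_lab)
            for i in range(4)]

rng = np.random.default_rng(2)
block = np.clip(0.4 + 0.05 * rng.standard_normal((8, 8, 4)), 0, 1)
print(axis_sensitivities(block))
```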
electronic imaging | 1998
Koen Denecker; Steven Van Assche; Peter De Neve; Ignace Lemahieu
Lossless image compression algorithms used in the prepress workflow suffer from the disadvantage that only moderate compression ratios can be achieved. Most lossy compression schemes achieve much higher compression ratios, but there is no easy way to limit the differences they introduce. Near-lossless image compression schemes are based on lossless techniques, but they give an opportunity to put constraints on the unavoidable pixel loss. The constraints are usually expressed in terms of differences within the individual CMYK separations, and this error criterion does not match the human visual system. In this paper, we present a near-lossless image compression scheme which aims at limiting the pixel differences as observed by the human visual system. It uses the subjectively equidistant CIEL*a*b* space to express allowable color differences. Since the CMYK to CIEL*a*b* transform maps a 4D space onto a 3D space, singularities would occur, resulting in a loss of the gray component replacement information; therefore an additional dimension is added. The error quantization is based on an estimated linearization of the CIEL*a*b* transform and on the singular value decomposition of the resulting Jacobian matrix. Experimental results on some representative CMYK test images show that the visual image quality is improved and that higher compression ratios can be achieved before a visual difference is detected by a human observer.
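A sketch of the linearization idea: numerically estimate the Jacobian of a CMYK-to-CIELAB transform at a colour and take its singular value decomposition; directions with small singular values barely change the perceived colour and can be quantized more coarsely. The crude conversion and the omission of the paper's added dimension (which preserves the gray component replacement information) are simplifications of mine:

```python
import numpy as np

def cmyk_to_lab(cmyk):
    """Crude CMYK -> CIELAB chain (naive CMYK->sRGB, D65 white point),
    standing in for a real press profile."""
    c, m, y, k = cmyk
    rgb = np.array([(1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)])
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ lin / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def jacobian(fn, x, eps=1e-4):
    """Central-difference Jacobian of fn: R^4 -> R^3 at point x."""
    J = np.zeros((3, 4))
    for j in range(4):
        d = np.zeros(4)
        d[j] = eps
        J[:, j] = (fn(np.clip(x + d, 0, 1)) - fn(np.clip(x - d, 0, 1))) / (2 * eps)
    return J

x = np.array([0.3, 0.4, 0.5, 0.2])  # an example CMYK colour
J = jacobian(cmyk_to_lab, x)
U, s, Vt = np.linalg.svd(J)
print("singular values:", s)
# Rows of Vt associated with small singular values are CMYK directions that
# are nearly invisible in Lab; the quantizer can be coarser along them.
```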
Proceedings of SPIE Medical Imaging 1998, Vol. 3335, Kim, Y., Mun, K.M. (eds.), San Diego, February | 1998
Jeroen Van Overloop; Wim Van De Sype; Koen Denecker; Peter De Neve; Erik Sundermann; Ignace Lemahieu
To make the archival and transmission of medical images in PACS (Picture Archiving and Communication Systems) and teleradiology systems user-friendly and economically profitable, the adoption of an efficient compression technique is an important feature in the design of such systems. An important category of lossy compression techniques uses the wavelet transformation for decorrelation of the pixel values, prior to quantization and entropy coding. For the coding of sets of images, the images are mostly compressed independently with a two-dimensional compression scheme. In this way, one discards the similarity between adjacent slices. The aim of this paper is to compare the performance of some two-dimensional and three-dimensional implementations of wavelet compression techniques and to investigate some design issues such as the decomposition depth, the choice of wavelet filters, and the entropy coding.
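A rough illustration of why the third dimension helps, using PyWavelets (assumed available; not the implementation evaluated in the paper): compare how strongly a 2-D slice-by-slice decomposition and a 3-D decomposition of the same synthetic volume compact the energy into few coefficients:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_energy_compaction(data, wavelet="bior4.4", level=2):
    """Decompose a slice or a volume and report the fraction of coefficients
    that carries 99% of the energy, a rough proxy for compressibility
    before quantization and entropy coding."""
    coeffs = pywt.wavedecn(data, wavelet=wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)
    mags = np.sort(np.abs(flat).ravel())[::-1]
    energy = np.cumsum(mags ** 2)
    needed = np.searchsorted(energy, 0.99 * energy[-1]) + 1
    return needed / mags.size

# A smooth synthetic "volume" standing in for a stack of adjacent slices.
z, y, x = np.mgrid[0:32, 0:64, 0:64]
volume = np.sin(z / 6.0) + np.cos(y / 9.0) * np.sin(x / 7.0)

print("2-D (slice by slice):",
      np.mean([wavelet_energy_compaction(volume[i]) for i in range(volume.shape[0])]))
print("3-D (whole volume):  ", wavelet_energy_compaction(volume))
# Exploiting the inter-slice similarity concentrates the energy in fewer
# coefficients, which is what the 3-D schemes in the paper rely on.
```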
Digital Compression Technologies and Systems for Video Communications | 1996
Peter De Neve; Wilfried Philips; Jeroen Van Overloop; Ignace Lemahieu
The JPEG lossy compression technique in medical imagery has several disadvantages (at higher compression ratios), mainly due to block distortion. We therefore investigated two methods, the lapped orthogonal transform (LOT) and the DCT/DST coder, for use on medical image data. These techniques are block-based, but they reduce the block distortion by spreading it out over the entire image. These compression techniques were applied to four different types of medical images (an MRI image, an x-ray image, an angiogram, and a CT scan). They were then compared with results from JPEG and variable block size DCT coders. In a first stage, we determined the optimal block size for each image and for each technique. It was found that for a specific image, the optimal block size was independent of the different transform coders. For the x-ray image, the CT scan, and the angiogram an optimal block size of 32 by 32 was found, while for the MRI image the optimal block size was 16 by 16. Afterwards, for all images the rate-distortion curves of the different techniques were calculated, using the optimal block size. The overall conclusion from our experiments is that the LOT is the best of the investigated transforms for compressing medical images of many different kinds. However, JPEG should be used for very high image qualities, as it then requires almost the same bit rate as the LOT while requiring fewer computations.
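The LOT and DCT/DST coders are not reimplemented here; the sketch below only illustrates the block-size experiment with a plain block DCT in SciPy, measuring the reconstruction error when a fixed fraction of coefficients is kept per block:

```python
import numpy as np
from scipy.fft import dctn, idctn  # SciPy assumed available

def block_dct_error(image, block, keep_fraction=0.1):
    """Keep only the largest `keep_fraction` of DCT coefficients per block
    and report the RMS reconstruction error - a crude probe of how block
    size affects a transform coder (no quantizer, entropy coder, or lapped
    transform, unlike the paper)."""
    h, w = image.shape
    h, w = h - h % block, w - w % block  # crop to a whole number of blocks
    img = image[:h, :w].astype(np.float64)
    out = np.zeros_like(img)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            coeffs = dctn(img[by:by + block, bx:bx + block], norm="ortho")
            keep = max(1, int(keep_fraction * block * block))
            thresh = np.partition(np.abs(coeffs).ravel(), -keep)[-keep]
            coeffs[np.abs(coeffs) < thresh] = 0.0
            out[by:by + block, bx:bx + block] = idctn(coeffs, norm="ortho")
    return float(np.sqrt(np.mean((img - out) ** 2)))

# A smooth synthetic stand-in for a medical image.
y, x = np.mgrid[0:128, 0:128]
image = 128 + 60 * np.sin(x / 17.0) * np.cos(y / 23.0)

for block in (8, 16, 32):
    print(f"block {block:2d}x{block:<2d} -> RMS error {block_dct_error(image, block):.3f}")
```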
Journal of Electronic Imaging | 1999
Koen Denecker; Steven Van Assche; Peter De Neve; Ignace Lemahieu
color imaging conference | 1997
Peter De Neve; Wilfried Philips; Koen Denecker; Ignace Lemahieu