
Publications


Featured research published by Søren Forchhammer.


IEEE Transactions on Circuits and Systems for Video Technology | 1998

The emerging JBIG2 standard

Paul G. Howard; Faouzi Kossentini; Bo Martins; Søren Forchhammer; William J. Rucklidge

The Joint Bi-Level Image Experts Group (JBIG), an international study group affiliated with ISO/IEC and ITU-T, is in the process of drafting a new standard for lossy and lossless compression of bilevel images. The new standard, informally referred to as JBIG2, will support model-based coding for text and halftones to permit compression ratios up to three times those of existing standards for lossless compression. JBIG2 will also permit lossy preprocessing without specifying how it is to be done. In this case, compression ratios up to eight times those of existing standards may be obtained with imperceptible loss of quality. It is expected that JBIG2 will become an international standard by 2000.
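
A minimal sketch of the symbol-matching idea behind JBIG2's model-based text coding, assuming a hypothetical match_symbol helper and a simple mismatch threshold; an actual JBIG2 codec performs far more elaborate matching, dictionary coding and refinement.

```python
import numpy as np

def match_symbol(bitmap, dictionary, max_mismatch=0.05):
    """Toy illustration of JBIG2-style text-region coding: compare an
    extracted symbol bitmap with a dictionary of previously seen symbols
    and return the index of an acceptable match, so that only the index
    and position need to be coded (lossless modes additionally code a
    refinement of the mismatching pixels)."""
    best_idx, best_err = None, 1.0
    for idx, ref in enumerate(dictionary):
        if ref.shape != bitmap.shape:
            continue
        err = np.mean(ref != bitmap)       # fraction of mismatching pixels
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx if best_err <= max_mismatch else None

# toy dictionary of two 8x8 glyphs and a noisy instance of the first one
rng = np.random.default_rng(3)
glyph_a = rng.random((8, 8)) < 0.5
glyph_b = rng.random((8, 8)) < 0.5
noisy_a = glyph_a.copy()
noisy_a[0, 0] ^= True                      # one flipped pixel
print(match_symbol(noisy_a, [glyph_a, glyph_b]))   # expected: 0
```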


IEEE Transactions on Image Processing | 1998

Tree coding of bilevel images

Bo Martins; Søren Forchhammer

Presently, sequential tree coders are the best general-purpose bilevel image coders and the best coders of halftoned images; the current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional probabilities to an arithmetic coder. The conditional probabilities are estimated from co-occurrence statistics of past pixels, and the statistics are stored in a tree. By organizing the code length calculations properly, a vast number of possible models (trees) reflecting different pixel orderings can be investigated within reasonable time prior to generating the code. A number of general-purpose coders are constructed according to this principle. Rissanen's one-pass algorithm, Context, is presented in two modified versions. The baseline is proven to be a universal coder. The faster version, which is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding.
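
As a rough illustration of the principle described above (conditional probabilities estimated from co-occurrence statistics of past pixels and fed to an arithmetic coder), the sketch below computes the ideal adaptive code length of a bilevel image under a fixed causal template. The template, function name and Krichevsky-Trofimov estimator are illustrative assumptions; this is not the paper's free-tree algorithm.

```python
import numpy as np

def context_code_length(img, template=((0, -1), (-1, -1), (-1, 0), (-1, 1))):
    """Ideal adaptive code length (bits) of a bilevel image under a fixed
    causal-template context model with a Krichevsky-Trofimov estimator.
    `template` lists (row, col) offsets of previously coded pixels."""
    h, w = img.shape
    counts = {}                 # context -> (count of 0s, count of 1s)
    bits = 0.0
    for r in range(h):
        for c in range(w):
            ctx = tuple(
                int(img[r + dr, c + dc]) if 0 <= r + dr < h and 0 <= c + dc < w else 0
                for dr, dc in template
            )
            n0, n1 = counts.get(ctx, (0, 0))
            x = int(img[r, c])
            # KT estimate of P(x | context); ideal code length is -log2 p
            p = ((n1 if x else n0) + 0.5) / (n0 + n1 + 1.0)
            bits += -np.log2(p)
            counts[ctx] = (n0 + (x == 0), n1 + (x == 1))
    return bits

# toy halftone-like test image
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.2).astype(np.uint8)
print(f"{context_code_length(img) / img.size:.3f} bits/pixel")
```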


IEEE Transactions on Information Theory | 1999

Entropy bounds for constrained two-dimensional random fields

Søren Forchhammer; Jørn Justesen

The maximum entropy, and thereby the capacity, of two-dimensional (2-D) fields given by certain constraints on configurations is considered. Upper and lower bounds are derived. A new class of 2-D processes yielding good lower bounds is introduced. Asymptotically, the process achieves capacity for constraints with limited long-range effects. The processes are general and may also be applied to, e.g., data compression of digital images. Results are given for the binary hard square model, which is a 2-D run-length-limited model, and for some other 2-D models with simple constraints.
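
The capacity of such constrained fields can be approximated numerically with a standard transfer-matrix computation over strips of finite width, which is related to, but not the same as, the bounding technique of the paper. Below is a small sketch for the binary hard-square constraint (no two 1s adjacent horizontally or vertically); the function name and strip widths are arbitrary choices.

```python
import numpy as np

def hard_square_capacity_estimate(n):
    """Estimate the capacity (entropy per symbol) of the binary hard-square
    constraint using a transfer matrix over rows of width n.
    (1/n) * log2(largest eigenvalue) approaches the 2-D capacity as n grows."""
    # all rows of width n with no two horizontally adjacent 1s
    rows = [r for r in range(1 << n) if r & (r >> 1) == 0]
    # T[i, j] = 1 if row j may sit on top of row i (no vertical neighbours)
    T = np.array([[1 if (a & b) == 0 else 0 for b in rows] for a in rows], float)
    lam = max(abs(np.linalg.eigvals(T)))
    return np.log2(lam) / n

for n in (4, 8, 12):
    print(n, round(hard_square_capacity_estimate(n), 4))
```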


IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2009

Improved virtual channel noise model for transform domain Wyner-Ziv video coding

Xin Huang; Søren Forchhammer

Distributed Video Coding (DVC) has been proposed as a new video coding paradigm for lossy source coding with side information: the source statistics are exploited at the decoder, reducing the computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame; this is one of the most important factors influencing the coding performance of DVC. Noise models of different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video coding is proposed, which utilizes cross-band correlation to estimate the Laplacian parameters more accurately. Experimental results show that the proposed noise model can improve the Rate-Distortion (RD) performance.
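
A minimal sketch of fitting a zero-mean Laplacian to one residual band, assuming the residual is available offline; a real DVC decoder has to estimate the parameter without access to the original frame, which is exactly where the cross-band correlation of the paper helps. The function name and toy data are assumptions.

```python
import numpy as np

def laplacian_alpha(residual):
    """Maximum-likelihood fit of a zero-mean Laplacian
    f(x) = (alpha / 2) * exp(-alpha * |x|) to one residual band:
    the ML estimate of the scale b = 1/alpha is simply mean(|x|)."""
    b = np.mean(np.abs(residual))
    return 1.0 / max(b, 1e-9)

# toy residual for one DCT band between side information and original frame
rng = np.random.default_rng(1)
residual = rng.laplace(loc=0.0, scale=4.0, size=10_000)
print(f"alpha estimate: {laplacian_alpha(residual):.3f}  (true value 0.25)")
```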


IEEE Workshop on Multimedia Signal Processing (MMSP) | 2008

Improved side information generation for Distributed Video Coding

Xin Huang; Søren Forchhammer

As a new coding paradigm, distributed video coding (DVC) addresses lossy source coding with side information: the source statistics are exploited at the decoder, reducing the computational demands at the encoder. The performance of DVC highly depends on the quality of the side information. With a better side information generation method, fewer bits are requested from the encoder and more reliable decoded frames are obtained. In this paper, a side information generation method is introduced to further improve the rate-distortion (RD) performance of transform domain distributed video coding. The algorithm combines variable-block-size motion estimation on the Y, U and V components with adaptive weighted overlapped block motion compensation (OBMC). The proposal is tested and compared with the results of an executable DVC codec released by the DISCOVER group (DIStributed COding for Video sERvices). RD improvements are observed on the set of test sequences.
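
A deliberately simplified sketch of side-information generation by block-based motion-compensated interpolation between two key frames; it omits the variable block sizes, the separate Y/U/V estimation and the adaptive weighted OBMC refinement proposed in the paper. The function name and parameters are illustrative.

```python
import numpy as np

def side_information(prev, nxt, block=8, search=4):
    """For each block of the next key frame, find the best match in the
    previous key frame (full search, SAD criterion) and predict the
    intermediate Wyner-Ziv frame as the average of the matched block and
    the co-located block."""
    h, w = prev.shape
    si = np.zeros((h, w))
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = nxt[by:by + block, bx:bx + block].astype(float)
            best_sad, best = None, ref
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block].astype(float)
                        sad = np.abs(cand - ref).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, cand
            si[by:by + block, bx:bx + block] = 0.5 * (best + ref)
    return si
```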


Pattern Recognition | 2011

Edge-based compression of cartoon-like images with homogeneous diffusion

Markus Mainberger; Andrés Bruhn; Joachim Weickert; Søren Forchhammer

Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compression standards such as JBIG and state-of-the-art encoders such as PAQ. When decoding, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. For the discrete reconstruction problem, we prove existence and uniqueness and establish a maximum-minimum principle. Furthermore, we describe an efficient multigrid algorithm. The result is a simple codec that is able to encode and decode in real time. We show that for cartoon-like images this codec can outperform the JPEG standard and even its more advanced successor JPEG2000.
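
A minimal sketch of the reconstruction step, assuming the edge pixels and their grey values have already been decoded: Jacobi-style iterations drive every unknown pixel to the average of its four neighbours, i.e. the steady state of homogeneous diffusion and the solution of the Laplace equation. The periodic boundary handling and fixed iteration count are simplifications; the paper uses an efficient multigrid solver instead.

```python
import numpy as np

def homogeneous_diffusion_inpaint(known_mask, known_values, iters=3000):
    """Recover an image from pixel values stored only along edges by
    iterating towards the steady state of homogeneous diffusion."""
    u = np.where(known_mask, known_values, known_values[known_mask].mean())
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(known_mask, known_values, avg)   # keep stored pixels fixed
    return u

# toy example: grey values known only on a square contour
h, w = 64, 64
vals = np.zeros((h, w))
mask = np.zeros((h, w), bool)
mask[16, 16:48] = mask[48, 16:48] = mask[16:48, 16] = mask[16:48, 48] = True
vals[mask] = 255.0
recon = homogeneous_diffusion_inpaint(mask, vals)
```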


IEEE Photonics Technology Letters | 2014

Constellation Shaping for Fiber-Optic Channels With QAM and High Spectral Efficiency

Metodi Plamenov Yankov; Darko Zibar; Knud J. Larsen; Lars Porskjær Christensen; Søren Forchhammer

In this letter, the fiber-optic communication channel with a quadrature amplitude modulation (QAM) input constellation is treated. Using probabilistic shaping, we show that high-order QAM constellations can achieve and slightly exceed the lower bound on the channel capacity set by ring constellations. We then propose a mapping function for turbo-coded bit-interleaved coded modulation based on optimization of the mutual information between the channel input and output. Using this mapping, a spectral efficiency as high as 6.5 bits/s/Hz/polarization is achieved on a simulated single-channel long-haul fiber-optic link, excluding the pilot overhead used for synchronization, and taking into account frequency and phase mismatch, laser phase noise and analog-to-digital conversion quantization impairments. The simulations suggest that major improvements can be expected in the achievable rates of optical networks with high-order QAM.
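
A small sketch of the probabilistic-shaping ingredient: a Maxwell-Boltzmann distribution over a square QAM constellation, normalised to unit average power, together with its entropy. The shaping family, the parameter nu and the function name are assumptions for illustration; the paper optimises the input distribution and the mapping for the fibre-optic channel itself.

```python
import numpy as np
from itertools import product

def shaped_qam(m=64, nu=0.05):
    """Maxwell-Boltzmann shaping of a square m-QAM constellation:
    P(x) ~ exp(-nu * |x|^2), then the points are scaled to unit average
    power.  Returns the points, their probabilities and the source entropy."""
    side = int(np.sqrt(m))
    levels = np.arange(-(side - 1), side, 2)            # ..., -3, -1, 1, 3, ...
    pts = np.array([complex(i, q) for i, q in product(levels, levels)])
    p = np.exp(-nu * np.abs(pts) ** 2)
    p /= p.sum()
    pts /= np.sqrt(np.sum(p * np.abs(pts) ** 2))        # unit average power
    H = -np.sum(p * np.log2(p))
    return pts, p, H

_, _, H = shaped_qam(64, 0.05)
print(f"entropy of shaped 64-QAM: {H:.2f} bits/symbol")
```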


IEEE International Conference on Multimedia and Expo (ICME) | 2011

Efficient depth map compression exploiting segmented color data

Simone Milani; Pietro Zanuttigh; Marco Zamarin; Søren Forchhammer

3D video representations usually associate with each view a depth map carrying the corresponding geometric information. Many compression schemes have been proposed for multi-view video and for depth data, but exploiting the correlation between the two representations to enhance compression performance is still an open research issue. This paper presents a novel compression scheme that exploits a segmentation of the color data to predict the shape of the different surfaces in the depth map. Each segment is then approximated with a parameterized plane. If the approximation is sufficiently accurate for the target bit rate, the surface coefficients are compressed and transmitted; otherwise, the region is coded using a standard H.264/AVC Intra coder. Experimental results show that the proposed scheme outperforms the standard H.264/AVC Intra codec on depth data and can be effectively included in multi-view plus depth compression schemes.
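
A minimal sketch of the per-segment plane approximation, assuming the segment's pixel coordinates and depth values are given: a least-squares fit of z = a*x + b*y + c and a maximum-error check that a codec could compare against a rate-dependent threshold before falling back to intra coding. Names and the toy data are illustrative.

```python
import numpy as np

def fit_plane(rows, cols, depth):
    """Least-squares plane z = a*x + b*y + c over one colour segment;
    returns the coefficients and the maximum absolute approximation error."""
    A = np.column_stack([cols, rows, np.ones_like(cols, dtype=float)])
    coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
    err = np.max(np.abs(A @ coef - depth))
    return coef, err

# toy segment: a tilted plane with a little noise
rng = np.random.default_rng(0)
rows, cols = np.mgrid[0:20, 0:30]
rows, cols = rows.ravel().astype(float), cols.ravel().astype(float)
depth = 0.3 * cols - 0.1 * rows + 50 + rng.normal(0, 0.2, rows.size)
coef, err = fit_plane(rows, cols, depth)
print(coef.round(2), round(err, 2))
```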


IEEE Transactions on Image Processing | 2004

Optimal context quantization in lossless compression of image data sequences

Søren Forchhammer; Xiaolin Wu; Jakob Dahl Andersen

In image compression, context-based entropy coding is commonly used. A critical issue for the performance of context-based image coding is how to resolve the conflict between the desire for large templates, which model high-order statistical dependencies among the pixels, and the problem of context dilution due to insufficient sample statistics for a given input image. We consider the problem of finding the optimal quantizer Q that quantizes the K-dimensional causal context C_t = (X_{t-t1}, X_{t-t2}, ..., X_{t-tK}) of a source symbol X_t into one of a set of conditioning states. The optimality of context quantization is defined as the minimum static or minimum adaptive code length given a data set. For a binary source alphabet, an optimal context quantizer can be computed exactly by a fast dynamic programming algorithm; faster approximate solutions are also proposed. In the case of an m-ary source alphabet, a random variable can be decomposed into a sequence of binary decisions, each of which is coded using optimal context quantization designed for the corresponding binary random variable. This optimized coding scheme is applied to digital maps and α-plane sequences. The proposed optimal context quantization technique can also be used to establish a lower bound on the achievable code length, and hence is a useful tool for evaluating the performance of existing heuristic context quantizers.
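
For the binary case, here is a small dynamic-programming sketch that relies on contexts being ordered by conditional probability and then partitioned into contiguous intervals. It minimises the static code length only and runs in O(m*K^2); the paper's algorithm is faster and also handles the adaptive code length criterion. Function name and toy counts are assumptions.

```python
import numpy as np

def optimal_context_quantizer(counts, m):
    """Quantize binary-source contexts into m conditioning states.
    `counts` holds one (n0, n1) pair per raw context.  Contexts are sorted
    by P(1 | context); dynamic programming then finds the partition into m
    contiguous groups minimising the static code length, i.e. the sum over
    groups of (group count) * (empirical entropy of the merged group)."""
    order = sorted(range(len(counts)),
                   key=lambda i: counts[i][1] / max(sum(counts[i]), 1))
    c = [counts[i] for i in order]
    K = len(c)
    p0 = np.concatenate([[0], np.cumsum([x[0] for x in c])])
    p1 = np.concatenate([[0], np.cumsum([x[1] for x in c])])

    def cost(i, j):            # static code length of contexts i..j-1 merged
        n0, n1 = p0[j] - p0[i], p1[j] - p1[i]
        n, bits = n0 + n1, 0.0
        if n0: bits -= n0 * np.log2(n0 / n)
        if n1: bits -= n1 * np.log2(n1 / n)
        return bits

    INF = float("inf")
    # dp[j][g]: best cost of assigning the first j contexts to g groups
    dp = [[INF] * (m + 1) for _ in range(K + 1)]
    dp[0][0] = 0.0
    for g in range(1, m + 1):
        for j in range(1, K + 1):
            for i in range(g - 1, j):
                if dp[i][g - 1] + cost(i, j) < dp[j][g]:
                    dp[j][g] = dp[i][g - 1] + cost(i, j)
    return dp[K][m]            # minimum total code length in bits

# toy raw contexts: (n0, n1) counts gathered from a training image
counts = [(50, 2), (40, 10), (30, 30), (10, 45), (3, 60), (1, 80)]
print(round(optimal_context_quantizer(counts, m=3), 1), "bits with 3 states")
```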


IEEE Transactions on Image Processing | 2002

Content layer progressive coding of digital maps

Søren Forchhammer; Ole Riis Jensen

A new lossless context-based method is presented for content-progressive coding of limited-bits/pixel images, such as maps, company logos, etc., common on the World Wide Web. Progressive encoding is achieved by encoding the image in content layers based on color level or other predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bilevel coding, context collapsing methods for multilevel images, and arithmetic coding. Relative pixel patterns are used to collapse contexts, and expressions for calculating the resulting number of contexts are given. The new method outperforms existing schemes for coding digital maps and in addition provides progressive coding. Compared with the state-of-the-art PWC coder, the compressed size is reduced to 50-70% on our layered map test images.
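
A tiny sketch of the layer decomposition, assuming an indexed-colour map and a chosen coding order: each colour becomes a binary layer, and the mask of already coded pixels is carried along so that a layer coder can treat those positions as free, which is where the progressive scheme gains. Names are illustrative; the template contexts, context collapsing and arithmetic coding are not shown.

```python
import numpy as np

def colour_layers(indexed_img, palette_order):
    """Split a limited-palette map into binary content layers, one per
    colour, in a chosen coding order, returning for each layer the mask of
    pixels already covered by earlier layers."""
    covered = np.zeros(indexed_img.shape, bool)
    layers = []
    for colour in palette_order:
        layer = (indexed_img == colour)
        layers.append((layer, covered.copy()))   # (layer to code, coded mask)
        covered |= layer
    return layers

# toy 4-colour map coded colour 0 first, then 1, 2, 3
rng = np.random.default_rng(2)
img = rng.integers(0, 4, size=(32, 32))
layers = colour_layers(img, palette_order=[0, 1, 2, 3])
```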

Collaboration


Dive into Søren Forchhammer's collaborations.

Top Co-Authors

Claire Mantel, University of Copenhagen
Jari Korhonen, Technical University of Denmark
Huynh Van Luong, University of Erlangen-Nuremberg
Metodi Plamenov Yankov, Technical University of Denmark
Nino Burini, University of Copenhagen
Knud J. Larsen, University of Copenhagen
Ehsan Nadernejad, Technical University of Denmark
Xin Huang, Technical University of Denmark
Idelfonso Tafur Monroy, Technical University of Denmark
Jørn Justesen, Technical University of Denmark