Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marcel Ambroze is active.

Publication


Featured research published by Marcel Ambroze.


IEEE Transactions on Information Theory | 2012

Addendum to “An Efficient Algorithm to Find All Small-Size Stopping Sets of Low-Density Parity-Check Matrices”

Eirik Rosnes; Øyvind Ytrehus; Marcel Ambroze; Martin Tomlinson

In an earlier transactions paper, Rosnes and Ytrehus presented an efficient algorithm for determining all stopping sets of low-density parity-check (LDPC) codes, up to a specified weight, and also gave results for a number of well-known codes including the family of IEEE 802.16e LDPC codes, commonly referred to as the WiMax codes. It is the purpose of this short paper to review the algorithm for determining the initial part of the stopping set weight spectrum (which includes the codeword weight spectrum), and to provide some improvements to the algorithm. As a consequence, complete stopping set weight spectra up to weight 32 (for selected IEEE 802.16e LDPC codes) can be provided, while in previous work only stopping set weights up to 28 are reported. In the published standard for the IEEE 802.16e codes there are two methods of construction presented, depending upon the code rate and the code length. We compare the stopping sets of the resulting codes and provide complete stopping set weight spectra (up to five terms) for all IEEE 802.16e LDPC codes using both construction methods.
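
As an illustration of what the algorithm searches for, the sketch below enumerates, by brute force, all stopping sets of a small parity-check matrix up to a given size; a stopping set is a set of columns such that no row of H has exactly one nonzero entry inside it. The example matrix is hypothetical, and the exhaustive search merely stands in for the paper's far more efficient enumeration algorithm.

    from itertools import combinations
    import numpy as np

    def stopping_sets_up_to(H, max_weight):
        # A set S of columns is a stopping set if no row of H has exactly one
        # nonzero entry inside S. Exhaustive search; practical only for tiny codes.
        n = H.shape[1]
        found = []
        for w in range(1, max_weight + 1):
            for S in combinations(range(n), w):
                if not np.any(H[:, list(S)].sum(axis=1) == 1):
                    found.append(S)
        return found

    # Hypothetical toy parity-check matrix, for illustration only
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    print(stopping_sets_up_to(H, 3))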


International Symposium on VIPromCom Video/Image Processing and Multimedia Communications | 2002

Combating geometrical attacks in a DWT based blind video watermarking system

C.V. Serdean; Marcel Ambroze; Martin Tomlinson; Graham Wade

This paper describes a high capacity blind video watermarking system invariant to geometrical attacks such as shift, rotation, scaling and cropping. A spatial domain reference watermark is used to obtain invariance to geometric attacks by employing image registration techniques to determine and invert the attacks. A second, high capacity watermark, which carries the data payload, is embedded in the wavelet domain according to a human visual system (HVS) model. This is protected by a state-of-the-art error correction code (turbo code). The proposed system is invariant to scaling up to 180%, rotation up to 70°, and arbitrary aspect ratio changes up to 200% on both axes. Furthermore, the system is virtually invariant to any shifting, cropping, or combined shifting and cropping.
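
A minimal sketch of the payload-embedding step, assuming a one-level Haar DWT and plain additive embedding into the horizontal-detail subband (PyWavelets). The HVS weighting, the turbo-coded payload and the spatial reference watermark of the actual system are not modelled here.

    import numpy as np
    import pywt  # PyWavelets

    def embed_bits_dwt(frame, bits, strength=4.0):
        # Additive embedding of a bit payload into the cH subband of a
        # one-level Haar DWT, then inverse transform back to the pixel domain.
        cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), 'haar')
        flat = cH.flatten()
        for i, b in enumerate(bits[:flat.size]):
            flat[i] += strength if b else -strength
        return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), 'haar')

    frame = np.random.randint(0, 256, (64, 64))
    marked = embed_bits_dwt(frame, [1, 0, 1, 1, 0])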


IEEE Transactions on Information Theory | 2014

On the Minimum/Stopping Distance of Array Low-Density Parity-Check Codes

Eirik Rosnes; Marcel Ambroze; Martin Tomlinson

In this paper, we study the minimum/stopping distance of array low-density parity-check (LDPC) codes. An array LDPC code is a quasi-cyclic LDPC code specified by two integers q and m, where q is an odd prime and m ≤ q. In the literature, the minimum/stopping distance of these codes (denoted by d(q, m) and h(q, m), respectively) has been thoroughly studied for m ≤ 5. Both exact results, for small values of q and m, and general (i.e., independent of q) bounds have been established. For m = 6, the best known minimum distance upper bound, derived by Mittelholzer, is d(q, 6) ≤ 32. In this paper, we derive an improved upper bound of d(q, 6) ≤ 20 and a new upper bound d(q, 7) ≤ 24 by using the concept of a template support matrix of a codeword/stopping set. The bounds are tight with high probability in the sense that we have not been able to find codewords of strictly lower weight for several values of q using a minimum distance probabilistic algorithm. Finally, we provide new specific minimum/stopping distance results for m ≤ 7 and low-to-moderate values of q ≤ 79.
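
For reference, the parity-check matrix of an array LDPC code is the m × q array of q × q circulant permutation matrices whose (i, j) block is P^(ij), with P the single cyclic shift. A small construction sketch in NumPy:

    import numpy as np

    def array_ldpc_H(q, m):
        # m x q array of q x q circulant permutation matrices; block (i, j) is P^(i*j).
        # q is an odd prime and m <= q.
        P = np.roll(np.eye(q, dtype=int), 1, axis=1)
        return np.block([[np.linalg.matrix_power(P, (i * j) % q) for j in range(q)]
                         for i in range(m)])

    H = array_ldpc_H(q=5, m=3)   # 15 x 25 parity-check matrix
    print(H.shape)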


IET Communications | 2007

Analysis of the distribution of the number of erasures correctable by a binary linear code and the link to low-weight codewords

Martin Tomlinson; Cen Jung Tjhai; Jing Cai; Marcel Ambroze

The number and weight of low-weight codewords of a binary linear code determine the erasure channel performance. Analysis is given of the probability density function of the number of erasures correctable by the code in terms of the weight enumerator polynomial. For finite-length codes, zero erasure decoder error rate is impossible, even with maximum-distance-separable (MDS) codes and maximum-likelihood decoding. However, for codes that have binomial weight spectra, for example BCH, Goppa and double-circulant codes, the erasure correction performance is close to that of MDS codes. One surprising result is that, for many (n, k) codes, the average number of correctable erasures is almost equal to n-k, which is significantly larger than d_min - 1. For the class of iteratively decodable codes (LDPC and turbo codes), the erasure performance is poor in comparison to algebraic codes designed for maximum d_min. It is also shown that the turbo codes that have optimised d_min have significantly better performance than LDPC codes. A probabilistic method, which has considerably smaller search space than that of the generator matrix-based methods, is presented to determine the d_min of a linear code using random erasure patterns. Using this approach, it is shown that there are (168, 84, 24) and (216, 108, 24) quadratic double-circulant codes.
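
The quantity studied above can also be estimated empirically: an erasure pattern is correctable under maximum-likelihood decoding exactly when the erased columns of the parity-check matrix are linearly independent over GF(2). A Monte Carlo sketch of the average number of correctable erasures (hypothetical helper names, not the paper's analytic method):

    import numpy as np

    def gf2_rank(A):
        # Rank of a binary matrix over GF(2) by Gaussian elimination.
        A = A.copy() % 2
        rank = 0
        rows, cols = A.shape
        for c in range(cols):
            pivot = next((r for r in range(rank, rows) if A[r, c]), None)
            if pivot is None:
                continue
            A[[rank, pivot]] = A[[pivot, rank]]
            for r in range(rows):
                if r != rank and A[r, c]:
                    A[r] ^= A[rank]
            rank += 1
        return rank

    def correctable_erasures(H, trials=1000, rng=np.random.default_rng(0)):
        # Erase random positions one at a time and stop as soon as the erased
        # columns of H become linearly dependent over GF(2); average the count.
        n = H.shape[1]
        total = 0
        for _ in range(trials):
            order = rng.permutation(n)
            for e in range(1, n + 1):
                if gf2_rank(H[:, order[:e]]) < e:
                    total += e - 1
                    break
        return total / trials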


International Zurich Seminar on Digital Communications | 2004

Approaching the ML performance with iterative decoding

Evangelos Papagiannis; Marcel Ambroze; Martin Tomlinson

The paper presents a method to significantly improve the convergence of iteratively decoded concatenated schemes and reduce the gap between iterative and maximum likelihood (ML) decoding. It is shown that many of the error blocks produced by the iterative decoder can be corrected by modifying a single critical coordinate (channel value) of the received vector and repeating the decoding. This is the basis of the RVCM (received vector coordinate modification) algorithm, whose description, performance and drawbacks are discussed. The paper also presents a practically obtained lower bound on ML performance based on the Euclidean distances of the transmitted and the iteratively decoded codewords from the received vector. At low SNR this bound assumes an unrealistically perfect code, while at high SNR the approximations approach the real characteristics of the code and the RVCM iterative decoder is shown to achieve the ultimate ML performance.
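
A sketch of the RVCM loop under stated assumptions: the iterative decoder is treated as a black box returning a codeword and a success flag, and the "critical" coordinate is taken, for illustration, from the least reliable positions; the paper's actual selection rule is not reproduced here.

    import numpy as np

    def rvcm_decode(r, iterative_decoder, max_tries=10, boost=5.0):
        # iterative_decoder(r) is assumed to return (codeword, success_flag).
        cw, ok = iterative_decoder(r)
        if ok:
            return cw
        # Try candidate coordinates in order of increasing reliability.
        for idx in np.argsort(np.abs(r))[:max_tries]:
            for sign in (+1.0, -1.0):
                r_mod = r.copy()
                r_mod[idx] = sign * boost   # force the coordinate hard to one sign
                cw, ok = iterative_decoder(r_mod)
                if ok:
                    return cw
        return cw  # best effort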


International Conference on Conceptual Structures | 2006

Some Results on the Weight Distributions of the Binary Double-Circulant Codes Based on Primes

Cen Jung Tjhai; Martin Tomlinson; R. Horan; Mohammed Zaki Ahmed; Marcel Ambroze

This paper presents a more efficient algorithm to count codewords of given weights in self-dual double-circulant and formally self-dual quadratic double-circulant codes over GF(2). A method of deducing the modular congruence of the weight distributions of the binary quadratic double-circulant codes is proposed. This method is based on that proposed by Mykkeltveit, Lam and McEliece (JPL Tech. Rep., 1972), which was applied to the extended quadratic-residue codes. A useful application of this modular congruence method is to provide independent verification of the weight distributions of the extended quadratic-residue and quadratic double-circulant codes. Using this method in conjunction with the proposed efficient codeword counting algorithm, we are able i) to give the previously unpublished weight distributions of the [76, 38, 12] and [124, 62, 20] binary quadratic double-circulant codes; ii) to provide corrections to the published results on the weight distributions of the binary extended quadratic-residue code of prime 151, and the number of codewords of weights 30 and 32 of the binary extended quadratic-residue code of prime 137; and iii) to prove that the [168, 84, 24] extended quadratic-residue and quadratic double-circulant codes are inequivalent.
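
For small parameters the quantity being counted can be obtained directly: build the [2k, k] double-circulant generator matrix [I | A] and enumerate all 2^k codewords. The sketch below (with a hypothetical first row for the circulant) is exponential in k and only stands in for the paper's efficient counting algorithm.

    import numpy as np
    from collections import Counter
    from itertools import product

    def double_circulant_G(first_row):
        # Generator matrix [I | A] of a pure double-circulant code,
        # where A is the circulant whose first row is given.
        k = len(first_row)
        A = np.array([np.roll(first_row, i) for i in range(k)], dtype=int)
        return np.hstack([np.eye(k, dtype=int), A])

    def weight_distribution(G):
        # Brute-force weight enumerator: count codewords of every Hamming weight.
        k = G.shape[0]
        counts = Counter()
        for msg in product((0, 1), repeat=k):
            cw = np.dot(msg, G) % 2
            counts[int(cw.sum())] += 1
        return dict(sorted(counts.items()))

    G = double_circulant_G([1, 1, 0, 1, 0, 0, 0])   # hypothetical first row, k = 7
    print(weight_distribution(G))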


IET Communications | 2014

Best binary equivocation code construction for syndrome coding

Ke Zhang; Martin Tomlinson; Mohammed Zaki Ahmed; Marcel Ambroze; Miguel R. D. Rodrigues

Traditionally, codes are designed for error correction, to combat noisy transmission channels and achieve reliable communication. These codes can be used in syndrome coding, but it is shown in this study that the best performance is achieved with codes specifically designed for syndrome coding. From the viewpoint of communication security, the best codes are those that have the highest value of an information secrecy metric, the equivocation rate, for a given code length and code rate, and that are well-packed codes. A code design technique is described that produces the best binary linear codes for the syndrome coding scheme. An efficient recursive method to determine the equivocation rate for the binary symmetric channel and any binary linear code is also presented. A large online database of best equivocation codes for the syndrome coding scheme has been produced using this code design technique, with some examples presented in the study. The results show that the best equivocation codes provide a higher level of secrecy for the syndrome coding scheme than almost all best known error correcting codes. Interestingly, some outstanding best known error correcting codes turn out also to be best equivocation codes.
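
A minimal sketch of syndrome coding itself, assuming a systematic parity-check matrix H = [A | I]: the (n-k)-bit secret is carried as the syndrome, and the transmitted word is a random member of the corresponding coset. The code-design procedure and the recursive equivocation computation from the paper are not reproduced here.

    import numpy as np

    def syndrome_encode(H, secret, rng=np.random.default_rng()):
        # H = [A | I] is r x n with r = n - k. The secret (length r) is the syndrome
        # of the transmitted word; adding a random codeword picks a random coset member.
        r, n = H.shape
        k = n - r
        A = H[:, :k]
        x0 = np.concatenate([np.zeros(k, dtype=int), secret])   # H @ x0 = secret
        msg = rng.integers(0, 2, k)
        codeword = np.concatenate([msg, (msg @ A.T) % 2])        # H @ codeword = 0
        return (x0 + codeword) % 2

    def syndrome_decode(H, x):
        return (H @ x) % 2    # legitimate receiver recovers the secret

    # Hypothetical (7, 4) example with H = [A | I]
    A = np.array([[1, 1, 0, 1],
                  [1, 0, 1, 1],
                  [0, 1, 1, 1]])
    H = np.hstack([A, np.eye(3, dtype=int)])
    secret = np.array([1, 0, 1])
    assert np.array_equal(syndrome_decode(H, syndrome_encode(H, secret)), secret)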


Wireless Communications, Networking and Information Security | 2010

Decoding low-density parity-check codes with error-floor free over the AWGN channel

Li Yang; Martin Tomlinson; Marcel Ambroze

We propose a new soft-decision decoding arrangement for LDPC codes over the AWGN channel that is free of error floors. The iterative belief propagation decoder is used as the initial decoder, with its iterative output conditioned prior to OSD decoding. Improved results are obtained, breaking the error floors caused by stopping sets. The basis of the conditioning of the iterative output is explained with supporting analysis. Practical examples of performance are presented for some well-known LDPC codes, and it is shown that the proposed decoder with OSD-i not only produces better results than a stand-alone OSD-(i + 1) decoder, with a considerable reduction in decoder complexity, but also guarantees error-floor-free performance.
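
A sketch of the overall arrangement under stated assumptions: bp_decode, osd_decode and condition are hypothetical callables, and the paper's specific conditioning of the iterative output is left as a placeholder hook rather than reproduced.

    def hybrid_bp_osd_decode(llr_in, bp_decode, osd_decode, condition=None):
        # bp_decode(llr) -> (llr_out, codeword, success); osd_decode(llr) -> codeword.
        llr_out, cw, ok = bp_decode(llr_in)
        if ok:
            return cw
        if condition is not None:
            llr_out = condition(llr_out, llr_in)   # condition the iterative output
        return osd_decode(llr_out)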


IET Communications | 2007

Extending the Dorsch decoder towards achieving maximum-likelihood decoding for linear codes

Martin Tomlinson; Cen Jung Tjhai; Marcel Ambroze

It is shown that the relatively unknown Dorsch decoder may be extended to produce a decoder that is capable of maximum-likelihood decoding. The extension involves a technique for any linear (n, k) code that ensures that the n−k less reliable soft decisions of each received vector may be treated as erasures in determining candidate codewords. These codewords are derived from low information weight codewords and it is shown that an upper bound of this information weight may be calculated from each received vector in order to guarantee that the decoder will achieve maximum-likelihood decoding. Using the cross-correlation function, it is shown that the most likely codeword may be derived from a partial correlation function of these low information weight codewords, which leads to an efficient fast decoder. For a practical implementation, this decoder may be further simplified into a concatenation of a hard-decision decoder and a partial correlation decoder with insignificant performance degradation. Results are presented for some powerful, known codes, including a GF(4) non-binary BCH code. It is shown that maximum-likelihood decoding is realised for a high percentage of decoded codewords and that performance close to the sphere packing bound is attainable for codeword lengths up to 1000 bits.
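
A sketch of the underlying re-encoding step in the spirit of an order-0 Dorsch / ordered-statistics decoder: hard-decide the k most reliable coordinates and re-encode them through a permuted, reduced generator matrix. The paper's extensions (low information weight codeword re-encoding, partial correlations, the maximum-likelihood guarantee) are not reproduced here.

    import numpy as np

    def gf2_systematize(G, order):
        # Gaussian-eliminate the column-permuted generator matrix over GF(2) so that
        # the first k columns (the most reliable positions) become an identity.
        # For simplicity this sketch assumes those k columns are linearly independent.
        k, _ = G.shape
        Gp = G[:, order].copy() % 2
        for col in range(k):
            pivot = next(r for r in range(col, k) if Gp[r, col])   # raises if dependent
            Gp[[col, pivot]] = Gp[[pivot, col]]
            for r in range(k):
                if r != col and Gp[r, col]:
                    Gp[r] ^= Gp[col]
        return Gp

    def dorsch_osd0(G, r):
        # Order-0 re-encoding: hard-decide the k most reliable coordinates of the
        # received vector r (BPSK mapping 0 -> +1, 1 -> -1) and re-encode them.
        k, n = G.shape
        order = np.argsort(-np.abs(r))              # most reliable first
        Gs = gf2_systematize(G, order)
        info_bits = (r[order[:k]] < 0).astype(int)
        cw = np.empty(n, dtype=int)
        cw[order] = info_bits @ Gs % 2
        return cw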


International Conference on Multimedia and Expo | 2002

Adding robustness to geometrical attacks to a wavelet based, blind video watermarking system

C.V. Serdean; Marcel Ambroze; Martin Tomlinson; J.G. Wade

This paper describes a high capacity blind video watermarking system invariant to geometrical attacks such as shift, rotation, scaling and cropping. A spatial domain reference watermark is used to obtain invariance to geometric attacks by employing image registration techniques to determine and invert the attacks. A second, high capacity, watermark, which carries the data payload, is embedded in the wavelet domain according to a human visual system (HVS) model. This is protected by a state-of-the-art error correction code (turbo code). For a false detection probability of 10^-8, the proposed system is invariant to scaling up to 180%, rotation up to 70°, and arbitrary aspect ratio changes up to 200% on both axes. Furthermore, the system is virtually invariant to any shifting, cropping, or combined shifting and cropping attack, and it is robust to MPEG-2 compression as low as 2-3 Mbps.

Collaboration


Dive into Marcel Ambroze's collaborations.

Top Co-Authors

Ayad Al-Adhami

Plymouth State University

Xin Xu

Plymouth State University
