Joël Jung
Philips
Publications
Featured research published by Joël Jung.
IEEE Transactions on Circuits and Systems for Video Technology | 2008
Guillaume Laroche; Joël Jung; Béatrice Pesquet-Popescu
The H.264/MPEG4-AVC video coding standard achieves higher coding efficiency than its predecessors. The significant bitrate reduction is mainly obtained through efficient motion compensation tools, such as variable block sizes, multiple reference frames, 1/4-pel motion accuracy, and powerful prediction modes (e.g., SKIP and DIRECT). These tools have increased the proportion of motion information in the total bitstream. To achieve the performance required by the future ITU-T challenge, namely a codec with 50% bitrate reduction compared to the current H.264, reducing this motion information cost is essential. This paper proposes a competing framework for better motion vector coding and an improved SKIP mode. The predictors for the SKIP mode and the motion vector predictors are optimally selected by a rate-distortion criterion. These methods take advantage of the spatial and temporal redundancies in the motion vector fields, where the simple spatial median usually fails. An adaptation of the temporal predictors according to the temporal distances between motion vector fields is also described for the multiple reference frames and B-slices options. These two combined schemes lead to a systematic bitrate saving in Baseline and High profiles compared to an H.264/MPEG4-AVC standard codec, reaching up to 45%.
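As an informal illustration of the competition principle described above, the sketch below selects one motion vector predictor among competing candidates by minimizing a simplified rate cost. The candidate set and the bit-cost model are assumptions made for illustration, not the paper's actual implementation, which uses the full Lagrangian rate-distortion cost of the encoder.

```python
# Illustrative sketch of competition-based motion vector prediction (not the paper's
# implementation): pick the predictor whose residual, plus the index signalling it,
# costs the fewest bits. Distortion is identical whichever predictor is chosen, so the
# criterion reduces to rate here; the real codec uses a full rate-distortion cost.

def mv_bits(dx, dy):
    """Crude proxy for the bits needed to code a motion vector difference."""
    return abs(dx).bit_length() + abs(dy).bit_length() + 2

def select_predictor(mv, candidates):
    """Return the index of the competing predictor minimizing the coding cost of mv."""
    index_bits = 1 if len(candidates) > 1 else 0      # flag signalling the chosen predictor
    costs = [mv_bits(mv[0] - px, mv[1] - py) + index_bits for (px, py) in candidates]
    return min(range(len(costs)), key=costs.__getitem__)

# Example: spatial median vs. a temporally co-located predictor (quarter-pel units)
spatial_median = (10, -2)
temporal = (12, -3)
print(select_predictor((12, -3), [spatial_median, temporal]))   # -> 1 (temporal wins)
```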
IEEE Transactions on Circuits and Systems for Video Technology | 2014
Elie Gabriel Mora; Joël Jung; Marco Cagnazzo; Béatrice Pesquet-Popescu
The 3D video extension of High Efficiency Video Coding (3D-HEVC) exploits texture-depth redundancies in 3D videos using intercomponent coding tools. It also inherits the same quadtree coding structure as HEVC for both components. The current software implementation of 3D-HEVC includes encoder shortcuts that speed up the quadtree construction process, but those are always accompanied by coding losses. Furthermore, since the texture and its associated depth represent the same scene, at the same time instant and viewpoint, their quadtrees are closely linked. In this paper, an intercomponent tool is proposed in which this link is exploited to save both runtime and bits through a joint coding of the quadtrees. If the depth is coded before the texture, the texture quadtree is initialized from the coded depth quadtree. Otherwise, the depth quadtree is limited to the coded texture quadtree. A 31% encoder runtime saving, a -0.3% gain for coded and synthesized views, and a -1.8% gain for coded views are reported for the second method.
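A minimal sketch of the quadtree-linking idea, using a toy recursive quadtree of split flags; the class and function names are assumptions for illustration and do not reflect the 3D-HEVC reference software (HTM).

```python
# Toy quadtree of split flags: True means the block is split into four sub-blocks.

class QuadTree:
    def __init__(self, split=False, children=None):
        self.split = split
        self.children = children or []   # four QuadTree nodes when split is True

def limit_to(tree, reference):
    """Depth-after-texture case: the candidate tree may not split deeper than the reference."""
    if not (tree.split and reference.split):
        return QuadTree(split=False)
    children = [limit_to(c, r) for c, r in zip(tree.children, reference.children)]
    return QuadTree(split=True, children=children)

def initialize_from(reference):
    """Texture-after-depth case: start the texture RD search from the reference (depth)
    quadtree, so the encoder only evaluates additional splits below it."""
    return QuadTree(split=reference.split,
                    children=[initialize_from(c) for c in reference.children])
```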
IEEE Transactions on Circuits and Systems for Video Technology | 2011
Jean-Marc Thiesse; Joël Jung; Marc Antonini
New standardization activities have recently been launched by the JCT-VC expert group in order to challenge the current video compression standard, H.264/AVC. Several improvements of this standard, previously integrated in the JM key technical area (KTA) software, are already known and gathered in the High Efficiency Video Coding (HEVC) test model. In particular, competition-based motion vector prediction has proved its efficiency. However, the targeted 50% bitrate saving for equivalent quality has not yet been achieved. In this context, this paper proposes to reduce the signaling information resulting from this motion vector competition by using data hiding techniques. As data hiding and video compression traditionally have contradictory goals, an advanced study of data hiding schemes is first performed. Then, an original way of using data hiding for video compression is proposed. The main idea of this paper is to hide the competition index in appropriately selected chroma and luma transform coefficients. To minimize the prediction errors, the transform coefficients are modified via a rate-distortion optimization. The proposed scheme is evaluated on several low- and high-resolution sequences. Objective improvements (up to 2.40% bitrate saving) and a subjective assessment of the chroma loss are reported.
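To make the data-hiding idea concrete, here is a toy sketch that embeds one bit of the competition index in the parity of a block's quantized transform coefficients; the coefficient-selection cost is a crude stand-in for the rate-distortion optimization actually performed in the paper's scheme.

```python
# Toy parity-based hiding of one index bit in quantized transform coefficients.
# The cost model below is an assumption replacing the paper's RD optimization.

def hide_index_bit(coeffs, bit):
    """Return a copy of coeffs whose absolute-value sum has parity 'bit' (0 or 1)."""
    out = list(coeffs)
    if sum(abs(c) for c in out) % 2 == bit:
        return out                              # parity already carries the right bit
    # Otherwise flip the parity by changing one coefficient by +/-1, at minimum cost.
    best, best_cost = None, float("inf")
    for i, c in enumerate(out):
        for delta in (+1, -1):
            cost = 2 if c == 0 else 1           # creating a new non-zero coefficient costs more
            if cost < best_cost:
                best, best_cost = (i, delta), cost
    out[best[0]] += best[1]
    return out

def read_index_bit(coeffs):
    """Decoder side: the hidden bit is the parity of the absolute coefficient sum."""
    return sum(abs(c) for c in coeffs) % 2

coeffs = [7, -3, 0, 1, 0, 0]
hidden = hide_index_bit(coeffs, bit=0)          # abs sum is 11, so parity must be flipped
print(hidden, read_index_bit(hidden))           # -> [8, -3, 0, 1, 0, 0] 0
```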
multimedia signal processing | 2010
Jean-Marc Thiesse; Joël Jung; Marc Antonini
2010 marks the launch of new compression activities intended to challenge the current video compression standard, H.264/AVC. Several improvements of this standard are already known, such as competition-based motion vector prediction. However, the targeted 50% bitrate saving for equivalent quality has not yet been achieved. In this context, this paper proposes to reduce the signaling information resulting from this vector competition by using data hiding techniques. As data hiding and video compression traditionally have contradictory goals, a study of data hiding is first performed. Then, an efficient way of using data hiding for video compression is proposed. The main idea is to hide the indices in appropriately selected chroma and luma transform coefficients. To minimize the prediction errors, the modification is performed via a rate-distortion optimization. Objective improvements (up to 2.3% bitrate saving) and a subjective assessment of the chroma loss are reported and analyzed for several sequences.
international conference on image processing | 2010
Jean-Marc Thiesse; Joël Jung; Marc Antonini
New activities have recently been launched in order to challenge the H.264/AVC standard. Several improvements of this standard are already known; however, the targeted 50% bitrate saving for equivalent quality has not yet been achieved.
Signal Processing-image Communication | 2015
Antoine Dricot; Joël Jung; Marco Cagnazzo; Béatrice Pesquet; Frederic Dufaux; Péter Tamás Kovács; Vamsi Kiran Adhikarla
Super Multi-View (SMV) video content is composed of tens or hundreds of views that provide a light-field representation of a scene. This representation allows glasses-free visualization and eliminates many causes of discomfort in currently available 3D video technologies. Efficient video compression of SMV content is a key factor for enabling future 3D video services. This paper first compares several coding configurations for SMV content; several inter-view prediction structures are also tested and compared. The experiments mainly suggest that large differences in coding efficiency can be observed from one configuration to another. Several ratios of coded to synthesized views are compared, both objectively and subjectively. It is reported that view synthesis significantly affects the coding scheme. The number of views to skip depends strongly on the sequence and on the quality of the associated depth maps. The reported ranges of bitrates required to obtain good quality for the tested SMV content are realistic and coherent with future 4K/8K needs. The reliability of the PSNR metric for SMV content is also studied. Objective and subjective results show that PSNR is able to reflect increases or decreases in subjective quality even in the presence of synthesized views. However, depending on the ratio of coded to synthesized views, the order of magnitude of the effective quality variation is biased by PSNR. Results indicate that PSNR is less tolerant to view synthesis artifacts than human viewers. Finally, preliminary observations are reported. First, the light-field conversion step does not seem to alter the objective compression results. Second, the motion parallax does not seem to be affected by specific compression artifacts; its perception is only altered by variations of the typical compression artifacts along the viewing angle, in cases where the subjective image quality is already low. To the best of our knowledge, this paper is the first to carry out subjective experiments and to report results of SMV compression for light-field 3D displays. It provides first results showing that improvements in compression efficiency, depth estimation, and view synthesis algorithms are required, but that the use of SMV appears realistic with respect to next-generation compression technology requirements.
Highlights:
- Study of the impact of compression on subjective quality for light-field SMV content.
- To the best of our knowledge, this paper is the first to report results of this kind.
- Several SMV coding configurations are compared both objectively and subjectively.
- Compression efficiency, depth estimation, and view synthesis require improvements.
- SMV appears realistic according to next-generation compression technology requirements.
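Since the abstract above questions the reliability of PSNR for content that includes synthesized views, the following short sketch recalls how PSNR is computed between an original and a decoded (or synthesized) view; it is the generic definition, not the paper's specific evaluation pipeline.

```python
# Generic PSNR computation for 8-bit views (peak value 255); this is the standard
# definition, not the evaluation setup used in the paper.

import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two images of identical shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                     # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((64, 64), 120, dtype=np.uint8)
dec = ref.copy(); dec[0, 0] = 130               # a single corrupted pixel
print(round(psnr(ref, dec), 2))                 # very high PSNR: tiny distortion
```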
european signal processing conference | 2015
Antoine Dricot; Joël Jung; Marco Cagnazzo; Béatrice Pesquet; Frederic Dufaux
Integral imaging is a glasses-free 3D video technology that captures a light-field representation of a scene. This representation eliminates many of the limitations of current stereoscopic and autostereoscopic techniques. However, integral images have a very high resolution and a structure based on micro-images that is challenging to encode. In this paper, a compression scheme for integral images based on view extraction is proposed. Average BD-rate gains of 15.7%, and up to 31.3%, are reported over HEVC. The parameters of the proposed coding scheme can take a large range of values; results are first provided with an exhaustive search of the best configuration. Then an RD criterion is proposed to avoid exhaustive search, saving runtime while preserving the gains. Finally, additional runtime savings are reported by exploring how the different parameters interact.
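As a rough illustration of the view extraction step on which the proposed scheme is based: an integral image is a grid of micro-images, and taking the pixel at a fixed offset inside every micro-image yields one perspective view. The array layout and micro-image size below are assumptions for illustration only.

```python
# Extract one perspective view from an integral image stored as a 2D array of
# square micro-images (illustrative layout, not the paper's capture format).

import numpy as np

def extract_view(integral_image, mi_size, u, v):
    """Extract the (u, v) perspective view from an integral image.

    integral_image -- 2D array of shape (H, W), H and W multiples of mi_size
    mi_size        -- side length of each square micro-image, in pixels
    (u, v)         -- pixel offset inside each micro-image selecting the view
    """
    return integral_image[u::mi_size, v::mi_size]

# Example: a 512x512 integral image made of 64x64 micro-images of 8x8 pixels
# yields 8x8 = 64 views, each of resolution 64x64.
img = np.arange(512 * 512, dtype=np.uint8).reshape(512, 512)
view = extract_view(img, mi_size=8, u=3, v=5)
print(view.shape)   # (64, 64)
```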
IEEE Transactions on Circuits and Systems for Video Technology | 2010
Guillaume Laroche; Joël Jung; Béatrice Pesquet-Popescu
In typical competition-based coding, the pertinence of a prediction mode depends not only on its own efficiency but also on how complementary it is with the other modes. The method proposed in this paper to improve the intra coding of the H.264/AVC standard relies on this observation; it shows how the cost of signaling predictors that are quite similar can be avoided. Indeed, at low bitrates, the information related to predictor signaling in intra coding reaches up to 25% of the total bitrate over the whole set of standard VCEG test sequences. To reduce this cost, a method reproducible at the decoder side is proposed to eliminate some predictors from the intra predictor set. The proposed method exploits the proximity of the predictors in the transform domain to obtain a representative and non-redundant set of predictors.
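A minimal sketch of the predictor-elimination idea, assuming predictor blocks are compared through a 2-D DCT and a simple sum-of-absolute-differences threshold; the transform, distance metric, and threshold are illustrative, not the paper's exact criterion. Because the predictors are built only from already-decoded neighbouring pixels, the decoder can repeat the same pruning and no extra signalling is needed.

```python
# Prune an intra predictor set by dropping predictors too close, in the transform
# domain, to one already kept (illustrative criterion, not the paper's).

import numpy as np
from scipy.fft import dctn

def prune_predictors(predictors, threshold=50.0):
    """Return the indices of a non-redundant subset of intra predictor blocks."""
    kept, kept_tf = [], []
    for idx, block in enumerate(predictors):
        tf = dctn(block.astype(float), norm="ortho")      # 2-D DCT of the predictor block
        if all(np.abs(tf - k).sum() > threshold for k in kept_tf):
            kept.append(idx)
            kept_tf.append(tf)
    return kept

# Example: the second predictor is nearly identical to the first and gets eliminated.
flat = np.full((4, 4), 100.0)
print(prune_predictors([flat, flat + 1.0, flat + 60.0]))   # -> [0, 2]
```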
picture coding symposium | 2009
Silvia Corrado; Marie Andrée Agostini; Marco Cagnazzo; Marc Antonini; Guillaume Laroche; Joël Jung
The coding resources used for motion vectors (MVs) can represent a large share of the bitrate, even in efficient video coders such as H.264, and this can easily lead to suboptimal rate-distortion performance. In a previous paper, we proposed a new coding mode for H.264 based on the quantization of motion vectors (QMV), considering only 16x16 partitions for motion estimation and compensation. That method allowed us to obtain an improved trade-off in the resource allocation between vectors and coefficients, and to achieve better rate-distortion performance than H.264. In this paper, we build on the proposed QMV coding mode, extending it to macroblock partitions into smaller blocks. This extension requires solving some problems mainly related to motion vector coding. We show how this task can be performed efficiently in our framework, obtaining further improvements over the standard coding technique.
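To clarify what the QMV mode trades off, here is a toy sketch of motion vector quantization: the vector is transmitted with a coarser step, so fewer motion bits are spent at the price of a slightly larger texture residual. The step values and units are assumptions for illustration only.

```python
# Toy motion vector quantization (not the paper's QMV mode): coarser steps cost
# fewer bits to transmit but predict less accurately.

def quantize_mv(mv, step):
    """Quantize a motion vector (in quarter-pel units) with the given step."""
    return tuple(round(c / step) * step for c in mv)

mv = (13, -7)                      # quarter-pel motion vector from motion estimation
for step in (1, 2, 4):
    print(step, quantize_mv(mv, step))   # step 1 keeps (13, -7); step 4 gives (12, -8)
```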
electronic imaging | 2016
Gauthier Lafruit; Marek Domanski; Krzysztof Wegner; Tomasz Grajek; Takanori Senoh; Joël Jung; Péter Tamás Kovács; Patrik Goorts; Lode Jorissen; Adrian Munteanu; Beerend Ceulemans; Pablo Carballeira; Sergio García; Masayuki Tanimoto
ISO/IEC MPEG and ITU-T VCEG have recently jointly issued a new multiview video compression standard, called 3D-HEVC, which reaches unprecedented compression performance for linear, dense camera arrangements. In view of supporting future high-quality auto-stereoscopic 3D displays and Free Navigation virtual/augmented reality applications with sparse, arbitrarily arranged camera setups, innovative depth estimation and virtual view synthesis techniques with global optimization over all camera views should be developed. Preliminary studies in response to the MPEG-FTV (Free viewpoint TV) Call for Evidence suggest these targets are within reach, with at least 6% bitrate gains over 3D-HEVC technology.