Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Euee S. Jang is active.

Publication


Featured research published by Euee S. Jang.


IEEE Transactions on Circuits and Systems for Video Technology | 2004

Interpolator data compression for MPEG-4 animation

Euee S. Jang; James D. K. Kim; Seok Yoon Jung; Mahn-Jin Han; Sang Oak Woo; Shin-Jun Lee

Interpolator representation in key-frame animation is now the most popular method for computer animation. The interpolator data consist of key and key value pairs, where a key is a time stamp and a key value is the value corresponding to that key. In this paper, we propose a set of new technologies to compress the interpolator data. The performance of the proposed technique is compared with the existing MPEG-4 generic compression tool. Through the core experiments in MPEG-4, the proposed technique showed its superiority over the existing tool and became part of the MPEG-4 standard within the Animation Framework eXtension.
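The abstract only outlines the data model, so the sketch below is an illustrative toy, not the MPEG-4 interpolator compression tool itself: it represents interpolator data as (key, key value) pairs and applies the kind of predictive (DPCM) coding plus uniform quantization that such coders build on. All function and parameter names are hypothetical.

```python
# Illustrative sketch only: a toy DPCM + uniform-quantization coder for
# interpolator (key, key value) pairs. This is NOT the MPEG-4 interpolator
# compression algorithm, just the general idea of predictive coding of
# key-frame data. All names (quantize_interpolator, step sizes) are made up.

def quantize_interpolator(keys, key_values, key_step=1e-3, value_step=1e-2):
    """Encode keys and key values as quantized first-order differences."""
    q_keys, q_values = [], []
    prev_k, prev_v = 0.0, 0.0
    for k, v in zip(keys, key_values):
        dk = round((k - prev_k) / key_step)       # predict from previous key
        dv = round((v - prev_v) / value_step)     # predict from previous value
        q_keys.append(dk)
        q_values.append(dv)
        # track the *reconstructed* values so the decoder stays in sync
        prev_k += dk * key_step
        prev_v += dv * value_step
    return q_keys, q_values

def dequantize_interpolator(q_keys, q_values, key_step=1e-3, value_step=1e-2):
    """Decoder: accumulate the quantized differences back into pairs."""
    keys, key_values = [], []
    k, v = 0.0, 0.0
    for dk, dv in zip(q_keys, q_values):
        k += dk * key_step
        v += dv * value_step
        keys.append(k)
        key_values.append(v)
    return keys, key_values

# Example: a scalar position track with keys (time stamps) in [0, 1].
keys = [0.0, 0.25, 0.5, 0.75, 1.0]
vals = [0.0, 1.2, 1.9, 2.1, 2.0]
qk, qv = quantize_interpolator(keys, vals)
print(dequantize_interpolator(qk, qv))
```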


Optical Engineering | 2012

Fast coding unit decision method based on coding tree pruning for high efficiency video coding

Kiho Choi; Euee S. Jang

A fast coding unit (CU) decision method is proposed for high efficiency video coding (HEVC) that determines CU sizes early based on coding tree pruning. One of the most effective newly introduced concepts in HEVC is the variable CU size. In determining the best CU size, the HEVC reference encoder tests every possible CU size in order to estimate the coding performance of each resulting CU. This causes major computational complexity within the encoding process, which must be overcome to implement a fast encoder. A simple tree-pruning algorithm is proposed that exploits the observation that subtree computations can be skipped if the coding mode of the current node is sufficient (e.g., SKIP mode). The experimental results show that the proposed method achieved a 40% reduction in encoding time compared to the HEVC test model 3.0 encoder with only a negligible loss in coding performance. The proposed method was adopted in the HEVC test model 4.0 encoder at the 6th JCT-VC meeting.
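As a minimal sketch of the tree-pruning idea described above (not the HM reference implementation), the recursion below stops splitting a CU as soon as the best mode at the current node is SKIP; the cost model and the rd_check function are hypothetical stand-ins for rate-distortion mode evaluation.

```python
# Minimal sketch of early CU-size decision by coding-tree pruning.
# rd_check is a hypothetical stand-in for rate-distortion mode evaluation:
# it returns (best_mode, rd_cost) for a CU at (x, y) of the given size.
# The point is the control flow: when the current CU chooses SKIP, the
# entire subtree of smaller CUs is pruned instead of being tested.

MIN_CU = 8   # smallest CU size tested
MAX_CU = 64  # CTU size

def decide_cu(x, y, size, rd_check):
    """Return (leaf_decisions, cost) for the region rooted at (x, y, size)."""
    mode, cost = rd_check(x, y, size)
    if mode == "SKIP" or size == MIN_CU:
        # Early termination: accept this CU as-is, skip all sub-CU tests.
        return [(x, y, size, mode)], cost

    # Otherwise evaluate the four sub-CUs as the exhaustive encoder would.
    half = size // 2
    leaves, split_cost = [], 0.0
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        sub_leaves, sub_cost = decide_cu(x + dx, y + dy, half, rd_check)
        leaves.extend(sub_leaves)
        split_cost += sub_cost

    # Keep whichever of "no split" / "split" is cheaper.
    if cost <= split_cost:
        return [(x, y, size, mode)], cost
    return leaves, split_cost

# Tiny usage example with a fake cost model: some positions pick SKIP early.
def fake_rd_check(x, y, size):
    return ("SKIP", 1.0) if (x + y) % 32 == 0 else ("INTER_2Nx2N", size * 0.1)

print(decide_cu(0, 0, MAX_CU, fake_rd_check)[0])
```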


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Algorithm/Architecture Co-Exploration of Visual Computing on Emergent Platforms: Overview and Future Prospects

Gwo Giun Lee; Yen-Kuang Chen; Marco Mattavelli; Euee S. Jang

Concurrently exploring both algorithmic and architectural optimizations is a new design paradigm. This survey paper addresses the latest research and future perspectives on the simultaneous development of video coding, processing, and computing algorithms with emerging platforms that have multiple cores and reconfigurable architecture. As the algorithms in forthcoming visual systems become increasingly complex, many applications must have different profiles with different levels of performance. Hence, with expectations that the visual experience in the future will become continuously better, it is critical that advanced platforms provide higher performance, better flexibility, and lower power consumption. To achieve these goals, algorithm and architecture co-design is significant for characterizing the algorithmic complexity used to optimize targeted architecture. This paper shows that seamless weaving of the development of previously autonomous visual computing algorithms and multicore or reconfigurable architectures will unavoidably become the leading trend in the future of video technology.


IEEE Transactions on Circuits and Systems for Video Technology | 2004

An introduction to the MPEG-4 animation framework eXtension

Mikaël Bourges-Sévenier; Euee S. Jang

This paper presents the MPEG-4 Animation Framework eXtension (AFX) standard, ISO/IEC 14496-16. Initiated by the MPEG Synthetic/Natural Hybrid Coding group in 2000, MPEG-4 AFX proposes an advanced framework for interactive multimedia applications using both natural and synthetic objects. Following this model, new synthetic objects have been specified, increasing content realism over existing MPEG-4 synthetic objects. A general overview of MPEG-4 AFX is provided together with a review of the MPEG-4 standards to explain the relationship between MPEG-4 and MPEG-4 AFX. We then give a bird's-eye view of the new tools available in this standard.


international conference on image processing | 2002

Animation data compression in MPEG-4: interpolators

James D. K. Kim; Seok Yoon Jung; Mahn-Jin Han; Euee S. Jang; Sang Oak Woo; Shin Jun Lee; Gyeong Ja Jang

In this paper, we propose new technologies to compress the interpolator nodes in VRML/MPEG-4 BIFS (binary format for scenes). Interpolators are used to animate the 3D objects in a VRML/MPEG-4 BIFS scene. For interactive applications with many animated 3D synthetic objects, two components dominate the amount of data: the 3D objects themselves and their animation. In the current MPEG-4, 3D model coding (3DMC) yields a high compression ratio with reasonable quality for the 3D objects, while predictive MFField coding (PMFC) yields only a moderate compression ratio for the animation data. The proposed techniques provide high compression of the animation data and have been adopted into MPEG-4 Systems Amendment 4 (AFX/MUW).


IEEE Transactions on Consumer Electronics | 2010

Zero coefficient-aware IDCT algorithm for fast video decoding

Kiho Choi; Sunyoung Lee; Euee S. Jang

Ever since many well-known image and video coding standards such as JPEG, MPEG, and H.26x started using the DCT as a core process in data compression, the design of fast inverse discrete cosine transform (IDCT) algorithms has been an intensive research topic. Most research has focused on reducing the number of operations using the butterfly structure. However, the majority of DCT coefficients are zero after quantization and are therefore redundant for the IDCT computation. We exploit this redundancy in the DCT coefficients to propose a zero coefficient-aware IDCT algorithm for fast decoding. The proposed method significantly reduces the number of IDCT operations by adaptively including only non-zero coefficients in the calculation and by employing a table look-up to eliminate multiplication operations in the IDCT process. The proposed zero coefficient-aware algorithm outperformed other existing fast IDCT algorithms in terms of operational complexity. Moreover, the running time was faster than the butterfly-based IDCT algorithm implemented in the MPEG-4 simple profile decoder by a factor of 1.32 for SIF/CIF sequences and up to 2.18 for HD sequences.
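The core idea is easy to sketch: accumulate only the basis contributions of the non-zero coefficients instead of running a full butterfly transform on a mostly-zero block. The Python below is an illustration of that idea, not the paper's exact table-look-up implementation; the precomputed basis array stands in for the look-up table.

```python
# Illustrative sketch of a zero-coefficient-aware 8x8 IDCT (not the paper's
# exact table-look-up design). A full 2-D IDCT sums N*N basis images; when
# most quantized coefficients are zero, only the non-zero ones need to be
# accumulated, which is where the speed-up comes from.

import math

N = 8

def _c(k):
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

# Precomputed separable basis (plays the role of a look-up table).
BASIS = [[_c(k) * math.cos((2 * n + 1) * k * math.pi / (2 * N))
          for n in range(N)] for k in range(N)]

def idct2_zero_aware(coeffs):
    """coeffs: 8x8 quantized DCT coefficients; returns the reconstructed 8x8 block."""
    block = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            c = coeffs[u][v]
            if c == 0:
                continue                      # skip zero coefficients entirely
            for x in range(N):
                bx = BASIS[u][x]
                for y in range(N):
                    block[x][y] += c * bx * BASIS[v][y]
    return block

# Example: a sparse block with only the DC and one AC coefficient set.
coeffs = [[0] * N for _ in range(N)]
coeffs[0][0] = 80      # DC
coeffs[0][1] = -20     # one AC term
print(idct2_zero_aware(coeffs)[0][:4])
```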


Signal Processing-image Communication | 2013

Reconfigurable media coding: An overview

Euee S. Jang; Marco Mattavelli; Marius Preda; Mickaël Raulet; Huifang Sun

This paper provides an overview of the rationale behind the Reconfigurable Media Coding framework developed by the MPEG standardization committee to overcome the limits of the traditional way of providing decoder specifications. The framework is an extension of the Reconfigurable Video Coding framework that now also encompasses the 3D graphics coding standards. The idea of this approach is to specify decoders using an actor dataflow-based representation consisting of self-contained processing units (coding tools) connected together and communicating by explicitly exchanging data. Such a representation provides a specification in which several properties of the algorithms relevant to codec implementations are explicitly exposed and can be used for exploring different implementation objectives.
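A hedged sketch of the dataflow idea (self-contained actors exchanging tokens over explicit connections) may help; it is a toy Python rendering of the concept, not RVC-CAL or the MPEG reference tools, and all actor and method names are made up.

```python
# Toy illustration of the dataflow decoder-specification idea: self-contained
# processing units ("actors") that communicate only by exchanging tokens over
# explicit FIFO connections. Conceptual sketch only; all names are hypothetical.

from collections import deque

class Fifo:
    def __init__(self):
        self._q = deque()
    def put(self, token):
        self._q.append(token)
    def get(self):
        return self._q.popleft()
    def has_tokens(self):
        return bool(self._q)

class Actor:
    """A coding tool: fires when its input FIFO has data, writes to its output."""
    def __init__(self, name, fn, inp, out):
        self.name, self.fn, self.inp, self.out = name, fn, inp, out
    def fire(self):
        if self.inp.has_tokens():
            self.out.put(self.fn(self.inp.get()))
            return True
        return False

# A two-actor "decoder": dequantize, then add a prediction (both stand-ins).
source, a_to_b, sink = Fifo(), Fifo(), Fifo()
dequant = Actor("dequant", lambda q: q * 4, source, a_to_b)
recon   = Actor("recon",   lambda r: r + 128, a_to_b, sink)

for token in (1, 0, 3):          # pretend these are parsed coefficients
    source.put(token)

# A simple round-robin scheduler keeps firing actors until nothing can fire.
actors = [dequant, recon]
while any(actor.fire() for actor in actors):
    pass
print([sink.get() for _ in range(3)])   # -> [132, 128, 140]
```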


IEEE MultiMedia | 2012

Leveraging Parallel Computing in Modern Video Coding Standards

Kiho Choi; Euee S. Jang

Video coding has always been a computationally intensive process. Although dramatic improvements in coding efficiency have been realized in recent years, the algorithms have become increasingly complex, and there is a broader recognition that it is necessary to exploit the capabilities of multicore processors. This article discusses how recent trends in parallel computing have influenced the design of modern video coding standards. Specifically, the authors discuss how the High Efficiency Video Coding (HEVC) standard, which is being jointly developed by ISO/IEC JTC1/SC29 WG11 (MPEG) and ITU-T SG16/Q.6 (VCEG), is looking at ways to implement the co-exploration between algorithm and architecture (CEAA) approach.
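One concrete example of parallelism-friendly design in HEVC (not named in the abstract, so take this as an illustrative aside) is wavefront parallel processing, where each CTU row can be handled by its own thread as long as a two-CTU lag from the row above is respected. The sketch below only computes that scheduling pattern; the dependency rule is the standard WPP formulation, but the code itself is a hypothetical illustration, not a decoder.

```python
# Hedged sketch of one HEVC parallel tool, wavefront parallel processing (WPP):
# each CTU row can be decoded by its own thread, but a CTU at (row, col) must
# wait for the CTU to its left and for the CTU above-and-to-the-right (the
# two-CTU lag that preserves CABAC context and prediction dependencies).
# The code computes, for each CTU, the earliest "time step" it could run with
# one thread per row -- a scheduling illustration, not a real decoder.

def wpp_schedule(rows, cols):
    step = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            left = step[r][c - 1] if c > 0 else 0
            above_right = step[r - 1][min(c + 1, cols - 1)] if r > 0 else 0
            step[r][c] = max(left, above_right) + 1
    return step

for row in wpp_schedule(4, 8):
    print(row)
# Each row starts two steps after the one above it, so with enough threads the
# frame finishes in cols + 2*(rows - 1) steps instead of rows*cols.
```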


international conference on multimedia and expo | 2000

Animation framework for MPEG-4 systems

Mikaël Bourges-Sévenier; Euee S. Jang; James D. K. Kim

In this paper, we propose a new animation framework based on the MPEG-4 and VRML file formats. Both standards provide means to represent and compress 3D content. In this proposal, an animation framework consists of behaviors that make up an animation scene. Rich animation through VRML and MPEG-4 becomes possible through the introduction of new static tools. A new structure for animation with the new static tools is detailed in the paper; for instance, animation tracks are proposed to contain the description of behaviors with PROTOs. The proposed framework is dedicated to animation applications where interactivity is better suited than the conventional broadcast scenario.


IEEE Signal Processing Magazine | 2014

Royalty-Free Video Coding Standards in MPEG [Standards in a Nutshell]

Kiho Choi; Euee S. Jang

On 7 March 2013, the Moving Picture Experts Group Licensing Association (MPEG LA) and Google announced that they had entered into an agreement granting Google a license to techniques covered by MPEG LA patents that may be essential to VP8. Under this agreement, hardware and software companies are free to use the VP8 technology when developing their own products. Considering that it is now common to find patent disputes in headline news, the patent issues related to video coding standards are no exception. In this article, we report on the recent developments in royalty-free codec standardization in MPEG, particularly Internet Video Coding (IVC), Web Video Coding (WVC), and Video Coding for Browsers, by reviewing the history of royalty-free standards in MPEG and the relationship between standards and patents.

Collaboration


Dive into Euee S. Jang's collaborations.

Top Co-Authors

Marco Mattavelli
École Polytechnique Fédérale de Lausanne

Seung-wook Lee
Electronics and Telecommunications Research Institute