Publications


Featured research published by Wei Jung Chien.


Multimedia Tools and Applications | 2010

BLAST-DVC: BitpLAne SelecTive distributed video coding

Wei Jung Chien; Lina J. Karam

This paper presents a BitpLAne SelecTive (BLAST) distributed video coding (DVC) system. In the proposed system, the significance of each bitplane is measured at the decoder based on an estimated distortion-rate ratio that makes use of a correlation model for the original source information and the side information. Only the syndrome bits of the bitplanes that have estimated distortion-rate ratios higher than a target distortion-rate ratio are transmitted and are used to decode the associated bitplanes. The remaining bitplanes are estimated using a minimum-distance symbol reconstruction scheme that makes use of the side information and the LDPCA-decoded bitplanes. Coding results and comparisons with existing DVC schemes and with H.264 intra- and inter-frame coding are presented to illustrate the performance of the proposed system.
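
As a rough, non-authoritative illustration of the selection rule described above, the following Python sketch thresholds a per-bitplane distortion-rate estimate; `estimate_dr_ratio` is a hypothetical placeholder for the paper's correlation-model-based estimator, which is not reproduced here.

```python
def select_bitplanes(side_info, num_bitplanes, target_ratio, estimate_dr_ratio):
    """Return the indices of bitplanes whose syndrome bits are worth requesting.

    estimate_dr_ratio(side_info, b) stands in for the correlation-model-based
    distortion-rate estimate described in the abstract (not reproduced here).
    """
    selected = []
    for b in range(num_bitplanes):  # b = 0 is the most significant bitplane
        if estimate_dr_ratio(side_info, b) > target_ratio:
            selected.append(b)      # decode this bitplane with its syndrome bits
    return selected                 # the rest are reconstructed from side info

# Toy run with an artificial estimator that decays with bitplane depth.
toy = lambda _si, b: 1.0 / (b + 1)
print(select_bitplanes(side_info=None, num_bitplanes=8,
                       target_ratio=0.3, estimate_dr_ratio=toy))  # -> [0, 1, 2]
```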


Asilomar Conference on Signals, Systems and Computers | 2008

BitpLAne SelecTive distributed video coding

Wei Jung Chien; Lina J. Karam

This paper presents a BitpLAne SelecTive (BLAST) distributed video coding (DVC) system. In the proposed system, the importance of each bitplane is measured at the decoder, and only the bitplanes with higher importance are decoded. A minimum-distance symbol reconstruction is proposed to estimate the rest of the bitplanes by using the side information and the decoded bitplanes. The importance is measured with an estimated rate-distortion ratio that makes use of a correlation model for the original source information and the side information. Coding results and comparisons with existing DVC schemes and with H.264 interframe coding are presented to illustrate the performance of the proposed system.
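
One way to picture the minimum-distance symbol reconstruction is: among all pixel values consistent with the already-decoded most significant bitplanes, choose the one closest to the co-located side-information value. A small Python sketch under that reading; the MSB-first bit ordering and 8-bit depth are assumptions for illustration.

```python
import numpy as np

def min_distance_reconstruct(side_value, decoded_msbs, num_bits=8):
    """Reconstruct a pixel from its decoded MSB bitplanes and the side information.

    decoded_msbs: list of decoded most-significant bits, e.g. [1, 0, 0]
    side_value:   co-located side-information pixel value
    Returns the value consistent with decoded_msbs that is closest to side_value.
    """
    k = len(decoded_msbs)
    prefix = 0
    for bit in decoded_msbs:
        prefix = (prefix << 1) | bit
    low = prefix << (num_bits - k)            # smallest value with this MSB prefix
    high = low + (1 << (num_bits - k)) - 1    # largest value with this MSB prefix
    candidates = np.arange(low, high + 1)
    return int(candidates[np.argmin(np.abs(candidates - side_value))])

# Side info says 140; decoded MSBs restrict the range to [128, 159]; result is 140.
print(min_distance_reconstruct(side_value=140, decoded_msbs=[1, 0, 0]))
```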


International Conference on Image Processing | 2008

Rate-distortion based selective decoding for pixel-domain distributed video coding

Wei Jung Chien; Lina J. Karam; Glen P. Abousleman

This paper presents a rate-distortion based selective decoding scheme for a pixel-domain distributed video coding (DVC) system. In the proposed system, the Wyner-Ziv frames are divided into several sub-images. Each of these sub-images is encoded and decoded independently. At the decoder, the rate-distortion ratio is estimated for each bitplane of the sub-images. Only the bitplanes with high rate-distortion ratios are Turbo-decoded. A minimum-distance symbol reconstruction is proposed to estimate the rest of the bitplanes by using the side information and the Turbo-decoded bitplanes. More accurate placement of the parity bits results in an improved system performance, especially for video sequences with a relatively large static background. Coding results and comparisons with existing DVC schemes and with H.264 interframe coding are presented to illustrate the performance of the proposed system.
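
The sub-image split is what lets the bitplane selection operate locally, which is why sequences with a large static background benefit. A trivial sketch of one possible partition follows; plain tiling is an assumed layout, not one confirmed by the abstract.

```python
import numpy as np

def split_into_subimages(frame, rows=2, cols=2):
    """Tile a Wyner-Ziv frame into rows*cols sub-images that are then encoded
    and decoded independently (tiling is an assumed, not confirmed, layout)."""
    h, w = frame.shape
    bh, bw = h // rows, w // cols
    return [frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
print([s.shape for s in split_into_subimages(frame)])  # four 4x4 sub-images
```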


International Conference on Acoustics, Speech, and Signal Processing | 2007

Block-Adaptive Wyner-Ziv Coding for Transform-Domain Distributed Video Coding

Wei Jung Chien; Lina J. Karam; Glen P. Abousleman

This paper presents a transform-domain distributed video coding (DVC) system with block-adaptive Wyner-Ziv coding. In the proposed system, the source symbols are reformatted into several blocks. The parity bits are requested only for the blocks with decoding errors. More accurate placement of the parity bits results in an improved system performance that is much closer to the performance of H.263 interframe coding as compared to existing DCT-based DVC schemes. Coding results and comparisons with existing DVC schemes and with H.263 are presented to illustrate the performance of the proposed system.
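
The block-adaptive request loop can be sketched as below; `decode`, `check_ok` and `request_parity` are hypothetical stand-ins for the Wyner-Ziv decoder, a per-block error check (e.g., a CRC), and the feedback-channel request, none of which are specified in the abstract.

```python
def blockwise_decode(blocks, decode, check_ok, request_parity):
    """Decode each block against the side information, asking for additional
    parity bits over the feedback channel only when a block fails to decode."""
    out = []
    for idx, block in enumerate(blocks):
        parity = b""                       # start with no extra parity
        decoded = decode(block, parity)
        while not check_ok(decoded):       # decoding error -> request more parity
            parity += request_parity(idx)
            decoded = decode(block, parity)
        out.append(decoded)
    return out

# Toy run with stand-in callables (purely illustrative).
decode = lambda blk, parity: (blk, len(parity))
check_ok = lambda d: d[1] >= 2             # pretend two parity bytes suffice
request_parity = lambda idx: b"p"
print(blockwise_decode(["blk0", "blk1"], decode, check_ok, request_parity))
```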


International Conference on Acoustics, Speech, and Signal Processing | 2006

Distributed Video Coding With Lossy Side Information

Wei Jung Chien; Lina J. Karam; Glen P. Abousleman

This paper presents a distributed video coding (DVC) system with improved rate-distortion performance that is much closer to the performance of H.263 interframe coding as compared to existing DCT-based DVC schemes. The system performance, in terms of compression rate and video quality, is affected by the difference between the source information and the generated side information at the decoder. To improve the accuracy of the side information, a modified three-dimensional recursive search block matching algorithm is proposed. The performance of the proposed DVC system is also investigated when the side information is estimated from lossy video frames that are compressed at different bit rates. Coding results and comparisons with existing DVC schemes and with H.263 are presented to illustrate the performance of the proposed system.
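
For readers unfamiliar with three-dimensional recursive search (3DRS), the core idea is to evaluate only a few candidate vectors per block: a spatial predictor, a temporal predictor from the previous motion field, and small random updates. The sketch below is a much-simplified textbook-style version of that idea, not the authors' modified algorithm.

```python
import numpy as np

def sad(blk, ref, y, x, dy, dx, bs):
    """Sum of absolute differences for candidate displacement (dy, dx)."""
    ry, rx = y + dy, x + dx
    if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
        return np.inf
    return np.abs(blk.astype(int) - ref[ry:ry + bs, rx:rx + bs].astype(int)).sum()

def three_drs(cur, ref, prev_mvs, bs=8, seed=0):
    """Simplified 3DRS: per block, test the zero vector, the temporal predictor,
    the spatial predictor (left neighbour) and two small random updates."""
    rng = np.random.default_rng(seed)
    h, w = cur.shape
    mvs = np.zeros((h // bs, w // bs, 2), dtype=int)
    for by in range(h // bs):
        for bx in range(w // bs):
            y, x = by * bs, bx * bs
            blk = cur[y:y + bs, x:x + bs]
            cands = [(0, 0), tuple(prev_mvs[by, bx])]
            if bx > 0:
                cands.append(tuple(mvs[by, bx - 1]))              # spatial predictor
            cands += [tuple(np.array(cands[-1]) + rng.integers(-1, 2, size=2))
                      for _ in range(2)]                          # random updates
            mvs[by, bx] = min(cands, key=lambda d: sad(blk, ref, y, x, d[0], d[1], bs))
    return mvs

cur = np.random.default_rng(1).integers(0, 256, (16, 16), dtype=np.uint8)
print(three_drs(cur, cur, prev_mvs=np.zeros((2, 2, 2), dtype=int))[..., 0])  # all zeros
```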


International Conference on Image Processing | 2009

AQT-DVC: Adaptive Quantization for transform-domain distributed video coding

Wei Jung Chien; Lina J. Karam

This paper presents a rate-distortion based Adaptive QuanTization (AQT) scheme for transform-domain distributed video coding (DVC). In the proposed DVC system, the Wyner-Ziv frame is divided into partitions and is adaptively quantized in the transform domain based on estimated local rate-distortion (R-D) characteristics. The R-D characteristics are estimated at the decoder using a correlation model between the original source information and the side information. Rate-distortion performance results and comparisons with existing DVC schemes and with H.264 are presented.
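
A loose illustration of decoder-driven adaptive quantization: estimate how well each partition correlates with its side information and map that estimate to a quantizer step. The mean-absolute-difference proxy and the threshold-to-step mapping below are illustrative assumptions, not the paper's correlation model.

```python
import numpy as np

def choose_step_sizes(partitions, side_partitions, steps=(1, 2, 4, 8)):
    """Pick a quantization step per partition from a crude local R-D proxy.

    Mismatch with the side information is proxied by the mean absolute
    difference; poorer side information here gets a finer step (more bits),
    which is one plausible mapping and not the paper's.
    """
    chosen = []
    for p, s in zip(partitions, side_partitions):
        mismatch = float(np.mean(np.abs(p.astype(int) - s.astype(int))))
        if mismatch > 16:
            chosen.append(steps[0])
        elif mismatch > 8:
            chosen.append(steps[1])
        elif mismatch > 4:
            chosen.append(steps[2])
        else:
            chosen.append(steps[3])
    return chosen

rng = np.random.default_rng(0)
parts = [rng.integers(0, 256, (8, 8)) for _ in range(3)]
side = [p + rng.integers(-n, n + 1, p.shape) for p, n in zip(parts, (2, 10, 30))]
print(choose_step_sizes(parts, side))   # finer steps where side info is worse
```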


Visual Information Processing Conference | 2006

Automatic network-adaptive ultra-low-bit-rate video coding

Wei Jung Chien; Tuyet Trang Lam; Glen P. Abousleman; Lina J. Karam

This paper presents a software-only, real-time video coder/decoder (codec) for use with low-bandwidth channels where the bandwidth is unknown or varies with time. The codec incorporates a modified JPEG2000 core and interframe predictive coding, and can operate with network bandwidths of less than 1 kbits/second. The encoder and decoder establish two virtual connections over a single IP-based communications link. The first connection is UDP/IP guaranteed throughput, which is used to transmit the compressed video stream in real time, while the second is TCP/IP guaranteed delivery, which is used for two-way control and compression parameter updating. The TCP/IP link serves as a virtual feedback channel and enables the decoder to instruct the encoder to throttle back the transmission bit rate in response to the measured packet loss ratio. It also enables either side to initiate on-the-fly parameter updates such as bit rate, frame rate, frame size, and correlation parameter, among others. The codec also incorporates frame-rate throttling whereby the number of frames decoded is adjusted based upon the available processing resources. Thus, the proposed codec is capable of automatically adjusting the transmission bit rate and decoding frame rate to adapt to any network scenario. Video coding results for a variety of network bandwidths and configurations are presented to illustrate the vast capabilities of the proposed video coding system.
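
The feedback-driven rate control can be pictured as a small control message sent back over the TCP/IP connection whenever the measured packet loss is too high. The message format, thresholds and scaling factors in this Python sketch are illustrative choices, not the codec's actual protocol.

```python
import json
import socket

def send_rate_update(ctrl_sock, measured_loss, current_rate_bps,
                     max_loss=0.05, backoff=0.75, recover=1.05, min_rate_bps=500):
    """Throttle the encoder bit rate from decoder-side packet-loss measurements
    and report the new target over the TCP control connection."""
    if measured_loss > max_loss:
        new_rate = max(min_rate_bps, int(current_rate_bps * backoff))
    else:
        new_rate = int(current_rate_bps * recover)
    msg = json.dumps({"cmd": "set_bit_rate", "bps": new_rate}) + "\n"
    ctrl_sock.sendall(msg.encode())
    return new_rate

# Loopback demonstration with a socket pair standing in for the TCP link.
rx, tx = socket.socketpair()
print(send_rate_update(tx, measured_loss=0.12, current_rate_bps=8000))  # 6000
print(rx.recv(1024).decode().strip())
```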


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Region-of-interest-based ultra-low-bit-rate video coding

Wei Jung Chien; Nabil G. Sadaka; Glen P. Abousleman; Lina J. Karam

In this paper, we present a region-of-interest-based video coding system for use in real-time applications. Region-of-interest (ROI) coding methodology specifies that targets or ROIs be coded at higher fidelity using a greater number of the available bits, while the remainder of the scene, or background, is coded using fewer bits. This allows the target regions within the scene to be well preserved, while dramatically reducing the number of bits required to code the video sequence, thus reducing the transmission bandwidth and storage requirements. In the proposed system, the ROI contours can be selected arbitrarily by the user via a graphical user interface (GUI), or they can be specified via a text file interface by an automated process such as a detection/tracking algorithm. Additionally, these contours can be specified at either the transmitter or receiver. Contour information is efficiently exchanged between the transmitter and receiver and can be adjusted on the fly and in real time. Coding results are presented for both electro-optical (EO) and infrared (IR) video sequences to demonstrate the performance of the proposed system.
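
One simple way to realize the bit split described above is to reserve a fixed share of the frame's bit budget for ROI pixels. The `roi_share` knob and the CIF-sized mask below are assumptions for illustration only, not parameters from the paper.

```python
import numpy as np

def split_bit_budget(roi_mask, total_bits, roi_share=0.8):
    """Split a frame's bit budget between ROI and background.

    roi_share is an illustrative knob: the fraction of the budget reserved for
    ROI pixels regardless of their area. Returns bits-per-pixel for each region.
    """
    roi_pixels = int(roi_mask.sum())
    bg_pixels = roi_mask.size - roi_pixels
    roi_bpp = roi_share * total_bits / max(roi_pixels, 1)
    bg_bpp = (1.0 - roi_share) * total_bits / max(bg_pixels, 1)
    return roi_bpp, bg_bpp

mask = np.zeros((288, 352), dtype=bool)   # CIF frame
mask[100:180, 120:220] = True             # user- or tracker-supplied ROI
print(split_bit_budget(mask, total_bits=20_000))
```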


Mobile Multimedia/Image Processing for Military and Security Applications | 2007

Super-resolution-based enhancement for real-time ultra-low-bit-rate video coding

Wei Jung Chien; Glen P. Abousleman; Lina J. Karam

This paper presents a software-only, real-time video coder/decoder (codec) with super-resolution-based enhancement for ultra-low-bit-rate compression. The codec incorporates a modified JPEG2000 core and interframe predictive coding, and can operate with network bandwidths as low as 500 bits/second. Highly compressed video exhibits severe coding artifacts that degrade visual quality. To lower the level of noise and retain the sharpness of the video frames, we build on our previous work in super-resolution-based video enhancement and propose a new version that is suitable for real-time video coding systems. The adopted super-resolution-based enhancement uses a constrained set of motion vectors that is computed from the original (uncompressed) video at the encoder. Artificial motion is also added to the difference frame to maximize the enhancement performance. The encoder can transmit either the full set of motion vectors or the constrained set of motion vectors depending upon the available bandwidth. At the decoder, each pixel of the decoded frame is assigned to a motion vector from the constrained motion vector set. L2-norm minimization super-resolution is then applied to the decoded frame set (previous frame, current frame, and next frame). A selective motion estimation scheme is proposed to prevent ghosting, which otherwise would result from the super-resolution enhancement when the motion estimation fails to find appropriate motion vectors. Results using the proposed system demonstrate significant improvements in the quantitative and visual quality of the coded video sequences.
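
The selective motion estimation idea can be approximated as: pick the best vector from the constrained set, but drop a region from the super-resolution combination when even the best match is poor. The block-wise granularity and the threshold below are simplifications of the per-pixel assignment described above, written only to show the structure of the decision.

```python
import numpy as np

def selective_mv_assignment(decoded, reference, mv_set, bs=8, sad_thresh=10.0):
    """Assign each block a motion vector from the constrained set, or None when
    even the best candidate matches poorly (to avoid ghosting in the SR stage)."""
    h, w = decoded.shape
    assignment = {}
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            blk = decoded[y:y + bs, x:x + bs].astype(int)
            best, best_err = None, np.inf
            for dy, dx in mv_set:
                ry, rx = y + dy, x + dx
                if 0 <= ry <= h - bs and 0 <= rx <= w - bs:
                    err = np.abs(blk - reference[ry:ry + bs, rx:rx + bs].astype(int)).mean()
                    if err < best_err:
                        best, best_err = (dy, dx), err
            # Poor match -> exclude this block from the SR combination.
            assignment[(y, x)] = best if best_err <= sad_thresh else None
    return assignment

frame = np.random.default_rng(0).integers(0, 256, (16, 16))
print(selective_mv_assignment(frame, frame, mv_set=[(0, 0), (0, 1)]))
```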


Proceedings of SPIE | 2009

Deghosting based on in-loop selective filtering using motion vector information for low-bit-rate-video coding

Niranjan D. Narvekar; Wei Jung Chien; Nabil G. Sadaka; Glen P. Abousleman; Lina J. Karam

In this paper, a technique is presented to alleviate ghosting artifacts in the decoded video sequences for low-bit-rate video coding. Ghosting artifacts can be defined as the appearance of ghost-like outlines of an object in a decoded video frame. Ghosting artifacts result from the use of a prediction loop in the video codec, which is typically used to increase the coding efficiency of the video sequence. They appear in the presence of significant frame-to-frame motion in the video sequence, and are typically visible for several frames until they eventually die out or an intra-frame refresh occurs. Ghosting artifacts are particularly annoying at low bit rates since the extreme loss of information tends to accentuate their appearance. To mitigate this effect, a procedure with selective in-loop filtering based on motion vector information is proposed. In the proposed scheme, the in-loop filter is applied only to the regions where there is motion. This is done so as not to affect the regions that are devoid of motion, since ghosting artifacts only occur in high-motion regions. It is shown that the proposed selective filtering method dramatically reduces ghosting artifacts in a wide variety of video sequences with pronounced frame-to-frame motion, without degrading the motionless regions.
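
A minimal sketch of the selective in-loop filtering idea: smooth only the blocks whose motion vector magnitude exceeds a threshold, so static regions are left untouched. The box filter and the threshold value are stand-ins; the codec's actual in-loop filter is not specified in this abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def selective_inloop_filter(frame, motion_vectors, bs=8, motion_thresh=0.5):
    """Apply a smoothing filter only to blocks flagged as moving."""
    out = frame.astype(float).copy()
    smoothed = uniform_filter(frame.astype(float), size=3)   # stand-in in-loop filter
    for by in range(motion_vectors.shape[0]):
        for bx in range(motion_vectors.shape[1]):
            dy, dx = motion_vectors[by, bx]
            if np.hypot(dy, dx) > motion_thresh:              # moving block only
                y, x = by * bs, bx * bs
                out[y:y + bs, x:x + bs] = smoothed[y:y + bs, x:x + bs]
    return out

frame = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
mvs = np.array([[[0, 0], [2, 1]], [[0, 0], [0, 0]]])          # one moving block
print(np.array_equal(selective_inloop_filter(frame, mvs)[:8, :8], frame[:8, :8]))  # True
```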

Collaboration


Dive into Wei Jung Chien's collaborations.

Top Co-Authors

Lina J. Karam

Arizona State University