Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Nam Ling is active.

Publication


Featured research published by Nam Ling.


IEEE Transactions on Circuits and Systems for Video Technology | 2006

On Lagrange multiplier and quantizer adjustment for H.264 frame-layer video rate control

Minqiang Jiang; Nam Ling

The H.264/AVC encoder employs a complex mode-decision technique based on rate-distortion optimization. It calculates the rate-distortion cost (RDcost) for all possible modes and chooses the best one, having the minimum RDcost. This paper presents a frame-layer rate control for H.264/AVC that computes the Lagrange multiplier (λ_MODE) for mode decision using a quantization parameter (QP) which may differ from that used for encoding. At the same time, we also compare the actual bits produced by previous macroblocks (MBs) with the total bits allocated to these MBs to further modify λ_MODE. These measures aim to produce bits as close to the frame target bits for rate control as possible, which is very important for low-bit-rate, tight-buffer applications. To obtain an accurate QP for a frame, we employ a complexity-based bit-allocation scheme and a QP adjustment method. Simulation results, compared with the H.264 Joint Video Team (JVT) rate control method, show that the H.264 encoder using the proposed algorithm achieves a visual quality improvement of about 0.56 dB, handles buffer overflow and underflow better, and achieves a smaller PSNR deviation.
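The λ_MODE adjustment described in this abstract can be sketched as follows. The base formula 0.85 · 2^((QP−12)/3) is the standard H.264 Lagrange multiplier; the bit-feedback scaling rule and its clamps are illustrative assumptions, not the paper's exact method:

```python
def lambda_mode(qp, actual_bits, target_bits):
    """Compute a mode-decision Lagrange multiplier, then scale it by
    how far the previous macroblocks have strayed from their budget."""
    lam = 0.85 * (2.0 ** ((qp - 12) / 3.0))  # standard H.264 lambda
    # If previous MBs overshot their allocation, raise lambda to favor
    # cheaper modes; if they undershot, lower it (clamped for safety).
    ratio = actual_bits / max(target_bits, 1)
    lam *= min(max(ratio, 0.5), 2.0)
    return lam
```

Raising λ when earlier macroblocks overspend biases the mode decision toward low-rate modes, pulling the frame back toward its bit target.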


international symposium on circuits and systems | 2004

Improved frame-layer rate control for H.264 using MAD ratio

Minqiang Jiang; Xiaoquan Yi; Nam Ling

In recent years, rate control has played an increasingly important role in real-time video communication applications using MPEG-4 AVC/H.264. An important step in many existing rate control algorithms, which employ the quadratic rate-distortion (R-D) model, is to determine the target bits for each P frame. This paper aims at reducing video distortion due to high motion or scene changes by more accurately predicting frame complexity using the statistics of previously encoded frames. We use the mean absolute difference (MAD) ratio as a measure of global frame encoding complexity. The bit budget is allocated to frames according to their MAD ratio, combined with the bits computed from their buffer status. Simulation results show that the H.264 coder, using our proposed algorithm with virtually no added computational complexity, effectively alleviates PSNR surges and sharp drops for frames caused by high motion or scene changes.
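The MAD-ratio bit allocation can be illustrated with a minimal sketch: frames with higher predicted MAD (relative to the average) receive a proportionally larger share of the budget. The proportional rule here is an assumption for illustration; the paper combines it with buffer status:

```python
def allocate_bits(frame_budget, mad_list):
    """Split a bit budget across frames in proportion to each frame's
    MAD ratio (its predicted MAD divided by the average MAD)."""
    avg_mad = sum(mad_list) / len(mad_list)
    ratios = [m / avg_mad for m in mad_list]     # MAD ratio per frame
    total = sum(ratios)
    return [frame_budget * r / total for r in ratios]
```

A high-motion frame (large MAD ratio) is thus given more bits up front, which is what damps the PSNR drop the abstract describes.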


IEEE Transactions on Multimedia | 2006

Low-delay rate control for real-time H.264/AVC video coding

Minqiang Jiang; Nam Ling

This paper presents an efficient rate control scheme for H.264/AVC video coding in low-delay environments. In our scheme, we propose an enhancement to the buffer-status-based H.264/AVC bit allocation method, using a PSNR-based frame complexity estimation to improve the existing mean-absolute-difference-based (MAD-based) complexity measure. Bit allocation to each frame is not only computed from the encoder buffer status but also adjusted by a combined frame complexity measure. To prevent the buffer from undesirable overflow or underflow under the small buffer size constraint of a low-delay environment, the computed quantization parameter (QP) for the current MB is adjusted based on the actual encoding results at that point. We also propose to compare the bits produced by each mode with the average target bits per MB to dynamically modify the Lagrange multiplier (λ_MODE) for mode decision. The objective of the QP and λ_MODE adjustments is to produce bits as close to the frame target as possible, which is especially important for low-delay applications. Simulation results show that the H.264 coder, using our proposed scheme, obtains significant improvement in the mismatch ratio between target bits and actual bits in all test cases, achieves a visual quality improvement of about 0.6 dB on average, handles buffer overflow and underflow better, and achieves a similar or smaller PSNR deviation.
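The in-frame QP adjustment against buffer overflow/underflow can be sketched as a simple occupancy check. The thresholds and step sizes below are illustrative assumptions, not the paper's tuned values; the 0–51 QP range is standard H.264:

```python
def adjust_qp(qp, buffer_fullness, buffer_size):
    """Nudge the QP for the current MB based on encoder buffer
    occupancy, keeping it within the H.264 range [0, 51]."""
    occupancy = buffer_fullness / buffer_size
    if occupancy > 0.8:        # nearing overflow: code more coarsely
        qp = min(qp + 2, 51)
    elif occupancy < 0.2:      # nearing underflow: spend more bits
        qp = max(qp - 2, 0)
    return qp
```

Because the check runs per macroblock on actual encoding results, the buffer is steered continuously rather than only at frame boundaries, which matters under a small low-delay buffer.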


IEEE Transactions on Multimedia | 2005

An unequal packet loss resilience scheme for video over the Internet

Xiaokang Yang; Ce Zhu; Zheng Guo Li; Xiao Lin; Nam Ling

We present an unequal packet loss resilience scheme for robust transmission of video over the Internet. By jointly exploiting the unequal importance of different levels of the syntax hierarchy in video coding schemes, GOP-level and Resynchronization-packet-level Integrated Protection (GRIP) is designed for joint unequal loss protection (ULP) at these two levels using forward error correction (FEC) across packets. Two algorithms are developed to achieve efficient FEC assignment for the proposed GRIP framework: a model-based FEC assignment algorithm and a heuristic FEC assignment algorithm. The model-based algorithm achieves optimal allocation of FEC codes based on a simple but effective performance metric, the distortion-weighted expected length of error propagation, which quantifies the temporal propagation effect of packet loss on video quality degradation. The heuristic algorithm aims at providing a much simpler yet effective FEC assignment with little computational complexity. The proposed GRIP, together with either of the two FEC assignment algorithms, demonstrates strong robustness against burst packet losses and adapts to different channel conditions.
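A toy version of unequal FEC assignment distributes a fixed parity budget across packet classes in proportion to an importance weight. The weights and the largest-remainder rounding rule are illustrative assumptions, not the paper's model-based or heuristic algorithm:

```python
def heuristic_fec(importance, total_parity):
    """Assign parity packets to classes proportionally to importance,
    rounding with the largest-remainder method so the budget is met."""
    total_w = sum(importance)
    shares = [total_parity * w / total_w for w in importance]
    parity = [int(s) for s in shares]            # floor of each share
    # Hand out the leftover packets to the largest fractional parts.
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - parity[i], reverse=True)
    for i in order[: total_parity - sum(parity)]:
        parity[i] += 1
    return parity
```

Classes whose loss would propagate the longest (e.g. GOP-level headers) get the most parity, which is the essence of unequal loss protection.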


IEEE Transactions on Circuits and Systems for Video Technology | 2003

A unified architecture for real-time video-coding systems

Zhengguo Li; Ce Zhu; Nam Ling; Xiaokang Yang; Genan Feng; Si Wu; Feng Pan

This paper presents a unified architecture for live video over the Internet, with emphasis on solving challenging problems such as network bandwidth adaptation for rate and congestion, packet loss recovery, joint source and channel coding, and packetization. In our architecture, a time-varying bit rate for source coding and time-varying ratios for channel coding are simultaneously computed by a new congestion-control protocol. An adaptive rate-control scheme is then proposed to calculate quantization parameters and determine the number of skipped frames corresponding to the bit rate. An adaptive unequal error-control scheme is also provided to protect the bitstream. Furthermore, a simple, MPEG-4-standard-compatible algorithm is designed to packetize the generated bitstream at the sync layer using the existing resynchronization marker approach. With the proposed architecture, the coding efficiency and robustness of the whole system are greatly improved.
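The joint source/channel rate split can be illustrated with a minimal sketch: as observed packet loss rises, a larger share of the total channel rate is diverted from source coding to FEC. The linear mapping and 50% cap are illustrative assumptions, not the paper's congestion-control protocol:

```python
def split_rate(total_rate, loss_rate):
    """Split the available channel rate between source coding and
    FEC, giving FEC a share that grows with the packet loss rate."""
    fec_share = min(max(2.0 * loss_rate, 0.0), 0.5)  # cap at 50%
    return total_rate * (1.0 - fec_share), total_rate * fec_share
```

At zero loss everything goes to source bits; under heavy loss up to half the rate is spent on protection, trading picture quality for resilience.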


EURASIP Journal on Advances in Signal Processing | 2006

Rate control for H.264 with two-step quantization parameter determination but single-pass encoding

Xiaokang Yang; Yongmin Tan; Nam Ling

We present an efficient rate control strategy for H.264 that maximizes video quality by appropriately determining the quantization parameter (QP) for each macroblock. To break the chicken-and-egg dilemma resulting from QP-dependent rate-distortion optimization (RDO) in H.264, a preanalysis phase is conducted to gather the necessary source information, and a coarse QP is then decided for rate-distortion (RD) estimation. After motion estimation, we further refine the QP of each mode using the actual standard deviation of the motion-compensated residues. In the encoding process, RDO is performed only once per macroblock (thus single-pass), while QP determination is conducted twice. Therefore, the increase in computational complexity is small compared to that of the JM 9.3 software. Experimental results indicate that our rate control scheme with two-step QP determination but single-pass encoding not only effectively improves the average PSNR but also controls the target bit rates well.
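The second QP-determination step can be sketched as nudging the coarse QP by how much the actual residual deviates from the prediction. The 6·log2 relationship (one QP unit per ~12% quantizer step in H.264) is standard; the ±2 clamp is an illustrative assumption:

```python
import math

def refine_qp(coarse_qp, predicted_std, actual_std):
    """Refine the coarse QP after motion estimation using the ratio
    of actual to predicted residual standard deviation."""
    delta = round(6 * math.log2(max(actual_std, 1e-6) /
                                max(predicted_std, 1e-6)))
    delta = min(max(delta, -2), 2)       # limit the per-MB refinement
    return min(max(coarse_qp + delta, 0), 51)
```

If the residual turns out larger than the preanalysis predicted, the QP rises slightly to stay on the bit budget, and vice versa.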


international symposium on circuits and systems | 2005

Fast pixel-based video scene change detection

Xiaoquan Yi; Nam Ling

This paper proposes a new, simple, and efficient method to detect abrupt scene changes based only on pixel values. Conventional pixel-based techniques can produce a significant number of false and missed detections when high motion and brightness variations are present in the video. To increase scene change detection accuracy while maintaining a low computational complexity, a two-phase strategy is developed. Frames are first tested against the mean absolute frame difference (MAFD) via a relaxed threshold, which rejects about 90% of the non-scene-change frames. The remaining 10% of the frames are then normalized via a histogram equalization process. A top-down approach is applied to refine the decision via four features: MAFD and three other features based on normalized pixel values, namely the signed difference of mean absolute frame difference (SDMAFD*), the absolute difference of frame variance (ADFV*), and the mean absolute frame difference (MAFD*). Experimental results show that our method achieves a higher detection rate and a lower missed-detection rate while maintaining a low computational complexity, which is attractive for real-time video applications.
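The two-phase test can be sketched on grayscale pixel sequences. The second phase is reduced here to a single strict MAFD check standing in for the histogram-equalized, four-feature refinement, and both thresholds are illustrative assumptions:

```python
def mafd(a, b):
    """Mean absolute frame difference between two equal-length
    grayscale pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def is_scene_change(prev, curr, relaxed=10.0, strict=40.0):
    """Phase 1: a relaxed MAFD threshold cheaply rejects most frames.
    Phase 2 (simplified): a strict threshold confirms the change."""
    d = mafd(prev, curr)
    if d < relaxed:            # cheap rejection of ~90% of frames
        return False
    return d >= strict         # stand-in for the refined decision
```

The point of the split is cost: the expensive normalization and feature computation only ever run on the small fraction of frames that survive phase 1.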


international conference on multimedia and expo | 2013

Fast Depth Modeling Mode selection for 3D HEVC depth intra coding

Zhouye Gu; Jianhua Zheng; Nam Ling; Philipp Zhang

This paper proposes a fast mode decision algorithm for 3D High Efficiency Video Coding (3D-HEVC) depth intra coding. In the current 3D-HEVC design, it is observed that in most cases the full rate-distortion (RD) search over the Depth Modeling Modes (DMMs) can be skipped, since the coding units (CUs) of a depth map are usually very flat or smooth, while DMMs are designed for CUs with edges or sharp transitions. Using the Most Probable Mode (MPM) as an indicator, we propose a fast DMM selection algorithm to speed up the encoding process. Test results for the proposed fast algorithm report a 27.8% encoding time saving with a 0.31% bitrate increase in coded and synthesized views for the All-Intra test case.
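The MPM-based early termination can be sketched as follows. Planar (0) and DC (1) are the standard HEVC smooth intra modes; the exact trigger condition (all neighboring MPMs smooth) is an illustrative assumption about the skip rule:

```python
PLANAR, DC = 0, 1  # HEVC intra mode indices for the smooth modes

def skip_dmm_search(mpm_list):
    """If every most-probable mode from the neighbors is Planar or
    DC, the depth block is likely smooth, so the full-RD search over
    the edge-oriented Depth Modeling Modes can be skipped."""
    return all(m in (PLANAR, DC) for m in mpm_list)
```

An angular mode among the MPMs (e.g. mode 26) suggests structure in the neighborhood, so the DMM search is kept for such blocks.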


IEEE Transactions on Circuits and Systems for Video Technology | 2013

H.264/Advanced Video Control Perceptual Optimization Coding Based on JND-Directed Coefficient Suppression

Zhengyi Luo; Li Song; Shibao Zheng; Nam Ling

The field of video coding has been exploring the compact representation of video data, where perceptual redundancies, in addition to signal redundancies, are removed for higher compression. Many research efforts have been dedicated to modeling the human visual system's characteristics, and the resulting models have been integrated into video coding frameworks in different ways. Among them, coding enhancements with the just noticeable distortion (JND) model have drawn much attention in recent years due to their significant gains. A common application of the JND model is the adjustment of quantization by a multiplying factor corresponding to the JND threshold. In this paper, we propose an alternative perceptual video coding method to improve upon the current H.264/Advanced Video Coding (AVC) framework based on an independent JND-directed suppression tool. This new tool is capable of finely tuning the quantization using a JND-normalized error model. To make full use of this new rate-distortion adjustment component, the Lagrange multiplier for rate-distortion optimization is derived in terms of the equivalent distortion. Because the H.264/AVC integer discrete cosine transform (DCT) differs from the classic DCT, on which state-of-the-art JND models are computed, we analytically derive a JND mapping formula between the integer DCT domain and the classic DCT domain, which permits us to reuse the JND models in a natural way. In addition, the JND threshold can be refined by adopting a saliency algorithm in the coding framework, and we reduce the complexity of the JND computation by reusing the motion estimation of the encoder. Another benefit of the proposed scheme is that it remains fully compliant with the existing H.264/AVC standard. Subjective experimental results show that significant bit savings can be obtained using our method while maintaining visual quality similar to that of traditionally coded H.264/AVC video.
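The suppression idea can be illustrated with a minimal sketch: transform coefficients whose magnitude falls below their per-frequency JND threshold are zeroed, since their removal should be perceptually invisible. This hard-threshold rule is an illustrative assumption; the paper's actual tool uses a JND-normalized error model inside rate-distortion optimization:

```python
def jnd_suppress(coeffs, jnd_thresholds):
    """Zero out transform coefficients that fall below their
    just-noticeable-distortion threshold, coefficient by coefficient."""
    return [c if abs(c) >= t else 0
            for c, t in zip(coeffs, jnd_thresholds)]
```

Because suppression happens before entropy coding, the output bitstream still decodes on any standard H.264/AVC decoder, which is why the scheme stays fully compliant.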


IEEE Transactions on Circuits and Systems for Video Technology | 2009

A Context-Adaptive Prediction Scheme for Parameter Estimation in H.264/AVC Macroblock Layer Rate Control

Jianpeng Dong; Nam Ling

In this paper, we present a novel context-adaptive model parameter prediction scheme for improving the estimation accuracy of the mean absolute difference (MAD) of texture and other model parameters in the linear rate-quantization (R-Q) model-based H.264/AVC macroblock layer rate control for low-bit-rate real-time applications. The context-adaptive prediction scheme simultaneously exploits both spatial and temporal correlations among the neighboring macroblocks within a so-called context of a macroblock. The location and shape of the context, as well as the number of neighboring macroblocks in it, are adaptively computed according to local video signal characteristics using a Manhattan distance metric and an improved 2-D sliding window method. The proposed context-adaptive model parameter prediction scheme effectively suppresses the detrimental oscillations of estimated model parameters. Extensive experiments show that, compared to the recent H.264/AVC reference software, the macroblock layer rate control algorithm using our proposed context-adaptive prediction scheme significantly improves MAD and model parameter prediction accuracy as well as bit achievement accuracy, and hence obtains much better rate-distortion performance.
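A toy version of the context-based MAD prediction: each neighbor in the context contributes its MAD, weighted by inverse Manhattan distance so that closer macroblocks dominate. This weighting scheme is an illustrative assumption, not the paper's fitted model:

```python
def predict_mad(context):
    """Predict a macroblock's MAD from its context, given as a list
    of (manhattan_distance, mad) pairs for neighboring macroblocks,
    using an inverse-distance-weighted average."""
    weights = [1.0 / (1 + d) for d, _ in context]
    total = sum(weights)
    return sum(w * m for w, (_, m) in zip(weights, context)) / total
```

Averaging over several spatial and temporal neighbors, rather than copying one predecessor, is what smooths out the parameter oscillations the abstract mentions.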

Collaboration


Dive into Nam Ling's collaborations.

Top Co-Authors

Magdy A. Bayoumi
University of Louisiana at Lafayette

Xiaoquan Yi
Santa Clara University

Li Song
Shanghai Jiao Tong University

Lingzhi Liu
Santa Clara University