
Publication


Featured research published by Ligang Lu.


Signal Processing: Image Communication | 2004

Video Quality Assessment Based on Structural Distortion Measurement

Zhou Wang; Ligang Lu; Alan C. Bovik

Objective image and video quality measures play important roles in a variety of image and video processing applications, such as compression, communication, printing, analysis, registration, restoration, enhancement and watermarking. Most proposed quality assessment approaches in the literature are error sensitivity-based methods. In this paper, we follow a new philosophy in designing image and video quality metrics, which uses structural distortion as an estimate of perceived visual distortion. A computationally efficient approach is developed for full-reference (FR) video quality assessment. The algorithm is tested on the video quality experts group (VQEG) Phase I FR-TV test data set.

Keywords: image quality assessment, video quality assessment, human visual system, error sensitivity, structural distortion, video quality experts group (VQEG)
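The structural-distortion idea can be sketched with a small illustrative example. The function below is a simplified block-level index in the spirit of the Wang-Bovik approach (not the paper's exact formulation): it combines the means, variances, and covariance of two aligned pixel blocks into one score that equals 1 when the blocks are structurally identical and drops as structure diverges.

```python
# Hypothetical sketch of a structural-distortion quality index: for two
# aligned grayscale blocks x and y, combine mean, variance, and covariance
# into a single score, where 1 means structurally identical.

def quality_index(x, y):
    """Structural quality index for two equal-length pixel lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    num = 4 * cov * mx * my
    den = (vx + vy) * (mx * mx + my * my)
    return num / den if den != 0 else 1.0
```

A frame- or video-level score could then be obtained by averaging the index over sliding blocks of every frame.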


International Conference on Acoustics, Speech, and Signal Processing | 2002

Why is image quality assessment so difficult?

Zhou Wang; Alan C. Bovik; Ligang Lu

Image quality assessment plays an important role in various image processing applications. A great deal of effort has been made in recent years to develop objective image quality metrics that correlate with perceived quality measurement. Unfortunately, only limited success has been achieved. In this paper, we provide some insights on why image quality assessment is so difficult by pointing out the weaknesses of the error sensitivity based framework, which has been used by most image quality assessment approaches in the literature. Furthermore, we propose a new philosophy in designing image quality metrics: The main function of the human eyes is to extract structural information from the viewing field, and the human visual system is highly adapted for this purpose. Therefore, a measurement of structural distortion should be a good approximation of perceived image distortion. Based on the new philosophy, we implemented a simple but effective image quality indexing algorithm, which is very promising as shown by our current results.


IEEE Transactions on Image Processing | 2003

Foveation scalable video coding with automatic fixation selection

Zhou Wang; Ligang Lu; Alan C. Bovik

Image and video coding is an optimization problem. A successful image and video coding algorithm delivers a good tradeoff between visual quality and other coding performance measures, such as compression, complexity, scalability, robustness, and security. In this paper, we follow two recent trends in image and video coding research. One is to incorporate human visual system (HVS) models to improve the current state-of-the-art of image and video coding algorithms by better exploiting the properties of the intended receiver. The other is to design rate scalable image and video codecs, which allow the extraction of coded visual information at continuously varying bit rates from a single compressed bitstream. Specifically, we propose a foveation scalable video coding (FSVC) algorithm which supplies good quality-compression performance as well as effective rate scalability. The key idea is to organize the encoded bitstream to provide the best decoded video at an arbitrary bit rate in terms of foveated visual quality measurement. A foveation-based HVS model plays an important role in the algorithm. The algorithm is adaptable to different applications, such as knowledge-based video coding and video communications over time-varying, multiuser and interactive networks.


International Conference on Image Processing | 2002

Video quality assessment using structural distortion measurement

Zhou Wang; Ligang Lu; Alan C. Bovik

Objective image/video quality measures play important roles in various image/video processing applications, such as compression, communication, printing, analysis, registration, restoration and enhancement. Most proposed quality assessment approaches in the literature are error sensitivity-based methods. We follow a new philosophy in designing image/video quality metrics, which uses structural distortion as an estimate of perceived visual distortion. We develop a new approach for video quality assessment. Experiments on the video quality experts group (VQEG) test data set show that the new quality measure has higher correlation with subjective quality measurement than the methods proposed in VQEG's Phase I tests for full-reference video quality assessment.


International Conference on Multimedia and Expo | 2002

Full-reference video quality assessment considering structural distortion and no-reference quality evaluation of MPEG video

Ligang Lu; Zhou Wang; Alan C. Bovik; Jack Kouloheris

There has been an increasing need recently to develop objective quality measurement techniques that can predict perceived video quality automatically. This paper introduces two video quality assessment models. The first one requires the original video as a reference and is a structural distortion measurement based approach, which is different from traditional error sensitivity based methods. Experiments on the video quality experts group (VQEG) test data set show that the new quality measure has higher correlation with subjective quality evaluation than the methods proposed in VQEG's Phase I tests for full-reference video quality assessment. The second model is designed for quality estimation of a compressed MPEG video stream without referring to the original video sequence. Preliminary experimental results show that it correlates well with our full-reference quality assessment model.


International Symposium on Optical Science and Technology | 2001

Foveated wavelet image quality index

Zhou Wang; Alan C. Bovik; Ligang Lu; Jack Kouloheris

The human visual system (HVS) is highly non-uniform in sampling, coding, processing and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. Currently, most image quality measurement methods are designed for uniform resolution images. These methods do not correlate well with the perceived foveated image quality. Wavelet analysis delivers a convenient way to simultaneously examine localized spatial as well as frequency information. We developed a new image quality metric called foveated wavelet image quality index (FWQI) in the wavelet transform domain. FWQI considers multiple factors of the HVS, including the spatial variance of the contrast sensitivity function, the spatial variance of the local visual cut-off frequency, the variance of human visual sensitivity in different wavelet subbands, and the influence of the viewing distance on the display resolution and the HVS features. FWQI can be employed for foveated region of interest (ROI) image coding and quality enhancement. We show its effectiveness by using it as a guide for optimal bit assignment of an embedded foveated image coding system. The coding system demonstrates very good coding performance and scalability in terms of foveated objective as well as subjective quality measurement.
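The falling spatial resolution with eccentricity that FWQI exploits is often modeled with the Geisler-Perry contrast-sensitivity formulation. The sketch below uses that model's commonly quoted parameter values as assumptions; it is illustrative, not the paper's exact local cut-off computation.

```python
import math

# Illustrative model (assumed parameters, not the paper's exact ones) of
# how the local visual cut-off spatial frequency falls as a point moves
# away from the fixation (foveation) point.
ALPHA = 0.106   # spatial frequency decay constant (degrees)
E2 = 2.3        # half-resolution eccentricity (degrees)
CT0 = 1.0 / 64  # minimal contrast threshold at the fovea

def cutoff_frequency(ecc_deg):
    """Local cut-off spatial frequency (cycles/degree) at a given eccentricity."""
    return E2 * math.log(1.0 / CT0) / (ALPHA * (ecc_deg + E2))
```

In a wavelet coder, coefficients whose spatial frequency exceeds this local cut-off at their image location contribute little to perceived quality and can be assigned fewer bits.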


International Parallel and Distributed Processing Symposium | 2012

Reducing Data Movement Costs: Scalable Seismic Imaging on Blue Gene

Michael P. Perrone; Lurng-Kuo Liu; Ligang Lu; Karen A. Magerlein; Changhoan Kim; Irina Fedulova; Artyom Semenikhin

We present an optimized Blue Gene/P implementation of Reverse Time Migration, a seismic imaging algorithm widely used in the petroleum industry today. Our implementation is novel in that it uses large communication bandwidth and low latency to convert an embarrassingly parallel problem into one that can be efficiently solved using massive domain partitioning. The success of this seemingly counterintuitive approach is the result of several key aspects of the imaging problem, including very regular and local communication patterns, balanced compute and communication requirements, scratch data handling, multiple-pass approaches, and most importantly, the fact that partitioning the problem allows each sub-problem to fit in cache, dramatically increasing locality and bandwidth and reducing latency. This approach can be easily extended to next-generation imaging algorithms currently being developed. In this paper we present details of our implementation, including application-scaling results on Blue Gene/P.
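The domain-partitioning idea can be illustrated with a toy 1-D stencil. In this sketch (illustrative only, not the paper's code), the grid is split into small subdomains, each subdomain receives one-cell halos from its neighbours, and the stencil is applied locally; a plain list copy stands in for the MPI messages exchanged between Blue Gene nodes. The partitioned result is identical to the monolithic computation.

```python
# Toy sketch of domain partitioning with halo (ghost-cell) exchange.

def stencil(u):
    """3-point Laplacian-like update with fixed boundary values."""
    return [u[0]] + [u[i-1] - 2*u[i] + u[i+1] for i in range(1, len(u)-1)] + [u[-1]]

def partitioned_step(u, nparts):
    """Apply the same update on nparts subdomains with one-cell halos."""
    n = len(u)
    size = n // nparts
    out = []
    for p in range(nparts):
        lo, hi = p * size, n if p == nparts - 1 else (p + 1) * size
        # "halo exchange": copy one ghost cell from each neighbouring subdomain
        left = [u[lo - 1]] if lo > 0 else []
        right = [u[hi]] if hi < n else []
        local = left + u[lo:hi] + right
        res = stencil(local)
        out.extend(res[len(left): len(res) - len(right)])
    return out
```

The payoff described in the abstract comes from choosing subdomains small enough that each one fits in cache, so the local stencil runs at cache bandwidth while only the thin halos cross the network.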


Visual Communications and Image Processing | 2008

Wyner-Ziv video compression using rateless LDPC codes

Dake He; Ashish Jagmohan; Ligang Lu; Vadim Sheinin

In this paper we consider Wyner-Ziv video compression using rateless LDPC codes. It is shown that the advantages of using rateless LDPC codes in Wyner-Ziv video compression, in comparison to using traditional fixed-rate LDPC codes, are at least threefold: 1) it significantly reduces the storage complexity; 2) it allows seamless integration with mode selection; and 3) it greatly improves the overall system performance. Experimental results on the standard CIF-sized sequence mobile_and_calendar show that by combining rateless LDPC coding with simple skip mode selection, one can build a Wyner-Ziv video compression system that is, at rate 0.2 bits per pixel, about 2.25 dB away from the standard JM software implementation of the H.264 main profile, more than 8.5 dB better than H.264 Intra where all frames are H.264 coded intrapredicted frames, and about 2.3 dB better than the same Wyner-Ziv system using fixed-rate LDPC coding. In terms of encoding complexity, the Wyner-Ziv video compression system is two orders of magnitude less complex than the JM implementation of the H.264 main profile.
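The "rateless" property behind advantage 1) can be demonstrated with a toy fountain (LT-style) code. This is an illustrative stand-in, not an LDPC code: the point is that the encoder can generate as many coded symbols as needed from one source block, so no per-rate parity-check tables have to be stored, and the decoder simply collects symbols until peeling succeeds.

```python
import random

# Toy fountain code illustrating the rateless property (XOR over bytes).

def encode_symbol(source, rng):
    """One coded symbol: XOR of a random subset of source symbols."""
    d = rng.randint(1, len(source))
    idxs = rng.sample(range(len(source)), d)
    val = 0
    for i in idxs:
        val ^= source[i]
    return set(idxs), val

def peel_decode(symbols, k):
    """Peeling decoder: repeatedly resolve symbols with one unknown left."""
    decoded = [None] * k
    symbols = [(set(i), v) for i, v in symbols]
    progress = True
    while progress:
        progress = False
        for idxs, val in symbols:
            live = {i for i in idxs if decoded[i] is None}
            v = val
            for i in idxs - live:  # XOR out already-decoded symbols
                v ^= decoded[i]
            if len(live) == 1:
                decoded[live.pop()] = v
                progress = True
    return decoded
```

In the Wyner-Ziv setting the analogous behaviour is incremental syndrome transmission: the encoder keeps sending until the decoder, aided by side information, succeeds.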


International Conference on Image Processing | 2007

Side Information Generation for Distributed Video Coding

Ligang Lu; Dake He; Ashish Jagmohan

Side information (SI) generation is one of the key components of a Wyner-Ziv coder. In this paper we present a novel multi-frame SI generation approach which uses adaptive temporal filtering to estimate the pixel values for SI and motion vector filtering for refinement. For temporal filtering, we derive the optimal mean squared error temporal filter when the noise can be evaluated, and propose a similarity weighted temporal filter when knowledge of the noise is not available. The temporal filter adapts to the quality of the motion estimation. The quality of SI generation is further improved by using motion vector filtering to reduce the noise effect from motion estimation. Experimental results indicate that the proposed SI generation approach yields good performance in terms of SI quality and conditional entropy.
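A similarity-weighted temporal filter can be sketched as follows. Each side-information pixel is a weighted average of motion-compensated predictions from several reference frames, where a prediction whose block match had lower error gets higher weight. The exponential weighting and the `beta` parameter below are illustrative choices, not the paper's exact filter.

```python
import math

# Hedged sketch of a similarity-weighted temporal filter for SI generation.

def fuse_predictions(preds, match_errors, beta=0.1):
    """Fuse candidate pixel values, weighting by block-matching error.

    preds: motion-compensated candidate pixel values, one per reference frame
    match_errors: matching error (e.g. SAD) of the block each candidate came from
    """
    weights = [math.exp(-beta * e) for e in match_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, preds)) / total
```

With equal matching errors this reduces to a plain average; as one candidate's match becomes much better than the others, the filter converges to that candidate alone.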


ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 2013

Multi-level parallel computing of reverse time migration for seismic imaging on Blue Gene/Q

Ligang Lu; Karen A. Magerlein

Blue Gene/Q (BG/Q) is an early representative of the increasing scale and thread count that will characterize future HPC systems: large counts of nodes, cores, and threads, and a rich programming environment with many degrees of freedom in parallel computing optimization. It is therefore both a challenge and an opportunity to use such a system to accelerate seismic imaging applications to unprecedented levels that will significantly advance the technologies for the oil and gas industry. In this work we aim to address two important questions: how HPC systems with high levels of scale and thread count will perform in real applications, and how systems with many degrees of freedom in parallel programming can be calibrated to achieve optimal performance. Based on BG/Q's architecture features and RTM workload characteristics, we developed multi-level parallelism strategies combining massive domain partitioning, MPI, and SIMD optimizations. Our detailed analyses of the various aspects of optimization also provide valuable experience and insights into how such systems can be utilized to facilitate the advance of seismic imaging technologies. Our BG/Q RTM solution achieved a 14.93x speedup over the BG/P implementation. Our multi-level parallelism strategies for Reverse Time Migration (RTM) seismic imaging on BG/Q provide an example of how HPC systems like BG/Q can accelerate applications to a new level.

