Publication


Featured research published by Lina J. Karam.


IEEE Transactions on Image Processing | 2009

A No-Reference Objective Image Sharpness Metric Based on the Notion of Just Noticeable Blur (JNB)

Rony Ferzli; Lina J. Karam

This work presents a perceptual-based no-reference objective image sharpness/blurriness metric by integrating the concept of just noticeable blur into a probability summation model. Unlike existing objective no-reference image sharpness/blurriness metrics, the proposed metric is able to predict the relative amount of blurriness in images with different content. Results are provided to illustrate the performance of the proposed perceptual-based sharpness metric. These results show that the proposed sharpness metric correlates well with perceived sharpness and is able to predict, with high accuracy, the relative amount of blurriness in images with different content.
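
As a rough illustration of the probability-summation idea behind the JNB metric, the sketch below measures edge widths along image rows with a Sobel operator and pools them with a Minkowski sum. The helper names, the gradient threshold, and the fixed just-noticeable blur width w_jnb are illustrative choices; the paper derives w_jnb from the local contrast around each edge and pools over 64x64 blocks rather than globally.

```python
import numpy as np
from scipy import ndimage

def edge_widths(gray, grad_thresh=20.0):
    """Rough per-edge-pixel widths: for each horizontal edge pixel, count how
    far the intensity keeps changing monotonically to its left and right."""
    gray = np.asarray(gray, dtype=float)
    gx = ndimage.sobel(gray, axis=1)
    widths = []
    for r, c in zip(*np.nonzero(np.abs(gx) > grad_thresh)):
        s = np.sign(gx[r, c])
        left = c
        while left > 0 and (gray[r, left - 1] - gray[r, left]) * s < 0:
            left -= 1
        right = c
        while right < gray.shape[1] - 1 and (gray[r, right + 1] - gray[r, right]) * s > 0:
            right += 1
        widths.append(right - left)
    return np.asarray(widths, dtype=float)

def jnb_sharpness(gray, w_jnb=5.0, beta=3.6):
    """Simplified JNB sharpness: Minkowski (probability-summation) pooling of
    per-edge blur, normalized by the number of detected edge pixels."""
    w = edge_widths(gray)
    if w.size == 0:
        return np.inf                                  # no edges: treat as sharp
    d = np.sum((w / w_jnb) ** beta) ** (1.0 / beta)    # pooled blur distortion
    return w.size / d                                  # larger value -> sharper
```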


IEEE Transactions on Broadcasting | 2011

Objective Video Quality Assessment Methods: A Classification, Review, and Performance Comparison

Shyamprasad Chikkerur; Vijay Sundaram; Martin Reisslein; Lina J. Karam

With the increasing demand for video-based applications, the reliable prediction of video quality has increased in importance. Numerous video quality assessment methods and metrics have been proposed over the past years with varying computational complexity and accuracy. In this paper, we introduce a classification scheme for full-reference and reduced-reference media-layer objective video quality assessment methods. Our classification scheme first classifies a method according to whether natural visual characteristics or perceptual (human visual system) characteristics are considered. We further subclassify natural visual characteristics methods into methods based on natural visual statistics or natural visual features. We subclassify perceptual characteristics methods into frequency or pixel-domain methods. According to our classification scheme, we comprehensively review and compare the media-layer objective video quality models for both standard resolution and high definition video. We find that the natural visual statistics based MultiScale-Structural SIMilarity index (MS-SSIM), the natural visual feature based Video Quality Metric (VQM), and the perceptual spatio-temporal frequency-domain based MOtion-based Video Integrity Evaluation (MOVIE) index give the best performance for the LIVE Video Quality Database.
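
The classification scheme described above maps naturally onto a small lookup structure. The snippet below is only a hypothetical encoding of that taxonomy, populated with the three best-performing metrics named in the abstract.

```python
# Hypothetical encoding of the paper's classification scheme for media-layer
# objective video quality assessment methods.
VQA_TAXONOMY = {
    "natural visual characteristics": {
        "natural visual statistics": ["MS-SSIM"],
        "natural visual features": ["VQM"],
    },
    "perceptual (HVS) characteristics": {
        "frequency domain": ["MOVIE"],
        "pixel domain": [],
    },
}

def classify(metric_name):
    """Return (branch, subclass) for a metric listed in the taxonomy."""
    for branch, subclasses in VQA_TAXONOMY.items():
        for subclass, metrics in subclasses.items():
            if metric_name in metrics:
                return branch, subclass
    return None

print(classify("MOVIE"))   # ('perceptual (HVS) characteristics', 'frequency domain')
```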


IEEE Transactions on Image Processing | 2011

A No-Reference Image Blur Metric Based on the Cumulative Probability of Blur Detection (CPBD)

Niranjan D. Narvekar; Lina J. Karam

This paper presents a no-reference image blur metric that is based on the study of human blur perception for varying contrast values. The metric utilizes a probabilistic model to estimate the probability of detecting blur at each edge in the image, and then the information is pooled by computing the cumulative probability of blur detection (CPBD). The performance of the metric is demonstrated by comparing it with existing no-reference sharpness/blurriness metrics for various publicly available image databases.
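
A minimal sketch of the CPBD pooling step is given below, assuming the per-edge widths have already been measured (for example with an edge-width estimator like the one sketched for the JNB metric above). The single w_jnb value and the exponent are simplifications; the paper adapts the just-noticeable blur width to the local contrast of each edge.

```python
import numpy as np

def cpbd(edge_widths, w_jnb=5.0, beta=3.6, p_jnb=0.63):
    """Simplified CPBD pooling: fraction of edges whose probability of blur
    detection stays below the just-noticeable level (about 63%).

    edge_widths: measured widths, one per detected edge.  The paper adapts
    w_jnb to the local contrast around each edge; a single value is used here.
    """
    w = np.asarray(edge_widths, dtype=float)
    p_blur = 1.0 - np.exp(-(w / w_jnb) ** beta)    # psychometric function per edge
    return np.mean(p_blur <= p_jnb)                # higher CPBD -> sharper image

print(cpbd([2, 3, 4, 8, 12]))   # 0.6 for this toy set of widths
```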


IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing | 1995

Complex Chebyshev approximation for FIR filter design

Lina J. Karam; James H. McClellan

The alternation theorem is at the core of efficient real Chebyshev approximation algorithms. In this paper, the alternation theorem is extended from the real-only to the complex case. The complex FIR filter design problem is reformulated so that it clearly satisfies the Haar condition of Chebyshev approximation. An efficient exchange algorithm is derived for designing complex FIR filters in the Chebyshev sense. By transforming the complex error function, the Remez exchange algorithm can be used to compute the optimal complex Chebyshev approximation. The algorithm converges to the optimal solution whenever the complex Chebyshev error alternates; in all other cases, the algorithm converges to the optimal Chebyshev approximation over a subset of the desired bands. The new algorithm is a generalization of the Parks-McClellan algorithm, so that arbitrary magnitude and phase responses can be approximated. Both causal and noncausal filters with complex or real-valued impulse responses can be designed. Numerical examples are presented to illustrate the performance of the proposed algorithm.
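
The exchange algorithm itself is beyond a short sketch, but the discretized minimax problem it solves can be illustrated with a linear-programming relaxation: minimize the maximum complex error magnitude over a frequency grid, with the modulus constraint approximated by half-plane cuts. This is not the paper's method, and the filter length, band edges, and desired response below are illustrative choices only.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative targets (not from the paper): a length-31 complex FIR filter
# approximating a one-sided (positive-frequency) passband with linear phase.
N = 31
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)
D = np.where((w > 0.1 * np.pi) & (w < 0.5 * np.pi), np.exp(-1j * 15.0 * w), 0.0)
keep = ((w > 0.15 * np.pi) & (w < 0.45 * np.pi)) | (w < 0.05 * np.pi) | (w > 0.55 * np.pi)
w, D = w[keep], D[keep]                      # drop the transition bands

F = np.exp(-1j * np.outer(w, np.arange(N)))  # frequency-response matrix

# Minimize t subject to |F h - D| <= t on the grid.  The modulus constraint is
# approximated by M half-plane cuts, giving a linear program in [Re(h), Im(h), t].
M = 16
blocks, rhs = [], []
for phi in 2 * np.pi * np.arange(M) / M:
    c = np.exp(-1j * phi)
    blocks.append(np.hstack([np.real(c * F), -np.imag(c * F), -np.ones((len(w), 1))]))
    rhs.append(np.real(c * D))
res = linprog(
    c=np.r_[np.zeros(2 * N), 1.0],           # objective: minimize t
    A_ub=np.vstack(blocks),
    b_ub=np.concatenate(rhs),
    bounds=[(None, None)] * (2 * N) + [(0, None)],
)
h = res.x[:N] + 1j * res.x[N:2 * N]          # complex impulse response
print("approximate Chebyshev error:", res.x[-1])
```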


IEEE Transactions on Image Processing | 2002

Adaptive image coding with perceptual distortion control

Ingo S. Hontsch; Lina J. Karam

This paper presents a discrete cosine transform (DCT)-based locally adaptive perceptual image coder, which discriminates between image components based on their perceptual relevance for achieving increased performance in terms of quality and bit rate. The new coder uses a locally adaptive perceptual quantization scheme based on a tractable perceptual distortion metric. Our strategy is to exploit human visual masking properties by deriving visual masking thresholds in a locally adaptive fashion. The derived masking thresholds are used in controlling the quantization stage by adapting the quantizer reconstruction levels in order to meet the desired target perceptual distortion. The proposed coding scheme is flexible in that it can be easily extended to work with any subband-based decomposition in addition to block-based transform methods. Compared to existing perceptual coding methods, the proposed perceptual coding method exhibits superior performance in terms of bit rate and distortion control. Coding results are presented to illustrate the performance of the presented coding scheme.
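
A toy sketch of the locally adaptive quantization idea follows: each 8x8 block is transformed with a DCT and quantized with a step size that grows with the block's AC activity. The step model is a crude stand-in for the masking thresholds and target-distortion control derived in the paper, and the function names and constants are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, base_step=4.0, masking_gain=0.05):
    """Quantize one 8x8 block with a locally adaptive step size (toy model).

    The step grows with the block's AC energy, a crude stand-in for the
    masking thresholds and target-distortion control derived in the paper."""
    coeffs = dctn(np.asarray(block, dtype=float), norm="ortho")
    ac = coeffs.copy()
    ac[0, 0] = 0.0                                       # ignore the DC term
    step = base_step * (1.0 + masking_gain * np.sqrt(np.mean(ac ** 2)))
    return np.round(coeffs / step), step

def decode_block(q, step):
    return idctn(q * step, norm="ortho")

rng = np.random.default_rng(0)
blk = rng.integers(0, 256, size=(8, 8))                  # stand-in image block
q, step = encode_block(blk)
print("step:", float(step), "max abs error:", np.abs(decode_block(q, step) - blk).max())
```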


IEEE Transactions on Image Processing | 2000

Locally adaptive perceptual image coding

Ingo S. Hontsch; Lina J. Karam

Most existing efforts in image and video compression have focused on developing methods to minimize not perceptual but rather mathematically tractable, easy to measure, distortion metrics. While nonperceptual distortion measures were found to be reasonably reliable for higher bit rates (high-quality applications), they do not correlate well with the perceived quality at lower bit rates and they fail to guarantee preservation of important perceptual qualities in the reconstructed images despite the potential for a good signal-to-noise ratio (SNR). This paper presents a perceptual-based image coder, which discriminates between image components based on their perceptual relevance for achieving increased performance in terms of quality and bit rate. The new coder is based on a locally adaptive perceptual quantization scheme for compressing the visual data. Our strategy is to exploit human visual masking properties by deriving visual masking thresholds in a locally adaptive fashion based on a subband decomposition. The derived masking thresholds are used in controlling the quantization stage by adapting the quantizer reconstruction levels to the local amount of masking present at the level of each subband transform coefficient. Compared to the existing non-locally adaptive perceptual quantization methods, the new locally adaptive algorithm exhibits superior performance and does not require additional side information. This is accomplished by estimating the amount of available masking from the already quantized data and linear prediction of the coefficient under consideration. By virtue of the local adaptation, the proposed quantization scheme is able to remove a large amount of perceptually redundant information. Since the algorithm does not require additional side information, it yields a low entropy representation of the image and is well suited for perceptually lossless image compression.
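
The distinctive point here is that the masking estimate is computed from already quantized data together with a linear prediction of the current coefficient, so the decoder can mirror the adaptation and no side information is transmitted. The one-dimensional sketch below illustrates only that backward-adaptation mechanism; the step-size model and prediction weight are placeholders, not the paper's threshold model.

```python
def encode_row(coeffs, base_step=1.0, gain=0.5, pred_weight=0.9):
    """Backward-adaptive quantization of one row of subband coefficients.

    The step for coefficient i depends only on already quantized neighbours
    (here a one-tap prediction from the previous reconstruction), so the
    decoder can repeat the computation and no side information is needed."""
    indices, recon = [], 0.0
    for c in coeffs:
        pred = pred_weight * recon                    # predicted masker magnitude
        step = base_step * (1.0 + gain * abs(pred))   # more masking -> coarser step
        qi = int(round(c / step))
        indices.append(qi)
        recon = qi * step                             # what the decoder will also see
    return indices

def decode_row(indices, base_step=1.0, gain=0.5, pred_weight=0.9):
    out, recon = [], 0.0
    for qi in indices:
        step = base_step * (1.0 + gain * abs(pred_weight * recon))
        recon = qi * step
        out.append(recon)
    return out

row = [0.2, 3.1, -4.0, 0.4, 7.5]
print(decode_row(encode_row(row)))                    # matches encoder reconstructions
```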


IEEE Transactions on Image Processing | 2000

Morphological text extraction from images

Yassin M. Y. Hasan; Lina J. Karam

This paper presents a morphological technique for text extraction from images. The proposed morphological technique is insensitive to noise, skew and text orientation. It is also free from artifacts that are usually introduced by both fixed/optimal global thresholding and fixed-size block-based local thresholding. Examples are presented to illustrate the performance of the proposed method.
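
As a rough sketch of a morphological text detector (not the paper's full pipeline, which also handles skew and text orientation), the snippet below flags small, high-contrast structure by thresholding the difference between a grayscale closing and opening. The structuring-element size and thresholds are illustrative.

```python
import numpy as np
from skimage import measure, morphology

def text_candidate_mask(gray, radius=3, contrast_thresh=40, min_area=20):
    """Rough morphological text detector (a sketch, not the paper's pipeline).

    Small, high-contrast structures survive the difference between a grayscale
    closing and opening; thresholding that residue and discarding tiny
    components yields candidate text regions."""
    gray = np.asarray(gray, dtype=float)
    selem = morphology.disk(radius)
    residue = morphology.closing(gray, selem) - morphology.opening(gray, selem)
    return morphology.remove_small_objects(residue > contrast_thresh, min_size=min_area)

# Toy usage: one bright, thin stroke on a flat background yields one region.
img = np.full((64, 64), 128.0)
img[20:24, 8:56] = 250.0
labels = measure.label(text_candidate_mask(img))
print("candidate regions:", labels.max())   # -> 1
```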


IEEE Transactions on Image Processing | 2006

JPEG2000 encoding with perceptual distortion control

Zhen Liu; Lina J. Karam; Andrew B. Watson

In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
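
The distortion-metric side can be sketched as a Minkowski summation of quantization errors normalized by per-band visibility thresholds. The thresholds and exponent below are placeholders rather than the paper's calibrated vision model; in the encoding loop, quantization would be refined until this pooled value falls below one just-noticeable difference.

```python
import numpy as np

def perceptual_distortion(errors_by_band, thresholds, beta=2.4):
    """Spatial and spectral Minkowski summation of quantization errors.

    errors_by_band: subband name -> array of quantization errors; thresholds:
    visibility threshold per subband.  The paper derives the thresholds from a
    masking-aware vision model; the numbers used here are placeholders."""
    pooled = sum(
        np.sum(np.abs(np.asarray(err) / thresholds[band]) ** beta)
        for band, err in errors_by_band.items()
    )
    return pooled ** (1.0 / beta)

errors = {"LL": [0.1, -0.2], "HL1": [0.5, 0.3, -0.4]}
thresholds = {"LL": 0.4, "HL1": 1.0}
print(perceptual_distortion(errors, thresholds))   # < 1.0 means below one JND
```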


IEEE Signal Processing Magazine | 2009

Trends in multicore DSP platforms

Lina J. Karam; Ismail Alkamal; Alan Gatherer; Gene A. Frantz; David V. Anderson; Brian L. Evans

In the last two years, the embedded DSP market has been swept up by the general increase in interest in multicore that has been driven by companies such as Intel and Sun. One reason for this is that there is now a lot of focus on tooling in academia and also a willingness on the part of users to accept new programming paradigms. This industry-wide effort will have an effect on the way multicore DSPs are programmed and perhaps architected. But it is too early to say in what way this will occur. Programming multicore DSPs remains very challenging. The problem of how to take a piece of sequential code and optimally partition it across multiple cores remains unsolved. Hence, there will naturally be a lot of variations in the approaches taken. Equally important is the issue of debugging and visibility. Developing effective and easy-to-use code development and real-time debug tools is tremendously important as the opportunity for bugs goes up significantly when one starts to deal with both time and space. The markets that DSP plays in have unique features in their desire for low power, low cost, and hard real-time processing, with an emphasis on mathematical computation. How well the multicore research being performed presently in academia will address these concerns remains to be seen.


Quality of Multimedia Experience | 2009

A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection

Niranjan D. Narvekar; Lina J. Karam

In this paper, a no-reference objective sharpness metric based on a cumulative probability of blur detection is proposed. The metric is evaluated by taking into account the Human Visual System (HVS) response to blur distortions. The perceptual significance of the metric is validated through subjective experiments. It is shown that the proposed metric results in a very good correlation with subjective scores especially for images with varying foreground and background perceived blur qualities. This is accomplished with a significantly lower computational complexity as compared to existing methods that take into account the visual attention information.

Collaboration


Dive into Lina J. Karam's collaborations.

Top Co-Authors

Rony Ferzli
Arizona State University

Wei Jung Chien
Arizona State University

James H. McClellan
Georgia Institute of Technology

Zhen Liu
Arizona State University

Asaad F. Said
Arizona State University