
Publication


Featured research published by Shahan C. Nercessian.


IEEE Transactions on Image Processing | 2013

Non-Linear Direct Multi-Scale Image Enhancement Based on the Luminance and Contrast Masking Characteristics of the Human Visual System

Shahan C. Nercessian; Karen Panetta; Sos S. Agaian

Image enhancement is a crucial pre-processing step for various image processing applications and vision systems. Many enhancement algorithms have been proposed based on different sets of criteria. However, a direct multi-scale image enhancement algorithm capable of independently and/or simultaneously providing adequate contrast enhancement, tonal rendition, dynamic range compression, and accurate edge preservation in a controlled manner has yet to be produced. In this paper, a multi-scale image enhancement algorithm based on a new parametric contrast measure is presented. The parametric contrast measure incorporates not only the luminance masking characteristic, but also the contrast masking characteristic of the human visual system. The formulation of the contrast measure can be adapted for any multi-resolution decomposition scheme in order to yield new human visual system-inspired multi-scale transforms; it is exemplified here using the Laplacian pyramid, discrete wavelet transform, stationary wavelet transform, and dual-tree complex wavelet transform. The proposed enhancement procedure is then developed. The advantages of the proposed method include: 1) the integration of both the luminance and contrast masking phenomena; 2) the extension of non-linear mapping schemes to human visual system-inspired multi-scale contrast coefficients; 3) the extension of human visual system-based image enhancement approaches to the stationary and dual-tree complex wavelet transforms; and a direct means of 4) adjusting overall brightness and 5) achieving dynamic range compression within a direct multi-scale enhancement framework. Experimental results demonstrate the ability of the proposed algorithm to achieve simultaneous local and global enhancements.
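The core idea of a direct multi-scale enhancement can be sketched as follows: decompose the image, convert each detail layer into luminance-normalized contrast values, scale them, and invert the measure on synthesis. This is a minimal illustration only; the block-averaging decomposition, the contrast form, and the parameter names (`gain`, `a`) are simplified assumptions, not the paper's definitions.

```python
import numpy as np

def enhance(img, gain=2.0, a=1.0, levels=2):
    """Direct multi-scale enhancement sketch: build a Laplacian-pyramid-like
    decomposition via 2x2 block averaging, convert each detail layer to a
    luminance-normalized contrast coefficient C = detail / (a + luminance),
    scale C by `gain`, then invert the measure to synthesize the result.
    Image dimensions must be divisible by 2**levels."""
    cur = img.astype(float)
    contrasts = []
    for _ in range(levels):
        h, w = cur.shape
        low = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        lum = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)  # local luminance
        contrasts.append(gain * (cur - lum) / (a + lum))       # masked contrast
        cur = low
    for c in reversed(contrasts):                              # synthesis
        lum = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1)
        cur = lum + c * (a + lum)
    return cur
```

With `gain=1` the transform is perfectly invertible, a useful sanity check for any such contrast-domain scheme.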


EURASIP Journal on Advances in Signal Processing | 2011

Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion

Shahan C. Nercessian; Karen Panetta; Sos S. Agaian

New pixel- and region-based multiresolution image fusion algorithms are introduced in this paper using the Parameterized Logarithmic Image Processing (PLIP) model, a framework more suitable for processing images. A mathematical analysis shows that the Logarithmic Image Processing (LIP) model and standard mathematical operators are extreme cases of the PLIP model operators. Moreover, the PLIP model operators can also realize cases in between the LIP and standard operators, based on the visual requirements of the input images. PLIP-based multiresolution decomposition schemes are developed and applied to image fusion as analysis and synthesis methods. The new decomposition schemes and fusion rules yield novel image fusion algorithms which provide visually more pleasing fusion results. Because the PLIP model generalizes the LIP model, LIP-based multiresolution image fusion approaches follow as special cases. Computer simulations illustrate that the proposed image fusion algorithms using the Parameterized Logarithmic Laplacian Pyramid, Parameterized Logarithmic Discrete Wavelet Transform, and Parameterized Logarithmic Stationary Wavelet Transform outperform their respective traditional approaches by both qualitative and quantitative means. The algorithms were tested over a range of different image classes, including out-of-focus, medical, surveillance, and remote sensing images.
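The PLIP operators referred to above have simple closed forms. The sketch below uses the commonly published value-level forms (the model's full gray-tone mapping is omitted), showing how the parameter gamma interpolates between LIP behavior (gamma equal to the gray-level range, e.g. 256) and standard arithmetic (gamma approaching infinity):

```python
import numpy as np

def plip_add(a, b, gamma):
    """PLIP addition of gray-tone values; gamma parameterizes the model.
    gamma equal to the gray-level range recovers LIP addition; as
    gamma -> infinity the result approaches ordinary addition a + b."""
    return a + b - (a * b) / gamma

def plip_scalar_mult(c, a, gamma):
    """PLIP multiplication of a gray-tone value a by a real scalar c."""
    return gamma - gamma * (1.0 - a / gamma) ** c
```

A quick consistency check: PLIP scalar multiplication by 2 agrees exactly with PLIP self-addition, for any gamma.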


IEEE Transactions on Systems, Man, and Cybernetics | 2010

Boolean Derivatives With Application to Edge Detection for Imaging Systems

Sos S. Agaian; Karen Panetta; Shahan C. Nercessian; Ethan E. Danahy

This paper introduces a new concept of Boolean derivatives as a fusion of partial derivatives of Boolean functions (PDBFs). Three efficient algorithms for the calculation of PDBFs are presented. It is shown that Boolean function derivatives are useful for identifying the location of edge pixels in binary images. The same concept is extended to the development of a new edge detection algorithm for grayscale images, which yields competitive results compared with those of traditional methods. Furthermore, a new measure is introduced to automatically determine the parameter values used in the thresholding portion of the binarization procedure. Through computer simulations, demonstrations of Boolean derivatives and the effectiveness of the presented edge detection algorithm, compared with traditional edge detection algorithms, are shown using several synthetic and natural test images. In order to make quantitative comparisons, two quantitative measures are used: one based on the recovery of the original image from the output edge map, and Pratt's figure of merit.
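A partial derivative of a Boolean function (PDBF) with respect to variable x_i is f restricted to x_i=0 XOR f restricted to x_i=1. The sketch below tabulates a PDBF by brute force and shows a much-simplified version of the binary edge idea; the paper's three efficient PDBF algorithms and full detector are not reproduced here:

```python
import numpy as np

def boolean_derivative(f, i, n):
    """Partial derivative of a Boolean function f of n variables with
    respect to variable i: f|x_i=0 XOR f|x_i=1, tabulated over all 2^n
    inputs (inputs encoded as integers, bit i = variable x_i)."""
    return [f(x & ~(1 << i)) ^ f(x | (1 << i)) for x in range(2 ** n)]

def binary_edge_map(img):
    """Much-simplified edge idea: XOR the binary image with its four
    neighbour shifts, marking pixels where the image 'differentiates'
    across the grid (the paper's PDBF-based detector is more elaborate)."""
    e = np.zeros_like(img)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e |= img ^ np.roll(img, (dy, dx), axis=(0, 1))
    return e
```

For example, the derivative of AND(x0, x1) with respect to x0 is x1, and the edge map of a filled square marks only its boundary ring.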


Optical Engineering | 2011

Shape-dependent canny edge detector

Karen Panetta; Sos S. Agaian; Shahan C. Nercessian; Ali A. Almunstashri

Edges characterize the boundaries of objects in images and are informative structural cues for computer vision and target/object detection and recognition systems. The Canny edge detector is widely regarded as the edge detection standard. It is fairly adaptable to different environments, as its parametric nature attempts to tailor the detection of edges based on image-dependent characteristics or the particular requirements of a given implementation. Though it has been used in a myriad of image processing tasks, the Canny edge detector is still vulnerable to edge losses, localization errors, and noise sensitivity. These issues are largely due to the key tradeoff made in the scale and size of the edge detection filters used by the algorithm. Small-scaled filters are sensitive to edges but also to noise, whereas large-scaled filters are robust to noise but could filter out fine details. In this paper, novel edge detection kernel generalizations and a shape-dependent edge detector are introduced to alleviate these shortcomings. While most standard edge detection algorithms are based on convolving the input image with fixed size square kernels, this paper will illustrate the benefits of different filter sizes, and more importantly, different kernel shapes for edge detection. Moreover, new edge fusion methods are introduced to more effectively combine the individual edge responses. Existing edge detectors, including the Canny edge detector, can be obtained from the generalized edge detector by specifying corresponding parameters and kernel shapes. The proposed representations and edge detector have been qualitatively and quantitatively evaluated on several different types of image data. Computer simulations demonstrate that nonsquare kernel approaches can outperform square kernel approaches such as Canny, Sobel, Prewitt, Roberts, and others, providing better tradeoffs between noise rejection, accurate edge localization, and resolution.
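One way to realize non-square edge kernels is to cross a binomial smoothing vector of arbitrary length with a central-difference vector; the classic 3x3 Sobel kernel falls out as a special case. This construction is an illustrative assumption, not the paper's exact generalization, and the responses from several shapes would then be fused (e.g., by a pointwise maximum):

```python
import numpy as np
from math import comb

def deriv_kernel(smooth_len, axis=0):
    """Generalized Sobel-like kernel: binomial smoothing of arbitrary
    length in one direction crossed with a central difference in the
    other, giving rectangular (non-square) kernel shapes. smooth_len=3
    reproduces the classic 3x3 Sobel kernel; larger values trade fine
    detail for noise robustness."""
    smooth = np.array([comb(smooth_len - 1, k) for k in range(smooth_len)], float)
    diff = np.array([1.0, 0.0, -1.0])
    k = np.outer(smooth, diff)          # shape (smooth_len, 3)
    return k if axis == 0 else k.T
```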


Proceedings of SPIE | 2011

An image similarity measure using enhanced human visual system characteristics

Shahan C. Nercessian; Sos S. Agaian; Karen Panetta

Image similarity measures are crucial for image processing applications which require comparisons to ideal reference images in order to assess performance. The Structural Similarity (SSIM), Gradient Structural Similarity (GSSIM), 4-component SSIM (4-SSIM) and 4-component GSSIM (4-GSSIM) indices are motivated by the fact that the human visual system is adapted to extract local structural information. In this paper, we propose a new measure which enhances the gradient information used for quality assessment. An analysis of the proposed image similarity measure using the LIVE database of distorted images and their corresponding subjective evaluations of visual quality illustrates the improved performance of the proposed metric.
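The general idea behind gradient-enhanced similarity measures can be sketched by evaluating an SSIM-style index on gradient-magnitude maps instead of raw intensities. This is illustrative only: the sketch uses a single global window, and the proposed measure's exact gradient enhancement differs.

```python
import numpy as np

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM (the published index uses local sliding
    windows; the constants assume an 8-bit dynamic range)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def gradient_similarity(x, y):
    """SSIM computed on gradient-magnitude maps rather than raw
    intensities -- the general idea behind gradient-based SSIM variants."""
    def grad_mag(z):
        gy, gx = np.gradient(z.astype(float))
        return np.hypot(gx, gy)
    return ssim_global(grad_mag(x), grad_mag(y))
```

As with SSIM, the index equals 1 only for identical inputs and is strictly below 1 otherwise.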


Journal of Electronic Imaging | 2012

Multiscale image fusion using an adaptive similarity-based sensor weighting scheme and human visual system-inspired contrast measure

Shahan C. Nercessian; Karen Panetta; Sos S. Agaian

The goal of image fusion is to combine multiple source images obtained using different capture techniques into a single image to provide an effective contextual enhancement of a scene for human or machine perception. In practice, considerable value can be gained in the fusion of images that are dissimilar or complementary in nature. However, in such cases, global weighting schemes may not sufficiently weigh the contribution of the pertinent information of the source images, while existing adaptive schemes calculate weights based on the relative amounts of salient features, which can cause severe artifacting or inadequate local luminance in the fusion result. Accordingly, a new multiscale image fusion algorithm is proposed. The approximation coefficient fusion rule of the algorithm is based on a novel similarity based weighting scheme capable of providing improved fusion results when the input source images are either similar or dissimilar to each other. Moreover, the algorithm employs a new detail coefficient fusion rule integrating a parametric multiscale contrast measure. The parametric nature of the contrast measure allows the degree to which psychophysical laws of human vision hold to be tuned based on image-dependent characteristics. Experimental results illustrate the superior performance of the proposed algorithm qualitatively and quantitatively.
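A similarity-adaptive approximation-coefficient rule can be sketched as: compute a local match measure between the two subbands; average where they agree, and select the locally more salient coefficient where they do not. The match measure, threshold, and plain averaging below are illustrative assumptions, not the paper's exact weighting scheme:

```python
import numpy as np

def fuse_approx(a, b, thresh=0.75, win=3):
    """Similarity-adaptive fusion of two approximation subbands: where a
    local match measure says the sources agree, average them; where they
    disagree, take the coefficient from the locally more energetic source."""
    pad = win // 2
    A, B = a.astype(float), b.astype(float)
    out = np.empty_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            wa = A[max(0, i - pad):i + pad + 1, max(0, j - pad):j + pad + 1]
            wb = B[max(0, i - pad):i + pad + 1, max(0, j - pad):j + pad + 1]
            ea, eb = (wa ** 2).sum(), (wb ** 2).sum()
            match = 2 * (wa * wb).sum() / (ea + eb + 1e-12)   # in [-1, 1]
            if match >= thresh:
                out[i, j] = 0.5 * (A[i, j] + B[i, j])         # similar sources
            else:
                out[i, j] = A[i, j] if ea >= eb else B[i, j]  # select salient
    return out
```

Identical sources pass through unchanged, while a flat source is overridden wherever the other source carries all the local energy.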


ieee international conference on technologies for homeland security | 2008

Automatic Detection of Potential Threat Objects in X-ray Luggage Scan Images

Shahan C. Nercessian; Karen Panetta; Sos S. Agaian

The detection of threat objects using X-ray luggage scan images has become an important means of aviation security. Most airport screening is still based on the manual detection of potential threat objects by human experts. This paper presents a system for the automatic detection of potential threat objects in X-ray luggage scan images. Segmentation and edge-based feature vectors form the basis of the automatic detection system. The system is illustrated using handguns as the threat objects in question. The experimental results show that the system can effectively detect the handguns in X-ray luggage scan images with minimal amounts of false positives. Also, apart from the initial setup of the classification database, the algorithm is suitable for real-time applications.
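The pipeline of segmentation, edge-based feature vectors, and database matching can be caricatured as below. Every component here (the threshold segmentation, the three shape features, the 1-NN matcher) is a deliberately crude placeholder for the paper's actual methods:

```python
import numpy as np

def segment(img, thresh=128):
    """Stand-in segmentation: threshold the scan (dense objects
    typically appear dark in X-ray luggage images)."""
    return (img < thresh).astype(np.uint8)

def edge_feature_vector(mask):
    """Crude edge-based shape descriptor for one segmented region:
    edge-pixel fraction, bounding-box fill ratio, and aspect ratio."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    interior = mask.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        interior = interior & np.roll(mask, (dy, dx), axis=(0, 1))
    edge_frac = 1.0 - interior.sum() / mask.sum()
    return np.array([edge_frac, mask.sum() / (h * w), max(h, w) / min(h, w)])

def classify(feat, database, labels):
    """Minimum-distance (1-NN) match against a database of known shapes."""
    return labels[int(np.argmin(np.linalg.norm(database - feat, axis=1)))]
```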


Proceedings of SPIE | 2012

A multi-scale non-local means algorithm for image de-noising

Shahan C. Nercessian; Karen Panetta; Sos S. Agaian

A highly studied problem in image processing and the field of electrical engineering in general is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality, and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise in images. In practice, it is a difficult task to effectively remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms have been proposed which attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploits the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain, and therefore does not leverage the framework that multi-scale transforms provide, in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
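The spatial-domain NLM baseline that MS-NLM builds on can be sketched directly from its definition: each pixel is replaced by a patch-similarity-weighted average over a search window. Parameter names and defaults here are illustrative:

```python
import numpy as np

def nlm(img, h=10.0, patch=3, search=7):
    """Basic spatial-domain non-local means: each output pixel is a
    weighted average of pixels in a search window, weighted by the
    similarity of their surrounding patches, w = exp(-d2 / h^2).
    MS-NLM would apply the same averaging inside the subbands of a
    multi-scale transform instead of the raw image."""
    pad, spad = patch // 2, search // 2
    P = np.pad(img.astype(float), pad, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            p0 = P[i:i + patch, j:j + patch]
            wsum = acc = 0.0
            for y in range(max(0, i - spad), min(H, i + spad + 1)):
                for x in range(max(0, j - spad), min(W, j + spad + 1)):
                    d2 = ((p0 - P[y:y + patch, x:x + patch]) ** 2).mean()
                    wgt = np.exp(-d2 / (h * h))
                    wsum += wgt
                    acc += wgt * img[y, x]
            out[i, j] = acc / wsum
    return out
```

Constant images are fixed points of the filter, and on pure noise the weighted averaging sharply reduces variance.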


Proceedings of SPIE | 2012

Multi-scale image enhancement using a second derivative-like measure of contrast

Shahan C. Nercessian; Sos S. Agaian; Karen Panetta

Image enhancement algorithms attempt to improve the visual quality of images for human or machine perception. Most direct multi-scale image enhancement methods are based on enhancing either absolute intensity changes or the Weber contrast at each scale, and have the advantage that the visual contrast is enhanced in a controlled manner. However, the human visual system is not adapted to absolute intensity changes, while the Weber contrast is unstable for small values of background luminance and potentially unsuitable for complex image patterns. The Michelson contrast measure is a bounded measure of contrast, but its expression does not allow a straightforward direct image enhancement formulation. Recently, a second derivative-like measure of enhancement (SDME) has been used to assess the performance of image enhancement algorithms. This measure is a Michelson-like contrast measure for which a direct image enhancement algorithm can be formulated. Accordingly, we propose a new direct multi-scale image enhancement algorithm based on the SDME in this paper. Experimental results illustrate the potential benefits of the proposed algorithm.
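A block-based second-derivative-like measure in the EME family can be sketched as below; the exact constants and block handling of the published SDME may differ, so treat this as an illustration of the form of the measure rather than a reference implementation:

```python
import numpy as np

def sdme(img, block=3, eps=1e-4):
    """Second-derivative-like measure over non-overlapping blocks: per
    block, the Michelson-like ratio
        |I_max - 2*I_center + I_min| / (I_max + 2*I_center + I_min)
    is log-compressed and averaged (eps guards the logarithm)."""
    c = block // 2
    total, n = 0.0, 0
    for by in range(img.shape[0] // block):
        for bx in range(img.shape[1] // block):
            blk = img[by * block:(by + 1) * block,
                      bx * block:(bx + 1) * block].astype(float)
            num = abs(blk.max() - 2 * blk[c, c] + blk.min())
            den = blk.max() + 2 * blk[c, c] + blk.min()
            total += 20 * np.log((num + eps) / (den + eps))
            n += 1
    return -total / n
```

Note that the ratio is bounded by 1, so each block contributes a non-negative term after the sign flip.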


Proceedings of SPIE | 2009

A new reference-based measure for objective edge map evaluation

Shahan C. Nercessian; Sos S. Agaian; Karen Panetta

Edge detection is an important preprocessing task which has been used extensively in image processing. As many applications heavily rely on edge detection, effective and objective edge detection evaluation is crucial. Objective edge map evaluation measures are an important means of assessing the performance of edge detectors under various circumstances and of determining the most suitable edge detector or edge detector parameters. Quantifiable criteria for objective edge map evaluation are established relative to a ground truth, and the weaknesses and limitations of Pratt's Figure of Merit (FOM), the objective reference-based edge map evaluation standard, are discussed. Based on the established criteria, a new reference-based measure for objective edge map evaluation is presented. Experimental results using synthetic images and their ground truths show that the new measure outperforms Pratt's FOM visually, as it takes more features into account in its evaluation.
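Pratt's FOM itself is straightforward to state: each detected edge pixel contributes 1/(1 + alpha*d^2), with d its distance to the nearest ground-truth edge pixel, and the sum is normalized by the larger of the two edge counts (alpha = 1/9 is the conventional choice). A direct sketch:

```python
import numpy as np

def pratt_fom(detected, truth, alpha=1/9):
    """Pratt's Figure of Merit: each detected edge pixel scores
    1/(1 + alpha*d^2), where d is its distance to the nearest true edge
    pixel; the total is normalized by max(#detected, #true) so that
    spurious and missing edges are both penalized."""
    dy, dx = np.nonzero(detected)
    ty, tx = np.nonzero(truth)
    if len(dy) == 0 or len(ty) == 0:
        return 0.0
    total = 0.0
    for y, x in zip(dy, dx):
        d2 = ((ty - y) ** 2 + (tx - x) ** 2).min()
        total += 1.0 / (1.0 + alpha * d2)
    return total / max(len(dy), len(ty))
```

A perfect edge map scores 1.0, and an edge line displaced by one pixel scores 1/(1 + 1/9) = 0.9.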

Collaboration


Dive into Shahan C. Nercessian's collaboration.

Top Co-Authors

Sos S. Agaian

University of Texas at San Antonio

Ali A. Almunstashri

University of Texas at San Antonio
