Publication


Featured research published by Eric J. Wharton.


Systems, Man and Cybernetics | 2008

Human Visual System-Based Image Enhancement and Logarithmic Contrast Measure

Karen Panetta; Eric J. Wharton; Sos S. Agaian

Varying scene illumination poses many challenging problems for machine vision systems. One such issue is developing global enhancement methods that work effectively across the varying illumination. In this paper, we introduce two novel image enhancement algorithms: edge-preserving contrast enhancement, which is able to better preserve edge details while enhancing contrast in images with varying illumination, and a novel multihistogram equalization method which utilizes the human visual system (HVS) to segment the image, allowing a fast and efficient correction of nonuniform illumination. We then extend this HVS-based multihistogram equalization approach to create a general enhancement method that can utilize any combination of enhancement algorithms for improved performance. Additionally, we propose new quantitative measures of image enhancement, called the logarithmic Michelson contrast measure (AME) and the logarithmic AME by entropy. Many image enhancement methods require selection of operating parameters, which are typically chosen using subjective methods, but these new measures allow for automated selection. We present experimental results for these methods and make a comparison against other leading algorithms.
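As a rough illustration of the block-based logarithmic contrast idea behind the AME measures, the sketch below splits an image into blocks, computes each block's Michelson contrast, and averages a logarithmic weighting of it. The block counts, the epsilon guard, and the exact weighting are assumptions made for illustration, not the paper's published AME definition.

```python
import numpy as np

def log_ame(image, k1=4, k2=4, eps=1e-3):
    """Block-based logarithmic Michelson contrast measure (sketch).

    Splits the image into k1 x k2 blocks, computes the Michelson
    contrast (Imax - Imin) / (Imax + Imin) per block, and averages
    a logarithmic weighting of the per-block contrasts.
    """
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            imax, imin = block.max(), block.min()
            # eps keeps the log finite on perfectly flat blocks
            contrast = (imax - imin + eps) / (imax + imin + eps)
            total += 20.0 * np.log(contrast)
    return total / (k1 * k2)
```

Under this sketch, a high-contrast image scores higher (closer to zero) than a flat one, which is what makes the measure usable for automated parameter selection.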


Systems, Man and Cybernetics | 2011

Parameterized Logarithmic Framework for Image Enhancement

Karen Panetta; Sos S. Agaian; Yicong Zhou; Eric J. Wharton

Image processing technologies such as image enhancement generally utilize linear arithmetic operations to manipulate images. Recently, Jourlin and Pinoli successfully used the logarithmic image processing (LIP) model for several applications of image processing such as image enhancement and segmentation. In this paper, we introduce a parameterized LIP (PLIP) model that spans both the linear arithmetic and LIP operations, and all scenarios in between, within a single unified model. We also introduce both frequency- and spatial-domain PLIP-based image enhancement methods, including the PLIP Lee's algorithm, PLIP bihistogram equalization, and PLIP alpha rooting. Computer simulations and comparisons demonstrate that the new PLIP model allows the user to obtain improved enhancement performance by changing only the PLIP parameters, to yield better image fusion results by utilizing the PLIP addition or image multiplication, to represent a larger span of cases than the LIP and linear arithmetic cases by changing parameters, and to utilize and illustrate the logarithmic exponential operation for image fusion and enhancement.
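A minimal sketch of commonly cited PLIP-style operators, assuming the model's separate parameter functions are collapsed into a single constant gamma; the full PLIP model parameterizes addition, subtraction, and scalar multiplication independently.

```python
import numpy as np

GAMMA = 256.0  # collapsed PLIP parameter; gamma -> infinity recovers linear arithmetic

def plip_add(a, b, gamma=GAMMA):
    """PLIP-style addition: a (+) b = a + b - a*b/gamma."""
    return a + b - a * b / gamma

def plip_scalar_mul(c, a, gamma=GAMMA):
    """PLIP-style scalar multiplication: c (x) a = gamma - gamma*(1 - a/gamma)**c."""
    return gamma - gamma * (1.0 - a / gamma) ** c
```

As gamma grows, the operators approach ordinary linear arithmetic, which is the "spans both linear arithmetic and LIP" property the abstract describes; scalar multiplication by an integer c agrees with repeated PLIP addition.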


Systems, Man and Cybernetics | 2007

Logarithmic edge detection with applications

Eric J. Wharton; Karen Panetta; Sos S. Agaian

In real-world machine vision problems, issues such as noise and variable scene illumination make edge and object detection difficult. No universal edge detection method works under all conditions. In this paper, we propose a logarithmic edge detection method that achieves greater independence from scene illumination and noise. We present experimental results for this method and compare the algorithm against several leading edge detection methods, such as Sobel and Canny, using Pratt's Figure of Merit as an objective basis of comparison. We further demonstrate the application of the algorithm in conjunction with Edge Detection based Image Enhancement (EDIE), showing that the use of this edge detection algorithm results in better image enhancement, as quantified by the logarithmic AME measure.
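To make the idea of a gradient built on logarithmic arithmetic concrete, the sketch below takes neighboring-pixel differences with a LIP-style subtraction instead of ordinary subtraction. This is an illustrative stand-in, not the paper's detector; the operator form and the constant M are assumptions, and pixel values are assumed to lie strictly below M.

```python
import numpy as np

def lip_sub(a, b, M=256.0):
    """LIP-style subtraction: a (-) b = M * (a - b) / (M - b)."""
    return M * (a - b) / (M - b)

def log_gradient_magnitude(img, M=256.0):
    """Horizontal and vertical pixel differences taken with LIP-style
    subtraction, combined into a gradient magnitude map."""
    img = np.asarray(img, dtype=np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = lip_sub(img[:, 1:], img[:, :-1], M)
    gy[:-1, :] = lip_sub(img[1:, :], img[:-1, :], M)
    return np.hypot(gx, gy)
```

The division by (M - b) scales a given difference up in bright regions, which is one way a logarithmic operator can reduce the detector's dependence on overall scene illumination.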


Mobile Multimedia/Image Processing for Military and Security Applications | 2006

A logarithmic measure of image enhancement

Eric J. Wharton; Sos S. Agaian; Karen Panetta

Image enhancement performance is currently judged subjectively, with no reliable manner of quantifying the results of an enhancement. Current quantitative measures rely on linear algorithms to determine contrast, leaving room for improvement. With the introduction of more complex enhancement algorithms, there is a need for an effective method of quantifying performance to select optimal parameters. In this paper, we present a logarithm-based image enhancement measure and demonstrate its performance on real-world images. The results show the effectiveness of our measure in selecting optimal enhancement parameters for the enhancement algorithms.


Systems, Man and Cybernetics | 2008

Human visual system based similarity metrics

Eric J. Wharton; Karen Panetta; Sos S. Agaian

Objective assessment of image quality is important for a number of image processing applications. Similarity metrics have been used for methods such as automating compression, automating watermarking, and benchmarking algorithm success. The goal of objective quality assessment is to quantify the quality of images in a manner consistent with human perception. For this reason, we introduce a novel image similarity metric based on the human visual system. The measures of enhancement (EME, AME, and LogAME) have been successfully used to quantify human quality perception for image enhancement. In this paper, we present a modified version of the logarithmic AME which can successfully be used to quantify image similarity. We compare the quantitative assessments of this algorithm with those of the well-known Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM) on the basis of correlation with subjective human evaluations for a number of images.
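The baseline metrics named above are standard; for reference, MSE and PSNR can be computed as:

```python
import numpy as np

def mse(x, y):
    """Mean Squared Error between two images of the same shape."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return np.mean((x - y) ** 2)

def psnr(x, y, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means more similar."""
    m = mse(x, y)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Both are pixel-wise and blind to structure, which is the weakness that HVS-based metrics such as the modified logarithmic AME aim to address.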


Electronic Imaging | 2006

Comparative study of logarithmic enhancement algorithms with performance measure

Eric J. Wharton; Sos S. Agaian; Karen Panetta

Performance measures of image enhancement are traditionally subjective and have difficulty quantifying the improvement made by the algorithm. In this paper, we present image enhancement measures and show how utilizing logarithmic-arithmetic-based addition, subtraction, and multiplication provides better results than previously used measures. In addition, to illustrate the performance of the developed measures, we present a comprehensive study of several image enhancement algorithms from all three domains: spatial, transform, and logarithmic.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Human Visual System Based Multi-Histogram Equalization for Non-Uniform Illumination and Shadow Correction

Eric J. Wharton; Karen Panetta; Sos S. Agaian

Images that do not have uniform brightness pose a challenging problem for image enhancement systems. As histogram equalization has been successfully used to correct for uniform brightness problems, we propose a new histogram equalization method that utilizes human visual system based thresholding as well as logarithmic processing techniques. Whereas previous histogram equalization methods have been limited in their ability to enhance these images, we demonstrate the effectiveness of this new method by enhancing a range of images with shadowing effects and inconsistent illumination. The images shown include images captured professionally and with cell phone cameras. Comparisons with other methods are presented.
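A minimal sketch of the multi-histogram idea: split the gray-level range into bands and histogram-equalize each band back into its own range, so bright and dark regions are corrected independently. The fixed thresholds here are placeholders; the paper derives its segmentation from a human visual system model rather than from hard-coded cut points.

```python
import numpy as np

def multi_histogram_equalize(img, thresholds=(85, 170)):
    """Equalize each intensity band of a grayscale image separately.

    Pixels are grouped into bands by the given thresholds; within each
    band, the empirical CDF (via pixel ranks) remaps values across
    that band's own range, leaving other bands untouched.
    """
    img = np.asarray(img, dtype=np.float64)
    out = np.zeros_like(img)
    bounds = [0.0] + list(thresholds) + [255.0]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        if hi == 255.0:
            mask = (img >= lo) & (img <= hi)
        else:
            mask = (img >= lo) & (img < hi)
        vals = img[mask]
        if vals.size == 0:
            continue
        ranks = vals.argsort().argsort()          # rank of each pixel in the band
        cdf = (ranks + 1) / vals.size             # empirical CDF in (0, 1]
        out[mask] = lo + cdf * (hi - lo)          # remap into the band's range
    return out
```

Because each band is equalized in place, a shadowed region cannot be washed out by the statistics of a well-lit region, which is the failure mode of global histogram equalization on non-uniformly illuminated images.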


Data Compression Conference | 2008

Simultaneous Encryption/Compression of Images Using Alpha Rooting

Eric J. Wharton; Karen Panetta; Sos S. Agaian

Summary form only given. Significant work has been performed on encrypting images and compressing images as two separate problems, but traditional encryption techniques generally degrade the compression ratio. To circumvent these issues, two approaches have been used. The first employs known encryption algorithms on compressed image data. The second develops compression algorithms which work well for encrypted data. The contribution of this paper is using alpha rooting to perform simultaneous compression and encryption. This achieves improved compression performance in terms of computational complexity and compression ratio. Results are shown for two well-known benchmark images, using the well-known JPEG image compression standard to demonstrate the effectiveness of alpha rooting for simultaneous encryption and compression.
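Alpha rooting itself is a simple transform-domain operation: keep each coefficient's phase and compress its magnitude. The sketch below uses a 2-D FFT for simplicity, whereas the paper works alongside the DCT-based JPEG pipeline; in an encryption setting, the value of alpha would play the role of the shared key.

```python
import numpy as np

def alpha_root(img, alpha=0.9):
    """Alpha rooting in the 2-D Fourier domain.

    Each transform coefficient X is replaced by |X|**(alpha - 1) * X:
    the phase is preserved while the magnitude is raised to the power
    alpha (compressed for 0 < alpha < 1), then the transform is inverted.
    """
    X = np.fft.fft2(np.asarray(img, dtype=np.float64))
    mag = np.abs(X)
    scale = np.ones_like(mag)
    nz = mag > 0
    scale[nz] = mag[nz] ** (alpha - 1.0)  # |X|**(alpha-1), safe at zero
    return np.real(np.fft.ifft2(scale * X))
```

With alpha = 1 the operation is the identity; without knowing alpha, the magnitude compression cannot be exactly undone, which is what makes the same step serve both processing and scrambling roles.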


Mobile Multimedia/Image Processing for Military and Security Applications | 2007

Human visual-system-based image enhancement

Eric J. Wharton; Karen Panetta; Sos S. Agaian

This paper presents a method of image enhancement using an adaptive thresholding method based on the human visual system. We utilize a number of different enhancement algorithms applied selectively to the different regions of an image to achieve a better overall enhancement than applying a single technique globally. The presented method is useful for images that contain various regions of improper illumination. It is also practical for correcting shadows. This thresholding system allows various enhancement algorithms to be used on different sections of the image based on the local visual characteristics. It further allows the parameters to be tuned differently for the specific regions, giving a more visually pleasing output image. We demonstrate the algorithm and present results for several high quality images as well as lower quality images such as those captured using a cell phone camera. We then compare and contrast our method to other state-of-the-art enhancement algorithms.


Electronic Imaging | 2007

Adaptive multi-histogram equalization using human vision thresholding

Eric J. Wharton; Karen Panetta; Sos S. Agaian

Image enhancement is the task of applying alterations to an input image to obtain a more visually pleasing image. The alteration usually requires interpretation and feedback from a human evaluator of the resulting output image. Therefore, image enhancement is considered a difficult task when attempting to automate the analysis process and eliminate human intervention. Furthermore, images that do not have uniform brightness pose a challenging problem for image enhancement systems. Different kinds of histogram equalization techniques have been employed for enhancing images that have overall improper illumination or are over/under exposed. However, these techniques perform poorly for images that contain various regions of improper illumination or improper exposure. In this paper, we introduce new human vision model based automatic image enhancement techniques: multi-histogram equalization as well as local and adaptive algorithms. These enhancement algorithms address the previously mentioned shortcomings. We present a comparison of our results against many current local and adaptive histogram equalization methods. Computer simulations show that the proposed algorithms outperform the other algorithms in two important areas. First, they perform better, in terms of both subjective and objective evaluations, than currently used algorithms on a series of poorly illuminated images, images with uniform and non-uniform illumination, and images with improper exposure. Second, they adapt better to local features in an image, in comparison to histogram equalization methods that treat the image globally.

Collaboration


Dive into Eric J. Wharton's collaborations.

Top Co-Authors

Sos S. Agaian

City University of New York
