Publication


Featured research published by Atam P. Dhawan.


IEEE Transactions on Medical Imaging | 1986

Enhancement of Mammographic Features by Optimal Adaptive Neighborhood Image Processing

Atam P. Dhawan; Gianluca Buelloni; Richard Gordon

X-ray mammography is the only breast cancer detection technique presently available with proven efficacy. Mammographic detection of early breast cancer requires optimal radiological or image processing techniques. We present an image processing approach based on adaptive neighborhood processing with a new set of contrast enhancement functions to enhance mammographic features. This procedure brings out the features in the image with little or no enhancement of the noise. We also find that adaptive neighborhoods with surrounds whose width is a constant difference from the center yield improved enhancement over adaptive neighborhoods with a constant ratio of surround to center neighborhood widths.
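
The adaptive-neighborhood idea above can be sketched compactly: measure the local contrast between a center neighborhood and a surround whose width differs from the center by a constant, boost that contrast with an enhancement function, and rebuild the pixel values. The sketch below is a minimal numpy illustration; the window sizes, the power-law enhancement curve, and the function names are assumptions for illustration, not the paper's exact contrast-enhancement functions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_neighborhood_enhance(img, center=3, delta=2, gain=0.5):
    """Illustrative adaptive-neighborhood contrast enhancement (hypothetical parameters).

    center : width of the center neighborhood in pixels
    delta  : constant difference between surround and center widths (the
             constant-difference surround the paper found to work better)
    gain   : exponent of the illustrative enhancement curve C' = C**gain
    """
    img = img.astype(float)
    surround = center + 2 * delta

    c_mean = uniform_filter(img, size=center)                    # center mean
    s_sum = uniform_filter(img, size=surround) * surround**2     # surround window sum
    s_mean = (s_sum - c_mean * center**2) / (surround**2 - center**2)  # ring-only mean

    # signed local contrast and its enhanced version
    contrast = (c_mean - s_mean) / np.maximum(c_mean + s_mean, 1e-6)
    enhanced = np.sign(contrast) * np.abs(contrast) ** gain

    # rebuild the center value from the enhanced contrast, holding the surround fixed
    new_center = s_mean * (1 + enhanced) / np.maximum(1 - enhanced, 1e-6)
    return np.clip(img + (new_center - c_mean), 0, None)
```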


Applied Optics | 1985

Algorithms for limited-view computed tomography: an annotated bibliography and a challenge.

Rangaraj M. Rangayyan; Atam P. Dhawan; Richard Gordon

In many applications of computed tomography, it may not be possible to acquire projection data at all angles, as required by the most commonly used algorithm of convolution backprojection. In such a limited-data situation, we face an ill-posed problem in attempting to reconstruct an image from an incomplete set of projections. Many techniques have been proposed to tackle this situation, employing diverse theories such as signal recovery, image restoration, constrained deconvolution, and constrained optimization, as well as novel schemes such as iterative object-dependent algorithms incorporating a priori knowledge and use of multispectral radiation. We present an overview of such techniques and offer a challenge to all readers to reconstruct images from a set of limited-view data provided here.
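
A minimal illustration of the constrained-optimization family surveyed above: when projections are available only at some angles, the rows of the system matrix for the missing views are simply absent, and the reconstruction is regularized by image-domain constraints. The sketch uses a generic Landweber/SIRT gradient step with nonnegativity; it is not any specific algorithm from the bibliography, and the system-matrix setup is assumed.

```python
import numpy as np

def limited_view_sirt(A, proj, n_iter=200, step=None):
    """Landweber/SIRT-type sketch for limited-view reconstruction (illustrative only).

    A    : (n_rays, n_pixels) system matrix containing only the rays actually
           measured; missing view angles are simply not represented
    proj : (n_rays,) measured projection data
    """
    if step is None:
        step = 1.0 / (np.linalg.norm(A, ord=2) ** 2 + 1e-12)  # step below 1/sigma_max^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += step * (A.T @ (proj - A @ x))   # gradient step on the data-fit term
        x = np.maximum(x, 0.0)               # a priori constraint: nonnegative image
    return x
```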


Computer Methods and Programs in Biomedicine | 1988

Mammographic feature enhancement by computerized image processing.

Atam P. Dhawan; Eric Le Royer

Mammographic detection of early breast cancer from X-ray film mammograms requires optimal radiological or image processing techniques. We present an image processing approach based on feature adaptive neighborhood processing with a tunable contrast-enhancement function to enhance mammographic features. This procedure brings out the features in the image with little or no enhancement of the noise. Results show that the proposed technique is intelligently tunable to the requirements of enhancement of specific mammographic features such as microcalcifications, soft-tissue characteristics, etc.
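
As a small illustration of what "tunable" means here, the enhancement curve applied to the local contrast can expose a single parameter that controls how aggressively faint features are boosted. The function below is a hypothetical example of such a curve, not the paper's actual contrast-enhancement function.

```python
import numpy as np

def tunable_contrast_curve(C, p=0.4):
    """Hypothetical tunable enhancement curve C' = C**p with 0 < p <= 1.
    Smaller p boosts faint local contrasts (e.g. microcalcifications) more strongly;
    p = 1 leaves the contrast unchanged (suited to broader soft-tissue features)."""
    return np.clip(np.asarray(C, dtype=float) ** p, 0.0, 1.0)
```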


IEEE Transactions on Medical Imaging | 1988

A multigrid expectation maximization reconstruction algorithm for positron emission tomography

M. V. Ranganath; Atam P. Dhawan; Nizar Mullani

The problem of reconstruction in positron emission tomography (PET) is basically estimating the number of photon pairs emitted from the source. Using the concept of the maximum-likelihood (ML) algorithm, the problem of reconstruction is reduced to determining an estimate of the emitter density that maximizes the probability of observing the actual detector count data over all possible emitter density distributions. A solution using this type of expectation maximization (EM) algorithm with a fixed grid size is severely handicapped by the slow convergence rate, the large computation time, and the nonuniform correction efficiency of each iteration, which makes the algorithm very sensitive to the image pattern. An efficient knowledge-based multigrid reconstruction algorithm based on the ML approach is presented to overcome these problems.
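
The fixed-grid EM update that the multigrid method accelerates is compact enough to sketch. Below is the standard ML-EM iteration for emission tomography with an assumed system matrix; the paper's knowledge-based multigrid scheme, which broadly runs such updates on coarser grids first and uses the results to initialize finer ones, is not reproduced here.

```python
import numpy as np

def mlem(A, counts, n_iter=50, eps=1e-12):
    """Standard single-grid ML-EM sketch for PET reconstruction.

    A      : (n_detector_pairs, n_pixels) system matrix; A[i, j] is the probability
             that an emission from pixel j is detected by detector pair i (assumed known)
    counts : (n_detector_pairs,) measured coincidence counts
    Returns an estimate of the emitter density in each pixel.
    """
    sensitivity = A.sum(axis=0)                # how detectable each pixel is overall
    lam = np.ones(A.shape[1])                  # uniform initial emitter density
    for _ in range(n_iter):
        expected = A @ lam                     # expected counts under current estimate
        ratio = counts / np.maximum(expected, eps)
        lam *= (A.T @ ratio) / np.maximum(sensitivity, eps)  # multiplicative EM update
    return lam
```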


Computerized Medical Imaging and Graphics | 1992

Segmentation of images of skin lesions using color and texture information of surface pigmentation.

Atam P. Dhawan; Anne Sim

Image segmentation algorithms extract regions on the basis of similarity of a predefined image feature such as gray-level value. In many applications, images that exhibit a variety of structure or texture cannot be adequately segmented by gray-level values alone. Additional features related to the structure of the image are needed to segment such images. Images of skin lesions exhibit significant variations in color hues as well as geometrical appearance of local surface structure. For example, images of cutaneous malignant melanoma exhibit a rich combination of color and geometrical structure of pigmentation. In these images, the local repetition of the geometrical surface structure provides the basis for the appearance of a texture pattern in the neighborhood region. For obtaining meaningful segmentation of images of skin lesions, a multichannel segmentation algorithm is proposed in this paper which uses both gray-level intensity and texture-based features for region extraction. The intensity-based segmentation is obtained using the modified pyramid-based region extraction algorithm. The texture-based segmentation is obtained by a bilevel shifted-window processing algorithm that uses new generalized co-occurrence matrices. The results of individual segmentations obtained from different channels, representing the complete set of color and texture information, are analyzed using heuristic merging rules to obtain the final color- and texture-based segmentation. Simulated as well as real images of skin lesions, representing various color shades and textures, have been processed. We show that using contrast link information in the pyramid-based region extraction process, and using the absolute magnitude and directional information in the generalized co-occurrence matrices (GCM) method, significant improvement in image segmentation can be obtained. Further, by incorporating the merging rules better results are obtained than those obtained using the gray-level intensity feature alone.
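
The texture channel rests on co-occurrence statistics of the pigmentation pattern. The sketch below computes an ordinary gray-level co-occurrence matrix and two classic features for one displacement; the paper's generalized co-occurrence matrices, which also use absolute edge magnitude and directional information, and the pyramid-based intensity channel are not reproduced here.

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=16):
    """Ordinary gray-level co-occurrence matrix features for displacement (dx, dy).
    A minimal sketch of the co-occurrence idea only (not the paper's generalized GCM)."""
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    h, w = q.shape
    # pair each pixel (y, x) with its neighbor (y + dy, x + dx)
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return {"energy": float((glcm ** 2).sum()),
            "contrast": float(((i - j) ** 2 * glcm).sum())}
```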


IEEE Transactions on Biomedical Engineering | 1995

Three-dimensional anatomical model-based segmentation of MR brain images through principal axes registration

Louis K. Arata; Atam P. Dhawan; Joseph P. Broderick; Mary F. Gaskil-Shipley; Alejandro V. Levy; Nora D. Volkow

Model-based segmentation and analysis of brain images depends on anatomical knowledge which may be derived from conventional atlases. Classical anatomical atlases are based on the rigid spatial distribution provided by a single cadaver. Their use to segment internal anatomical brain structures in a high-resolution MR brain image does not provide any knowledge about the subject variability, and therefore they are not very efficient in analysis. The authors present a method to develop three-dimensional computerized composite models of brain structures to build a computerized anatomical atlas. The composite models are developed using the real MR brain images of human subjects which are registered through the principal axes transformation. The composite models provide probabilistic spatial distributions, which represent the variability of brain structures and can be easily updated for additional subjects. The authors demonstrate the use of such a composite model of ventricular structure to help segmentation of the ventricles and cerebrospinal fluid of MR brain images. Here, a composite model of ventricles using a set of 22 human subjects is developed and used in a model-based segmentation of ventricles, sulci, and white matter lesions. To illustrate the clinical usefulness, automatic volumetric measurements on ventricular size and cortical atrophy for an additional eight alcoholics and 10 normal subjects were made. The volumetric quantitative results indicated regional brain atrophy in chronic alcoholics.
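
Principal axes registration itself is a short computation: match the centroids and the eigenvectors of the spatial covariance of two binary masks. The sketch below shows that core step under assumed inputs; a working pipeline must additionally resolve eigenvector sign and ordering ambiguities and handle scale, which the sketch ignores.

```python
import numpy as np

def principal_axes(mask):
    """Centroid and principal axes of a binary structure mask (2D or 3D)."""
    coords = np.argwhere(mask)                 # voxel coordinates inside the structure
    centroid = coords.mean(axis=0)
    cov = np.cov((coords - centroid).T)        # spatial covariance matrix
    _, axes = np.linalg.eigh(cov)              # columns are the principal axes
    return centroid, axes

def principal_axes_transform(moving_mask, fixed_mask):
    """Rigid transform (R, t) aligning `moving_mask` to `fixed_mask` by matching
    centroids and principal axes: x_fixed ~ R @ x_moving + t. Sketch only."""
    c_m, v_m = principal_axes(moving_mask)
    c_f, v_f = principal_axes(fixed_mask)
    R = v_f @ v_m.T                            # rotate moving axes onto fixed axes
    t = c_f - R @ c_m
    return R, t
```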


Computer Methods and Programs in Biomedicine | 2003

Classification of melanoma using tree structured wavelet transforms.

Sachin V. Patwardhan; Atam P. Dhawan; Patricia Relue

This paper presents a wavelet transform based tree structure model developed and evaluated for the classification of skin lesion images into melanoma and dysplastic nevus. The tree structure model utilizes a semantic representation of the spatial-frequency information contained in the skin lesion images including textural information. Results show that the presented method is effective in discriminating melanoma from dysplastic nevus. The results are also compared with those obtained using another method of developing tree structures utilizing the maximum channel energy criteria with a fixed energy ratio threshold.
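
The tree structure refers to deciding, subband by subband, whether to decompose further. The sketch below uses a plain 2D Haar split and the fixed energy-ratio splitting rule that the paper cites as its comparison method; the paper's semantic tree construction itself is not reproduced, and the depth and ratio values are illustrative.

```python
import numpy as np

def haar_split(x):
    """One level of an orthonormal 2D Haar split into LL, LH, HL, HH subbands."""
    x = x[: x.shape[0] // 2 * 2, : x.shape[1] // 2 * 2]
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return {"LL": (a + b + c + d) / 2, "LH": (a - b + c - d) / 2,
            "HL": (a + b - c - d) / 2, "HH": (a - b - c + d) / 2}

def wavelet_tree_energies(x, depth=3, ratio=0.1, path=""):
    """Tree-structured decomposition: a subband is split again only while its energy
    is a large enough fraction of its parent's (the fixed energy-ratio criterion)."""
    features = {}
    x = np.asarray(x, dtype=float)
    parent_energy = np.mean(x ** 2) + 1e-12
    for name, band in haar_split(x).items():
        node = path + name
        energy = np.mean(band ** 2)
        features[node] = energy                          # per-node energy feature
        if depth > 1 and energy / parent_energy > ratio:
            features.update(wavelet_tree_energies(band, depth - 1, ratio, node + "/"))
    return features
```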


International Conference of the IEEE Engineering in Medicine and Biology Society | 1993

Artificial neural network based classification of mammographic microcalcifications using image structure and cluster features

Yateen S. Chitre; Atam P. Dhawan; Myron Moskowitz

Breast cancer is the leading cause of death among women. Mammography is the only effective and viable technique to detect breast cancer, sometimes before the cancer becomes invasive. About 30% to 50% of breast cancers demonstrate clustered microcalcifications. We investigate the potential of using second-order histogram textural features for their correlation with malignancy. A combination of image structure features extracted from the second-order histogram was used with binary cluster features extracted from segmented calcifications. Several architectures of neural networks were used for analyzing the features. The neural network yielded good results for the classification of hard-to-diagnose cases of mammographic microcalcification into benign and malignant categories using the selected set of features.
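
A minimal version of the classification stage can be put together from a feature table and a small feed-forward network. Everything below is placeholder scaffolding: the feature dimensions, the random values standing in for second-order-histogram and cluster features, and the single-hidden-layer architecture are illustrative assumptions, not the study's data or the specific networks it evaluated.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Each row: second-order-histogram (texture) features of a region concatenated with
# binary cluster features of its segmented microcalcifications. Random placeholders.
X = rng.normal(size=(60, 14))            # 60 regions x 14 features (illustrative sizes)
y = rng.integers(0, 2, size=60)          # 0 = benign, 1 = malignant (placeholder labels)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))                # predicted benign/malignant labels
```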


Computerized Medical Imaging and Graphics | 2000

Multi-level adaptive segmentation of multi-parameter MR brain images.

A. Zavaljevski; Atam P. Dhawan; M. Gaskil; W. Ball; J.D. Johnson

MR brain image segmentation into several tissue classes is of significant interest for visualizing and quantifying individual anatomical structures. Traditionally, the segmentation is performed manually in a clinical environment; it is operator dependent and may be difficult to reproduce. Though several algorithms have been investigated in the literature for computerized automatic segmentation of MR brain images, they are usually targeted to classify images into a limited number of classes such as white matter, gray matter, cerebrospinal fluid and specific lesions. We present a novel model-based method for the automatic segmentation and classification of multi-parameter MR brain images into a larger number of tissue classes of interest. Our model employs 15 brain tissue classes instead of the commonly used set of four classes, which were of clinical interest to neuroradiologists following up with patients suffering from cerebrovascular deficiency (CVD) and/or stroke. The model approximates the spatial distribution of tissue classes by a Gauss Markov random field and uses the maximum likelihood method to estimate the class probabilities and transitional probabilities for each pixel of the image. Multi-parameter MR brain images with T(1), T(2), proton density, Gd+T(1), and perfusion imaging were used in segmentation and classification. In the development of the segmentation model, true class membership of measured parameters was determined from manual segmentation of a set of normal and pathologic brain images by a team of neuroradiologists. The manual segmentation was performed using a human-computer interface specifically designed for pixel-by-pixel segmentation of brain images. The registration of corresponding images from different brains was accomplished using an elastic transformation. The presented segmentation method uses the multi-parameter model in adaptive segmentation of brain images on a pixel-by-pixel basis. The method was evaluated on a set of multi-parameter MR brain images of a twelve-year-old patient 48 h after suffering a stroke. The results of classification, as compared to the manual segmentation of the same data, show the efficacy and accuracy of the presented method as well as its capability to create and learn new tissue classes.
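
The core of such a model-based pixel classifier is a data term (how well a pixel's multi-parameter values fit each tissue class) combined with a spatial term (how its neighbors are labelled). The sketch below uses Gaussian class likelihoods and a Potts-style neighbor reward updated by iterated conditional modes; it is a generic stand-in for the paper's Gauss-Markov random field and maximum-likelihood estimation, with class means and covariances assumed to come from the manually labelled training pixels.

```python
import numpy as np

def icm_segment(features, means, covs, beta=1.0, n_iter=5):
    """MRF-regularized pixel classification sketch (Gaussian likelihood + Potts prior).

    features : (H, W, P) multi-parameter MR values per pixel (T1, T2, PD, ...)
    means    : (K, P) class means, covs : (K, P, P) class covariances
    beta     : neighbor-agreement weight standing in for transition probabilities
    """
    H, W, P = features.shape
    K = means.shape[0]
    nll = np.zeros((H, W, K))                      # negative log-likelihood per class
    for k in range(K):
        d = features - means[k]
        inv = np.linalg.inv(covs[k])
        nll[..., k] = 0.5 * np.einsum("hwi,ij,hwj->hw", d, inv, d) \
                      + 0.5 * np.log(np.linalg.det(covs[k]))
    labels = nll.argmin(axis=2)                    # maximum-likelihood initialization
    for _ in range(n_iter):                        # iterated conditional modes sweeps
        energy = nll.copy()
        for k in range(K):
            same = np.zeros((H, W))
            same[1:, :] += labels[:-1, :] == k     # count 4-neighbors labelled k
            same[:-1, :] += labels[1:, :] == k
            same[:, 1:] += labels[:, :-1] == k
            same[:, :-1] += labels[:, 1:] == k
            energy[..., k] -= beta * same          # reward agreement with neighbors
        labels = energy.argmin(axis=2)
    return labels
```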


Pattern Recognition | 1999

M-band wavelet discrimination of natural textures

Yateen S. Chitre; Atam P. Dhawan

The M-band wavelet decomposition, a direct generalization of the standard 2-band wavelet decomposition, has been applied to the problem of discriminating natural textures of varying sizes. Regular M-band filter banks were designed using a genetic algorithm search strategy over the Householder parameter space of M-band wavelets. An exhaustive M-band decomposition was performed on 20 natural textures and energy features were extracted for each decomposed sub-band. The discrimination ability of the extracted features was compared for values of M = 2, 3 and 4. A nearest neighbor algorithm was used to classify a test set of 700 images to an accuracy of 99.5%. The performance was compared with a complete decomposition and decomposition using an irregular M-band filter bank. Statistical tests were used to evaluate the average performance of features extracted from the decomposed sub-bands.
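
The feature-extraction and classification side of this study is simple to sketch: an energy feature per decomposed sub-band and a nearest-neighbor rule. The filter-bank design itself (the genetic search over the Householder parameterization of regular M-band wavelets) is the substantive contribution and is not reproduced below.

```python
import numpy as np

def subband_energies(subbands):
    """Energy feature (mean squared coefficient) for each decomposed sub-band."""
    return np.array([np.mean(np.asarray(b, dtype=float) ** 2) for b in subbands])

def nearest_neighbor_label(train_features, train_labels, query_features):
    """1-nearest-neighbor texture classification on sub-band energy feature vectors."""
    distances = np.linalg.norm(train_features - query_features, axis=1)
    return train_labels[np.argmin(distances)]
```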

Collaboration


Dive into Atam P. Dhawan's collaborations.

Top Co-Authors

Sachin V. Patwardhan | New Jersey Institute of Technology
Alok Sarwal | University of Cincinnati
Brian D'Alessandro | New Jersey Institute of Technology
Louis K. Arata | University of Cincinnati
Prashanth Kini | University of Cincinnati
Song Wang | New Jersey Institute of Technology