

Publication


Featured research published by Anoop K. Bhattacharjya.


International Conference on Image Processing | 1999

Data embedding in text for a copier system

Anoop K. Bhattacharjya; Hakan Ancin

In this paper, we present a scheme for embedding data in copies (color or monochrome) of predominantly text pages that may also contain color images or graphics. Embedding data imperceptibly in documents or images is a key ingredient of watermarking and data hiding schemes. It is comparatively easy to hide a signal in natural images since the human visual system is less sensitive to signals embedded in noisy image regions containing high spatial frequencies. In other instances, e.g. simple graphics or monochrome text documents, additional constraints need to be satisfied to embed signals imperceptibly. Data may be embedded imperceptibly in printed text by altering some measurable property of a font such as position of a character or font size. This scheme, however, is not very useful for embedding data in copies of text pages, as that would require accurate text segmentation and possibly optical character recognition, both of which would deteriorate the error-rate performance of the data-embedding system considerably. Similarly, other schemes that alter pixels on text boundaries have poor performance due to boundary-detection uncertainties introduced by scanner noise, sampling and blurring. The scheme presented in this paper ameliorates the above problems by using a text-region-based embedding approach. Since the bulk of documents reproduced today contain black-on-white text, this data-embedding scheme can form a print-level layer in applications such as copy tracking and annotation.
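As a toy illustration of region-based embedding (this sketch is not the authors' scheme: the block size, gray-level shift, and the non-blind extraction step that compares against the original page are all assumptions for demonstration):

```python
# Hypothetical sketch: embed one payload bit per block of a grayscale text
# image by slightly shifting the gray level of "text" pixels, leaving the
# white background untouched. Block size and delta are illustrative.

def embed_bits(image, bits, block=4, delta=8):
    """image: 2D list of gray levels (dark text on a bright background)."""
    out = [row[:] for row in image]
    rows, cols = len(image), len(image[0])
    block_idx = 0
    for br in range(0, rows, block):
        for bc in range(0, cols, block):
            if block_idx >= len(bits):
                return out
            shift = delta if bits[block_idx] else -delta
            for r in range(br, min(br + block, rows)):
                for c in range(bc, min(bc + block, cols)):
                    if out[r][c] < 128:                   # text pixel
                        out[r][c] = max(0, min(255, out[r][c] + shift))
            block_idx += 1
    return out

def extract_bits(original, marked, n_bits, block=4):
    """Recover bits by comparing mean text-pixel levels per block
    (non-blind; a real copier scheme must extract without the original)."""
    bits, rows, cols = [], len(original), len(original[0])
    for br in range(0, rows, block):
        for bc in range(0, cols, block):
            if len(bits) >= n_bits:
                return bits
            diffs = [marked[r][c] - original[r][c]
                     for r in range(br, min(br + block, rows))
                     for c in range(bc, min(bc + block, cols))
                     if original[r][c] < 128]
            bits.append(1 if diffs and sum(diffs) > 0 else 0)
    return bits
```

The robustness advantage claimed in the abstract comes from averaging over whole text regions, so per-pixel scanner noise largely cancels in each block statistic.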


Journal of Microscopy | 1994

Algorithms for automated characterization of cell populations in thick specimens from 3-D confocal fluorescence microscopy data

Badrinath Roysam; Hakan Ancin; Anoop K. Bhattacharjya; M. A. Chisti; R. Seegal; James N. Turner

Methods are presented for the automated, quantitative and three-dimensional (3-D) analysis of cell populations in thick, essentially intact tissue sections while maintaining intercell spatial relationships. This analysis replaces current manual methods, which are tedious and subjective.


IEEE Transactions on Neural Networks | 1994

Joint solution of low, intermediate, and high-level vision tasks by evolutionary optimization: Application to computer vision at low SNR

Anoop K. Bhattacharjya; Badrinath Roysam

Methods for conducting model-based computer vision from low-SNR (≤1 dB) image data are presented. Conventional algorithms break down in this regime due to a cascading of noise artifacts, and inconsistencies arising from the lack of optimal interaction between high- and low-level processing. These problems are addressed by solving low-level problems such as intensity estimation, segmentation, and boundary estimation jointly (synergistically) with intermediate-level problems such as the estimation of position, magnification, and orientation, and high-level problems such as object identification and scene interpretation. This is achieved by formulating a single objective function that incorporates all the data and object models, and a hierarchy of constraints in a Bayesian framework. All image-processing operations, including those that exploit the low- and high-level variables to satisfy multi-level pattern constraints, result directly from a parallel multi-trajectory global optimization algorithm. Experiments with simulated low-count (7-9 photons/pixel) 2-D Poisson images demonstrate that compared to non-joint methods, a joint solution not only results in more reliable scene interpretation, but also a superior estimation of low-level imaging variables. Typically, most object parameters are estimated to within 5% accuracy even with overlap and partial occlusion.
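In generic Bayesian terms, such a single objective can be written schematically as follows; this is an illustrative decomposition, not the paper's actual functional or notation:

```latex
% x: low-level variables (intensities, segmentation, boundaries)
% \theta: intermediate-level variables (position, magnification, orientation)
% c: high-level variables (object identity, scene interpretation)
E(x,\theta,c) = -\log p(\mathrm{data}\mid x) - \log p(x\mid\theta,c)
                - \log p(\theta\mid c) - \log p(c),
\qquad
(\hat{x},\hat{\theta},\hat{c}) = \arg\min_{x,\theta,c} E(x,\theta,c).
```

Minimizing one coupled functional over all three groups of variables is what lets high-level hypotheses regularize the low-level estimates, rather than processing the levels in a fixed pipeline.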


Journal of Electronic Imaging | 1999

New void-and-cluster method for improved halftone uniformity

Hakan Ancin; Anoop K. Bhattacharjya; Joseph Shu

Dithering quality of the void and cluster algorithm suffers due to fixed filter width and absence of a well-defined criterion for selecting among equally likely candidates during the computation of the locations of the tightest clusters and largest voids. Various researchers have addressed the issue of fixed filter width by adaptively changing the width with experimentally determined values. This paper addresses both aforementioned issues by using a Voronoi tessellation and three criteria to select among equally likely candidates. The algorithm uses vertices of the Voronoi tessellation, and the areas of the Voronoi regions to determine the locations of the largest voids and the tightest clusters. During void and cluster operations there may be multiple equally likely candidates for the locations of the largest voids and the tightest clusters. The selection among equally likely candidates is important when the number of candidates is larger than the number of dots for a given quantization level, or if there are candidates within the local neighborhood of one of the candidate points, or if a candidate’s Voronoi region shares one or more vertices with another candidate’s Voronoi region. Use of these methods leads to more uniform dot patterns for light and dark tones. The improved algorithm is compared with other dithering methods based on power spectrum characteristics and visual evaluation.
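As an illustration of the void-search step, the sketch below is a simplified stand-in, not the paper's algorithm: it locates the largest void as the empty cell farthest from any dot on a toroidal grid, and uses the distance to the second-nearest dot as a toy tie-breaking criterion in place of the Voronoi-vertex and Voronoi-area criteria the paper actually proposes.

```python
# Simplified void search on a binary dither pattern (toroidal, as is
# conventional for dither arrays). Assumes the pattern contains at least
# one dot; brute-force distances stand in for a Voronoi tessellation.
import math

def toroidal_dist(a, b, n):
    dx = min(abs(a[0] - b[0]), n - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), n - abs(a[1] - b[1]))
    return math.hypot(dx, dy)

def largest_void(pattern):
    """pattern: n x n list of 0/1; returns the (row, col) of the largest void."""
    n = len(pattern)
    dots = [(r, c) for r in range(n) for c in range(n) if pattern[r][c]]
    best, best_key = None, None
    for r in range(n):
        for c in range(n):
            if pattern[r][c]:
                continue
            d = sorted(toroidal_dist((r, c), dot, n) for dot in dots)
            # Primary criterion: nearest-dot distance; illustrative
            # tie-breaker: second-nearest-dot distance.
            key = (d[0], d[1] if len(d) > 1 else 0.0)
            if best_key is None or key > best_key:
                best, best_key = (r, c), key
    return best
```

A full implementation would rank all tied candidates with the three criteria described above before placing the next dot.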


IEEE Transactions on Neural Networks | 1992

Hierarchically structured unit-simplex transformations for parallel distributed optimization problems

Badrinath Roysam; Anoop K. Bhattacharjya

A stable deterministic approach is presented for incorporating unit-simplex constraints based on a hierarchical deformable-template structure. This approach (i) guarantees strict confinement of the search to the unit-simplex constraint set without introducing unwanted constraints; (ii) leads to a hierarchical, rather than a global, network interconnection structure; (iii) allows multiresolution processing; and (iv) allows easy closed-form incorporation of certain other inherently global constraints, such as general recursive symmetries. Selected examples are presented which illustrate and demonstrate large-scale application of the template method.
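As a loose illustration of the idea (this binary-tree-of-sigmoids construction is an assumption for demonstration, not the paper's deformable-template transformation), unconstrained variables can be mapped onto the unit simplex hierarchically, so the constraint holds by construction:

```python
# Hierarchical unit-simplex parameterization sketch: a complete binary tree
# of sigmoid splits. Each internal node divides its parent's mass between
# two children, so the 2**d leaf weights are non-negative and sum to 1 for
# ANY unconstrained inputs -- no projection or penalty terms needed.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hierarchical_simplex(params):
    """params: 2**d - 1 unconstrained reals -> 2**d weights on the unit simplex."""
    weights = [1.0]
    i = 0
    while i < len(params):
        nxt = []
        for w in weights:                 # split every node at this level
            s = sigmoid(params[i])
            i += 1
            nxt += [w * s, w * (1.0 - s)]
        weights = nxt
    return weights
```

The tree structure also gives the hierarchical (rather than global) interconnection pattern mentioned in the abstract: each split only couples a node to its two children.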


Signal Processing | 1992

A genetic algorithm for intelligent imaging from quantum-limited data

Anoop K. Bhattacharjya; Douglas E. Becker; Badrinath Roysam

A parallel genetic algorithm is presented for 2-D object recognition and simultaneous estimation of object position, magnification and orientation from quantum-limited sensor data. Traditional approaches to this problem are based on matching a concise set of features (boundaries, corners, moments, etc.) from the sensor data to a corresponding set of model features. These approaches break down at low SNR due to a deluge of artifacts among the data features, and inconsistencies arising from the lack of optimal interaction between high-level and low-level vision processes. As a first step towards overcoming the above hurdles, this paper presents a drastic departure from conventional vision-based approaches that (i) avoids the computation of features from noisy data, and (ii) uses a synergistic interaction of high-level and low-level vision processes to avoid inconsistencies. The combined vision problem is posed as a large-scale global optimization over a single objective function that directly involves the sensor data, the noise model and object templates. The optimization is accomplished using a genetic algorithm that runs on a parallel computer with 40 Transputers. Experimental results are presented which demonstrate robust operation and high accuracy with quantum-limited (5–10 events/pixel) data.
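A toy version of this approach can be sketched as follows; the population size, mutation scheme, and 1-D shift-only pose are illustrative simplifications, not the paper's parallel implementation. Candidate poses are scored directly by the Poisson log-likelihood of the raw counts, with no feature extraction:

```python
# Toy genetic algorithm: estimate the unknown circular shift of a 1-D
# template from Poisson count data by maximizing the Poisson log-likelihood.
# All GA hyperparameters here are illustrative assumptions.
import math
import random

def poisson_loglik(counts, template, shift):
    n = len(counts)
    ll = 0.0
    for i, k in enumerate(counts):
        lam = max(template[(i - shift) % n], 1e-6)
        ll += k * math.log(lam) - lam     # log k! term is constant in shift
    return ll

def ga_estimate_shift(counts, template, pop=20, gens=30, seed=0):
    rng = random.Random(seed)
    n = len(counts)
    population = [rng.randrange(n) for _ in range(pop)]
    for _ in range(gens):
        # Selection: keep the fitter half of the population.
        scored = sorted(population,
                        key=lambda s: -poisson_loglik(counts, template, s))
        parents = scored[:pop // 2]
        # Mutation: jitter each parent's shift by at most one position.
        children = [(p + rng.choice([-1, 0, 1])) % n for p in parents]
        population = parents + children
    return max(population, key=lambda s: poisson_loglik(counts, template, s))
```

Because fitness is the likelihood of the counts themselves, there is no feature-detection stage to be corrupted by noise, which is the point the abstract makes.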


SPIE/IS&T 1992 Symposium on Electronic Imaging: Science and Technology | 1992

Unsupervised noise removal algorithms for 3-D confocal fluorescence microscopy

Badrinath Roysam; Anoop K. Bhattacharjya; Chukka Srinivas; Donald H. Szarowski; James N. Turner

Fast algorithms are presented for effective removal of the noise artifact in 3-D confocal fluorescence microscopy images of extended spatial objects such as neurons. The algorithms are unsupervised in the sense that they automatically estimate and adapt to the spatially and temporally varying noise level in the microscopy data. An important feature of the algorithms is the fact that a 3-D segmentation of the field emerges jointly with the intensity estimate. The role of the segmentation is to limit any smoothing to the interiors of regions and hence avoid the blurring that is associated with conventional noise removal algorithms. Fast computation is achieved by parallel computation methods, rather than by algorithmic or modelling compromises. The noise removal proceeds iteratively, starting from a set of approximate user-supplied or default initial guesses of the underlying random process parameters. An expectation maximization algorithm is used to obtain a more precise characterization of these parameters, which are then input to a hierarchical estimation algorithm. This algorithm computes a joint solution of the related problems corresponding to intensity estimation, segmentation, and boundary-surface estimation subject to a combination of stochastic priors and syntactic pattern constraints. Three-dimensional stereoscopic renderings of processed 3-D images of murine hippocampal neurons are presented to demonstrate the effectiveness of the method. The processed images exhibit increased contrast and significant smoothing and reduction of the background intensity while avoiding any blurring of the neuronal structures.


Micron and Microscopica Acta | 1992

Unsupervised noise removal algorithms for three-dimensional confocal fluorescence microscopy

Badrinath Roysam; Anoop K. Bhattacharjya; Chukka Srinivas; Donald H. Szarowski; James N. Turner

Algorithms are presented for effective suppression of the quantum noise artifact that is inherent to three-dimensional confocal fluorescence microscopy images of extended spatial objects such as neurons. The specific advances embodied in these algorithms are as follows: (i) they incorporate an automatic and pattern-constrained three-dimensional segmentation of the image field, and use it to limit any smoothing to the interiors of specified image regions and hence avoid the blurring that is inevitably associated with conventional noise removal algorithms; (ii) they are ‘unsupervised’ in the sense that they automatically estimate and adapt to the unknown spatially and temporally varying noise level in the microscopy data. Fast computation is achieved by parallel computation methods, rather than by algorithmic or modelling compromises. The quantum noise artifact is modelled using a mixture of spatially non-homogeneous Poisson point processes. The intensity of each component process is constrained to lie in specific intervals. A set of segmentation and edge-site variables is used to determine the intensity of the mixture process. Using this model, the noise-removal process is formulated as the joint optimal estimation of the segmentation labels, edge sites and intensity of the mixture Poisson point process, subject to a combination of stochastic priors and syntactic pattern constraints. The computations proceed iteratively, starting from a set of approximate user-supplied or default initial guesses of the underlying random process parameters. An expectation maximization algorithm is used to obtain a more precise characterization of these parameters, which are then input to a joint estimation algorithm. Stereoscopic renderings of processed three-dimensional images of murine hippocampal neurons are presented to demonstrate the effectiveness of the method. The processed images exhibit increased contrast and significant smoothing and reduction of the background intensity while avoiding any blurring of the foreground neuronal structures.
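The parameter-characterization step can be illustrated with a minimal expectation-maximization loop for a two-component Poisson mixture (background vs. foreground intensity); the initial guesses and fixed iteration count are illustrative, and the full method's spatial constraints and edge-site variables are omitted:

```python
# Minimal EM sketch for a two-component Poisson mixture. Component 0 plays
# the role of background counts, component 1 of foreground counts; lam and
# pi starting values are illustrative default guesses.
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_poisson_mixture(counts, lam=(1.0, 10.0), pi=0.5, iters=50):
    lam0, lam1 = lam
    for _ in range(iters):
        # E-step: posterior probability that each count is foreground.
        resp = []
        for k in counts:
            p0 = (1.0 - pi) * poisson_pmf(k, lam0)
            p1 = pi * poisson_pmf(k, lam1)
            resp.append(p1 / (p0 + p1))
        # M-step: update mixing weight and the two Poisson means.
        s1 = sum(resp)
        pi = s1 / len(counts)
        lam1 = sum(r * k for r, k in zip(resp, counts)) / s1
        lam0 = sum((1.0 - r) * k for r, k in zip(resp, counts)) / (len(counts) - s1)
    return lam0, lam1, pi
```

In the full method, the estimates produced by this kind of loop seed the joint estimation of labels, edge sites and intensities.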


Microscopy and Microanalysis | 1999

Application and Quantitative Validation of Computer-Automated Three-Dimensional Counting of Cell Nuclei.

William Shain; Soraya Kayali; Donald H. Szarowski; Margaret I. Davis-Cox; Hakan Ancin; Anoop K. Bhattacharjya; Badrinath Roysam; James N. Turner

This study provides a quantitative validation of qualitative automated three-dimensional (3-D) analysis methods reported earlier. It demonstrates the applicability and quantitative accuracy of our method to detect, characterize, and count Feulgen-stained cell nuclei in two tissues (hippocampus and testes). These methods can provide important insights into the interpretation of biological, pharmacological, pathological, and toxicological events. A laser-scanned confocal light microscope was used to record 3-D images in which our algorithms automatically identified individual nuclei from the optical sections given an estimate of minimum nuclear size. The hippocampal data sets were also manually counted independently by five trained observers using the STERECON 3-D image reconstruction system. The automated and manual counts were compared. The computer counts were lower (approximately 14%) than the manual counts, mainly because the algorithms counted a nucleus only if it was present in five consecutive optical sections but the human counters included nuclei that were in fewer optical sections. A nucleus-by-nucleus comparison of the manual and automated counts verified that the automated analysis was accurate and reproducible, and permitted additional quantitative analyses not available from manual methods. The algorithms also identified subpopulations of nuclei within the hippocampal samples, and haploid and diploid nuclei in the testes. Our methods were shown to be repeatable, accurate, and more consistent than manual counting. Nuclei in regions of high (hippocampal pyramidal layer) and low (extrapyramidal layer) density were distinguished with equal ease. Haploid and diploid nuclei were distinguished in the testes, demonstrating that our automated method may be useful for ploidy analysis. The results presented here on hippocampus and testis are consistent with other qualitative results from the liver and from immunohistochemically labeled substantia nigra, demonstrating the applicability of our software across tissues and preparation methods.
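The five-consecutive-section counting rule mentioned above can be sketched as follows (the run-detection code and data layout are illustrative, not the authors' implementation):

```python
# Sketch of consecutive-section counting: a nucleus label is counted only
# if it appears in at least `min_run` consecutive optical sections, which
# is why the automated counts ran lower than the manual ones.

def count_nuclei(sections, min_run=5):
    """sections: one set of nucleus labels per optical section, in z-order."""
    labels = set().union(*sections)
    counted = 0
    for lab in labels:
        run = best = 0
        for sec in sections:
            run = run + 1 if lab in sec else 0   # extend or reset the run
            best = max(best, run)
        if best >= min_run:
            counted += 1
    return counted
```

Lowering `min_run` admits the shallow nuclei that human counters included, reproducing the direction of the discrepancy described in the abstract.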


Proceedings of SPIE | 1998

Improving void-and-cluster for better halftone uniformity

Hakan Ancin; Anoop K. Bhattacharjya; Joseph Shu

Dithering quality of the void and cluster algorithm suffers due to fixed filter width and absence of a well-defined criterion for selecting among equally likely candidates during the computation of the locations of the tightest clusters and largest voids. Various researchers have addressed the issue of fixed filter width by adaptively changing the width with experimentally determined values. This paper addresses both aforementioned issues by using a Voronoi tessellation and two criteria to select among equally likely candidates. The algorithm uses vertices of the Voronoi tessellation, and the areas of the Voronoi regions to determine the locations of the largest voids and the tightest clusters. During void and cluster operations there may be multiple equally likely candidates for the locations of the largest voids and tightest clusters. The selection among equally likely candidates is important when the number of candidates is larger than the number of dots for a given quantization level, or if there are candidates within the local neighborhood of one of the candidate points, or if a candidate's Voronoi region shares one or more vertices with another candidate's Voronoi region. Use of these methods leads to more uniform dot patterns for light and dark tones. The improved algorithm is compared with other dithering methods.

Collaboration


Dive into Anoop K. Bhattacharjya's collaborations.

Top Co-Authors

Hakan Ancin

Rensselaer Polytechnic Institute

James N. Turner

New York State Department of Health


Donald H. Szarowski

New York State Department of Health


Douglas E. Becker

Rensselaer Polytechnic Institute


M. A. Chisti

New York State Department of Health


R. Seegal

New York State Department of Health


Soraya Kayali

Rensselaer Polytechnic Institute
