
Publication


Featured research published by Sailes K. Sengupta.


Asilomar Conference on Signals, Systems and Computers | 1993

Detecting buried objects by fusing dual-band infrared images

Gregory A. Clark; Sailes K. Sengupta; Michael R. Buhl; Robert J. Sherwood; Paul C. Schaich; N. Bull; Ronald J. Kane; Marvin J. Barth; David J. Fields; Michael R. Carter

The authors have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, infrared imagery, and ground penetrating radar (GPR), have been used to acquire data on a number of buried mines and mine surrogates. Because the visible wavelength and GPR data are currently incomplete, the paper focuses on the fusion of two-band infrared images. The authors use feature-level fusion and supervised learning with the probabilistic neural network (PNN) to evaluate detection performance. The novelty of the work lies in the application of advanced target recognition algorithms, the fusion of dual-band infrared images, and the evaluation of the techniques using two real data sets.
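
The abstract names feature-level fusion with a probabilistic neural network (PNN). The sketch below is an illustration only: it concatenates hypothetical per-band window features and classifies them with a minimal Parzen-window PNN; the paper's actual features, preprocessing, and kernel width are not given here and are assumed for the example.

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=1.0):
    """Minimal probabilistic neural network (Parzen-window) classifier:
    each test sample gets the class whose training samples produce the
    largest average Gaussian-kernel response."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)        # squared distances
        kernel = np.exp(-d2 / (2.0 * sigma ** 2))
        scores = [kernel[y_train == c].mean() for c in classes]
        preds.append(classes[np.argmax(scores)])
    return np.array(preds)

# Feature-level fusion: concatenate features from the two IR bands.
# The features below are random placeholders, not the paper's features.
feat_band1 = np.random.rand(100, 4)   # e.g. statistics of 5-micron windows
feat_band2 = np.random.rand(100, 4)   # e.g. statistics of 10-micron windows
X = np.hstack([feat_band1, feat_band2])
y = np.random.randint(0, 2, 100)      # synthetic labels: 1 = mine, 0 = background

print(pnn_classify(X[:80], y[:80], X[80:], sigma=0.5))
```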


Proceedings of SPIE | 1993

Sensor feature fusion for detecting buried objects

Gregory A. Clark; Sailes K. Sengupta; Robert J. Sherwood; Jose D. Hernandez; Michael R. Buhl; Paul C. Schaich; Ronald J. Kane; Marvin J. Barth; Nancy DelGrande

Given multiple registered images of the Earth's surface from dual-band infrared sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. The sensor suite currently includes two infrared sensors (5 micron and 10 micron wavelengths) and one ground penetrating radar (GPR) of the wide-band pulsed synthetic aperture type. We use a supervised learning pattern recognition approach to detect metal and plastic land mines buried in soil. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing feature information from multiple sensor types, including dual-band infrared and ground penetrating radar. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved operational problem of detecting buried land mines from an airborne standoff platform.
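
The four main parts listed above (preprocessing, feature extraction, feature selection, classification) can be read as a pipeline. The sketch below wires placeholder versions of each stage together; every individual step (band normalization, window statistics, a fixed feature subset, a nearest-mean classifier) is an assumption made for illustration, not the paper's algorithm.

```python
import numpy as np

def preprocess(image):
    # Placeholder preprocessing: normalize to zero mean, unit variance.
    return (image - image.mean()) / (image.std() + 1e-9)

def extract_features(window):
    # Placeholder window statistics standing in for the paper's features.
    return np.array([window.mean(), window.std(), window.max(), window.min()])

def select_features(features, keep_idx):
    # Keep only the feature columns judged most discriminative.
    return features[:, keep_idx]

def classify(train_X, train_y, test_X):
    # Nearest-mean classifier as a stand-in for the supervised learner.
    means = {c: train_X[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    return np.array([min(means, key=lambda c: np.linalg.norm(x - means[c]))
                     for x in test_X])

# Tiny synthetic example tying the four stages together.
rng = np.random.default_rng(0)
windows = [preprocess(rng.random((8, 8))) for _ in range(60)]
X = np.array([extract_features(w) for w in windows])
X = select_features(X, keep_idx=[0, 1])   # pretend features 0 and 1 were selected
y = rng.integers(0, 2, 60)
print(classify(X[:40], y[:40], X[40:]))
```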


IEEE Transactions on Nuclear Science | 1996

Automatic image analysis for detecting and quantifying gamma-ray sources in coded-aperture images

Paul C. Schaich; Gregory A. Clark; Sailes K. Sengupta; Klaus-Peter Ziock

We report the development of an automatic image analysis system that detects gamma-ray source regions in images obtained from a coded-aperture gamma-ray imager. The number of gamma sources in the image is not known prior to analysis. The system counts the number (K) of gamma sources detected in the image and estimates the lower bound for the probability that the number of sources in the image is K. The system consists of a two-stage pattern classification scheme in which the probabilistic neural network is used in the supervised learning mode. The algorithms were developed and tested using real gamma-ray images from controlled experiments in which the number and location of depleted uranium source disks in the scene are known. The novelty of the work lies in the creative combination of algorithms and the successful application of the algorithms to real images of gamma-ray sources.
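
A schematic reading of such a two-stage scheme is: a first stage proposes candidate source regions, and a second stage accepts or rejects each candidate before counting the survivors. The sketch below uses simple thresholding plus connected-component labeling for the first stage and a placeholder region-size test for the second; the paper's supervised PNN stage and its probability lower bound are not reproduced.

```python
import numpy as np
from scipy import ndimage

def count_sources(image, threshold=2.0, min_pixels=4):
    """Stage 1: threshold the image and label candidate regions.
    Stage 2: accept or reject each candidate (placeholder size test),
    then report K, the number of accepted source regions."""
    labels, n_candidates = ndimage.label(image > threshold)
    accepted = [r for r in range(1, n_candidates + 1)
                if np.sum(labels == r) >= min_pixels]
    return len(accepted), accepted

# Synthetic image with two bright blobs standing in for gamma sources.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.3, (64, 64))
img[10:14, 10:14] += 5.0
img[40:45, 30:35] += 4.0
K, regions = count_sources(img)
print("K =", K)
```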


Asilomar Conference on Signals, Systems and Computers | 1994

Computer vision for detecting and quantifying gamma-ray sources in coded-aperture images

Paul C. Schaich; Gregory A. Clark; Sailes K. Sengupta; Klaus-Peter Ziock

We report the development of an automatic image analysis system that detects gamma-ray source regions in images obtained from a coded-aperture gamma-ray imager. The number of gamma sources in the image is not known prior to analysis. The system counts the number (K) of gamma sources detected in the image and estimates the lower bound for the probability that the number of sources in the image is K. The system consists of a two-stage pattern classification scheme in which the probabilistic neural network is used in the supervised learning mode. The algorithms were developed and tested using real gamma-ray images from controlled experiments in which the number and location of depleted uranium source disks in the scene are known.


Archive | 1992

Automated pattern analysis in petroleum exploration

Ibrahim Palaz; Sailes K. Sengupta

This book includes topics on pattern recognition (PR), image analysis (IA), artificial intelligence (AI), scientific and engineering disciplines, and petroleum exploration. The fields in which PR, IA, and AI are applied in this book are not limited to sand grain shape analysis and well test analysis.


Substance Identification Technologies | 1994

Data fusion for the detection of buried land mines

Gregory A. Clark; Sailes K. Sengupta; Paul C. Schaich; Robert J. Sherwood; Michael R. Buhl; Jose D. Hernandez; Ronald J. Kane; Marvin J. Barth; David J. Fields; Michael R. Carter

We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, IR imagery, and ground penetrating radar, have been used to acquire data on a number of buried mines and mine surrogates. We present this data along with a discussion of our application of sensor fusion techniques for this particular detection problem. We describe our data fusion architecture and discuss some relevant results of these classification methods.


Asilomar Conference on Signals, Systems and Computers | 1992

Computer vision and sensor fusion for detecting buried objects

Gregory A. Clark; Jose E. Hernandez; Sailes K. Sengupta; Robert J. Sherwood; Paul C. Schaich; Michael R. Buhl; Ronald J. Kane; Marvin J. Barth; Nancy DelGrande

Given multiple images of the Earth's surface from dual-band infrared sensors, a system that fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites is presented. Supervised learning pattern classifiers (including neural networks) are used. Results of experiments to detect buried land mines from real data are given, and the usefulness of fusing information from multiple sensor types is evaluated. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved problem of detecting buried land mines from an airborne standoff platform.


International Geoscience and Remote Sensing Symposium | 2004

Phase-based road detection in multi-source images

Sailes K. Sengupta; Aseneth S. Lopez; James M. Brase; David W. Paglieroni

The problem of robust automatic road detection in remotely sensed images is complicated by the fact that the sensor, spatial resolution, acquisition conditions, road width, road orientation and road material composition can all vary. A novel technique for detecting road pixels in multisource remotely sensed images based on the phase (i.e., orientation or directional) information in edge pixels is described. A very dense map of edges extracted from the image is separated into channels, each containing edge pixels whose phases lie within a different range of orientations. The edge map associated with each channel is de-cluttered. A map of road pixels is formed by re-combining the de-cluttered channels into a composite edge image which is itself then separately de-cluttered. Road detection results are provided for DigitalGlobe and TerraServer-USA images. Road representations suitable for various applications are then discussed.
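
The channel decomposition described above can be sketched as follows. The gradient-based edge and orientation estimates and the morphological opening used as "de-cluttering" are illustrative stand-ins, not the paper's actual operators or thresholds.

```python
import numpy as np
from scipy import ndimage

def phase_channel_road_map(image, n_channels=4, mag_threshold=0.1):
    """Split edge pixels into channels by gradient orientation, de-clutter
    each channel, then recombine and de-clutter the composite map."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    phase = np.mod(np.arctan2(gy, gx), np.pi)     # orientation in [0, pi)
    edges = magnitude > mag_threshold

    combined = np.zeros_like(edges)
    bin_width = np.pi / n_channels
    for k in range(n_channels):
        channel = edges & (phase >= k * bin_width) & (phase < (k + 1) * bin_width)
        # Placeholder de-cluttering: drop isolated edge pixels in this channel.
        channel = ndimage.binary_opening(channel, structure=np.ones((2, 2)))
        combined |= channel
    # The recombined composite map is de-cluttered once more.
    return ndimage.binary_opening(combined, structure=np.ones((2, 2)))

road_map = phase_channel_road_map(np.random.rand(128, 128))
print(road_map.sum(), "candidate road pixels")
```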


Electronic Imaging | 2003

Use of machine vision techniques to detect human settlements in satellite images

Chandrika Kamath; Sailes K. Sengupta; Douglas N. Poland; John A. H. Futterman

The automated production of maps of human settlement from recent satellite images is essential to studies of urbanization, population movement, and the like. The spectral and spatial resolution of such imagery is often high enough to successfully apply computer vision techniques. However, vast amounts of data have to be processed quickly. In this paper, we propose an approach that processes the data in several different stages. At each stage, using features appropriate to that stage, we identify the portion of the data likely to contain information relevant to the identification of human settlements. This data is used as input to the next stage of processing. Since the size of the data has been reduced, we can use more complex features in this next stage. These features can be more representative of human settlements, and also more time consuming to extract from the image data. Such a hierarchical approach enables us to process large amounts of data in a reasonable time, while maintaining the accuracy of human settlement identification. We illustrate our multi-stage approach using IKONOS 4-band and panchromatic images, and compare it with the straightforward processing of the entire image.
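
The coarse-to-fine idea (cheap features prune most of the image in early stages; costlier features run only on the tiles that survive) can be sketched as below. Both scoring functions and all thresholds are hypothetical stand-ins for the stage-specific features used in the paper.

```python
import numpy as np

def cheap_score(tile):
    # Inexpensive first-stage feature: local intensity variance.
    return tile.var()

def expensive_score(tile):
    # Costlier second-stage feature: edge density from gradients (illustrative).
    gy, gx = np.gradient(tile.astype(float))
    return (np.hypot(gx, gy) > 0.1).mean()

def hierarchical_detect(image, tile=32, coarse_thr=0.02, fine_thr=0.2):
    """Stage 1 keeps only tiles passing the cheap test; stage 2 applies the
    expensive test to the survivors."""
    hits = []
    for r in range(0, image.shape[0] - tile + 1, tile):
        for c in range(0, image.shape[1] - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile]
            if cheap_score(patch) < coarse_thr:
                continue                     # pruned cheaply, never reaches stage 2
            if expensive_score(patch) > fine_thr:
                hits.append((r, c))
    return hits

print(hierarchical_detect(np.random.rand(256, 256)))
```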


High-Power Lasers and Applications | 2003

Detecting human settlements in satellite images

Sailes K. Sengupta; Chandrika Kamath; Douglas N. Poland; John A. H. Futterman

The automated production of maps of human settlement from recent satellite images is essential to detailed studies of urbanization, population movement, and the like. Commercial satellite imagery is becoming available with sufficient spectral and spatial resolution to apply computer vision techniques previously considered only for laboratory (high resolution, low noise) images. In this paper we attempt to extract human settlement from IKONOS 4-band and panchromatic images using spectral segmentation together with a form of generalized second-order statistics and detection of edges, corners, and other candidate human-made features in the imagery.
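
One familiar form of second-order statistics is the gray-level co-occurrence matrix. The sketch below computes a small co-occurrence matrix and a Haralick-style contrast measure as an illustration of that family of texture features; the paper's "generalized" statistics, the spectral segmentation, and the edge/corner detectors are not specified here.

```python
import numpy as np

def cooccurrence(image, levels=8, dr=0, dc=1):
    """Gray-level co-occurrence matrix for a single pixel offset (dr, dc).
    Assumes the input image is scaled to [0, 1]."""
    q = np.floor(image * levels).clip(0, levels - 1).astype(int)
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    return glcm / glcm.sum()

def texture_contrast(glcm):
    # Haralick-style contrast: sum over (i - j)^2 * p(i, j).
    i, j = np.indices(glcm.shape)
    return np.sum((i - j) ** 2 * glcm)

patch = np.random.rand(32, 32)    # stand-in for a window from an IKONOS band
print(texture_contrast(cooccurrence(patch)))
```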

Collaboration


Dive into Sailes K. Sengupta's collaboration.

Top Co-Authors

Gregory A. Clark, Lawrence Livermore National Laboratory
Paul C. Schaich, Lawrence Livermore National Laboratory
Robert J. Sherwood, Lawrence Livermore National Laboratory
Marvin J. Barth, Lawrence Livermore National Laboratory
Michael R. Buhl, Lawrence Livermore National Laboratory
Ronald J. Kane, Lawrence Livermore National Laboratory
David J. Fields, Lawrence Livermore National Laboratory
Douglas N. Poland, Lawrence Livermore National Laboratory
Aseneth S. Lopez, Lawrence Livermore National Laboratory
Chandrika Kamath, Lawrence Livermore National Laboratory