Publication


Featured research published by Ashit Talukder.


Autonomous Robots | 2005

Obstacle Detection and Terrain Classification for Autonomous Off-Road Navigation

Roberto Manduchi; Andres Castano; Ashit Talukder; Larry H. Matthies

Autonomous navigation in cross-country environments presents many new challenges with respect to more traditional, urban environments. The lack of highly structured components in the scene complicates the design of even basic functionalities such as obstacle detection. In addition to the geometric description of the scene, terrain typing is also an important component of the perceptual system. Recognizing the different classes of terrain and obstacles enables the path planner to choose the most efficient route toward the desired goal. This paper presents new sensor processing algorithms that are suitable for cross-country autonomous navigation. We consider two sensor systems that complement each other in an ideal sensor suite: a color stereo camera and a single-axis ladar. We propose an obstacle detection technique, based on stereo range measurements, that does not rely on the typical structural assumptions about the scene (such as the presence of a visible ground plane); a color-based classification system to label the detected obstacles according to a set of terrain classes; and an algorithm for the analysis of ladar data that allows one to discriminate between grass and obstacles (such as tree trunks or rocks), even when such obstacles are partially hidden in the grass. These algorithms have been developed and implemented by the Jet Propulsion Laboratory (JPL) as part of its involvement in a number of projects sponsored by the US Department of Defense, and have enabled safe autonomous navigation in highly vegetated, off-road terrain.
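
As a rough illustration of the ground-plane-free obstacle detection idea, the sketch below labels 3D stereo points as obstacle points when they form steep, sufficiently tall pairs with nearby points; the thresholds, the brute-force search, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def detect_obstacles(points, h_min=0.10, h_max=0.50, slope_deg=45.0):
    """Label 3D points (x, y, z) as obstacle points if they form a
    'compatible' pair with another point: vertical separation within
    [h_min, h_max] metres and the connecting segment steeper than
    slope_deg with respect to the horizontal plane.
    points: (N, 3) array with z pointing up. Returns a boolean mask.
    Brute-force O(N^2) version for clarity; a real system would search
    only a local image neighbourhood around each pixel."""
    tan_slope = np.tan(np.radians(slope_deg))
    obstacle = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d = points - p                              # vectors from p to every other point
        dz = np.abs(d[:, 2])                        # vertical separation
        dxy = np.linalg.norm(d[:, :2], axis=1)      # horizontal separation
        compatible = (dz >= h_min) & (dz <= h_max) & (dz > tan_slope * np.maximum(dxy, 1e-9))
        compatible[i] = False
        if compatible.any():
            obstacle[i] = True
            obstacle[compatible] = True
    return obstacle
```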


Intelligent Robots and Systems | 2004

Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

Ashit Talukder; Larry H. Matthies

Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time, camera-based object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160×120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as toward improved position estimation where GPS is unavailable.
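
A minimal sketch of the general egomotion-compensation idea described above: given dense depth, camera intrinsics, and an estimated rigid egomotion, predict the flow a static scene would produce and flag pixels whose measured flow deviates from it. The function names, the threshold, and the pinhole-camera assumptions are illustrative, not the paper's code.

```python
import numpy as np

def predicted_flow(depth, K, R, t):
    """Predict the optical flow a static scene point at each pixel would
    exhibit under camera rotation R and translation t (the estimated
    egomotion), given a dense depth map and intrinsics K.
    Returns an (H, W, 2) flow field in pixels."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                  # back-project to rays
    pts = rays * depth[..., None]                                    # 3D points, camera frame
    pts2 = pts @ R.T + t                                             # apply egomotion
    proj = pts2 @ K.T
    uv2 = proj[..., :2] / proj[..., 2:3]
    return uv2 - pix[..., :2]

def moving_object_mask(flow_measured, flow_predicted, thresh_px=1.5):
    """Pixels whose measured flow deviates from the egomotion-predicted
    flow by more than thresh_px are flagged as independently moving."""
    residual = np.linalg.norm(flow_measured - flow_predicted, axis=-1)
    return residual > thresh_px
```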


Intelligent Robots and Systems | 2003

Real-time detection of moving objects in a dynamic scene from moving robotic vehicles

Ashit Talukder; Steve B. Goldberg; Larry H. Matthies; Adnan Ansar

Dynamic scene perception is currently limited to detection of moving objects from a static platform or in scenes with flat backgrounds. We discuss novel real-time methods to segment moving objects in the motion field formed by a moving camera/robotic platform undergoing mostly translational motion. Our solution does not explicitly require any egomotion knowledge, thereby making it applicable to mobile outdoor robot problems where no IMU information is available. We address two problems in dynamic scene perception on the move: first using only 2D monocular grayscale images, and second using 3D range information from stereo as well. Our solution involves real-time optical flow computation, followed by optical flow field preprocessing to highlight moving object boundaries. When range data from stereo are computed, a 3D optical flow field is estimated by combining range information with the 2D optical flow estimates, followed by a similar 3D flow field preprocessing step. A segmentation of the flow field using fast flood filling then identifies every moving object in the scene with a unique label. This novel algorithm is expected to be the critical first step in robust recognition of moving vehicles and people from mobile outdoor robots, and therefore offers a robust solution to the problem of dynamic scene perception in the presence of certain kinds of robot motion. It is envisioned that our algorithm will benefit robot scene perception in urban environments for scientific, commercial, and defense applications. Results of our real-time algorithm on a mobile robot in a scene with a single moving vehicle are presented.
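
The final labelling step, segmenting the preprocessed flow field with a fast flood fill, might look like the following sketch; the SciPy connected-component call and the thresholds are illustrative stand-ins for the paper's implementation.

```python
import numpy as np
from scipy import ndimage as ndi

def label_moving_regions(flow, mag_thresh=1.0, min_area=50):
    """Segment a dense optical-flow field (H, W, 2) into independently
    moving regions: threshold the flow magnitude, then flood-fill
    (connected-component label) the binary mask so every moving object
    receives a unique integer label. Small spurious blobs are discarded."""
    magnitude = np.linalg.norm(flow, axis=-1)
    mask = magnitude > mag_thresh
    labels, n = ndi.label(mask)                 # fast flood-fill style labelling
    for lab in range(1, n + 1):
        if np.count_nonzero(labels == lab) < min_area:
            labels[labels == lab] = 0           # remove tiny regions as noise
    return labels
```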


Proceedings of SPIE | 2010

Fast, High-Resolution Terahertz Radar Imaging at 25 Meters

Ken B. Cooper; Robert J. Dengler; Nuria Llombart; Ashit Talukder; Anand V. Panangadan; Chris Peay; Imran Mehdi; Peter H. Siegel

We report improvements in the scanning speed and standoff range of an ultra-wide-bandwidth terahertz (THz) imaging radar for person-borne concealed object detection. Fast beam scanning of the single-transceiver radar is accomplished by rapidly deflecting a flat, lightweight subreflector in a confocal Gregorian optical geometry. With RF back-end improvements also implemented, the radar imaging rate has increased by a factor of about 30 compared to that achieved previously in a 4 m standoff prototype instrument. In addition, a new 100 cm diameter ellipsoidal aluminum reflector yields beam spot diameters of approximately 1 cm over a 50×50 cm field of view at a range of 25 m, although some aberrations are observed that probably arise from misaligned optics. Through-clothes images of concealed pipes at 25 m range, acquired in 5 seconds, are presented, and the impact of the reduced signal-to-noise ratio at an even faster frame rate is analyzed. These results inform the requirements for eventually achieving sub-second or video-rate THz radar imaging.
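
As a consistency check (not from the abstract), the roughly 1 cm spot for a 100 cm aperture at 25 m is about what diffraction predicts, assuming the radar operates near 675 GHz (wavelength about 0.44 mm); the operating frequency is an assumption on our part.

```latex
% Approximate 3 dB spot diameter of a uniformly illuminated circular aperture
% of diameter D at range R; the 675 GHz operating frequency is an assumption.
\[
  d_{\mathrm{3dB}} \approx 1.02\,\frac{\lambda R}{D}
  = 1.02 \times \frac{0.44\,\mathrm{mm} \times 25\,\mathrm{m}}{1.0\,\mathrm{m}}
  \approx 1.1\,\mathrm{cm}
\]
```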


Intelligent Robots and Systems | 2002

Autonomous terrain characterisation and modelling for dynamic control of unmanned vehicles

Ashit Talukder; Roberto Manduchi; Rebecca Castano; Ken Owens; Larry H. Matthies; Andres Castano; Robert W. Hogg

We discuss techniques to predict the dynamic vehicle response to various natural obstacles. This method can then be used to adjust the vehicle dynamics to optimize performance (e.g., speed) while ensuring that the vehicle is not damaged. This capability opens up a new area of obstacle negotiation for UGVs, in which the vehicle moves over certain obstacles rather than avoiding them, resulting in more effective achievement of objectives. Robust obstacle negotiation and vehicle dynamics prediction require several key technologies that are discussed in this paper. We detect and segment (label) obstacles using a novel 3D obstacle algorithm. The material of each labelled obstacle (rock, vegetation, etc.) is then determined using a texture or color classification scheme. Terrain load-bearing surface models are then constructed using vertical springs to model the compressibility and traversability of each obstacle in front of the vehicle. The terrain model is then combined with the vehicle suspension model to yield an estimate of the maximum safe velocity and to predict the vehicle dynamics as the vehicle follows a path. This end-to-end obstacle negotiation system is envisioned to be useful for optimized path planning and vehicle navigation in terrain cluttered with vegetation, bushes, rocks, etc. Results on natural terrain containing a variety of materials are presented.
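
A minimal sketch of how a vertical-spring terrain model can be combined with a suspension model to bound speed, using a crude quarter-car energy argument; all parameters, names, and equations below are illustrative assumptions, not the paper's vehicle model.

```python
import numpy as np

# Illustrative quarter-car parameters (not taken from the paper)
SPRUNG_MASS = 250.0   # kg supported per wheel
SPRING_K    = 2.0e4   # suspension stiffness, N/m
MAX_TRAVEL  = 0.12    # allowable suspension deflection, m

def max_safe_speed(obstacle_height, terrain_stiffness, speeds=np.linspace(0.2, 5.0, 50)):
    """Return the highest speed (m/s) at which driving over an obstacle of the
    given height keeps the predicted suspension deflection within MAX_TRAVEL.
    The terrain is modelled as a vertical spring in series with the suspension,
    so a compressible obstacle (e.g. vegetation) deflects before the chassis."""
    k_eff = (SPRING_K * terrain_stiffness) / (SPRING_K + terrain_stiffness)
    best = 0.0
    for v in speeds:
        # Crude energy argument: vertical velocity imparted by the bump
        # (v_z ~ v * h / L, with L ~ 1 m approach length) is absorbed by
        # the effective spring, giving deflection x = v_z * sqrt(m / k_eff).
        v_z = v * obstacle_height / 1.0
        deflection = np.sqrt(SPRUNG_MASS * v_z**2 / k_eff)
        if deflection <= MAX_TRAVEL:
            best = v
    return best
```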


Optical Engineering | 1998

General methodology for simultaneous representation and discrimination of multiple object classes

Ashit Talukder; David Casasent

We present a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature (NLEF) extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from the standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem (discrimination) and to classification and pose estimation of two similar objects under 3-D aspect angle variations (representation and discrimination).
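
For context, the Fisher discriminant baseline that the MRDF is compared against has a simple closed-form solution; a minimal two-class sketch (this is the baseline, not the MRDF itself):

```python
import numpy as np

def fisher_direction(X0, X1, reg=1e-6):
    """Closed-form Fisher linear discriminant for two classes.
    X0, X1: (n_i, d) sample matrices. Returns the unit projection
    direction maximizing between-class over within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)   # within-class scatter
    w = np.linalg.solve(Sw + reg * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)
```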


IEEE Sensors Journal | 2008

Clinical Evaluation of a Novel Interstitial Fluid Sensor System for Remote Continuous Alcohol Monitoring

Manju Venugopal; Kathryn Feuvrel; David Mongin; Shabbir Bambot; Mark L. Faupel; Anand V. Panangadan; Ashit Talukder; Rishi Pidva

This study describes the functioning of a novel sensor that measures the alcohol concentration in the interstitial fluid (ISF) of a human subject. ISF is extracted using vacuum pressure from micropores in the stratum corneum layer of the skin. The pores are created by focusing a near-infrared laser on a layer of black dye attached to the skin. This poration procedure is essentially painless. Clinical studies show that the sensor readings are correlated with blood alcohol levels estimated using a breathalyzer. Alcohol could be detected in the subject's ISF within 15 min of the first oral intake of alcohol. Tests in a laboratory setup show that the sensor exhibits a linear response to alcohol concentrations in the range 0%-0.2%. The sensor is minimally invasive, and alcohol monitoring with the sensor was shown to continue even while the subject is asleep. The sensor is viable for approximately three days after skin poration. The sensor is interfaced to a wireless health monitoring system that transfers sensor data over existing wide-area networks, such as the Internet and a cellular phone network, to enable real-time remote monitoring of subjects.


Transactions of the ASABE | 2001

Detection and segmentation of items in X-ray imagery

David Casasent; Ashit Talukder; Pamela M. Keagy; Thomas F. Schatzki

Processing of real-time X-ray images of randomly oriented and touching pistachio nuts for product inspection is considered. Processing to isolate individual nuts (segmentation) is emphasized. The processing consisted of a blob coloring algorithm, filters, and watershed transforms to segment touching nuts, and morphological processing to produce an image of only the nutmeat. Each operation is detailed and quantitative data for each are presented. These techniques are useful for many different product inspection problems in agriculture and other areas.
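
A minimal sketch of a marker-based watershed pipeline for splitting touching blobs in a binary mask, using scikit-image/SciPy calls as illustrative stand-ins for the paper's implementation:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_nuts(binary_mask, min_distance=20):
    """Separate touching objects in a binary mask with a marker-based
    watershed: compute the distance transform, place one marker per
    local maximum, then run the watershed on the inverted distance map.
    Returns an integer label image (one label per separated object)."""
    distance = ndi.distance_transform_edt(binary_mask)
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    markers = np.zeros_like(binary_mask, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=binary_mask)
```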


Neural Networks | 2001

A closed-form neural network for discriminatory feature extraction from high-dimensional data

Ashit Talukder; David Casasent

We consider a new neural network for data discrimination in pattern recognition applications. We refer to this as a maximum discriminating feature (MDF) neural network. Its weights are obtained in closed form, thereby overcoming problems associated with other nonlinear neural networks. It uses neuron activation functions that are dynamically chosen based on the application. It is theoretically shown to provide nonlinear transforms of the input data that are more general than those provided by other nonlinear multilayer perceptron neural networks and support-vector machine techniques for cases involving high-dimensional (image) inputs where training data are limited and the classes are not linearly separable. We experimentally verify this on synthetic examples.
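
To illustrate the appeal of closed-form weights, the sketch below fixes a random nonlinear hidden layer and solves the output weights by regularized least squares; this is an illustrative alternative construction, not the MDF derivation in the paper.

```python
import numpy as np

def closed_form_layer(X, y, hidden=200, reg=1e-3, seed=0):
    """One way to obtain nonlinear-classifier weights in closed form
    (NOT the paper's MDF derivation): fix a random nonlinear hidden
    layer, then solve the output weights by regularized least squares.
    X: (n, d) inputs, y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], hidden))
    H = np.tanh(X @ W_in)                         # nonlinear hidden activations
    A = H.T @ H + reg * np.eye(hidden)
    w_out = np.linalg.solve(A, H.T @ y)           # closed-form output weights
    return W_in, w_out

def predict(X, W_in, w_out):
    return np.sign(np.tanh(X @ W_in) @ w_out)
```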


Ultramicroscopy | 2002

Mapping the Mesoscale Interface Structure in Polycrystalline Materials

C.T. Wu; Brent L. Adams; C.L. Bauer; David Casasent; A. Morawiec; Serhat Özdemir; Ashit Talukder

A new experimental approach to the quantitative characterization of polycrystalline microstructure by scanning electron microscopy is described. Combining automated electron backscatter diffraction with conventional scanning contrast imaging and with calibrated serial sectioning, the new method (mesoscale interface mapping system) recovers precision estimates of the 3D idealized aggregate function G(x). This function embodies a description of lattice phase and orientation (limiting resolution approximately 1 degree) at each point x (limiting spatial resolution approximately 100 nm) and therefore contains a complete mesoscale description of the interfacial network. The principal challenges of the method, achieving precise spatial registry between adjacent images and adequate distortion correction, are described. A description of the algorithm for control of the various components of the system is also provided.
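
The spatial-registry step can be illustrated with a translation-only phase-correlation sketch; the scikit-image/SciPy calls are chosen here as assumptions, and the paper's system additionally handles distortion correction.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.registration import phase_cross_correlation

def register_section(reference, moving, upsample=10):
    """Estimate the translation bringing one serial-section image into
    registry with the previous one (sub-pixel phase correlation), then
    resample it. Translation-only; rotation and distortion correction
    would be handled separately."""
    shift, _, _ = phase_cross_correlation(reference, moving, upsample_factor=upsample)
    return ndi.shift(moving, shift), shift
```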

Collaboration


Dive into Ashit Talukder's collaborations.

Top Co-Authors

David Casasent
Carnegie Mellon University

Anand V. Panangadan
University of Southern California

L. Chandramouli
University of Southern California

Cauligi S. Raghavendra
University of Southern California

Shuping Liu
University of Southern California

Steve Monacos
California Institute of Technology

Tanwir Sheikh
University of Southern California

Rishi Pidva
University of Southern California