Amol Ambardekar
University of Nevada, Reno
Publications
Featured research published by Amol Ambardekar.
Optics Letters | 2005
Amol Ambardekar; Yong-qing Li
We report on optical levitation and manipulation of microscopic particles that are stuck on a glass surface with pulsed optical tweezers. An infrared pulsed laser at 1.06 μm was used to generate a large gradient force (up to 10⁻⁹ N) within a short duration (approximately 45 μs) that overcomes the adhesive interaction between the particles and the glass surface. A low-power continuous-wave diode laser at 785 nm was then used to capture and manipulate the levitated particle. We have demonstrated that both stuck dielectric and biological micrometer-sized particles, including polystyrene beads, yeast cells, and Bacillus cereus bacteria, can be levitated and manipulated with this technique. We measured the single-pulse levitation efficiency for 2.0 μm polystyrene beads as a function of the pulse energy and of the axial displacement from the stuck particle to the pulsed laser focus; the efficiency was as high as 88%.
EURASIP Journal on Image and Video Processing | 2014
Amol Ambardekar; Mircea Nicolescu; George Bebis; Monica N. Nicolescu
Abstract: Video surveillance has significant application prospects in security, law enforcement, and traffic monitoring. Visual traffic surveillance using computer vision techniques can be non-invasive, cost effective, and automated. Detecting and recognizing the objects in a video is an important part of many video surveillance systems, as it can help in tracking the detected objects and gathering important information. In the case of traffic video surveillance, vehicle detection and classification are important because they can aid traffic control and the gathering of traffic statistics for intelligent transportation systems. Vehicle classification poses a difficult problem, as vehicles have high intra-class variation and relatively low inter-class variation. In this work, we investigate five different object recognition techniques applied to the problem of vehicle classification: PCA + DFVS, PCA + DIVS, PCA + SVM, LDA, and constellation-based modeling. We also compare them with state-of-the-art techniques in vehicle classification. In the case of the PCA-based approaches, we extend a PCA approach to face detection to the problem of vehicle classification in order to carry out multi-class classification. We also implement a constellation-model-based approach that uses the dense representation of scale-invariant feature transform (SIFT) features, as presented in the work of Ma and Grimson (Edge-based rich representation for vehicle classification. Paper presented at the International Conference on Computer Vision, 2006, pp. 1185–1192), with slight modification. We consider three classes: sedans, vans, and taxis, and record classification accuracy as high as 99.25% in the case of cars vs. vans and 97.57% in the case of sedans vs. taxis. We also present a fusion approach that uses both PCA + DFVS and PCA + DIVS and achieves a classification accuracy of 96.42% in the case of sedans vs. vans vs. taxis.
MSC: 68T10; 68T45; 68U10
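The PCA + SVM pipeline named in the abstract can be sketched as follows. This is a minimal, hedged illustration using scikit-learn, not the paper's implementation: the image size, number of components, kernel, and random stand-in data are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: flattened grayscale vehicle images (here 32x32 pixels),
# labeled 0 = sedan, 1 = van, 2 = taxi.
X_train = rng.random((60, 32 * 32))
y_train = rng.integers(0, 3, size=60)

# Project images onto a low-dimensional "vehicle space" with PCA, then
# train a multi-class SVM on the projected coefficients.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
clf.fit(X_train, y_train)

X_test = rng.random((5, 32 * 32))
pred = clf.predict(X_test)  # one class label per test image
```

The same projected coefficients could instead be scored by distance from (DFVS) or within (DIVS) the vehicle subspace, which is how the other two PCA variants in the abstract differ.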
IEEE Transactions on Autonomous Mental Development | 2012
Richard Kelley; Alireza Tavakkoli; Christopher King; Amol Ambardekar; Mircea Nicolescu
One of the foundations of social interaction among humans is the ability to correctly identify interactions and infer the intentions of others. To build robots that reliably function in the human social world, we must develop models that robots can use to mimic the intent recognition skills found in humans. We propose a framework that uses contextual information in the form of object affordances and object state to improve the performance of an underlying intent recognition system. This system represents objects and their affordances using a directed graph that is automatically extracted from a large corpus of natural language text. We validate our approach on a physical robot that classifies intentions in a number of scenarios.
Advances in Computer-Human Interaction | 2009
Amol Ambardekar; Mircea Nicolescu; Sergiu M. Dascalu
As cameras and storage devices have become cheaper, the number of video surveillance systems has also increased. Video surveillance was (and mostly still is) done by human operators on a need-to-know basis. The advent of new algorithms from the computer vision community and the increased computational power offered by new CPUs have shown a strong possibility of automating this task. Different approaches have been proposed by computer scientists to solve the difficult problem of content recognition from video data, and these approaches are evaluated on many different videos to demonstrate their usefulness and accuracy. A careful comparison and evaluation needs to be done to find the most suitable method under given conditions. To compare the results produced by video surveillance applications, the ground truth needs to be established. In the case of computer vision, the ground truth needs to be provided by humans, making it one of the most time-consuming tasks in the evaluation process. This paper presents a tool (GTVT) that allows the user to establish the ground truth for a given video. GTVT presents a user-friendly interface to perform the cumbersome task of ground truth establishment and verification.
International Conference & Workshop on Emerging Trends in Technology | 2011
H. B. Kekre; V. A. Bharadi; V. I. Singh; Amol Ambardekar
Palmprints are one of the oldest biometric traits used by mankind. They are highly universal, and only moderate user co-operation is required in a deployed system. Palmprints are rich in texture information, which can be used for classification purposes. Wavelets are very good at extracting localized texture information. In this paper, a new and faster type of wavelet called Kekre's wavelet is used for extracting feature vectors from palmprints. Multilevel decomposition is performed, and feature vectors are matched using Euclidean distance and relative energy entropy. The results indicate that Kekre's wavelets are a viable option for extracting texture information from palmprints and provide good accuracy with faster performance.
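The multilevel decomposition and Euclidean matching described above can be sketched as follows. A plain Haar wavelet stands in for Kekre's wavelet here, and using per-subband energies as the texture feature vector is an assumption, not the paper's exact feature definition.

```python
import numpy as np

def haar_step(img):
    """One 2D Haar decomposition step: returns (LL, (LH, HL, HH))."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2   # row-wise average
    hi = (img[:, 0::2] - img[:, 1::2]) / 2   # row-wise difference
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, (lh, hl, hh)

def features(img, levels=3):
    """Energy of each detail subband across levels, plus the final LL."""
    feats = []
    ll = img
    for _ in range(levels):
        ll, (lh, hl, hh) = haar_step(ll)
        feats += [np.sum(s ** 2) for s in (lh, hl, hh)]
    feats.append(np.sum(ll ** 2))
    return np.array(feats)

def match(f1, f2):
    """Euclidean distance between two feature vectors (lower = closer)."""
    return np.linalg.norm(f1 - f2)

# Toy stand-ins for two palmprint images.
rng = np.random.default_rng(1)
palm_a = rng.random((64, 64))
palm_b = rng.random((64, 64))
d = match(features(palm_a), features(palm_b))
```

In a real system, `d` would be compared against a threshold, or against the distances to enrolled templates, to decide identity.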
International Conference & Workshop on Emerging Trends in Technology | 2010
H. B. Kekre; V. A. Bharadi; S. Gupta; Amol Ambardekar; V. B. Kulkarni
Handwritten signatures are one of the oldest biometric traits for human authorization and authentication of documents. The majority of commercial applications deal with the static form of the signature. In this paper, we present a method for off-line signature recognition. We use morphological dilation on the signature template to measure pixel variance, and hence the inter-class and intra-class variations in the signature. The proposed feature extraction mechanism is fast enough that it can also be applied to on-line signature verification.
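The dilation step above can be illustrated with a minimal sketch: dilate a binary signature template and use the change in foreground pixel count as a simple growth feature. The 3x3 structuring element and this particular feature are illustrative assumptions, not the paper's exact measurement.

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Toy binary "signature" template: a single horizontal stroke.
sig = np.zeros((20, 50), dtype=np.uint8)
sig[10, 5:45] = 1

grown = dilate(sig)
feature = grown.sum() - sig.sum()  # pixels added by one dilation
print(feature)  # prints 86 for this toy stroke
```

Thin, consistent strokes grow by a predictable amount under dilation, so comparing these growth features across samples gives a cheap handle on intra-class versus inter-class variation.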
International Symposium on Visual Computing | 2007
Alireza Tavakkoli; Amol Ambardekar; Mircea Nicolescu
Detecting regions of interest in video sequences is one of the most important tasks in many high-level video processing applications. In this paper, a novel approach based on Support Vector Data Description (SVDD) is presented. The method detects foreground regions in videos with quasi-stationary backgrounds. SVDD is a technique for analytically describing the data from a set of population samples. The training of Support Vector Machines (SVMs) in general, and SVDD in particular, requires a Lagrange optimization that is computationally intensive. We propose to use a genetic approach to solve the Lagrange optimization problem. The Genetic Algorithm (GA) starts with an initial guess and solves the optimization problem iteratively. Moreover, we expect to obtain accurate results at a lower cost than with the Sequential Minimal Optimization (SMO) technique.
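The genetic-algorithm loop named above (select, recombine, mutate, repeat) can be shown on a toy problem. Instead of the full SVDD Lagrangian, this sketch minimizes a simple 1D function, x², so every parameter here is illustrative rather than taken from the paper.

```python
import random

random.seed(0)

def fitness(x):
    # GA maximizes fitness; negating x^2 turns this into minimization.
    return -(x ** 2)

# Initial population: random guesses over the search interval.
pop = [random.uniform(-10, 10) for _ in range(20)]

for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                 # selection: keep the fittest half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2            # crossover: blend two parents
        child += random.gauss(0, 0.1)  # mutation: small random nudge
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)  # converges near the optimum x = 0
```

For the SVDD case, each individual would instead encode a vector of Lagrange multipliers, with fitness given by the dual objective and constraints handled by projection or penalties.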
Conference on Lasers and Electro-Optics | 2005
Amol Ambardekar; Yong-qing Li
We report on optical levitation and manipulation of microscopic particles that are stuck on a glass surface with pulsed optical tweezers. We demonstrate that both stuck dielectric beads and biological cells can be levitated.
Plan, Activity, and Intent Recognition: Theory and Practice | 2014
Richard Kelley; Alireza Tavakkoli; Christopher King; Amol Ambardekar; Liesl Wigand; Monica N. Nicolescu; Mircea Nicolescu
Abstract: For robots to operate in social environments, they must be able to recognize human intentions. In the context of social robotics, intent recognition must rely on imperfect sensors, such as depth cameras, and must operate in real time. This chapter introduces several approaches for recognizing intentions by physical robots. We show how such systems can use sensors, such as the Microsoft Kinect, as well as temporal and contextual information obtained from resources such as Wikipedia.
Journal of Electronic Imaging | 2013
Amol Ambardekar; Mircea Nicolescu; George Bebis; Monica N. Nicolescu
Abstract: Visual traffic surveillance using computer vision techniques can be noninvasive, automated, and cost effective. Traffic surveillance systems with the ability to detect, count, and classify vehicles can be employed in gathering traffic statistics and achieving better traffic control in intelligent transportation systems. However, vehicle classification poses a difficult problem, as vehicles have high intraclass variation and relatively low interclass variation. Five different object recognition techniques are investigated: principal component analysis (PCA) + difference from vehicle space, PCA + difference in vehicle space, PCA + support vector machine, linear discriminant analysis, and constellation-based modeling applied to the problem of vehicle classification. Three of the techniques that performed well were incorporated into a unified traffic surveillance system for online classification of vehicles, which uses tracking results to improve the classification accuracy. To evaluate the accuracy of the system, 31 min of traffic video containing a multilane traffic intersection was processed. It was possible to achieve classification accuracy as high as 90.49% while classifying correctly tracked vehicles into four classes: cars, SUVs/vans, pickup trucks, and buses/semis. While processing a video, our system also recorded important traffic parameters, such as a vehicle's appearance, speed, and trajectory. This information was later used in a search assistant tool to find interesting traffic events.