Philip Saponaro
University of Delaware
Publication
Featured research published by Philip Saponaro.
international parallel and distributed processing symposium | 2010
Omar Padron; Philip Saponaro; Sandeep Patel
The advent of general purpose graphics processing units (GPGPUs) brings about a whole new platform for running numerically intensive applications at high speeds. Their multi-core architectures enable large degrees of parallelism via a massively multi-threaded environment. Molecular dynamics (MD) simulations are particularly well-suited for GPUs because their computations are easily parallelizable. Significant performance improvements are observed when single precision floating-point arithmetic is used. However, this performance comes at the cost of accuracy: it is widely acknowledged that constant-energy (NVE) MD simulations accumulate errors as the simulation proceeds due to the inherent errors associated with the integrators used for propagating the coordinates. A consequence of this numerical integration is the drift of potential energy as the simulation proceeds. Double precision arithmetic partially corrects this drift, but is significantly slower than single precision, comparable to CPU performance. To address this problem, we extend approaches from previous literature to improve numerical reproducibility and stability in MD simulations, while ensuring efficiency and performance comparable to that of the GPU's hardware single precision arithmetic. We present the development of a library of mathematical functions that use fast and efficient algorithms to correct the error produced by the equivalent operations performed by the GPU. We successfully validate the library with a suite of synthetic codes emulating MD behavior on GPUs.
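The core building block of such a correction library is compensated arithmetic, which carries along the rounding error that each single-precision operation discards. A minimal sketch using Kahan summation, with NumPy float32 emulating GPU single precision (an illustration of the general technique, not the authors' library):

```python
import numpy as np

def naive_sum32(values):
    """Plain single-precision accumulation: error grows with the sum."""
    s = np.float32(0.0)
    for v in values:
        s = np.float32(s + v)
    return s

def kahan_sum32(values):
    """Compensated (Kahan) summation in float32: a second variable carries
    the low-order bits that each addition would otherwise discard."""
    s = np.float32(0.0)
    c = np.float32(0.0)               # running compensation term
    for v in values:
        y = np.float32(v - c)
        t = np.float32(s + y)
        c = np.float32((t - s) - y)   # the part of y that did not fit in t
        s = t
    return s

# 0.1 is not exactly representable, so naive float32 accumulation drifts
vals = [np.float32(0.1)] * 100_000
exact = 100_000 * float(np.float32(0.1))
err_naive = abs(float(naive_sum32(vals)) - exact)
err_kahan = abs(float(kahan_sum32(vals)) - exact)
```

The compensated version stays within a few ulps of the exact sum regardless of the number of terms, while the naive version drifts, which mirrors the energy-drift problem the paper targets.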
Neurocomputing | 2016
Guoyu Lu; Yan Yan; Li Ren; Philip Saponaro; Nicu Sebe; Chandra Kambhamettu
Indoor localization is one of the key problems in robotics research. Most current localization systems use cellular base stations and Wi-Fi signals, whose localization accuracy depends largely on signal strength and is sensitive to environmental changes. With the development of camera-based technologies, image-based localization may be employed in indoor environments where the GPS signal is weak. Most existing image-based localization systems rely on color images captured by cameras, which is only feasible in environments with adequate lighting. In this paper, we introduce an image-based localization system based on thermal imaging that makes the system independent of light sources, which is especially useful during emergencies such as a sudden power outage in a building. As thermal images are not obtained as easily as color images, we apply active transfer learning to enrich thermal image classification, treating normal RGB images as the source domain and thermal images as the target domain. Active transfer learning avoids random selection of target training samples and chooses the most informative samples during learning. Through the proposed active transfer learning, query thermal images can be used to accurately indicate location. Experiments show that our system can be efficiently deployed to perform indoor localization in a dark environment.
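Uncertainty sampling by predictive entropy is one standard criterion for choosing the "most informative" target samples in active learning; the sketch below illustrates that idea, and the paper's exact selection criterion may differ:

```python
import numpy as np

def entropy_query(probs, k):
    """Pick the k most informative unlabeled samples by predictive entropy.

    probs: (n_samples, n_classes) class probabilities from the current model.
    Returns indices of the k highest-entropy (most uncertain) samples, which
    are then labeled and added to the target-domain training set.
    """
    p = np.clip(probs, 1e-12, 1.0)
    ent = -(p * np.log(p)).sum(axis=1)   # Shannon entropy per sample
    return np.argsort(ent)[::-1][:k]

# Toy pool: sample 0 is confidently classified, sample 2 is nearly uniform
pool = np.array([[0.98, 0.01, 0.01],
                 [0.70, 0.20, 0.10],
                 [0.34, 0.33, 0.33]])
chosen = entropy_query(pool, k=1)
```

Selecting by entropy rather than at random concentrates the limited thermal labeling budget on the samples the current classifier understands least.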
international conference on image processing | 2015
Philip Saponaro; Scott Sorensen; Stephen Rhein; Chandra Kambhamettu
Calibration of stereo cameras is important for accurate 3D reconstruction. For standard color cameras, many tools and algorithms are available for accurate calibration, such as detecting the corners of chessboard patterns on planar calibration boards. When viewed in thermal imagery, these chessboard patterns are difficult to detect due to the uniform temperature between the white and black squares. Previous techniques involve creating a custom calibration board from multiple materials. We propose improvements to a method that does not require a custom calibration board. Our method is made more reliable by using an iterative pre-processing technique to enhance contrast and a ceramic tile backing to retain heat longer. We present results showing that our calibration board retains heat well enough to reliably detect corners for over 10 minutes, and that our method performs well in real calibration trials.
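One simple form of iterative contrast enhancement is repeated percentile-based stretching, sketched below; this is an illustration of the general idea, not necessarily the paper's exact pre-processing:

```python
import numpy as np

def stretch_contrast(img, low_pct=2.0, high_pct=98.0, iterations=3):
    """Iteratively stretch a low-contrast thermal image toward full range.

    Each pass clips intensities to the given percentiles and rescales to
    [0, 255], progressively separating chessboard squares whose apparent
    temperatures are nearly uniform.
    """
    out = img.astype(np.float64)
    for _ in range(iterations):
        lo, hi = np.percentile(out, [low_pct, high_pct])
        if hi <= lo:
            break
        out = np.clip((out - lo) / (hi - lo), 0.0, 1.0) * 255.0
    return out

# A faint 2x2 "chessboard" spanning only 4 gray levels out of 255
faint = np.array([[120.0, 124.0],
                  [124.0, 120.0]])
enhanced = stretch_contrast(faint)
```

After enhancement the squares span the full intensity range, so a standard corner detector has usable gradients to work with.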
international conference on image processing | 2014
Philip Saponaro; Scott Sorensen; Stephen Rhein; Andrew R. Mahoney; Chandra Kambhamettu
Techniques based on the well-studied approaches of structure from motion and bundle adjustment are very robust for scenes with texture, but in scenes with little texture information these approaches can fail. Shape from shading determines the shape of an object up to scale from a single image, and performs better than structure from motion methods in textureless regions. We propose using Gradient Constrained Interpolation to estimate a dense point cloud where holes are caused by regions of low texture during structure from motion reconstruction. Our technique shows good results on both synthetic and real data and outperforms methods that do not use image information.
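As a rough illustration of image-guided hole filling (not the paper's exact Gradient Constrained Interpolation), the 1-D sketch below relaxes missing depth values toward their neighbors, with neighbor weights that drop across strong image edges so the fill can follow intensity discontinuities instead of smoothing over them; the exponential affinity and Jacobi relaxation are assumptions for illustration:

```python
import numpy as np

def fill_depth_1d(depth, image, n_iters=500, beta=10.0):
    """Fill NaN holes in a 1-D depth profile, guided by image gradients."""
    d = depth.copy()
    hole = np.isnan(d)
    d[hole] = np.nanmean(depth)          # initialize holes with the mean
    g = np.abs(np.diff(image))
    w = np.exp(-beta * g)                # w[i] links samples i and i+1
    for _ in range(n_iters):
        left = np.empty_like(d); right = np.empty_like(d)
        wl = np.empty_like(d); wr = np.empty_like(d)
        left[1:] = d[:-1]; left[0] = d[0]
        right[:-1] = d[1:]; right[-1] = d[-1]
        wl[1:] = w; wl[0] = 0.0
        wr[:-1] = w; wr[-1] = 0.0
        upd = (wl * left + wr * right) / np.maximum(wl + wr, 1e-12)
        d[hole] = upd[hole]              # only holes move; data stays fixed
    return d

# Depth known at both ends, a hole in the middle, image edge at index 3/4
image = np.array([0., 0., 0., 0., 1., 1., 1.])
depth = np.array([2., 2., np.nan, np.nan, np.nan, 5., 5.])
filled = fill_depth_1d(depth, image)
```

Because the link across the image edge is nearly zero, the filled values snap to the correct side of the depth discontinuity rather than blending 2 and 5.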
computer vision and pattern recognition | 2015
Philip Saponaro; Scott Sorensen; Abhishek Kolagunda; Chandra Kambhamettu
Material classification is an important area of research in computer vision. Typical algorithms use color and texture information for classification, but there are problems due to varying lighting conditions and diversity of colors in a single material class. In this work we study the use of long wave infrared (i.e. thermal) imagery for material classification. Thermal imagery has the benefit of relative invariance to color changes, invariance to lighting conditions, and can even work in the dark. We collect a database of 21 different material classes with both color and thermal imagery. We develop a set of features that describe water permeation and heating/cooling properties, and test several variations on these methods to obtain our final classifier. The results show that the proposed method outperforms typical color and texture features, and when combined with color information, the results are improved further.
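One example of a heating/cooling feature is a Newton-cooling rate estimated per region; the sketch below illustrates the idea only, and the material names and rate values are made up rather than taken from the paper's 21-class database:

```python
import numpy as np

def cooling_rate(temps, times, t_ambient):
    """Estimate a Newton-cooling rate k from a per-region thermal series.

    Fits T(t) = T_amb + (T0 - T_amb) * exp(-k t) by linear regression on
    log(T - T_amb); larger k means the region sheds heat faster, which
    discriminates materials with different thermal properties.
    """
    excess = np.asarray(temps, dtype=float) - t_ambient
    slope, _intercept = np.polyfit(np.asarray(times, dtype=float),
                                   np.log(excess), 1)
    return -slope

# Synthetic cooling curves for two hypothetical materials
times = np.linspace(0.0, 10.0, 20)
fast_material = 20.0 + 15.0 * np.exp(-0.8 * times)   # sheds heat quickly
slow_material = 20.0 + 15.0 * np.exp(-0.1 * times)   # retains heat
k_fast = cooling_rate(fast_material, times, 20.0)
k_slow = cooling_rate(slow_material, times, 20.0)
```

Unlike color or texture, such a rate is tied to the material's physical response to heating, which is why it remains usable under lighting changes or in the dark.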
international conference on image processing | 2015
Scott Sorensen; Abhishek Kolagunda; Philip Saponaro; Chandra Kambhamettu
Underwater objects behind a refractive surface pose problems for traditional 3D reconstruction techniques. Scenes where underwater objects are visible from the surface are commonplace; however, the refraction of light causes 3D points in these scenes to project non-linearly. Refractive Stereo Ray Tracing allows for accurate reconstruction by modeling the refraction of light. Our approach uses techniques from ray tracing to compute the 3D position of points behind a refractive surface. This technique aims to reconstruct underwater structures in situations where access to the water is dangerous or cost prohibitive. Experimental results on real and synthetic scenes show that this technique effectively handles refraction.
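The per-ray building block of such a pipeline is Snell's law refraction at the water surface; a minimal vector-form sketch (an illustration, not the authors' implementation):

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit ray direction d at a surface with unit normal n.

    eta = n1 / n2 (air-to-water is roughly 1.0 / 1.33). Returns the
    refracted unit direction, or None on total internal reflection.
    """
    cos_i = -np.dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                    # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

# A ray entering water 30 degrees off vertical bends toward the normal
d = np.array([np.sin(np.radians(30.0)), -np.cos(np.radians(30.0)), 0.0])
n = np.array([0.0, 1.0, 0.0])          # water-surface normal, pointing up
t = refract(d, n, 1.0 / 1.33)
```

Tracing each camera ray through this bend before triangulating is what lets stereo correspondences of underwater points intersect at their true 3D positions.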
british machine vision conference | 2015
Scott Sorensen; Philip Saponaro; Stephen Rhein; Chandra Kambhamettu
Reflective and specular surfaces are problematic for traditional reconstruction techniques. Light projects non-linearly in scenes with these surfaces, and existing techniques to model this are poorly suited for real world applications. Accurately modeling the reflective surface is difficult without complete knowledge of the scene. To overcome this problem, we propose using different modalities of stereo vision to capture both the reflecting surface and the reflected scene. Using a four camera system consisting of a pair of visible wavelength cameras and a pair of long wave infrared cameras, we accurately reconstruct the reflective surface and ray trace reflected correspondences in the complementary modality. This approach allows for 3D reconstruction in the presence of a reflection, and does not require complete knowledge of the scene.
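Once the reflective surface is reconstructed, tracing reflected correspondences relies on the standard mirror-reflection formula; a minimal sketch (illustrative only):

```python
import numpy as np

def reflect(d, n):
    """Mirror a ray direction d about a unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

# A 45-degree ray hitting a horizontal reflective surface bounces up at 45
d = np.array([np.sqrt(0.5), -np.sqrt(0.5), 0.0])
n = np.array([0.0, 1.0, 0.0])
r = reflect(d, n)
```

With the surface normal recovered from one stereo modality, each reflected correspondence in the other modality can be traced along such a mirrored ray to locate the reflected scene point.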
computer vision and pattern recognition | 2013
Philip Saponaro; Chandra Kambhamettu
In this paper, we address the problem of auto-calibration of cameras that can rotate freely and change focal length, and we present an algorithm for finding the intrinsic parameters using only two images. We utilize the orientation sensors found on many modern smartphones to help decompose the infinite homography into two equivalent upper triangular matrices based only on the intrinsic parameters. We account for small translations between views by calculating the homography from correspondences on objects that are far from the camera. We show results on both real and synthetic data, and quantify the tolerance of our system to small translations and errors in the orientation sensors. Our results are comparable to other recent auto-calibration work while requiring only two images and tolerating some translation.
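The decomposition rests on the infinite homography H = K R K^{-1}, which relates two views under pure rotation with shared intrinsics K; the sketch below demonstrates the relation on synthetic values (illustrative, with assumed intrinsics, not the paper's algorithm):

```python
import numpy as np

def homography_infinity(K, R):
    """Infinite homography H = K R K^{-1} relating two views that differ
    by a pure rotation R and share the intrinsic matrix K."""
    return K @ R @ np.linalg.inv(K)

f = 800.0                                     # assumed focal length (pixels)
K = np.array([[f, 0.0, 320.0],
              [0.0, f, 240.0],
              [0.0, 0.0, 1.0]])
theta = np.radians(5.0)                       # small pan, e.g. from a gyro
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
H = homography_infinity(K, R)

# Under pure rotation, H transfers projections exactly: x2 ~ H x1
X = np.array([50.0, 20.0, 1000.0])            # a distant 3D point
x1 = K @ X
x1 = x1 / x1[2]
x2 = K @ (R @ X)
x2 = x2 / x2[2]
x2_from_H = H @ x1
x2_from_H = x2_from_H / x2_from_H[2]

# Conjugating H by K recovers the rotation, the relation the paper inverts:
# with R supplied by the orientation sensors, K becomes the unknown to solve.
R_recovered = np.linalg.inv(K) @ H @ K
```

Using correspondences on distant objects makes the pure-rotation assumption approximately true, since translation's effect on image points falls off with depth.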
international symposium on visual computing | 2014
Philip Saponaro; Kelly D. Sherbondy; Chandra Kambhamettu
Concealed or buried improvised explosive devices (IEDs) are a major cause of fatalities for both civilians and soldiers. For detecting hidden targets, many technologies have been considered, such as ground penetrating radar (GPR), infrared cameras, and even visible wavelength cameras. In this work, we propose fusing visible and infrared sensors for automatic detection of shallowly buried (<10 cm) or above-ground targets. We use Gaussian Mixture Models (GMMs) to create a base model of the temperature and color variation of the background scene and dynamically update our models for new scenes. Anomalous temperatures and colors are identified using the GMM components. Fusion is performed at the pixel level, confidence map level, and decision level for comparison. Data was collected with a Xenics Gobi 480 long wave infrared camera and a Canon PowerShot A1200 visible wavelength camera, with metal targets placed in various concealed configurations. The observed results show that infrared can effectively detect shallowly buried targets and above-ground targets "out in the open", but cannot detect metal targets near bushes. Visible cameras, on the other hand, can effectively detect the metal targets in the bushes. Confidence map and decision level fusion led to the best results when there was a mix of buried targets and targets hidden in bushes.
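A Stauffer-Grimson-style match test is one common way to flag anomalies against a per-pixel background GMM; the sketch below illustrates the idea with made-up temperature components rather than the paper's learned models:

```python
import numpy as np

def is_anomalous(x, means, stds, weights, n_sigma=2.5, min_weight=0.1):
    """Test a pixel value against a 1-D background Gaussian mixture.

    A value x is anomalous if it matches no background component
    (weight >= min_weight) within n_sigma standard deviations. The online
    weight/mean updates used for new scenes are omitted from this sketch.
    """
    bg = weights >= min_weight           # low-weight modes are not background
    matches = np.abs(x - means[bg]) <= n_sigma * stds[bg]
    return not matches.any()

# Hypothetical background mixture: cool soil (~15 C) and warm asphalt (~30 C)
means = np.array([15.0, 30.0])
stds = np.array([1.0, 2.0])
weights = np.array([0.7, 0.3])

shadowed_soil = is_anomalous(14.0, means, stds, weights)  # fits the soil mode
warm_target = is_anomalous(45.0, means, stds, weights)    # fits neither mode
```

Running this test independently on temperature and color channels yields the per-sensor anomaly maps that are then fused at the pixel, confidence map, or decision level.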
Radar Sensor Technology XXII | 2018
Brian R. Phelan; Kenneth I. Ranney; Canh Ly; Philip Saponaro; Kelly D. Sherbondy; Ram M. Narayanan
The US Army Research Laboratory (ARL) has recently developed the Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) radar system during its ongoing research to provide ground vehicular standoff detection and classification of obscured and/or buried explosive hazards. The system is a stepped-frequency radar (SFR) that can be reconfigured to omit operation within specific sub-bands of its 1700 MHz operating band (300 MHz to 2000 MHz). It employs two transmit antennas and an array of 16 receive antennas; the antenna types are quad-ridged horn and Vivaldi, respectively. The system is vehicle-mounted and can be interchanged between forward- and side-looking configurations. In order to assess and evaluate the performance of the SAFIRE radar system in a realistic deployment scenario, ARL has collected SAFIRE data using militarily relevant threats at an arid US Army test site. This paper presents an examination of radar imagery from these data collection campaigns. A discussion of the image formation techniques is presented and recently processed radar imagery is provided. A summary of the radar's performance is presented and recommendations for further improvements are discussed.
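A stepped-frequency radar forms a range profile by taking an IFFT across its frequency steps; the sketch below illustrates this with SAFIRE-like band edges, while the step size and step count are assumptions for illustration:

```python
import numpy as np

# Stepped-frequency parameters (step size/count are illustrative)
c = 3e8                    # propagation speed, m/s
f0 = 300e6                 # start frequency, Hz
df = 2e6                   # frequency step, Hz
n_steps = 851              # 300 MHz .. 2000 MHz in 2 MHz steps
freqs = f0 + df * np.arange(n_steps)

# Simulated echo of a single point target: two-way phase at each frequency
r_target = 12.0            # true range, m
echo = np.exp(-1j * 4.0 * np.pi * freqs * r_target / c)

# IFFT across the frequency steps yields a high-resolution range profile
profile = np.abs(np.fft.ifft(echo))
dr = c / (2.0 * n_steps * df)      # range-bin spacing = c / (2 * bandwidth)
r_est = np.argmax(profile) * dr    # peak bin gives the target range
```

The full 1700 MHz bandwidth gives a range resolution of roughly 9 cm; omitting sub-bands (as SAFIRE's spectral agility allows) leaves gaps in `freqs` that degrade this profile, which is one reason image formation for a reconfigurable SFR merits the discussion the paper provides.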