Curtis Padgett
California Institute of Technology
Publications
Featured research published by Curtis Padgett.
IEEE Transactions on Aerospace and Electronic Systems | 1997
Curtis Padgett; Kenneth Kreutz-Delgado
An autonomous star identification algorithm is described that is simple and requires fewer computer resources than other such algorithms. In simulations using an 8×8 degree field of view (FOV), the algorithm identifies the correct section of sky on 99.7% of the sensor orientations where the spatial accuracy of the imaged star is 1 pixel (56.25 arc seconds) in standard deviation and the apparent brightness deviates by 0.4 units stellar magnitude. This compares very favorably with other algorithms in the literature.
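A minimal sketch of the measurement noise model quoted in the abstract: Gaussian centroid noise with a 1-pixel standard deviation and Gaussian brightness noise of 0.4 stellar magnitude. The implied 512×512 detector size (8 deg × 3600 arcsec/deg ÷ 56.25 arcsec/pixel = 512 pixels) is derived from the stated pixel scale; the function and its defaults are illustrative, not the authors' simulation code.

```python
import numpy as np

# Assumed parameters from the abstract: 8x8 deg FOV, 56.25 arcsec per pixel,
# which implies a 512x512 detector (8 * 3600 / 56.25 = 512 pixels across).
FOV_DEG = 8.0
ARCSEC_PER_PIXEL = 56.25
DETECTOR_PIXELS = FOV_DEG * 3600.0 / ARCSEC_PER_PIXEL  # = 512.0

rng = np.random.default_rng(0)

def perturb_observations(xy_pixels, magnitudes,
                         pos_sigma_pix=1.0, mag_sigma=0.4):
    """Add Gaussian centroid noise (1 pixel std dev) and brightness noise
    (0.4 stellar magnitude std dev) to simulated star observations."""
    noisy_xy = xy_pixels + rng.normal(0.0, pos_sigma_pix, xy_pixels.shape)
    noisy_mag = magnitudes + rng.normal(0.0, mag_sigma, magnitudes.shape)
    return noisy_xy, noisy_mag
```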
Journal of Guidance Control and Dynamics | 1997
Curtis Padgett; Kenneth Kreutz-Delgado; Suraphol Udomkesmalee
Traditionally, star sensors have used fairly large fields of view and bright stars to perform tracking and identification. Recent trends in space missions have brought out the need for smaller field-of-view sensors, capable of performing multiple functions that will reduce spacecraft payload and cost. A number of different strategies are used or have been suggested for identifying star fields for attitude determination in space. We offer a general classification of the existing techniques and select three representative algorithms for more comprehensive evaluation. The identification rates and performance of the three algorithms are presented over a variety of noise conditions using two different sized onboard catalogs. Substantial differences in algorithm performance are identified for various noise levels, which should provide some indication of the suitability of the algorithms for smaller fields of view.
IEEE Transactions on Aerospace and Electronic Systems | 2000
Daniel S. Clouse; Curtis Padgett
We describe a simple autonomous star identification algorithm that is effective with a narrow (2 deg) field of view (FOV), making the use of a science camera for star identification feasible. This work extends that of Padgett and Kreutz-Delgado (1997) by setting decision thresholds using Bayesian decision theory. Our simulations show that when the positional accuracy of imaged stars is 0.5 pixel (standard deviation) and the apparent brightness deviates by 0.8 unit stellar magnitude, the algorithm correctly identifies 96.0% of the sensor orientations, with less than a 0.3% rate of false positives.
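A hedged sketch of the Bayesian-decision idea described here: accept the best catalog hypothesis only when its posterior probability clears a threshold chosen to trade correct identifications against false positives. The prior, likelihood, and threshold values are placeholders, not the paper's.

```python
import numpy as np

def bayesian_accept(log_likelihoods, log_priors, log_threshold=np.log(0.99)):
    """Accept the best catalog hypothesis only if its posterior probability
    exceeds a threshold; otherwise report 'no identification'.

    log_likelihoods / log_priors: one entry per candidate catalog pattern.
    Raising the threshold lowers the false-positive rate at the cost of
    declining more sensor orientations."""
    log_post = log_likelihoods + log_priors
    log_post -= np.logaddexp.reduce(log_post)   # normalize to log posteriors
    best = int(np.argmax(log_post))
    return best if log_post[best] >= log_threshold else None
```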
international symposium on neural networks | 1998
Ayanna M. Howard; Curtis Padgett; Carl Christian Liebe
Automatic target recognition (ATR) involves processing two-dimensional images for detecting, classifying, and tracking targets. The first stage in ATR is the detection process, which involves discrimination between target and non-target objects in a scene. We discuss a novel approach that addresses the target detection process. The method extracts relevant object features using principal component analysis. These extracted features are then presented to a multi-stage neural network, which increases the overall detection rate while decreasing the false-alarm rate. We discuss the techniques involved and present detection results obtained with the multi-stage neural network.
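An illustrative sketch of the pipeline described: PCA features feeding a two-stage neural-network cascade. The component count, layer sizes, and the stage-1 rejection threshold are assumptions for illustration, not values from the paper, and scikit-learn is used as a stand-in implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_detector(chips, labels, n_components=20):
    """chips: (N, H*W) flattened image chips; labels: 1 = target, 0 = clutter.
    Component count and network sizes are assumed, not from the paper."""
    pca = PCA(n_components=n_components).fit(chips)
    feats = pca.transform(chips)
    stage1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(feats, labels)
    stage2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(feats, labels)
    return pca, stage1, stage2

def detect(pca, stage1, stage2, chips, reject_thresh=0.2):
    """Stage 1 cheaply rejects obvious clutter; stage 2 refines the survivors,
    raising the detection rate while holding down false alarms."""
    feats = pca.transform(chips)
    p1 = stage1.predict_proba(feats)[:, 1]
    keep = p1 >= reject_thresh
    decisions = np.zeros(len(chips), dtype=bool)
    decisions[keep] = stage2.predict_proba(feats[keep])[:, 1] >= 0.5
    return decisions
```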
ieee aerospace conference | 2006
Carl Christian Liebe; Curtis Padgett; Jacob Chapsky; Daniel W. Wilson; Kenneth Brown; Sergei Jerebets; Hannah Goldberg; Jeffrey Schroeder
At JPL, a <5 kg free-flying micro-inspector spacecraft is being designed for host-vehicle inspection. The spacecraft includes a hazard avoidance sensor to navigate relative to the vehicle being inspected. Structured light was selected for hazard avoidance because of its low mass and cost. Structured light is a method of remotely sensing the 3-dimensional structure of nearby objects using a laser, a grating, and a single regular APS camera. The laser beam is split into 400 different beams by a grating to form a regularly spaced grid of laser beams that are projected into the field of view of an APS camera. The laser source and the APS camera are separated, forming the base of a triangle. The distances to all beam intersections with the host are calculated by triangulation.
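A small sketch of what the 400-beam grid geometry might look like. The abstract only states that the grating produces 400 beams; the 20 × 20 arrangement and the angular spacing used below are assumptions for illustration.

```python
import numpy as np

def beam_directions(grid=20, spacing_deg=0.5):
    """Return unit direction vectors for a regularly spaced grid of laser
    beams, centered on the laser boresight (the +z axis). With grid=20 this
    yields the 400 beams mentioned in the abstract; the spacing is assumed."""
    angles = np.deg2rad((np.arange(grid) - (grid - 1) / 2.0) * spacing_deg)
    ax, ay = np.meshgrid(angles, angles)
    dirs = np.stack([np.tan(ax), np.tan(ay), np.ones_like(ax)], axis=-1)
    return (dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)).reshape(-1, 3)
```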
Proceedings of SPIE | 2013
Adrian Stoica; Ana Matran-Fernandez; Dimitrios Andreou; Riccardo Poli; Caterina Cinel; Yumi Iwashita; Curtis Padgett
In a rapid serial visual presentation (RSVP), images are shown at an extremely rapid pace. Yet, the images can still be parsed by the visual system to some extent. In fact, the detection of specific targets in a stream of pictures triggers a characteristic electroencephalography (EEG) response that can be recognized by a brain-computer interface (BCI) and exploited for automatic target detection. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has achieved speed-ups in sifting through satellite images when adopting this approach. This paper extends the use of BCI technology from individual analysts to collaborative BCIs. We show that the integration of information in EEGs collected from multiple operators results in performance improvements compared to the single-operator case.
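One simple way to integrate information across operators is to average per-image target scores derived from each operator's single-trial EEG response and threshold the group score; this is a hedged sketch of that idea, not the specific fusion rule used in the paper.

```python
import numpy as np

def fuse_operator_scores(scores, threshold=0.5):
    """scores: (n_operators, n_images) array of per-operator target
    probabilities estimated from single-trial EEG responses.
    Averaging across operators reduces the variance of each estimate,
    which is one route to the group-level gain described above."""
    group_score = scores.mean(axis=0)
    return group_score >= threshold
```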
Automatic Target Recognition VII | 1997
Suraphol Udomkesmalee; Anilkumar P. Thakoor; Curtis Padgett; Taher Daud; Wai-Chi Fang; Steven C. Suddarth
VIGILANTE consists of two major components: (1) the viewing image/gimballed instrumentation laboratory (VIGIL) -- advanced infrared, visible, and ultraviolet sensors with appropriate optics and camera electronics; and (2) the analog neural three-dimensional processing experiment (ANTE) -- a massively parallel, neural network-based, high-speed processor. The powerful combination of VIGIL and ANTE will provide real-time target recognition/tracking capability suitable for Ballistic Missile Defense Organization (BMDO) applications as well as a host of other civil and military uses. In this paper, we describe VIGILANTE and its application to typical automatic target recognition (ATR) problems (e.g., aircraft/missile detection, classification, and tracking). This includes a discussion of the VIGILANTE architecture, with its unusual blend of experimental 3D electronic circuitry, custom-designed and commercial parallel processing components, as well as VIGILANTE's ability to handle a wide variety of algorithms that make extensive use of convolutions and neural networks. The paper also presents examples and numerical results.
international symposium on visual computing | 2009
Clark F. Olson; Adnan Ansar; Curtis Padgett
We describe techniques for registering images from sequences of aerial images captured over the same terrain on different days. The techniques are robust to changes in weather, including variable lighting conditions, shadows, and sparse intervening clouds. The primary underlying technique is robust feature matching between images, which is performed using both robust template matching and SIFT-like feature matching. Outlier rejection is performed in multiple stages to remove incorrect matches. With the remaining matches, we can compute homographies between images or use non-linear optimization to update the external camera parameters. We give results on real aerial image sequences.
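A minimal sketch of this kind of registration pipeline, using OpenCV's SIFT features, a Lowe ratio test, and RANSAC homography estimation as stand-ins for the paper's robust template matching and multi-stage outlier rejection.

```python
import cv2
import numpy as np

def register_pair(img_a, img_b, ratio=0.75):
    """Estimate a homography mapping img_a onto img_b from SIFT matches,
    with RANSAC standing in for the multi-stage outlier rejection."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # ratio test
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inliers
```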
ieee aerospace conference | 2004
Carl Christian Liebe; Curtis Padgett; Johnny Chang
This paper describes a method of remotely sensing the 3-dimensional structure of nearby objects using a laser, a holographic grating, and a single regular CCD camera. The laser beam is split by a holographic grating to form a regularly spaced grid of laser beams that are projected into the field of view of a CCD camera. The laser source and the CCD camera are physically separated, forming the base of a triangle. For a given exit angle of a laser beam, the angle measured by the camera to the point where the beam intersects a surface is a function of the distance. These two angles and the distance between the source and the camera allow calculation of the range to the projected spot using triangulation. This paper describes an experimental proof of concept and an empirical calibration of the system. Encouraging results are achieved and presented. An application in which this system could potentially be used for a Mars landing is also discussed.
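A minimal planar sketch of the triangulation step: given the beam exit angle, the camera observation angle, and the baseline, the law of sines gives the range to the projected spot. The baseline and angle values in the example are illustrative, not calibration values from the paper.

```python
import numpy as np

def range_from_triangulation(baseline_m, laser_angle_deg, camera_angle_deg):
    """Planar triangulation: the laser and camera sit at the ends of a known
    baseline; both angles are measured from the baseline toward the projected
    spot. Returns the camera-to-spot range and the perpendicular depth of the
    spot from the baseline."""
    a = np.deg2rad(laser_angle_deg)
    c = np.deg2rad(camera_angle_deg)
    apex = np.pi - a - c                                 # angle at the spot
    cam_range = baseline_m * np.sin(a) / np.sin(apex)    # law of sines
    depth = cam_range * np.sin(c)                        # offset from baseline
    return cam_range, depth

# Example (illustrative values): 0.3 m baseline, beam exits at 80 deg,
# camera observes the spot at 75 deg from the baseline.
print(range_from_triangulation(0.3, 80.0, 75.0))
```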
Applications and science of artificial neural networks. Conference | 1997
Curtis Padgett; Michael F. Zhu; Steven C. Suddarth
VIGILANTE is an automated recognition and tracking system that closely integrates a sensing platform with a very large processing capability (over 2 TeraOPS). The architecture currently consists of an optical bench with multiple sensors, a large parallel analog pre-processor, and a digital 512-processor parallel machine. Preliminary results on target detection and orientation are presented for an algorithm that is suitable for the VIGILANTE architecture. The technique makes use of eigenvectors calculated from image blocks (size 32 x 32) drawn from video sequences containing rocket targets. The eigenvectors are used to reduce the dimensionality of frame-lets (size 32 x 32) from the larger sensor images. These frame-lets are projected onto the eigenvectors and the resultant values are then used as the input pattern to a feedforward neural network classifier. A description and evaluation of this algorithm (including precision limitations) with respect to VIGILANTE is provided. Experiments using this technique have achieved near 99% discrimination between target and non-target images and close to 97% identification of the rocket type.
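A sketch of the eigenvector projection step described above: eigenvectors are computed from 32 x 32 training blocks, and each 32 x 32 frame-let cut from a larger sensor image is projected onto them to form the network's input pattern. The number of eigenvectors kept and the window stride are assumptions, not values from the paper.

```python
import numpy as np

def compute_eigenvectors(blocks, n_keep=20):
    """blocks: (N, 32, 32) training blocks drawn from video sequences.
    Returns the top n_keep eigenvectors of the mean-centered block data."""
    X = blocks.reshape(len(blocks), -1).astype(np.float64)
    X -= X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_keep]                       # (n_keep, 1024) eigenvectors

def project_framelets(image, eigvecs, step=16):
    """Slide a 32 x 32 window over the sensor image and project each
    frame-let onto the eigenvectors; each row is one network input pattern."""
    h, w = image.shape
    feats = []
    for r in range(0, h - 32 + 1, step):
        for c in range(0, w - 32 + 1, step):
            framelet = image[r:r + 32, c:c + 32].reshape(-1)
            feats.append(eigvecs @ framelet)
    return np.asarray(feats)
```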