Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Roger D. Eastman is active.

Publications


Featured research published by Roger D. Eastman.


Ophthalmology | 1993

Confocal laser scanning ophthalmoscope. Reproducibility of optic nerve head topographic measurements with the confocal laser scanning ophthalmoscope.

George A. Cioffi; Alan L. Robin; Roger D. Eastman; Howard F. Perell; Faith A. Sarfarazi; Shalom E. Kelman

BACKGROUND Glaucoma is an optic neuropathy in which changes in the appearance of both the optic nerve head and the surrounding tissues are important in both diagnosing its presence and progression. Accurate methods to objectively document the appearance of the optic nerve are necessary. The confocal laser scanning ophthalmoscope (Zeiss) is a new prototype instrument that may have the capability to accurately perform this function. METHODS The authors performed a prospective pilot study evaluating the ability of the confocal laser scanning ophthalmoscope to reproduce three-dimensional optic nerve images. Each retinal image contained 600,000 bytes of information. Thirty discrete images of the right optic nerves of 19 visually normal volunteers were obtained. Depth measurements were compared from the same 100 x 100 micron areas (neighborhoods). RESULTS Image comparisons found that the variability of depth measurements for the entire image was within 102 microns (95% confidence interval). Sixty percent of the depth measurements were reproducible within 100 microns. Variability of the depth measurements was greatest where the neuroretinal rim sloped at the edge of the optic cup and lowest in the peripapillary area. CONCLUSION The confocal laser scanning ophthalmoscope has the potential to be a safe, rapid, and reproducible method of imaging ocular structures.
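
The reproducibility analysis described above can be illustrated with a minimal sketch: given repeated topographic depth maps of the same nerve head, average the depths over fixed neighborhoods and summarize how much the repeated measurements disagree. The array shapes, neighborhood size, and function name below are hypothetical stand-ins, not the instrument's actual processing.

```python
import numpy as np

def depth_reproducibility(depth_maps, block=10):
    """Summarize the repeatability of repeated depth maps.

    depth_maps : array of shape (n_images, height, width), depth in microns.
    block      : neighborhood size in pixels, standing in for the
                 100 x 100 micron areas described in the abstract.
    """
    n, h, w = depth_maps.shape
    hb, wb = h // block, w // block
    # Average each image over non-overlapping block x block neighborhoods.
    nb = depth_maps[:, :hb * block, :wb * block]
    nb = nb.reshape(n, hb, block, wb, block).mean(axis=(2, 4))
    # Spread of the repeated measurements for each neighborhood.
    spread = nb.max(axis=0) - nb.min(axis=0)
    return {
        "p95_spread_microns": float(np.percentile(spread, 95)),
        "fraction_within_100um": float((spread <= 100).mean()),
    }
```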


Archive | 2011

Image Registration for Remote Sensing

Jacqueline Le Moigne; Nathan S. Netanyahu; Roger D. Eastman

Foreword (Jon A. Benediktsson)

Part I. The Importance of Image Registration for Remote Sensing:
1. Introduction (Jacqueline Le Moigne, Nathan S. Netanyahu and Roger D. Eastman)
2. Influence of image registration on validation efforts (Bin Tan and Curtis E. Woodcock)
3. Survey of image registration methods (Roger D. Eastman, Nathan S. Netanyahu and Jacqueline Le Moigne)

Part II. Similarity Metrics for Image Registration:
4. Fast correlation and phase correlation (Harold S. Stone)
5. Matched filtering techniques (Qin-Sheng Chen)
6. Image registration using mutual information (Arlene A. Cole-Rhodes and Pramod K. Varshney)

Part III. Feature Matching and Strategies for Image Registration:
7. Registration of multiview images (A. Ardeshir Goshtasby)
8. New approaches to robust, point-based image registration (David M. Mount, Nathan S. Netanyahu and San Ratanasanya)
9. Condition theory for image registration and post-registration error estimation (Charles S. Kenney, B. S. Manjunath, Marco Zuliani and Kaushal Solanki)
10. Feature-based image to image registration (Venu M. Govindu and Rama Chellappa)
11. On the use of wavelets for image registration (Jacqueline Le Moigne, Ilya Zavorin and Harold S. Stone)
12. Gradient descent approaches to image registration (Arlene A. Cole-Rhodes and Roger D. Eastman)
13. Bounding the performance of image registration (Min Xu and Pramod K. Varshney)

Part IV. Applications and Operational Systems:
14. Multi-temporal and multi-sensor image registration (Jacqueline Le Moigne, Arlene A. Cole-Rhodes, Roger D. Eastman, Nathan S. Netanyahu, Harold S. Stone, Ilya Zavorin and Jeffrey T. Morisette)
15. Georegistration of meteorological images (James L. Carr)
16. Challenges, solutions, and applications of accurate multi-angle image registration: lessons learned from MISR (Veljko M. Jovanovic, David J. Diner and Roger Davies)
17. Automated AVHRR image navigation (William J. Emery, R. Ian Crocker and Daniel G. Baldwin)
18. Landsat image geocorrection and registration (James C. Storey)
19. Automatic and precise orthorectification of SPOT images (Simon Baillarin, Aurelie Bouillon and Marc Bernard)
20. Geometry of the VEGETATION sensor (Sylvia Sylvander)
21. Accurate MODIS global geolocation through automated ground control image matching (Robert E. Wolfe and Masahiro Nishihama)
22. SeaWiFS operational geolocation assessment system (Frederick S. Patt)

Part V. Conclusion:
23. Concluding remarks (Jacqueline Le Moigne, Nathan S. Netanyahu and Roger D. Eastman)

Glossary. Index.


Performance Metrics for Intelligent Systems | 2012

An overview of robot-sensor calibration methods for evaluation of perception systems

Mili Shah; Roger D. Eastman; Tsai Hong Hong

In this paper, an overview of methods that solve the robot-sensor calibration problem of the forms AX = XB and AX = YB is given. Each form is split into three classes of solutions: separable closed-form solutions, simultaneous closed-form solutions, and iterative solutions. The advantages and disadvantages of each class of solutions for the evaluation of perception systems are also discussed.
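
As one concrete instance of the separable closed-form family mentioned above, the sketch below solves AX = XB by first recovering the rotation from the rotation-vector (log-map) parts of the paired motions and then the translation by linear least squares. It is a generic illustration under our own naming, not the specific formulations surveyed in the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def solve_ax_xb(A_list, B_list):
    """Separable closed-form sketch for the calibration problem AX = XB.

    A_list, B_list : paired lists of 4x4 homogeneous transforms
    (e.g. robot motions and sensor motions). Returns the 4x4 transform X.
    """
    # Rotation: R_A R_X = R_X R_B implies alpha_i = R_X beta_i, where alpha_i
    # and beta_i are the rotation (log-map) vectors of R_A and R_B.
    alphas = np.array([Rotation.from_matrix(A[:3, :3]).as_rotvec() for A in A_list])
    betas = np.array([Rotation.from_matrix(B[:3, :3]).as_rotvec() for B in B_list])
    # Orthogonal Procrustes fit of R_X to the paired rotation vectors.
    H = betas.T @ alphas
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_X = Vt.T @ D @ U.T
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all pairs.
    M = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    d = np.hstack([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(A_list, B_list)])
    t_X, *_ = np.linalg.lstsq(M, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

The iterative solutions discussed in the paper would instead refine the rotation and translation jointly, typically by nonlinear least squares over all motion pairs.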


Proceedings of SPIE, the International Society for Optical Engineering | 2007

Super-resolution enhancement of flash LADAR range data

Gavin Rosenbush; Tsai Hong; Roger D. Eastman

Flash LADAR systems are becoming increasingly popular for robotics applications. However, they generally provide a low-resolution range image because of the limited number of pixels available on the focal plane array. In this paper, the application of image super-resolution algorithms to improve the resolution of range data is examined. Super-resolution algorithms are compared for their use on range data, and the frequency-domain method is selected. Four low-resolution range images, slightly shifted and rotated relative to the reference image, are registered using Fourier transform properties, and the super-resolution image is built using non-uniform interpolation. Image super-resolution algorithms are typically rated subjectively based on the perceived visual quality of their results. In this work, quantitative methods for evaluating the performance of these algorithms on range data are developed. Edge detection in the range data is used as a benchmark of the data improvement provided by super-resolution. The results show that super-resolution of range data provides the same advantage as image super-resolution, namely increased image fidelity.
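
The pipeline described above (frequency-domain shift estimation followed by non-uniform interpolation onto a finer grid) can be sketched roughly as follows. This is a simplified, translation-only illustration with our own function names, not the authors' implementation; it assumes SciPy and scikit-image are available.

```python
import numpy as np
from scipy.interpolate import griddata
from skimage.registration import phase_cross_correlation

def super_resolve(ref, low_res_frames, scale=2):
    """Fuse shifted low-resolution range images onto a finer grid.

    ref            : 2-D reference range image.
    low_res_frames : list of 2-D range images shifted relative to ref.
    scale          : resolution increase factor.
    """
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pts = [np.column_stack([ys.ravel(), xs.ravel()])]
    vals = [ref.ravel()]
    for frame in low_res_frames:
        # Frequency-domain (phase correlation) subpixel shift estimate.
        shift, _, _ = phase_cross_correlation(ref, frame, upsample_factor=10)
        dy, dx = shift
        pts.append(np.column_stack([(ys + dy).ravel(), (xs + dx).ravel()]))
        vals.append(frame.ravel())
    # Non-uniform interpolation of all registered samples onto the fine grid.
    fy, fx = np.mgrid[0:h:1.0 / scale, 0:w:1.0 / scale]
    return griddata(np.vstack(pts), np.hstack(vals), (fy, fx), method="linear")
```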


Computer Vision and Pattern Recognition | 2007

Research issues in image registration for remote sensing

Roger D. Eastman; J. Le Moigne; Nathan S. Netanyahu

Image registration is an important element in data processing for remote sensing, with many applications and a wide range of solutions. Despite considerable investigation, the field has not settled on a definitive solution for most applications, and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms, with an emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.


Performance Metrics for Intelligent Systems | 2008

Dynamic 6DOF metrology for evaluating a visual servoing system

Tommy Chang; Tsai Hong; Michael O. Shneier; German Holguin; Johnny Park; Roger D. Eastman

In this paper we demonstrate the use of a dynamic, six-degree-of-freedom (6DOF) laser tracker to empirically evaluate the performance of a real-time visual servoing implementation, with the objective of establishing a general method for evaluating real-time 6DOF dimensional measurements. The laser tracker provides highly accurate ground truth reference measurements of position and orientation of an object under motion, and can be used as an objective standard for calibration and evaluation of visual servoing and robot control algorithms. The real-time visual servoing implementation used in this study was developed at the Purdue Robot Vision Lab with a subsumptive, hierarchical, and distributed vision-based architecture. Data were taken simultaneously from the laser tracker and visual servoing implementation, enabling comparison of the data streams.
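
Comparing the two data streams amounts to time-aligning the visual servoing pose estimates with the laser tracker's ground truth and computing translation and rotation errors per sample. The sketch below shows that comparison with hypothetical array layouts; it is not the evaluation code used in the study.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_errors(t_est, p_est, q_est, t_ref, p_ref, q_ref):
    """Compare an estimated 6DOF pose stream against ground truth.

    t_* : timestamps in seconds; p_* : Nx3 positions; q_* : Nx4 quaternions
    (x, y, z, w). Reference positions are interpolated and reference
    orientations taken nearest-in-time, a simplification of full pose
    interpolation.
    """
    # Linear interpolation of reference positions at the estimate timestamps.
    p_ref_i = np.column_stack(
        [np.interp(t_est, t_ref, p_ref[:, k]) for k in range(3)])
    trans_err = np.linalg.norm(p_est - p_ref_i, axis=1)
    # Nearest reference orientation for each estimate sample.
    idx = np.clip(np.searchsorted(t_ref, t_est), 0, len(t_ref) - 1)
    r_rel = Rotation.from_quat(q_ref[idx]).inv() * Rotation.from_quat(q_est)
    rot_err_deg = np.degrees(np.linalg.norm(r_rel.as_rotvec(), axis=1))
    return trans_err, rot_err_deg
```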


International Conference on Robotics and Automation | 2007

Training and optimization of operating parameters for flash LADAR cameras

Michael Price; Jacqueline Kenney; Roger D. Eastman; Tsai Hong

Flash LADAR cameras based on continuous-wave, time-of-flight range measurement deliver fast 3D imaging for robot applications including mapping, localization, obstacle detection and object recognition. The accuracy of the range values produced depends on characteristics of the scene as well as on dynamically adjustable operating parameters of the cameras. In order to set these parameters optimally during camera operation, we have devised and implemented an optimization algorithm in a modular, extensible architecture for real-time applications including robot control. The approach uses two components: offline nonlinear optimization to minimize the range error for a training set of simple scenes, followed by an online, real-time algorithm that references the training data and sets camera parameters. We quantify the effectiveness of our approach and highlight topics of interest for future research.
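
A rough sketch of the two components described above follows: a generic nonlinear optimizer stands in for the offline training stage, and a simple lookup by scene brightness stands in for the online stage. The scene descriptor, error function, and parameter vector are placeholders, not the authors' actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

def train_parameters(training_scenes, range_error, x0):
    """Offline stage: optimize camera parameters per training scene.

    training_scenes : list of dicts with a 'brightness' key (hypothetical descriptor).
    range_error     : callable(params, scene) -> RMS range error for that scene.
    x0              : initial parameter vector (e.g. integration time, gain).
    """
    table = []
    for scene in training_scenes:
        result = minimize(lambda p: range_error(p, scene), x0, method="Nelder-Mead")
        table.append((scene["brightness"], result.x))
    return sorted(table, key=lambda entry: entry[0])

def select_parameters(table, observed_brightness):
    """Online stage: return the trained parameters of the closest training scene."""
    keys = np.array([key for key, _ in table])
    return table[int(np.argmin(np.abs(keys - observed_brightness)))][1]
```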


International Conference on Acoustics, Speech, and Signal Processing | 2006

Image Registration and Fusion Studies for the Integration of Multiple Remote Sensing Data

J. Le Moigne; Arlene Cole-Rhodes; Roger D. Eastman; Peyush Jain; A. Joshua; Nargess Memarsadeghi; David M. Mount; Nathan S. Netanyahu; J. Morisette; E. Uko-Ozoro

The future of remote sensing will see the development of spacecraft formations, and with this development will come a number of complex challenges, such as maintaining precise relative positions and specified attitudes. At the same time, there will be increasing needs to understand planetary system processes and build accurate prediction models. One essential technology for accomplishing these goals is the integration of multiple-source data. For this integration, image registration and fusion represent the first steps and need to be performed with very high accuracy. In this paper, we describe studies performed in both image registration and fusion, including a modular framework that was built to describe registration algorithms, a Web-based image registration toolbox, and the comparison of several image fusion techniques using data from the EO-1/ALI and Hyperion sensors.


Image and Signal Processing for Remote Sensing Conference | 2002

Multi-Sensor Registration of Earth Remotely Sensed Imagery

Jacqueline Le Moigne; Arlene Cole-Rhodes; Roger D. Eastman; Kisha Johnson; J. Morisette; Nathan S. Netanyahu; Harold S. Stone; Ilya Zavorin

Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).
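
Of the matching criteria named above, mutual information is the simplest to show compactly: the sketch below scores how well two images are aligned from their joint gray-level histogram. It is a generic illustration, not the specific algorithm evaluated in the paper; a registration search would maximize this score over candidate transformations of one of the images.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    # Sum p(x, y) * log(p(x, y) / (p(x) p(y))) over the non-zero joint bins.
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```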


International Conference on Information Fusion | 2002

Multiple sensor image registration, image fusion and dimension reduction of Earth science imagery

J. Le Moigne; Arlene Cole-Rhodes; Roger D. Eastman; Tarek A. El-Ghazawi; Kisha Johnson; S. Knewpijit; Nadine T. Laporte; J. Morisette; Nathan S. Netanyahu; Harold S. Stone; Ilya Zavorin

The goal of our project is to develop and evaluate image analysis methodologies for use on the ground or on board spacecraft, particularly spacecraft constellations. Our focus is on developing methods to perform automatic registration and fusion of multisensor data representing multiple spatial, spectral and temporal resolutions, as well as dimension reduction of hyperspectral data. Feature extraction methods such as wavelet decomposition, edge detection and mutual information are combined with feature matching methods such as cross-correlation, optimization, and statistically robust techniques to perform image registration. The approach to image fusion is application-based and involves wavelet decomposition, dimension reduction, and classification methods. Dimension reduction is approached through novel methods based on principal component analysis and wavelet decomposition, and implemented on Beowulf-type parallel architectures. Registration algorithms are tested and compared on several multi-sensor datasets, including one of the EOS Core Sites, the Konza Prairie in Kansas, utilizing four different sensors: IKONOS, Landsat-7/ETM+, MODIS, and SeaWiFS. Fusion methods are tested using Landsat, MODIS and SAR or JERS data. Dimension reduction is demonstrated on AVIRIS hyperspectral data.
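
As a small illustration of the PCA-based dimension reduction mentioned above (the wavelet-based variant and the Beowulf parallelization are not shown), the sketch below projects a hyperspectral cube onto its leading principal-component bands. The array layout and function name are our own assumptions.

```python
import numpy as np

def pca_reduce(cube, n_components=10):
    """Reduce a hyperspectral cube of shape (rows, cols, bands) to n_components bands."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)
    # Eigen-decomposition of the band-by-band covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return (X @ top).reshape(rows, cols, n_components)
```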

Collaboration


Roger D. Eastman's top co-authors and their affiliations:

Tsai Hong Hong, National Institute of Standards and Technology
J. Morisette, Goddard Space Flight Center
Harold S. Stone, Goddard Space Flight Center
J. Le Moigne, Goddard Space Flight Center
Tommy Chang, National Institute of Standards and Technology
Tsai Hong, National Institute of Standards and Technology
Ilya Zavorin, Goddard Space Flight Center