Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mark W. Powell is active.

Publication


Featured research published by Mark W. Powell.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

A simple strategy for calibrating the geometry of light sources

Mark W. Powell; Sudeep Sarkar; Dmitry B. Goldgof

We present a methodology for calibrating multiple light source locations in 3D from images. The procedure involves the use of a novel calibration object that consists of three spheres at known relative positions. The process uses intensity images to find the positions of the light sources. We conducted experiments to locate light sources in 51 different positions in a laboratory setting. Our data shows that the vector from a point in the scene to a light source can be measured to within 2.7 ± 4° at α = 0.05 (6 percent relative) of its true direction and within 0.13 ± 0.02 m at α = 0.05 (9 percent relative) of its true magnitude compared to empirically measured ground truth. Finally, we demonstrate how light source information is used for color correction.
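The abstract does not spell out the estimation step itself. As a rough illustration of how a sphere of known geometry constrains a light direction, here is a minimal sketch in Python assuming a mirror-like highlight and a known viewing direction; the sphere center, radius, and example numbers are hypothetical, and the paper's actual procedure (three spheres, intensity images, full error analysis) is more involved than this.

```python
import numpy as np

def light_direction_from_highlight(center, radius, highlight_point, view_dir):
    """Estimate the direction toward a light source from a specular highlight
    observed on a sphere of known center and radius. Assumes mirror-like
    reflection: the light direction is the view direction reflected about the
    surface normal at the highlight point."""
    normal = (highlight_point - center) / radius        # unit surface normal at the highlight
    v = view_dir / np.linalg.norm(view_dir)             # unit vector toward the camera
    light_dir = 2.0 * np.dot(normal, v) * normal - v    # reflect v about the normal
    return light_dir / np.linalg.norm(light_dir)

# Hypothetical example: a 0.1 m sphere at the origin with a highlight on its upper surface,
# viewed by a camera looking down the -z axis.
center = np.array([0.0, 0.0, 0.0])
highlight = np.array([0.0, 0.06, 0.08])
view = np.array([0.0, 0.0, 1.0])
print(light_direction_from_highlight(center, 0.1, highlight, view))
```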


IEEE Transactions on Systems, Man, and Cybernetics | 2004

Automated performance evaluation of range image segmentation algorithms

Jaesik Min; Mark W. Powell; Kevin W. Bowyer

Previous performance evaluation of range image segmentation algorithms has depended on manual tuning of algorithm parameters, and has lacked a basis for a test of the significance of differences between algorithms. We present an automated framework for evaluating the performance of range image segmentation algorithms. Automated tuning of algorithm parameters in this framework results in performance as good as that previously obtained with careful manual tuning by the algorithm developers. Use of multiple training and test sets of images provides the basis for a test of the significance of performance differences between algorithms. The framework implementation includes range images, ground truth overlays, program source code, and shell scripts. This framework should make it possible to objectively and reliably compare the performance of range image segmentation algorithms and allow informed experimental feedback for the design of improved segmentation algorithms. The framework is demonstrated using range images, but in principle it could be used to evaluate region segmentation algorithms for any type of image.
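A minimal sketch of the two ideas the framework combines: automated parameter tuning on a training set, followed by a significance test over a sequestered test set. The `segment` and `score` callables, the parameter grid, and the paired t-test are illustrative placeholders under stated assumptions, not the framework's actual interface.

```python
import itertools
import numpy as np
from scipy import stats

def tune(segment, score, train_images, train_truth, param_grid):
    """Grid-search segmenter parameters against ground truth on training images.
    `segment(image, **params)` and `score(result, truth)` stand in for an actual
    segmentation algorithm and an evaluation metric."""
    best_params, best_score = None, -np.inf
    keys = list(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = np.mean([score(segment(img, **params), gt)
                     for img, gt in zip(train_images, train_truth)])
        if s > best_score:
            best_params, best_score = params, s
    return best_params

def significantly_better(scores_a, scores_b, alpha=0.05):
    """Paired t-test over per-image test-set scores of two tuned algorithms."""
    t, p = stats.ttest_rel(scores_a, scores_b)
    return p < alpha, t, p
```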


Computational Intelligence in Robotics and Automation | 1999

Cooperative navigation of micro-rovers using color segmentation

Jeff Hyams; Mark W. Powell; Robin R. Murphy

This paper addresses position estimation of a micro-rover mobile robot (called the “daughter”) as a larger robot (the “mother”) tracks it through large spaces with unstructured lighting. Position estimation is necessary for localization, where the mother extracts the relative position of the daughter for mapping purposes, and for cooperative navigation, where the mother controls the daughter in real-time. The approach taken is to employ the Spherical Coordinate Transform color segmenter, developed for medical applications, as a low computational and hardware cost solution. Data was collected from 50 images taken in five types of lighting: fluorescent, tungsten, daylight lamp, and natural daylight both indoors and outdoors. The results show that average pixel error was 1.5, with an average error in distance estimation of 6.3 cm. The size of the error did not vary greatly with the type of lighting. The segmentation and distance tracking have also been implemented as a real-time tracking system. Using this system, the mother robot is able to autonomously control the micro-rover and display a map of the daughter's path in real-time using only a Pentium class processor and no specialized hardware.
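As a rough illustration of the color-segmentation step, here is a sketch of one common parameterisation of the Spherical Coordinate Transform, which separates chromaticity (two angles) from overall brightness (magnitude); the angle conventions and thresholds below are assumptions for illustration and may differ from the paper's implementation.

```python
import numpy as np

def rgb_to_sct_angles(rgb):
    """Map an RGB image (H x W x 3) to the two angular components of the
    Spherical Coordinate Transform. The angles depend on chromaticity but not
    on overall brightness, which makes segmentation robust to lighting changes."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    magnitude = np.sqrt(r**2 + g**2 + b**2) + 1e-9
    theta = np.arccos(b / magnitude)   # angle from the blue axis
    phi = np.arctan2(g, r)             # angle in the red-green plane
    return theta, phi

def segment_by_color(rgb, theta_range, phi_range):
    """Binary mask of pixels whose SCT angles fall in the target ranges
    (ranges would be learned from sample pixels of the tracked robot)."""
    theta, phi = rgb_to_sct_angles(rgb)
    return ((theta >= theta_range[0]) & (theta <= theta_range[1]) &
            (phi >= phi_range[0]) & (phi <= phi_range[1]))
```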


IEEE Transactions on Systems, Man, and Cybernetics | 2004

A methodology for extracting objective color from images

Mark W. Powell; Sudeep Sarkar; Dmitry B. Goldgof; Krassimir Ivanov

We present a methodology for correcting color images taken in practical indoor environments, such as laboratories, factories, and studios, that explicitly models illuminant location, surface reflectance and geometry, and camera responsivity. We explicitly model surfaces by taking our color images with corresponding registered three-dimensional (3-D) range images, which provide surface orientation and location information for every point in the scene. We automatically detect regions where color correction should not be applied, such as specularities, coarse texture regions, and jump edges. This correction results in objective color measures of the imaged surfaces. This kind of integrated, comprehensive system of color correction has not existed until now; it is the first of its kind in computer vision. We demonstrate results of applying this methodology to real images for applications in photorealistic rerendering, skin lesion detection, burn scar color measurement, and general color image enhancement. We also have tested the method under different lighting configurations and with three different range scanners.
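A minimal sketch of the core geometric correction for a single surface point, assuming a Lambertian surface and one calibrated point light source. The full pipeline described in the abstract also models camera responsivity and masks specularities, coarse texture, and jump edges, none of which is shown here; the function and variable names are illustrative.

```python
import numpy as np

def objective_color(observed_rgb, point, normal, light_pos, light_rgb):
    """Recover an illumination- and geometry-independent color (albedo up to a
    global scale) at one surface point, assuming Lambertian reflectance and a
    single point light source. `point` and `normal` would come from the
    registered range image; `light_pos` and `light_rgb` from light calibration."""
    to_light = light_pos - point
    dist2 = np.dot(to_light, to_light)                       # squared distance to the light
    cos_theta = max(np.dot(normal, to_light) / np.sqrt(dist2), 1e-6)
    shading = light_rgb * cos_theta / dist2                  # per-channel irradiance factor
    return observed_rgb / shading                            # divide out geometry and illumination
```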


Journal of Field Robotics | 2007

TRESSA: Teamed robots for exploration and science on steep areas

Terry Huntsberger; Ashley Stroupe; Hrand Aghazarian; Michael Garrett; Paulo Younse; Mark W. Powell

Long-duration robotic missions on lunar and planetary surfaces (for example, the Mars Exploration Rovers have operated continuously on the Martian surface for close to 3 years) provide the opportunity to acquire scientifically interesting information from a diverse set of surface and subsurface sites and to explore multiple sites in greater detail. Exploring a wide range of terrain types, including plains, cliffs, sand dunes, and lava tubes, requires the development of robotic systems with mobility enhanced beyond that which is currently fielded. These systems include single robots as well as teams of robots. TRESSA (Teamed Robots for Exploration and Science on Steep Areas) is a closely coupled three-robot team developed at the Jet Propulsion Laboratory (JPL) that previously demonstrated the ability to drive on soil-covered slopes up to 70 deg. In this paper, we present results from field demonstrations of the TRESSA system in even more challenging terrain: rough rocky slopes of up to 85 deg. In addition, the integration of a robotic arm and instrument suite has allowed TRESSA to demonstrate semi-autonomous science investigation of the cliffs and science sample collection. TRESSA successfully traversed cliffs and collected samples at three Mars analog sites in Svalbard, Norway, as part of a recent geological and astrobiological field investigation, the Arctic Mars Analog Svalbard Expedition (AMASE), under the NASA ASTEP (Astrobiology Science and Technology for Exploring Planets) program.


IEEE Aerospace Conference | 2005

Target tracking, approach, and camera handoff for automated instrument placement

Max Bajracharya; Antonio Diaz-Calderon; Matthew Robinson; Mark W. Powell

This paper describes the target designation, tracking, approach, and camera handoff technologies required to achieve accurate, single-command autonomous instrument placement for a planetary rover. It focuses on robust tracking integrated with obstacle avoidance during the approach phase, and image-based camera handoff to allow vision-based instrument placement. It also provides initial results from a complete system combining these technologies with rover base placement to maximize arm manipulability and image-based instrument placement.
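As a rough illustration of the geometric part of a camera handoff, here is a sketch that transforms a target point tracked in one camera into a second camera's frame and projects it with that camera's intrinsics. The paper's image-based handoff presumably refines such a geometric seed with image matching, so this is an assumption-laden sketch, not the authors' algorithm; all names are hypothetical.

```python
import numpy as np

def handoff(point_cam_a, R_ab, t_ab, K_b):
    """Hand a tracked 3D target from camera A to camera B: express the point
    in B's frame using known extrinsics (R_ab, t_ab), then project it with B's
    intrinsic matrix K_b to seed the tracker in B's image."""
    p_b = R_ab @ point_cam_a + t_ab          # target expressed in camera B's frame
    u, v, w = K_b @ p_b                      # pinhole projection (homogeneous)
    return np.array([u / w, v / w])          # pixel location in camera B
```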


International Conference on Pattern Recognition | 2000

Progress in automated evaluation of curved surface range image segmentation

Jaesik Min; Mark W. Powell; Kevin W. Bowyer

We have developed an automated framework for performance evaluation of curved-surface range image segmentation algorithms. Enhancements over our previous work include automated training of parameter values, correcting the artifact problem in K²T scanner images, and acquisition of images of the same scenes from different range scanners. The image dataset includes planar, spherical, cylindrical, conical, and toroidal surfaces. We have evaluated the automated parameter tuning technique and found that it compares favorably with manual parameter tuning. We present initial results from comparing curved-surface segmenters by Besl and Jain (1988) and by Jiang and Bunke (1998).


Workshop on Applications of Computer Vision | 2000

Automated performance evaluation of range image segmentation

Jaesik Min; Mark W. Powell; Kevin W. Bowyer

We have developed an automated framework for objectively evaluating the performance of region segmentation algorithms. This framework is demonstrated with range image data sets, but is applicable to any type of imagery. Parameters of the segmentation algorithm are tuned using training images. Images and source code for the training process are publicly available. The trained parameters are then used to evaluate the algorithm on a (sequestered) test set. The primary performance metric is the average number of correctly segmented regions. Statistical tests are used to determine the significance of performance improvement over a baseline algorithm.
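A minimal sketch of the primary metric, assuming label images and the mutual-overlap criterion commonly used in range-segmentation benchmarks; the 80 percent tolerance and the convention that label 0 marks background are illustrative assumptions, not necessarily the framework's exact definition.

```python
import numpy as np

def correctly_segmented(machine_labels, truth_labels, tolerance=0.8):
    """Count ground-truth regions that are 'correctly segmented': some machine
    region overlaps the truth region by at least `tolerance` of both regions'
    areas (a mutual-overlap criterion)."""
    correct = 0
    for t in np.unique(truth_labels):
        if t == 0:                                   # assume label 0 is background
            continue
        t_mask = truth_labels == t
        for m in np.unique(machine_labels[t_mask]):  # machine regions touching this truth region
            if m == 0:
                continue
            m_mask = machine_labels == m
            overlap = np.logical_and(t_mask, m_mask).sum()
            if overlap >= tolerance * t_mask.sum() and overlap >= tolerance * m_mask.sum():
                correct += 1
                break
    return correct
```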


Computer Vision and Pattern Recognition | 2000

Calibration of light sources

Mark W. Powell; Sudeep Sarkar; Dmitry B. Goldgof

We present a methodology for calibrating multiple light source locations in 3D from images. The procedure involves the use of a novel calibration object that consists of either 2 or 3 spheres at known relative positions. There are two variants of the process: one which uses range and intensity imaging to find the positions of the light sources, and one that uses only the intensity image to locate the illuminants. We conducted experiments using both variations of the technique to locate light sources in 51 different positions in a laboratory setting. Our data shows that the vector from a point in the scene to a light source can be measured to within 3° (6%) of its true direction and within 0.13 m (9%) of its true magnitude compared to empirically measured ground truth. Finally, we demonstrate how light source information can be applied to burn scar color correction and color segmentation.


IEEE Aerospace Conference | 2005

Distributed operations for the Mars Exploration Rover Mission with the science activity planner

Justin V. Wick; John L. Callas; Jeffrey S. Norris; Mark W. Powell; Marsette Vona

The unprecedented endurance of both the Spirit and Opportunity rovers during the Mars Exploration Rover (MER) mission brought with it many unexpected challenges. Scientists, many of whom had planned on staying at the Jet Propulsion Laboratory (JPL) in Pasadena, CA for 90 days, were eager to return to their families and home institutions. This created a need for the rapid conversion of a mission-planning tool, the science activity planner (SAP), from a centralized application usable only within JPL, to a distributed system capable of allowing scientists to continue collaborating from locations around the world. Rather than changing SAP itself, the rapid conversion was facilitated by a collection of software utilities that emulated the internal JPL software environment and provided efficient, automated information propagation. During this process many lessons were learned about scientific collaboration in a concurrent environment, use of existing server-client software in rapid systems development, and the effect of system latency on end-user usage patterns. Switching to a distributed mode of operations also saved a considerable amount of money and increased the number of specialists able to actively contribute to mission research. Long-term planetary exploration missions of the future will build upon the distributed operations model used by MER.

Collaboration


Dive into Mark W. Powell's collaborations.

Top Co-Authors

Dmitry B. Goldgof (University of South Florida)

Sudeep Sarkar (University of South Florida)

Jaesik Min (University of Notre Dame)

Ashley Stroupe (Jet Propulsion Laboratory)

Paulo Younse (Jet Propulsion Laboratory)