
Publication


Featured research published by Rafael C. Gonzalez.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1987

An Iterative Thresholding Algorithm for Image Segmentation

Arnulfo Pérez; Rafael C. Gonzalez

A thresholding technique is developed for segmenting digital images with bimodal reflectance distributions under nonuniform illumination. The algorithm works in a raster format, thus making it an attractive segmentation tool in situations requiring fast data throughput. The theoretical base of the algorithm is a recursive Taylor expansion of a continuously varying threshold tracking function.
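
As a rough illustration of the iterative-thresholding idea, the sketch below refines a single global threshold until it stabilizes; it deliberately simplifies the paper's method (no raster-order tracking, no recursive Taylor expansion), and the function name is hypothetical.

```python
import numpy as np

def iterative_threshold(image, tol=0.5, max_iter=100):
    """Iteratively refine a global threshold for a roughly bimodal image.

    Generic illustration only: the threshold is repeatedly set to the
    midpoint of the two class means until it stops moving.  The paper's
    algorithm instead tracks a spatially varying threshold in raster
    order under nonuniform illumination.
    """
    t = image.mean()                      # initial guess: global mean
    for _ in range(max_iter):
        below = image[image <= t]
        above = image[image > t]
        if below.size == 0 or above.size == 0:
            break
        t_new = 0.5 * (below.mean() + above.mean())
        if abs(t_new - t) < tol:          # converged
            return t_new
        t = t_new
    return t

# Usage sketch: segment an image into foreground and background.
# img = np.asarray(some_image, dtype=float)
# mask = img > iterative_threshold(img)
```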


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1989

Camera geometries for image matching in 3-D machine vision

Nicolas Alvertos; Dragana Brzakovic; Rafael C. Gonzalez

The location of a scene element can be determined from the disparity of two of its depicted entities (each in a different image). Prior to establishing disparity, however, the correspondence problem must be solved. It is shown that for the axial-motion stereo camera model the probability of determining unambiguous correspondence assignments is significantly greater than that for other stereo camera models. However, the mere geometry of the stereo camera system does not provide sufficient information for uniquely identifying correct correspondences. Therefore, additional constraints derived from justifiable assumptions about the scene domain and from the scene radiance model are utilized to reduce the number of potential matches. The measure for establishing the correct correspondence is shown to be a function of the geometrical constraints, scene constraints, and scene radiance model.
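
For orientation, a minimal sketch of how depth follows from disparity in an ordinary rectified, parallel-axis stereo pair; the axial-motion geometry analyzed in the paper and its scene-radiance constraints are not modeled, and the numbers below are made up.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_len, baseline):
    """Depth of a point from a rectified, parallel-axis stereo pair:
    Z = f * B / (x_left - x_right).

    Illustrative only; establishing which left/right features correspond
    (the correspondence problem discussed above) is assumed solved.
    """
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return focal_len * baseline / disparity

# Example: f = 800 px, baseline = 0.1 m, disparity = 8 px  ->  Z = 10 m
print(depth_from_disparity(408.0, 400.0, focal_len=800.0, baseline=0.1))
```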


International Conference on Robotics and Automation | 1990

The use of multisensor data for robotic applications

Mongi A. Abidi; Rafael C. Gonzalez

The feasibility of realistic autonomous space manipulation tasks using multisensory information is shown through two experiments involving a fluid interchange system and a module interchange system. In both cases, autonomous location of the mating element, autonomous location of the guiding light target, mating, and demating of the system are performed. Specifically, vision-driven techniques were implemented that determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. The robotic system is also equipped with a force/torque sensor that continuously monitors the six components of force and torque exerted on the end effector. Using vision, force, torque, proximity, and touch sensors, the fluid interchange system and the module interchange system experiments were accomplished autonomously and successfully.


Pattern Recognition | 1996

Segmentation of range images via data fusion and morphological watersheds

Mohamed Baccar; Linda Ann Gee; Rafael C. Gonzalez; Mongi A. Abidi

As in 2-D (two-dimensional) computer vision, segmentation is one of the most important processes in 3-D (three-dimensional) vision. The recent availability of cost-effective range imaging devices has simplified the problem of obtaining 3-D information directly from a scene. Range images are characterized by two principal types of discontinuities: step edges, which represent discontinuities in depth, and roof (or trough) edges, which represent discontinuities in the direction of surface normals. A Gaussian-weighted least-squares technique is developed for extracting these two types of edges from range images. Edge extraction is then followed by data fusion to form a single edge map that incorporates discontinuities in both depth and surface normals. Edge maps serve as the input to a segmentation algorithm based on morphological watersheds. It is demonstrated by extensive experimentation, using synthetic and real range image data, that each of these three processes contributes to yielding rugged and consistent segmentation results.
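
A hedged sketch of the pipeline's overall shape (step-edge and roof-edge extraction, fusion into one edge map, watershed segmentation). Plain gradient magnitudes stand in for the paper's Gaussian-weighted least-squares edge detector, and NumPy, SciPy, and scikit-image are assumed to be available.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_range_image(depth, normals, alpha=0.5):
    """Fuse step-edge and roof-edge strength into one map, then apply a
    morphological watershed.  Structural sketch only, not the paper's
    edge detector.
    """
    # Step edges: discontinuities in depth.
    gr, gc = np.gradient(depth)
    step_edges = np.hypot(gr, gc)

    # Roof edges: discontinuities in surface-normal direction, roughly
    # approximated by the gradient magnitude of each normal component.
    roof_edges = sum(np.hypot(*np.gradient(normals[..., i])) for i in range(3))

    # Data fusion: weighted combination of the two edge maps.
    fused = alpha * step_edges + (1.0 - alpha) * roof_edges

    # Watershed on the fused edge map; markers are low-edge regions.
    markers, _ = ndi.label(fused < 0.1 * fused.max())
    return watershed(fused, markers)

# depth: (H, W) range image; normals: (H, W, 3) unit surface normals.
```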


International Journal of Parallel Programming | 1976

An algorithm for the inference of tree grammars

Rafael C. Gonzalez; J. J. Edwards; Michael G. Thomason

An algorithm for the inference of tree grammars from sample trees is presented. The procedure, which is based on the properties of self-embedding and regularity, produces a reduced tree grammar capable of generating all the samples used in the inference process as well as other trees similar in structure. The characteristics of the algorithm are illustrated by experimental results.
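
A toy sketch of the very first step only (reading candidate productions off the sample trees); the reduction of the grammar via the self-embedding and regularity properties is not attempted here, and the tree encoding is an assumption made for illustration.

```python
from collections import defaultdict

def infer_tree_productions(samples):
    """Collect productions label -> tuple(child labels) from sample trees.

    Illustrative first step of tree-grammar inference; merging
    nonterminals to obtain a reduced grammar is not attempted.
    """
    productions = defaultdict(set)

    def walk(node):
        label, children = node
        productions[label].add(tuple(child[0] for child in children))
        for child in children:
            walk(child)

    for tree in samples:
        walk(tree)
    return productions

# A sample tree is (label, [children]); e.g. an arithmetic expression:
sample = ("+", [("a", []), ("*", [("b", []), ("c", [])])])
print(dict(infer_tree_productions([sample])))
```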


IEEE Transactions on Computers | 1975

Syntactic Recognition of Imperfectly Specified Patterns

Michael G. Thomason; Rafael C. Gonzalez

The methods developed in this correspondence represent an approach to the problem of handling error-corrupted syntactic pattern strings, an area generally neglected in the numerous techniques for linguistic pattern description and recognition which have been reported. The basic approach consists of applying error transformations to the productions of context-free grammars in order to generate new grammars (also context-free) capable of describing not only the original error-free patterns, but also patterns containing specific types of errors such as deleted, added, and interchanged symbols which arise often in the pattern-scanning process. Theoretical developments are illustrated in the framework of a syntactic recognition system for chromosome structures.
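
A simplified sketch of the error-transformation idea on a toy grammar encoding (productions as tuples of symbols): each production gains variants with one terminal substituted, deleted, or inserted. The paper's transformations are defined formally and keep track of the error type, which this sketch omits.

```python
def error_expand(productions, terminals):
    """Expand a context-free grammar so it also derives strings containing
    one substituted, deleted, or inserted terminal symbol.

    Hypothetical, simplified rendering of applying error transformations
    to productions.
    """
    expanded = {}
    for lhs, rhss in productions.items():
        new_rhss = set(rhss)
        for rhs in rhss:
            for i, sym in enumerate(rhs):
                if sym not in terminals:
                    continue
                # substitution and deletion errors at position i
                new_rhss.update(rhs[:i] + (t,) + rhs[i + 1:] for t in terminals if t != sym)
                new_rhss.add(rhs[:i] + rhs[i + 1:])
                # insertion error before position i
                new_rhss.update(rhs[:i] + (t,) + rhs[i:] for t in terminals)
        expanded[lhs] = sorted(new_rhss)
    return expanded

# Toy grammar: S -> 'a' S 'b' | 'a' 'b', with terminals {'a', 'b'}.
g = {"S": [("a", "S", "b"), ("a", "b")]}
print(error_expand(g, terminals={"a", "b"}))
```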


IEEE Computer | 1991

Autonomous robotic inspection and manipulation using multisensor feedback

Mongi A. Abidi; Richard O. Eason; Rafael C. Gonzalez

A six-degree-of-freedom industrial robot augmented with a number of sensors (vision, range, sound, proximity, force/torque, and touch) to enhance its inspection and manipulation capabilities is described. The work falls under the scope of partial autonomy. In teleoperation mode, the human operator prepares the robotic system to perform the desired task. Using its sensory cues, the system maps the workspace and performs its operations in a fully autonomous mode. Finally, the system reports back to the human operator on the success or failure of the task and resumes its teleoperation mode. The feasibility of realistic autonomous robotic inspection and manipulation tasks using multisensory information cues is demonstrated. The focus is on the estimation of the three-dimensional position and orientation of the task panel and the use of other nonvision sensors for valve manipulation. The experiment illustrates the need for multisensory information to accomplish complex, autonomous robotic inspection and manipulation tasks.


Systems, Man and Cybernetics | 1977

Machine Recognition of Abnormal Behavior in Nuclear Reactors

Rafael C. Gonzalez; L. C. Howington

A multivariate statistical pattern recognition system for reactor noise analysis is presented. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, updating, and dimensionality reduction capabilities. System design emphasizes control of the false-alarm rate. Its abilities to learn normal patterns and to recognize deviations from these patterns were evaluated by experiments at the Oak Ridge National Laboratory (ORNL) High-Flux Isotope Reactor. Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were readily detected by the pattern recognition system.
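
A minimal sketch of the decoupling step alone: a whitening transform built from the sample covariance of feature vectors. The density-function inference, learning, tracking, updating, and false-alarm-rate control mentioned above are not shown, and the function names are illustrative.

```python
import numpy as np

def decorrelating_transform(samples):
    """Whitening transform that decouples correlated variables.

    Sketch only: rotate onto the eigenvectors of the sample covariance
    and rescale to unit variance.
    """
    x = np.asarray(samples, dtype=float)
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    transform = eigvec / np.sqrt(eigval)          # scale each eigen-direction to unit variance

    def apply(v):
        return (np.asarray(v, dtype=float) - mean) @ transform

    return apply

# Fit on feature vectors extracted from "normal" reactor noise, then
# transform new vectors; unusually large norms in the whitened space
# flag deviations from the learned normal patterns.
```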


International Conference on Robotics and Automation | 2003

Online motion planning using Laplace potential fields

Diego Álvarez; Juan C. Alvarez; Rafael C. Gonzalez

Robot navigation is an especially challenging problem when only online sensor information is available. The main problem is to guarantee global properties, such as algorithm convergence or trajectory optimality, based on local information. In this paper we present a new non-heuristic sensor-based planning algorithm characterized by three features: 1) it is based on potential functions, which allows optimality criteria to be introduced; 2) it is computed incrementally to incorporate the latest sensor readings; and 3) it accounts for robot dynamics. The result is a method suitable for real-time navigation that is intuitive and easy to understand and produces smooth and safe trajectories.
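
A minimal sketch of a harmonic (Laplace) potential field on an occupancy grid, relaxed offline with Jacobi iterations; the incremental, sensor-driven update and the treatment of robot dynamics described in the paper are not reproduced here.

```python
import numpy as np

def laplace_potential(grid, goal, iters=2000):
    """Harmonic (Laplace) potential field on an occupancy grid.

    Obstacles are held at potential 1 and the goal at 0; free-space
    values are relaxed with Jacobi iterations so they approximately
    satisfy Laplace's equation.  Descending the potential from any free
    cell then leads to the goal without spurious local minima.
    """
    phi = np.ones_like(grid, dtype=float)
    free = grid == 0
    phi[goal] = 0.0
    for _ in range(iters):
        # average of the four neighbours (Jacobi relaxation)
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(free, avg, 1.0)
        phi[goal] = 0.0
    return phi

# grid: 0 = free, 1 = obstacle; goal: (row, col) of the target cell.
# A path is obtained by repeatedly stepping to the neighbouring cell
# with the lowest potential.
```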


Systems, Man and Cybernetics | 1990

Developing robotic systems with multiple sensors

Mohan M. Trivedi; Mongi A. Abidi; Richard O. Eason; Rafael C. Gonzalez

A general approach is presented for the integration of vision, range, proximity, and touch sensory data to derive a better estimate of the position and orientation (pose) of an object appearing in the work space. Efficient and robust methods for analyzing vision and range data to derive an interpretation of input images are discussed. Vision information analysis includes a model-based object recognition module and an image-to-world coordinate transformation module to identify the three-dimensional (3-D) coordinates of the recognized objects. The range information processing includes modules for preprocessing, segmentation, and 3-D primitive extraction. The multisensory information integration approach represents sensory information in a sensor-independent form and formulates an optimization problem to find a minimum-error solution to the problem. The capabilities of a multisensor robotic system are demonstrated by performing a number of experiments using an industrial robot equipped with several sensors of differing types.
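
As a hedged illustration of the minimum-error formulation, the sketch below fuses independent position estimates by inverse-covariance weighting; orientation and the individual vision/range/proximity/touch sensor models are left out, and the example values are made up.

```python
import numpy as np

def fuse_position_estimates(estimates, covariances):
    """Fuse independent sensor estimates of an object's position by
    minimizing the total Mahalanobis error (inverse-covariance weighting).

    Illustrative stand-in for a sensor-independent, minimum-error fusion.
    """
    info = np.zeros((3, 3))
    weighted_sum = np.zeros(3)
    for z, cov in zip(estimates, covariances):
        w = np.linalg.inv(np.asarray(cov, dtype=float))
        info += w
        weighted_sum += w @ np.asarray(z, dtype=float)
    return np.linalg.solve(info, weighted_sum)

# Example: a loose vision estimate and a tight touch estimate of the same
# point; the fused result lies closer to the more certain sensor.
vision = [0.52, 0.10, 0.30]
touch = [0.50, 0.12, 0.31]
print(fuse_position_estimates([vision, touch],
                              [np.eye(3) * 1e-2, np.eye(3) * 1e-4]))
```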

Collaboration


Dive into Rafael C. Gonzalez's collaboration.

Top Co-Authors

A. Barrero

University of Tennessee
