Rafael M. Inigo
University of Virginia
Publications
Featured research published by Rafael M. Inigo.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1984
Rafael M. Inigo; Eugene S. McVey; B. J. Berger; M. J. Wirtz
Research on the semiautonomous operation of mobile robots in typical pathways is described. After correction for perspective, the image of the pathway consists of two nearly vertical lines bounding a region with little texture (the pathway). To identify the pathway boundaries, regions in the image space are examined using an edge detection algorithm, edges between regions are determined by the algorithm, and those corresponding to straight or nearly straight lines with large slope (path boundaries) are identified by means of the Hough transform. Once the path boundaries are identified, the horizontal distance from the camera to the road edge is determined. Next, a method to detect objects in the roadway (i.e., obstacles) is presented. The region of interest in the roadway (from the camera to some predetermined distance in front of it) is known from the path boundary algorithm. The interior of this region is examined for edges; if edges are detected, obstacles or shadows are present. A method to separate obstacles from shadows using stereo vision is then presented.
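The boundary-identification step lends itself to a compact illustration. Below is a minimal sketch of straight-line Hough voting over a binary edge map, keeping only strong, nearly vertical peaks as path-boundary candidates; the accumulator resolution, vote threshold, and orientation tolerance are illustrative choices, not the authors' parameters.

```python
import numpy as np

def hough_lines(edge_map, n_theta=180, n_rho=None):
    """Accumulate votes for lines in (rho, theta) space from a binary edge map."""
    h, w = edge_map.shape
    diag = int(np.ceil(np.hypot(h, w)))
    n_rho = n_rho or 2 * diag
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    for t_idx, theta in enumerate(thetas):
        rho_vals = xs * np.cos(theta) + ys * np.sin(theta)
        rho_idx = np.round((rho_vals + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        np.add.at(acc[:, t_idx], rho_idx, 1)   # one vote per edge pixel per orientation
    return acc, rhos, thetas

def near_vertical_peaks(acc, rhos, thetas, vote_threshold, max_abs_theta=np.radians(20)):
    """Keep strong peaks whose orientation is close to vertical (theta near 0 in the
    rho = x*cos(theta) + y*sin(theta) parameterization), i.e. large-slope lines."""
    peaks = []
    for r_idx, t_idx in zip(*np.nonzero(acc >= vote_threshold)):
        if abs(thetas[t_idx]) <= max_abs_theta:
            peaks.append((rhos[r_idx], thetas[t_idx], int(acc[r_idx, t_idx])))
    return sorted(peaks, key=lambda p: -p[2])
```

In this parameterization a vertical image line corresponds to theta near 0, so the orientation filter retains exactly the large-slope lines the abstract associates with path boundaries.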
IEEE Transactions on Industrial Electronics | 1985
Rafael M. Inigo
The large increase in traffic density during the last 25 years in the US, Europe, and Japan has required the automation of some functions of traffic control and monitoring. Buried magnetic loops are widely used for this purpose, but there are functions such as incident detection, vehicle tracking, etc., which are not implementable with them. In addition, magnetic loops are not flexible. Image processing represents a potentially much more powerful tool for this application. Its main disadvantage is the high initial cost. Research in the area of machine vision for traffic monitoring and control is being actively pursued in several countries. This paper surveys current efforts.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1986
Eugene S. McVey; Keith C. Drake; Rafael M. Inigo
A continuous, straight-edged line is used for the visual navigation of an autonomous mobile robot in a factory environment. This line, which resides on the floor and contrasts with the background, may also be used to determine range information. Two methods are developed for determining the range of an object in the sensor's field of view. The effects of various error conditions in the system geometry on each ranging method are determined. Equations are derived which yield the percent error in the calculated range given estimates of these error conditions. Numerical examples using typical sensor parameters are given.
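As a rough companion to the ranging discussion, the sketch below computes the horizontal range to a floor point from its image row under a flat-floor, pinhole-camera assumption, plus the percent range error caused by a small tilt-angle error; it follows the general geometry described in the abstract rather than the paper's exact derivation.

```python
import math

def ground_range(camera_height_m, tilt_rad, pixel_row, image_rows, vertical_fov_rad):
    """Estimate horizontal range to a floor point from its image row, assuming a flat
    floor and a pinhole camera of known height and downward tilt (generic formula,
    not the paper's derivation). Row 0 is the top of the image."""
    row_offset = pixel_row - (image_rows - 1) / 2.0
    angle_below_axis = math.atan(
        (2.0 * row_offset / (image_rows - 1)) * math.tan(vertical_fov_rad / 2.0)
    )
    depression = tilt_rad + angle_below_axis      # total angle below horizontal
    if depression <= 0:
        raise ValueError("ray does not intersect the floor (at or above the horizon)")
    return camera_height_m / math.tan(depression)

def range_error_percent(camera_height_m, tilt_rad, pixel_row, image_rows,
                        vertical_fov_rad, tilt_error_rad):
    """Percent range error caused by a tilt-angle error, in the spirit of the
    error-sensitivity analysis mentioned in the abstract."""
    nominal = ground_range(camera_height_m, tilt_rad, pixel_row,
                           image_rows, vertical_fov_rad)
    perturbed = ground_range(camera_height_m, tilt_rad + tilt_error_rad, pixel_row,
                             image_rows, vertical_fov_rad)
    return 100.0 * (perturbed - nominal) / nominal
```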
IEEE Industry Applications Society Annual Meeting | 1989
Kevin E. Brown; Rafael M. Inigo; Barry W. Johnson
The authors present the design, implementation, and testing of an adaptable, optimal controller for the electric wheelchair. Optimal control theory and pattern recognition techniques are combined to design a variable-structure controller (VSC) using a modified proportional, integral, and derivative (PID) control method. The controller is adaptable in the sense that multiple sets of control coefficients are used, with each set being optimized for a specific range of wheelchair load parameters. The appropriate set of control coefficients can be automatically selected on the basis of load parameter estimates to provide a self-adaptive controller, or manually selected prior to installation to tailor the chair to a particular user. The manually adaptable controller is implemented in a microprocessor-based system, installed on an electric wheelchair, and its performance is experimentally verified.
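A hedged sketch of the gain-scheduling idea described above: several PID coefficient sets, each nominally tuned for a load range, with the active set chosen from a load estimate. The gain values and load boundaries below are hypothetical placeholders, not the optimized coefficients reported in the paper.

```python
from dataclasses import dataclass

@dataclass
class PIDGains:
    kp: float
    ki: float
    kd: float

# Hypothetical gain sets, each imagined as tuned off-line for a wheelchair load range (kg).
GAIN_SCHEDULE = [
    (60.0,         PIDGains(kp=2.0, ki=0.5, kd=0.05)),   # light load
    (90.0,         PIDGains(kp=2.6, ki=0.7, kd=0.08)),   # medium load
    (float("inf"), PIDGains(kp=3.2, ki=0.9, kd=0.12)),   # heavy load
]

def select_gains(estimated_load_kg):
    """Pick the gain set whose load range covers the estimated load."""
    for upper_bound, gains in GAIN_SCHEDULE:
        if estimated_load_kg <= upper_bound:
            return gains
    return GAIN_SCHEDULE[-1][1]

class AdaptivePID:
    """Discrete PID whose coefficients are switched according to a load estimate,
    mirroring the variable-structure idea at a sketch level."""
    def __init__(self, estimated_load_kg, dt):
        self.gains = select_gains(estimated_load_kg)
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        g = self.gains
        return g.kp * error + g.ki * self.integral + g.kd * derivative
```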
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1985
Keith C. Drake; Eugene S. McVey; Rafael M. Inigo
The use of a contrasting line for the visual navigation of autonomous mobile robots in a factory environment is developed. Minimum and maximum linewidths are determined analytically by considering sensor geometry, field of view, and error conditions present in the system. The effects of these error conditions on the width of the line, as seen in the image plane, determine the optimal linewidth. Numerical examples using typical sensor parameters are given.
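The linewidth analysis can be illustrated with a simple pinhole-projection estimate: the sketch below relates a line's physical width to its width in pixels at a given range, and derives a minimum physical width from a required pixel coverage at the far edge of the viewed region. The criterion is illustrative and does not reproduce the paper's error-condition analysis.

```python
import math

def projected_linewidth_px(line_width_m, range_m, camera_height_m, focal_length_px):
    """Approximate width, in image pixels, of a floor line of given physical width seen
    at a given horizontal range by a camera at a given height (pinhole model). The
    slant distance from camera to line provides the perspective scaling."""
    slant = math.hypot(range_m, camera_height_m)
    return focal_length_px * line_width_m / slant

def minimum_linewidth_m(max_range_m, camera_height_m, focal_length_px, min_pixels=2):
    """Smallest physical linewidth that still spans `min_pixels` pixels at the far edge
    of the viewed region -- an illustrative criterion, not the paper's exact one."""
    slant = math.hypot(max_range_m, camera_height_m)
    return min_pixels * slant / focal_length_px
```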
International Conference on Robotics and Automation | 1987
Keith C. Drake; Eugene S. McVey; Rafael M. Inigo
Experimental results are presented that support theory published in the literature concerning the use of a navigation line for the guidance of mobile robots. A method for the determination of a robot's position is developed. A specialized edge operator, which aids in the segmentation of a navigation line from an image of a robot's environment, is given. Use of this specialized edge operator in conjunction with the Hough transform is also presented. These methods are used to verify the analytical results given previously. Comparisons are made between experimental data and expected results as a function of various system parameters. Real-time implementation of these methods is considered.
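The segmentation-plus-position idea can be sketched as follows: a generic row-wise edge operator (not the specialized operator of the paper) locates the navigation line's center in each image row, and a least-squares line fit through those centers yields parameters related to the robot's heading and lateral offset.

```python
import numpy as np

def line_center_per_row(gray, grad_threshold=20):
    """For each image row, locate a bright navigation line as the midpoint between the
    strongest rising and falling horizontal gradients. A generic row-wise edge
    operator, used here only for illustration."""
    centers = []
    grad = np.diff(gray.astype(np.int32), axis=1)   # horizontal intensity gradient
    for r in range(gray.shape[0]):
        row = grad[r]
        rise = int(np.argmax(row))                  # dark -> bright transition
        fall = int(np.argmin(row))                  # bright -> dark transition
        if row[rise] > grad_threshold and -row[fall] > grad_threshold and fall > rise:
            centers.append((r, (rise + fall) / 2.0))
        # rows where the line is not visible are simply skipped
    return centers

def fit_line(centers):
    """Least-squares fit column = a*row + b through the detected centers; the slope is
    related to the robot's heading relative to the line, the intercept to its offset."""
    if len(centers) < 2:
        raise ValueError("not enough line points detected to fit a line")
    rows = np.array([r for r, _ in centers], dtype=float)
    cols = np.array([c for _, c in centers], dtype=float)
    a, b = np.polyfit(rows, cols, 1)
    return a, b
```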
IEEE Transactions on Knowledge and Data Engineering | 1992
Jay I. Minnix; Eugene S. McVey; Rafael M. Inigo
An artificial neural network that self-organizes to recognize various images presented as a training set is described. One application of the network uses multiple functionally disjoint stages to provide pattern recognition that is invariant to translations of the object in the image plane. The general form of the network uses three stages that perform the functionally disjoint tasks of preprocessing, invariance, and recognition. The preprocessing stage is a single layer of processing elements that performs dynamic thresholding and intensity scaling. The invariance stage is a multilayered connectionist implementation of a modified Walsh-Hadamard transform used for generating an invariant representation of the image. The recognition stage is a multilayered self-organizing neural network that learns to recognize the representation of the input image generated by the invariance stage. The network can successfully self-organize to recognize objects without regard to the location of the object in the image field and has some resistance to noise and distortions.
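For reference, the sketch below implements the standard fast Walsh-Hadamard transform on which the invariance stage is based; the paper's modified transform and its multilayered connectionist implementation are not reproduced here.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a 1-D array whose length is a power of two.
    This is the standard (unmodified) transform, shown only as background for the
    invariance stage described in the abstract."""
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    assert n and (n & (n - 1)) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b        # butterfly: sum
            x[i + h:i + 2 * h] = a - b  # butterfly: difference
        h *= 2
    return x

def wht_2d(image):
    """Separable 2-D Walsh-Hadamard transform (rows first, then columns)."""
    rows = np.apply_along_axis(fwht, 1, image)
    return np.apply_along_axis(fwht, 0, rows)
```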
International Conference on Pattern Recognition | 1990
Gan Wang; Rafael M. Inigo; Eugene S. McVey
The authors present a pipeline method for detection and tracking of pixel-sized moving targets with unknown trajectories from a time sequence of highly noisy images. The pipeline target detection algorithm uses the temporal continuity of the smooth trajectories of moving targets and successfully detects and simultaneously tracks all the target trajectories by mapping them from the image sequence onto a single target frame. The pipeline method overcomes the constraint of a straight-line trajectory that most other algorithms require for similar tasks. The algorithm is a fully parallel, distributed process and is therefore highly time-efficient, making it well suited to real-time detection and tracking of arbitrary target trajectories in high-noise environments.
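A simplified illustration of the temporal-continuity idea: detections in each frame are kept only if they continue a detection from the previous frame within a small neighborhood, and the surviving detections accumulate in a single target frame. This is a sketch of the principle, not the paper's pipeline architecture, and the threshold and neighborhood size are arbitrary.

```python
import numpy as np

def pipeline_track(frames, intensity_threshold, neighborhood=1):
    """Accumulate pixel-sized detections from a sequence of noisy frames into one
    target frame, keeping only detections that continue a detection seen in the
    previous frame within a small spatial neighborhood."""
    h, w = frames[0].shape
    target_frame = np.zeros((h, w), dtype=np.int32)
    prev_hits = frames[0] > intensity_threshold
    for frame in frames[1:]:
        hits = frame > intensity_threshold
        # A hit is "continuous" if any previous-frame hit lies within the neighborhood.
        dilated_prev = np.zeros_like(prev_hits)
        for dy in range(-neighborhood, neighborhood + 1):
            for dx in range(-neighborhood, neighborhood + 1):
                dilated_prev |= np.roll(np.roll(prev_hits, dy, axis=0), dx, axis=1)
        continuous = hits & dilated_prev
        target_frame += continuous          # trajectories build up in one frame
        prev_hits = hits
    return target_frame
```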
IEEE Transactions on Industrial Electronics | 1987
Rafael M. Inigo; Robert M. Kossey
Robot force sensing and force control have been researched for several years, producing a variety of sensing methods and control strategies. Most of the published work has been theoretical in nature, with little emphasis on the practical problems encountered in the implementation of a system. This paper describes the interfacing of a 6-axis wrist force sensor to a manipulator and the design and implementation of a control system incorporating the sensor as a feedback device (in addition to existing position control).
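One common way to wrap force feedback around an existing position controller is an admittance-style correction, sketched below; the compliance value and step limit are hypothetical, and the paper's actual control law may differ.

```python
class ForceAccommodation:
    """Outer force loop that converts wrist-sensor force error into small position
    corrections added to the commands of an existing position controller. This is an
    admittance-style sketch, not necessarily the strategy implemented in the paper."""
    def __init__(self, desired_force_n, compliance_m_per_n, dt, max_step_m=0.002):
        self.desired_force = desired_force_n
        self.compliance = compliance_m_per_n   # metres of correction per newton of error
        self.dt = dt
        self.max_step = max_step_m

    def position_correction(self, measured_force_n):
        """Return the position correction (metres) for one control cycle, clamped to a
        small maximum step for safety."""
        force_error = self.desired_force - measured_force_n
        step = self.compliance * force_error * self.dt
        return max(-self.max_step, min(self.max_step, step))

# Usage sketch: each control cycle, read the wrist sensor along the constrained axis,
# compute the correction, and add it to the position command sent to the existing
# position controller.
```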
IEEE Transactions on Knowledge and Data Engineering | 1992
Glenn S. Himes; Rafael M. Inigo
The use of a neocognitron in an automatic target recognition (ATR) system is described. An image is acquired, edge detected, segmented, and centered on a log-spiral grid using subsystems not discussed in the paper. A conformal transformation is used to map the log-spiral grid to a computation plane in which rotations and scalings are transformed to displacements along the vertical and horizontal axes, respectively. Since the neocognitron can recognize shifted objects, the use of log-spiral images by the neocognitron enables the system to recognize scaled, rotated, and translated objects. Two modifications to prior neocognitron implementations are described. A new weight reinforcement method is introduced which solves a significant training problem for the neocognitron. A method of reducing training time is also introduced which specifies the initial layer of weights in the network. All subsequent layers are trained using unsupervised learning. Simulation results using 32×32 and 64×64 intercontinental ballistic missile (ICBM) images are presented.
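The conformal mapping step can be sketched as a log-polar resampling of a centered image, under which scale changes become shifts along the radial axis and rotations become shifts along the angular axis; nearest-neighbour sampling and the grid sizes below are simplifications, not the system's actual log-spiral grid.

```python
import numpy as np

def log_polar_map(image, n_rho=64, n_theta=64):
    """Resample a centered image onto a log-polar grid. In the mapped plane, scaling
    becomes a radial shift and rotation an angular shift, which is why a
    shift-tolerant classifier such as the neocognitron can then recognize scaled and
    rotated objects. Nearest-neighbour sampling is used for brevity."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    # Logarithmically spaced radii from 1 pixel out to the image border.
    radii = np.exp(np.linspace(0.0, np.log(max_r), n_rho))
    angles = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_rho, n_theta), dtype=image.dtype)
    for i, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, w - 1)
        out[i] = image[ys, xs]
    return out
```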