Jerry L. Turney
University of Michigan
Publications
Featured research published by Jerry L. Turney.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1985
Jerry L. Turney; Trevor N. Mudge; Richard A. Volz
The problem of recognizing an object from a partially occluded boundary image is considered, and the concept of saliency of a boundary segment is introduced. Saliency measures the extent to which the boundary segment distinguishes the object to which it belongs from other objects which might be present. An algorithm is presented which optimally determines the saliency of boundary segments of one object with respect to those of a set of other objects. An efficient template matching algorithm using templates weighted by boundary segment saliency is then presented and employed to recognize partially occluded parts. Experimental results illustrate the effectiveness of the new technique.
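The core idea of saliency-weighted template matching can be illustrated with a minimal sketch. The function below is a hypothetical, simplified stand-in for the paper's technique: boundary points carry precomputed saliency weights, so a placement that matches only the distinctive segments still scores highly, while matches on common, non-distinctive contour score little.

```python
def weighted_match_score(template, saliency, image_edges, offset):
    """Score one placement of a boundary template over an edge image.
    Each matched boundary point contributes its saliency weight, so
    distinctive segments dominate the score.

    template:    list of (x, y) boundary points of the model
    saliency:    parallel list of weights in (0, 1]
    image_edges: set of (x, y) edge-pixel coordinates in the image
    offset:      (dx, dy) placement of the template in the image
    """
    dx, dy = offset
    total = sum(saliency)
    matched = sum(w for (x, y), w in zip(template, saliency)
                  if (x + dx, y + dy) in image_edges)
    return matched / total  # 1.0 when every weighted point is matched
```

With a three-point template weighted [0.1, 0.1, 0.8], finding only the high-saliency point and one low-saliency point already yields a score of 0.9, reflecting that the distinctive segment carries most of the evidence.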
Robotica | 1987
Trevor N. Mudge; Jerry L. Turney; Richard A. Volz
A method for solving the recognition of partially occluded parts is presented. It is based on the automatic generation of features from a set of primitive features which are configurations of pairs of fixed length segments of boundary edges of the parts. The procedure that creates the recognition features assigns a number in the range (0,1] that indicates the importance of the feature in the recognition strategy. This number is referred to as the features saliency. The method assumes that the parts that can occur in a scene come from a known set of parts. An example illustrates how automatically generated features can be used to count the number of identical parts in a heap.
The International Journal of Robotics Research | 1989
Paul G. Gottschalk; Jerry L. Turney; Trevor N. Mudge
An important task in computer vision is the recognition of partially visible two-dimensional objects in a gray scale image. Recent works addressing this problem have attempted to match spatially local features from the image to features generated by models of the objects. However, many algorithms are considerably less efficient than they might be, typically being O(IN) or worse, where I is the number of features in the image and N is the number of features in the model set. This is invariably due to the feature-matching portion of the algorithm. In this paper we discuss an algorithm that significantly improves the efficiency of feature matching. In addition, we show experimentally that our recognition algorithm is accurate and robust. Our algorithm uses the local shape of contour segments near critical points, represented in slope angle-arclength space (θ-s space), as fundamental feature vectors. These feature vectors are further processed by projecting them onto a subspace in θ-s space that is obtained by applying the Karhunen-Loève expansion to all such features in the set of models, yielding the final feature vectors. This allows the data needed to store the features to be reduced, while retaining nearly all information important for recognition. The heart of the algorithm is a technique for performing matching between the observed image features and the precomputed model features, which reduces the runtime complexity from O(IN) to O(I log I + I log N), where I and N are as above. The matching is performed using a tree data structure, called a kD tree, which enables multidimensional searches to be performed in O(log N) time.
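The O(log N) search cost that drives the complexity reduction comes from the kD tree. The following is a minimal, self-contained sketch of a kD tree with nearest-neighbour search, not the authors' implementation: each level splits the points on one alternating coordinate axis, and the query prunes any half-space that cannot contain a closer point than the best found so far.

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_kdtree(points, depth=0):
    """Recursively build a kD tree; each node splits on axis = depth mod k."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, depth=0, best=None):
    """Nearest-neighbour search, pruning far half-spaces when possible."""
    if node is None:
        return best
    axis = depth % len(target)
    point = node["point"]
    if best is None or dist2(point, target) < dist2(best, target):
        best = point
    diff = target[axis] - point[axis]
    near, far = ("left", "right") if diff < 0 else ("right", "left")
    best = nearest(node[near], target, depth + 1, best)
    # Only descend the far side if the splitting plane is closer than best.
    if diff * diff < dist2(best, target):
        best = nearest(node[far], target, depth + 1, best)
    return best
```

Querying the tree once per image feature gives the I log N matching term of the stated complexity; the I log I term covers sorting/extracting the image features themselves.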
international conference on robotics and automation | 1985
Jerry L. Turney; Trevor N. Mudge; Richard A. Volz
In this paper, an approach is described for recognizing and locating partially hidden objects in an image. The method is based upon matching pairs of boundary segments of the template of an object with pairs of boundary segments in the image. Using a Bayesian signal-detection approach, pairs of segments are selected from the template of the object such that the probability of correctly identifying the object, given that the pair is matched in the image, is close to one. Assuming that models of all objects which might appear in the scene (a reasonable assumption for industrial applications) are known a priori, suitable pairs of segments can be determined a priori. Preliminary investigation suggests that the technique is robust and that subsecond recognition time can be achieved.
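The selection criterion can be written as a one-line application of Bayes' rule. The sketch below is a hypothetical illustration of that criterion, not the paper's formulation: a segment pair is a good discriminator when it is likely to match if the object is present and unlikely to match otherwise, driving the posterior toward one.

```python
def posterior_object_given_match(p_match_given_obj, p_match_given_other,
                                 prior_obj):
    """Bayes' rule: probability that the object is present, given that a
    template segment pair was matched somewhere in the image.

    p_match_given_obj:   P(pair matches | object present)
    p_match_given_other: P(pair matches | object absent), e.g. a chance
                         match against other parts in the scene
    prior_obj:           prior probability that the object is present
    """
    num = p_match_given_obj * prior_obj
    den = num + p_match_given_other * (1.0 - prior_obj)
    return num / den
```

For example, a pair that matches the true object 90% of the time but fires on other parts only 1% of the time already gives a posterior near 0.99 under a uniform prior, so it would be selected for the recognition strategy.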
international conference on robotics and automation | 1987
Paul G. Gottschalk; Jerry L. Turney; Trevor N. Mudge
An important task in computer vision is the recognition of partially visible two-dimensional objects in a gray scale image. Recent works addressing this problem have attempted to match spatially local features from the image to features generated by models of the objects. However, many algorithms are less efficient than is possible. This is due primarily to insufficient attention being paid to the issues of reducing the data in features and feature matching. In this paper we discuss an algorithm that addresses both of these problems. Our algorithm uses the local shape of contour segments near critical points, represented in slope angle-arclength space (θ-s space), as the fundamental feature vectors. These fundamental feature vectors are further processed by projecting them onto a subspace of θ-s space that is obtained by applying the Karhunen-Loève expansion to all critical points in the model set to obtain the final feature vectors. This allows the data needed to store the features to be reduced, while retaining nearly all their recognitive information. The resultant set of feature vectors from the image are matched to the model set using multidimensional range queries to a database of model feature vectors. The database is implemented using an efficient data structure called a k-d tree. The entire recognition procedure for one image has complexity O(I log I + I log N), where I is the number of features in the image, and N is the number of model features. Experimental results showing our algorithm's performance on a number of test images are presented.
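The θ-s representation underlying these features is simple to compute for a polygonal contour. The sketch below is an illustrative simplification (the paper works with contour segments near critical points): each edge of the polygon contributes one (arclength, slope-angle) sample, giving a curve that is invariant to translation and, up to a constant angular offset, to rotation.

```python
import math

def theta_s_curve(contour):
    """Convert a polygonal contour [(x, y), ...] into (s, theta) samples:
    s is cumulative arclength along the contour, theta is the slope
    angle of the current edge."""
    samples, s = [], 0.0
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        theta = math.atan2(y1 - y0, x1 - x0)
        samples.append((s, theta))
        s += math.hypot(x1 - x0, y1 - y0)
    return samples
```

Because translation drops out entirely and rotation only shifts every theta by a constant, matching in θ-s space sidesteps pose normalization; stacking these samples into fixed-length vectors is what the Karhunen-Loève projection then compresses.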
Proceedings of SPIE - The International Society for Optical Engineering | 1985
Jerry L. Turney; Trevor N. Mudge; Richard A. Volz
In this paper, an approach is described for recognizing and locating partially hidden objects in an image. In the approach, templates are formed from the edge contours of the objects sought. Segments of each template are matched to segments of the edge contours of the image. A Bayesian approach is used to decide the probability an object has been located given that matches occur.
Biostereometric Technology and Applications | 1991
Jerry L. Turney; Charles D. Lysogorski; Paul G. Gottschalk; Arnold H. Chiu
Geometric measurements of a surface can be encoded in real-time as a set of fringe images using moire projection techniques. However, obtaining numerical values from the encoded surface measurements has not been straightforward. To solve this problem we have developed a phase-shift moire camera that captures in real-time a sufficient number (three) of phase-shifted moire patterns to allow the decoding of the moire patterns to be automated. The RIPS Camera uses three simultaneously produced phase-shifted moire fringe images of a scene to produce data that can be reduced to range data.
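Three phase-shifted images are the minimum needed to solve for the fringe phase at each pixel, since each intensity sample has three unknowns (background, modulation, phase). The sketch below shows the standard three-step phase-shifting formula for shifts of -120°, 0°, and +120°; the actual decoding used by the RIPS Camera is not detailed in the abstract, so this is a generic illustration.

```python
import math

def phase_from_three_shifts(i1, i2, i3):
    """Recover the wrapped fringe phase phi at one pixel from three
    intensity samples taken at phase shifts of -2*pi/3, 0, and +2*pi/3:

        I_k = A + B * cos(phi + delta_k)

    Eliminating the background A and modulation B from the three
    equations yields the standard three-step formula.
    """
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Applying this per pixel across the three simultaneously captured images yields a wrapped phase map; unwrapping and scaling that map gives the range data mentioned in the abstract.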
Archive | 1984
Richard A. Volz; Anthony C. Woo; Jan D. Wolter; Trevor N. Mudge; Jerry L. Turney; David A. Gal
This paper addresses two topics which on the surface are unrelated, the use of CAD to assist robot and sensor programming, and the use of Ada as the basis for robot programming. The association between them arises from the fact that they are being combined in an experimental facility. The facility consists of an Intel iAPX 432 multiprocessing microcomputer system, a GE TN2500 camera, an ASEA RB 6 robot and a link to a VAX 11/780 off-line computer system. The facility is being used as a testbed for various robot programming and interface strategies, and to investigate the utility of object based systems as the computer foundation of manufacturing cells. Experimental verification of techniques using information extracted from CAD models to assist in robot programming and the use of Ada are important parts of the experiment.
Sensor Fusion III: 3D Perception and Recognition | 1991
Paul G. Gottschalk; Jerry L. Turney; Arnold H. Chiu; Trevor N. Mudge
Algorithms for recognition, tracking, and pose estimation of 3-d objects in intensity imagery commonly assume that features are the result of imaging surface landmarks. This assumption is most commonly violated when a feature is detected on an occluding boundary generated by a smoothly curving surface. We have developed a method that can recognize, track, and determine the pose of arbitrarily shaped, partially visible 3-d objects in both intensity and range imagery. We describe the results of tests on real intensity imagery and synthetic range imagery.
Archive | 1991
Arnold H. Chiu; Ted Ladewski; Jerry L. Turney
To address the need for fast, accurate interferometric analytical tools, the Fringe Analysis Workstation (FAW) has been developed to analyze complex fringe image data easily and rapidly. FAW has been used for flow studies in aerodynamics and hydrodynamics experiments, and for target shell characterization in inertial confinement fusion research. This paper will describe three major components of the FAW system: input/output, fringe analysis/image processing, and visualization/graphical user interface.