David Vernon
Trinity College, Dublin
Publications
Featured research published by David Vernon.
International Conference on Robotics and Automation | 1990
David Vernon; Massimo Tistarelli
A technique is described for determining a depth map of parts in bins using optical flow derived from camera motion. Simple programmed camera motions are generated by mounting the camera on the robot end effector and directing the effector along a known path. The results achieved using two simple trajectories, one along the optical axis and the other a rotation about a fixation point, are detailed. Optical flow is estimated by computing the time derivative of a sequence of images, i.e. by forming differences between successive images and, in particular, by matching contours in images generated from the zero crossings of Laplacian-of-Gaussian-filtered images. Once the flow field has been determined, a depth map is computed using the parameters of the known camera trajectory. Empirical results are presented for a calibration object and two bins of parts; these are compared with the theoretical precision of the technique, and it is demonstrated that a ranging accuracy on the order of two parts in 100 is achievable.
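The depth computation for the axial trajectory admits a compact illustration: under translation along the optical axis at speed W, a contour point at image radius r from the focus of expansion flows radially with speed v = W·r/Z, so Z = W·r/v. A minimal sketch (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def depth_from_axial_flow(points, radial_flow, camera_speed):
    """Depth from radial optical flow under translation along the
    optical axis: a point at image radius r from the focus of
    expansion moves radially at v = W * r / Z, hence Z = W * r / v.

    points: (N, 2) image coordinates relative to the principal point.
    radial_flow: (N,) radial flow magnitudes, in pixels per frame.
    camera_speed: W, camera translation per frame along the optical axis.
    """
    r = np.linalg.norm(points, axis=1)
    return camera_speed * r / radial_flow

# A point 100 px from the axis flowing outward at 2 px/frame,
# with the camera advancing 0.01 m per frame, lies at Z = 0.5 m.
z = depth_from_axial_flow(np.array([[60.0, 80.0]]), np.array([2.0]), 0.01)
```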
Neural Computing and Applications | 1994
Marggie Jones; David Vernon
The work presented in this paper shows how the association of proprioceptive and exteroceptive stimuli can enable a Kohonen neural network, controlling a robot arm, to learn hand-eye co-ordination so that the arm can reach for and track a visually presented target. The approach assumes no a priori model of arm kinematics or of the imaging characteristics of the cameras. No explicit representation, such as homogeneous transformations, is used to specify robot pose, and camera calibration and triangulation are done implicitly as the system adapts and learns its hand-eye co-ordination by experience. This research is validated on physical devices rather than by simulation.
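The Kohonen network at the heart of such a system can be sketched generically. The following is a standard self-organising-map update step, not the paper's hand-eye controller; all names and parameters are illustrative:

```python
import numpy as np

def som_update(weights, x, lr=0.2, sigma=1.0):
    """One Kohonen self-organising-map step: find the best-matching
    unit for stimulus x, then pull every unit toward x with a strength
    that decays with grid distance from the winner.

    weights: (rows, cols, dim) grid of weight vectors (modified in place).
    Returns the grid coordinates of the winning unit.
    """
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)           # (rows, cols)
    wi, wj = np.unravel_index(np.argmin(dists), dists.shape)
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_d2 = (ii - wi) ** 2 + (jj - wj) ** 2
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))             # neighbourhood
    weights += lr * h[..., None] * (x - weights)
    return int(wi), int(wj)

# Train a small 5x5 map on random 2-D stimuli.
rng = np.random.default_rng(0)
w = rng.random((5, 5, 2))
for _ in range(200):
    som_update(w, rng.random(2))
```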
International Journal of Imaging Systems and Technology | 1994
Kenneth M. Dawson-Howe; David Vernon
A large number of cameras may be modeled quite accurately using the simple pinhole camera model, which may be defined either in terms of camera parameters or by the C matrix (which defines the mapping from 3-D points to the image plane). We present formulations of the associated transformations between these two equivalent representations. We also introduce an inexpensive technique for calibrating a camera using a single two-plane calibration object, and employ a novel high-precision Hough transform technique for determining calibration grid lines. © 1994 John Wiley & Sons, Inc.
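The parameter-to-matrix direction of the transformation can be sketched as follows, assuming a simplified square-pixel pinhole model; the function names and the reduced intrinsic matrix are illustrative, not the paper's formulation:

```python
import numpy as np

def projection_matrix(f, cx, cy, R, t):
    """Assemble the 3x4 pinhole projection (C) matrix from camera
    parameters: C = K [R | t], where K is the intrinsic matrix for
    focal length f (in pixels) and principal point (cx, cy), and
    (R, t) are the extrinsic rotation and translation.
    """
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(C, X):
    """Map a 3-D point X to pixel coordinates by homogeneous division."""
    x = C @ np.append(X, 1.0)
    return x[:2] / x[2]

C = projection_matrix(800.0, 320.0, 240.0, np.eye(3), np.zeros(3))
# A point on the optical axis projects to the principal point.
u, v = project(C, np.array([0.0, 0.0, 2.0]))
```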
Image and Vision Computing | 1987
David Vernon
Most simple algorithms for generating 2D shape descriptors, whether based on boundary features or regional features, require that the complete object be visible. This paper introduces a simple boundary-based shape descriptor, the normal contour distance (NCD) signature, which does not require knowledge of the complete boundary and is suitable for recognition of partially occluded objects. The ability of the NCD descriptor to discriminate between standard 2D shapes has been tested, and details are provided of the results obtained using 50%, 60%, 70%, 80%, 90% and 100% of the complete object boundary.
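The abstract does not give the NCD construction itself, but the partial-boundary matching such a signature enables can be illustrated generically: slide a signature computed from a boundary fragment around a full circular model signature and keep the best alignment. A hedged sketch with illustrative names (the signature here is a made-up 1-D function, not an NCD signature):

```python
import numpy as np

def match_partial_signature(model_sig, partial_sig):
    """Slide a partial boundary signature around a full (circular)
    model signature and return (best_offset, best_score) under the
    sum of squared differences; lower scores mean better matches.
    """
    n, m = len(model_sig), len(partial_sig)
    ext = np.concatenate([model_sig, model_sig[:m - 1]])   # wrap around
    scores = np.array([np.sum((ext[i:i + m] - partial_sig) ** 2)
                       for i in range(n)])
    best = int(np.argmin(scores))
    return best, float(scores[best])

# Synthetic circular signature; recognise a fragment covering 50%
# of the boundary, starting at offset 30.
model = np.sin(np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False))
partial = model[30:80]
offset, score = match_partial_signature(model, partial)
# offset == 30 with score 0 in this noise-free case
```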
Software - Practice and Experience | 1988
David Vernon; Giulio Sandini
Image understanding is concerned with the elucidation of a computational base inherent in perceiving a three‐dimensional world using vision. This paper describes a low‐level (or early) vision software system, developed in the context of current collaborative research activities in image understanding, which goes some way toward fulfilling the goals of portability, ease of use, and general‐purpose extensibility. Since visual perception uses several types of disparate, but interrelated, information in some explicit cognitive organization, a central objective of the work is to represent this information in a coherent integrated manner which allows one interactively to investigate the properties of the interdependency between information types.
Robotica | 1990
David Vernon
A prototype robot system for automated handling of flexible electrical wires of variable length is described. The handling process involves the selection of a single wire from a tray of many, grasping the wire close to its end with a robot manipulator, and either placing the end in a crimping press or, for tinning applications, dipping the end in a bath of molten solder. This system relies exclusively on the use of vision to identify the position and orientation of the wires prior to their being grasped by the robot end-effector. Two distinct vision algorithms are presented. The first approach utilises binary imaging techniques and involves object segmentation by thresholding, followed by thinning and image analysis. An alternative general-purpose approach, based on more robust grey-scale processing techniques, is also described. This approach relies on the analysis of object boundaries generated using a dynamic contour-following algorithm. A simple Robot Control Language (RCL) is described which facilitates robot control in a Cartesian frame of reference and object description using frames (homogeneous transformations). The integration of this language with the robot vision system is detailed, and, in particular, a camera model which compensates for both photometric distortion and manipulator inaccuracies is presented. The system has been implemented using conventional computer architectures; average sensing cycle times of two and six seconds have been achieved for the grey-scale and binary vision algorithms, respectively.
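The pose-recovery idea behind the binary pipeline can be sketched with image moments: threshold the image, then take the centroid and principal-axis orientation of the resulting blob. This is a generic sketch (thinning is omitted, and all names are illustrative), not the paper's implementation:

```python
import numpy as np

def pose_from_binary(image, thresh):
    """Binary segmentation followed by moment analysis: threshold the
    image, then recover the blob centroid from first moments and its
    principal-axis orientation from central second moments.
    Returns ((cx, cy), theta) with theta in radians.
    """
    mask = image > thresh
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta

# A diagonal line of bright pixels is oriented at 45 degrees.
img = np.zeros((50, 50))
for i in range(50):
    img[i, i] = 1.0
c, t = pose_from_binary(img, 0.5)
```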
Robotics | 1992
Kenneth Dawson; David Vernon
Three-dimensional object recognition is an essential capability for any advanced machine vision system. We present a new technique for the recognition of 3-D objects on the basis of comparisons between 3-D models. Secondary representations of the models, which may be considered complex scalar transform descriptors, are employed. The use of these representations overcomes the common dependency on matching individual model primitives (such as edges or surfaces). The secondary representations used are one-dimensional histograms of components of the visible orientations, depth maps and needle diagrams. Matching is achieved using template matching and normalized correlation techniques between the secondary representations. We demonstrate the power of this new technique with several examples of recognising models derived from actively sensed range data.
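The normalized-correlation comparison of secondary representations can be sketched generically; the histograms below are made-up illustrations, not data from the paper:

```python
import numpy as np

def normalized_correlation(h1, h2):
    """Normalized cross-correlation between two 1-D secondary
    representations (e.g. orientation histograms): subtract the means
    and divide by the norms, so identically shaped signals score 1.0
    regardless of overall scale.
    """
    a = h1 - h1.mean()
    b = h2 - h2.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

h_model = np.array([0.0, 2.0, 5.0, 2.0, 0.0, 1.0])
h_scene = 3.0 * h_model          # same shape, different scale
# scores 1.0; an unrelated histogram scores well below 1
```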
International Journal of Pattern Recognition and Artificial Intelligence | 1995
Kenneth M. Dawson-Howe; David Vernon
A new approach to object recognition is presented, in which secondary representations of 3-D models are synthesized/derived (in various forms) and subsequently compared in order to invoke views of models, tune model pose and verify recognition hypotheses. The use of these secondary representations allows complex models (e.g. surface-based or volumetric models) to be compared implicitly (rather than explicitly comparing the component primitives of the models). This in turn overcomes the problem of the stability of the model primitives, and provides independence between the complex 3-D representations and the recognition strategy (i.e. the invocation, matching and verification techniques). The secondary representations employed are Extended Gaussian Images, directional histograms, needle diagrams, depth maps and boundary curvature signatures. The technique is demonstrated using models of reasonably complex objects derived from actively sensed range data.
Intelligent Robots and Systems | 1991
Kenneth Dawson; Massimo Tistarelli; David Vernon
This paper describes experience gained in using optical flow fields, arising from constrained camera motion, to estimate range information for robotic part manipulation. Several key issues in robot vision are addressed, including the computation of optical flow, computation of depth, interpolation of depth values, model construction, object recognition, and pose estimation. The paper provides an example of the approach and evaluates the system's performance in the context of its applicability to part manipulation.
Applications of Digital Image Processing X | 1988
David Vernon; Massimo Tistarelli
The central requirement in the bin-of-parts problem is to direct a robot manipulator to select, grasp, and remove an arbitrarily-oriented part (or object) from a bin of many such objects. This necessitates the estimation of the pose (position and orientation) of a partially-occluded object and, in general, its 3-D structure. The solution of such a problem using passive vision requires sophisticated processing incorporating multiple redundant representations, such as stereopsis, motion, and analysis of object shading. This paper describes the first step in such an approach: determining a depth map of the bin of parts using optical flow derived from camera motion. Since the robotics environment is naturally constrained, simple camera motion can be generated by mounting the camera on the robot end-effector and directing the effector along a known path; the simplest motion, along the optical axis, is utilised in this case. For motion along the optical axis, the rotational components of flow are nil and the direction of the translational components is effectively radial from the fixation point (on the optical axis). Hence, it remains only to determine the magnitude of the velocity vector. Optical flow is estimated by computing the time derivative of a sequence of images, i.e. by forming differences between successive images and, in particular, by matching contours in images which have been generated from the zero-crossings of Laplacian-of-Gaussian-filtered images. Once the flow field has been determined, a depth map is computed, initially for all contour points in the image, and ultimately for all surface points by interpolation.
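The contour-extraction step, Laplacian-of-Gaussian filtering followed by zero-crossing detection, can be sketched as follows. This is a generic illustration with made-up parameters, using SciPy's `gaussian_laplace` rather than the paper's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossing_contours(image, sigma):
    """Filter the image with the Laplacian of Gaussian at scale sigma,
    then mark pixels where the filtered result changes sign between
    horizontal or vertical neighbours. These zero-crossing contours
    are the features matched between successive frames.
    """
    f = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros(image.shape, dtype=bool)
    zc[:, 1:] |= (f[:, 1:] * f[:, :-1]) < 0   # horizontal sign changes
    zc[1:, :] |= (f[1:, :] * f[:-1, :]) < 0   # vertical sign changes
    return zc

# A vertical step edge yields a vertical zero-crossing contour.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
contour = zero_crossing_contours(img, sigma=2.0)
```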