John Mallon
Dublin City University
Publications
Featured research published by John Mallon.
Image and Vision Computing | 2005
John Mallon; Paul F. Whelan
This paper describes a direct, self-contained method for planar image rectification of stereo pairs. The method is based solely on an examination of the Fundamental matrix, where an improved method is given for the derivation of two projective transformations that horizontally align all the epipolar projections. A novel approach is proposed to uniquely optimise each transform in order to minimise perspective distortions. This ensures the rectified images resemble the original images as closely as possible. Detailed results show that the rectification precision exactly matches the estimation error of the Fundamental matrix. In tests, the remaining perspective distortion averages less than one percent viewpoint distortion. Both factors offer superior robustness and performance compared with existing techniques.
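A minimal sketch of this style of rectification using OpenCV: stereoRectifyUncalibrated derives a pair of homographies from the Fundamental matrix in the spirit of Hartley's method, without the paper's distortion-minimising optimisation. The correspondences below are synthetic stand-ins.

```python
import cv2
import numpy as np

# Stand-in correspondences; in practice these come from feature matching.
rng = np.random.default_rng(0)
pts1 = (rng.random((30, 2)) * [640.0, 480.0]).astype(np.float64)
pts2 = pts1 + [15.0, 0.0]  # pure horizontal shift as a toy stereo geometry

# Estimate the Fundamental matrix from the correspondences.
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# Derive two homographies that map matching epipolar lines to the same
# horizontal scanline, then warp each image with its homography.
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (640, 480))
# rect1 = cv2.warpPerspective(img1, H1, (640, 480))
# rect2 = cv2.warpPerspective(img2, H2, (640, 480))
```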
Pattern Recognition Letters | 2007
John Mallon; Paul F. Whelan
This paper provides a comparative study on the use of planar patterns in the generation of control points for camera calibration, an important but often neglected aspect of the process. Two popular patterns, checkerboards and circular dots, are each examined with two detection strategies for invariance to potential bias from projective transformations and nonlinear distortions. It is theoretically and experimentally shown that circular patterns can potentially be affected by both biasing sources, and guidelines are given to control such bias. In contrast, appropriate checkerboard detection is shown to be bias free. The findings have important implications for camera calibration, indicating that well-accepted methods may give poorer results than necessary if applied naively.
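The two pattern types contrasted in the paper map directly onto standard OpenCV detectors; a brief sketch, with a hypothetical input image:

```python
import cv2

img = cv2.imread('calib_view.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Checkerboard: saddle points are preserved under projective transforms and
# (locally) under lens distortion, so refined corners are essentially bias free.
found, corners = cv2.findChessboardCorners(img, (9, 6))
if found:
    corners = cv2.cornerSubPix(
        img, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))

# Circle grid: the detector returns the centroid of each *projected* ellipse,
# which is not the projection of the circle centre under perspective or
# distortion -- the bias sources the paper analyses.
found_cg, centres = cv2.findCirclesGrid(img, (9, 6))
```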
International Conference on Pattern Recognition | 2004
John Mallon; Paul F. Whelan
Radial image distortion is a frequently observed defect when using wide-angle, short focal length lenses. In this paper a new method for its calibration and removal is presented. An inverse distortion model is derived that is accurate to a sub-pixel level over a broad range of distortion levels. An iterative technique for estimating the model's parameters from a single view is also detailed. Results on simulated and real images clearly indicate significantly improved performance compared to existing methods.
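This is not the paper's inverse model, but the idea of inverting a forward radial model can be sketched with a standard two-parameter polynomial model and fixed-point iteration; the coefficients below are illustrative only:

```python
import numpy as np

def distort(r_u, k1, k2):
    """Forward radial model: distorted radius from undistorted radius."""
    return r_u * (1.0 + k1 * r_u**2 + k2 * r_u**4)

def undistort(r_d, k1, k2, iters=10):
    """Invert the radial model by fixed-point iteration.

    Solves r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4) for r_u; converges
    quickly for moderate distortion levels.
    """
    r_u = r_d  # initial guess: no distortion
    for _ in range(iters):
        r_u = r_d / (1.0 + k1 * r_u**2 + k2 * r_u**4)
    return r_u

r_d = distort(0.8, k1=-0.2, k2=0.05)
print(undistort(r_d, k1=-0.2, k2=0.05))  # recovers ~0.8
```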
Pattern Recognition Letters | 2007
John Mallon; Paul F. Whelan
This paper addresses the problem of compensating for lateral chromatic aberration in digital images through colour plane realignment. Two main contributions are made: the derivation of a model for lateral chromatic aberration in images, and the subsequent calibration of this model from a single view of a chess pattern. These advances lead to a practical and accurate alternative for the compensation of lateral chromatic aberrations. Experimental results validate the proposed models and calibration algorithm. The effects of colour channel correlations resulting from the camera's colour filter array interpolation are examined and found to have negligible magnitude relative to the chromatic aberration. Results with real data show how the removal of lateral chromatic aberration significantly improves the colour quality of the image.
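As a simplified stand-in for the paper's model, lateral chromatic aberration can be approximated to first order as a per-channel radial magnification, so compensation amounts to resampling the red and blue planes against the green reference. The scale factors and file name below are illustrative; in the paper they would come from the single-view chess-pattern calibration.

```python
import cv2
import numpy as np

def realign_plane(plane, scale, centre):
    """Resample one colour plane with a radial magnification about centre."""
    h, w = plane.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    cx, cy = centre
    # remap samples the source at (map_x, map_y) for each output pixel.
    map_x = cx + (xs - cx) / scale
    map_y = cy + (ys - cy) / scale
    return cv2.remap(plane, map_x, map_y, cv2.INTER_LINEAR)

img = cv2.imread('photo.png')                    # hypothetical input image
b, g, r = cv2.split(img)
centre = (img.shape[1] / 2.0, img.shape[0] / 2.0)
# Illustrative scales only; calibration would fit them per channel.
corrected = cv2.merge([realign_plane(b, 0.9985, centre),
                       g,
                       realign_plane(r, 1.0015, centre)])
```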
Journal of Electronic Imaging | 2005
Ovidiu Ghita; Paul F. Whelan; John Mallon
Active depth from defocus (DFD) eliminates the main limitation of passive DFD, namely its inability to recover depth for scenes defined by weakly textured (or textureless) objects. This is achieved by projecting a dense illumination pattern onto the scene; depth is then recovered by measuring the local blurring of the projected pattern. Since the illumination pattern forces a strong dominant texture on imaged surfaces, the level of blurring is determined by applying a local operator (tuned to the frequency derived from the illumination pattern), as opposed to window-based passive DFD, where a large range of band-pass operators is required. The choice of the local operator is a key issue in achieving precise and dense depth estimation. Consequently, in this paper we introduce a new focus operator and propose refinements to compensate for the problems associated with a suboptimal local operator and a nonoptimized illumination pattern. The developed range sensor has been tested on real images, and the results demonstrate that its performance compares well with that achieved by other implementations where precise and computationally expensive optimization techniques are employed.
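A sketch of a frequency-tuned local operator using a Gabor filter, a common choice for this kind of focus measure (the paper introduces its own operator); the pattern period and file name are assumptions:

```python
import cv2
import numpy as np

# Local operator tuned to the spatial frequency of the projected pattern.
# The pattern period (in pixels) is an assumed, illustrative value.
period = 8.0
gabor = cv2.getGaborKernel(ksize=(21, 21), sigma=period / 2.0,
                           theta=0.0, lambd=period, gamma=1.0, psi=0.0)

img = cv2.imread('scene_with_pattern.png', cv2.IMREAD_GRAYSCALE)  # hypothetical
img = img.astype(np.float32)

# Response magnitude of the tuned operator: high where the pattern is in
# focus, attenuated where defocus blur suppresses that frequency band.
response = np.abs(cv2.filter2D(img, cv2.CV_32F, gabor))
focus = cv2.boxFilter(response, cv2.CV_32F, (15, 15))  # local focus measure
# Depth then follows from the blur-versus-distance model of the optics.
```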
International Conference on Computer Vision | 2007
Aubrey K. Dunne; John Mallon; Paul F. Whelan
Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed with the goal of generality and it is therefore sub-optimal for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes novel improvements to the standard generic calibration method for central cameras that reduce its complexity, and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection. Input data for the algorithm is acquired using active grids, the performance of which is characterised. A new linear estimation stage to the generic algorithm is proposed incorporating classical pinhole calibration techniques, and it is shown to be significantly more accurate than the linear estimation stage of the standard method. A linear method for pose estimation is also proposed and evaluated against the existing polynomial method. Distortion correction and motion reconstruction experiments are conducted with real data for a hyperboloidal catadioptric sensor for both the standard and proposed methods. Results show the accuracy and robustness of the proposed method to be superior to those of the standard method.
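The underlying camera model can be sketched as a per-pixel ray lookup table; for the central case all rays share one origin. This is a minimal illustration of the non-parametric idea, not the calibration procedure itself, and all values are illustrative:

```python
import numpy as np

class CentralRayTable:
    """Minimal non-parametric central camera model: one 3D ray per pixel,
    all rays sharing a single optical centre."""

    def __init__(self, rays, centre):
        # Normalise each ray direction; rays is an (H, W, 3) array.
        self.rays = rays / np.linalg.norm(rays, axis=-1, keepdims=True)
        self.centre = np.asarray(centre, dtype=np.float64)

    def backproject(self, u, v):
        """Return (origin, direction) of the ray through pixel (u, v)."""
        return self.centre, self.rays[v, u]

# Illustrative table for a tiny 4x4 sensor: pinhole-like rays, unit focal.
us, vs = np.meshgrid(np.arange(4) - 1.5, np.arange(4) - 1.5)
rays = np.dstack([us, vs, np.ones_like(us)])
cam = CentralRayTable(rays, centre=(0.0, 0.0, 0.0))
origin, direction = cam.backproject(1, 2)
```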
Computer Vision and Image Understanding | 2010
Aubrey K. Dunne; John Mallon; Paul F. Whelan
Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed such that both central and non-central cameras can be calibrated within the same framework. Consequently, existing parametric calibration techniques cannot be applied for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes improvements to the standard generic calibration method for central cameras that reduce its complexity, and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection in order to enable the application of established pinhole calibration techniques. Input data for the algorithm is acquired using active grids, the performance of which is characterised. A novel linear estimation stage is proposed that enables a well established pinhole calibration technique to be used to estimate the camera centre and initial grid poses. The proposed solution is shown to be more accurate than the linear estimation stage of the standard method. A linear alternative to the existing polynomial method for estimating the pose of additional grids used in the calibration is demonstrated and evaluated. Distortion correction experiments are conducted with real data for both an omnidirectional camera and a fisheye camera using the standard and proposed methods. Motion reconstruction experiments are also undertaken for the omnidirectional camera. Results show the accuracy and robustness of the proposed method to be improved over those of the standard method.
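A hedged sketch of why the central constraint helps: once calibrated rays are available, the pose of an additional grid can be recovered with ordinary pinhole machinery by treating each ray as a normalised image point. This uses cv2.solvePnP rather than the paper's specific linear formulation, and assumes rays with positive z:

```python
import cv2
import numpy as np

def grid_pose_from_rays(grid_pts, rays):
    """Estimate an additional grid's pose from calibrated central rays.

    Illustrative only: because every ray passes through the single centre,
    each ray can be treated as a normalised image point (x/z, y/z) and the
    pose recovered with standard pinhole machinery. Assumes rays with z > 0.
    """
    rays = np.asarray(rays, dtype=np.float64)
    norm_pts = rays[:, :2] / rays[:, 2:3]          # project rays to z = 1
    K = np.eye(3)                                  # identity intrinsics
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(grid_pts, dtype=np.float64), norm_pts, K, None)
    return rvec, tvec
```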
Machine Vision and Applications | 2007
Ovidiu Ghita; Paul F. Whelan; David Vernon; John Mallon
In this paper we present a novel method for estimating the pose of 3D objects with well-defined planar surfaces. Specifically, we investigate the feasibility of estimating the object pose using an approach that combines the standard eigenspace analysis technique with range data analysis. Eigenspace analysis is employed to constrain one object rotation and reject surfaces that are not compatible with a model object; the remaining two object rotations are estimated by computing the normal to the surface from the range data. The proposed pose estimation scheme has been successfully applied to scenes defined by polyhedral objects, and experimental results are reported.
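Both ingredients are standard and easy to sketch: a least-squares plane fit gives the surface normal from a range-data patch, and a PCA (eigenspace) projection matches an observed patch to stored model views. Function and variable names here are illustrative:

```python
import numpy as np

def surface_normal(points):
    """Normal of a roughly planar patch of range data (N x 3 points):
    the singular vector of the centred patch with the smallest singular
    value, i.e. a least-squares plane fit."""
    pts = np.asarray(points, dtype=np.float64)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[-1] / np.linalg.norm(vt[-1])

def nearest_view(patch_vec, mean, basis, model_coeffs):
    """Project an observed intensity patch into a learned eigenspace and
    return the index of the closest stored model view; this constrains one
    rotation and rejects surfaces incompatible with the model object."""
    coeff = basis.T @ (patch_vec - mean)
    return int(np.argmin(np.linalg.norm(model_coeffs - coeff, axis=1)))
```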
Opto-Ireland 2002: Optical Metrology, Imaging, and Machine Vision | 2003
John Mallon; Ovidiu Ghita; Paul F. Whelan
Position determination and verification of a mobile robot is a central theme in robotics research. Several methods have been proposed for this problem, including the use of visual feedback. These vision systems typically aim to extract known or tracked landmarks from the environment to localise the robot; detecting and matching these landmarks is often the most computationally expensive and error-prone component of the system. This paper presents a real-time system for robustly matching landmarks in complex scenes, with subsequent tracking. The vision system comprises a trinocular head from which corner points are extracted. These are then matched with respect to robustness constraints in addition to the trinocular constraints. Finally, the resulting robustly extracted corners are tracked from frame to frame to determine the robot's rotational deviations.
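A sketch of the corner extraction and frame-to-frame tracking building blocks using OpenCV (the paper's trinocular matching constraints are not reproduced here); file names and parameters are illustrative:

```python
import cv2

prev = cv2.imread('frame0.png', cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)

# Extract corner points from the previous frame.
corners = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)

# Track corners into the next frame; `status` flags the reliable matches.
tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, corners, None)
good_prev = corners[status.ravel() == 1]
good_curr = tracked[status.ravel() == 1]
# Rotational deviation can then be estimated from the dominant horizontal
# displacement of the tracked corners.
```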
International Symposium on Visual Computing | 2008
Seán Begley; John Mallon; Paul F. Whelan
This paper proposes a novel approach to pose removal from face images based on the inherent symmetry of faces. For face recognition and expression classification systems to operate optimally, subjects must look directly into the camera; removing pose from face images after capture lifts this restriction. To obtain a pose-removed face image, the frequency components at each position of the face image, obtained through a wavelet transformation, are examined, and a cost function based on the symmetry of the wavelet-transformed face image is minimised to achieve pose removal. Experimental results demonstrate that the proposed algorithm improves upon existing techniques in the literature.
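A much-simplified sketch of the idea: define an asymmetry cost on the wavelet coefficients and minimise it over a warp parameter. Here only a single horizontal shift is optimised, with a Haar transform via PyWavelets, rather than the paper's full pose-removal warp; all names and values are illustrative:

```python
import numpy as np
import pywt
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize_scalar

def symmetry_cost(dx, face):
    """Asymmetry of the wavelet coefficients after a horizontal shift dx.

    A frontal face should have near mirror-symmetric wavelet responses
    about the vertical midline; magnitudes are compared so that the sign
    flip of horizontal detail coefficients under mirroring is ignored.
    """
    shifted = nd_shift(face, (0.0, dx), order=1, mode='nearest')
    cA, (cH, cV, cD) = pywt.dwt2(shifted, 'haar')
    return sum(np.abs(np.abs(c) - np.abs(c[:, ::-1])).sum()
               for c in (cA, cH, cV, cD))

face = np.random.rand(64, 64)                 # stand-in face image
res = minimize_scalar(symmetry_cost, args=(face,),
                      bounds=(-10.0, 10.0), method='bounded')
best_shift = res.x                            # shift minimising asymmetry
```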