Jacky C. K. Chow
University of Calgary
Publication
Featured research published by Jacky C. K. Chow.
IEEE Access | 2013
Jacky C. K. Chow; Derek D. Lichti
The Kinect system is arguably the most popular 3-D camera technology currently on the market. Its application domain is vast, and it has been deployed in scenarios where accurate geometric measurements are needed. With regard to the PrimeSense technology, only a limited amount of work has been devoted to calibrating the Kinect, especially its depth data. The Kinect is, however, inevitably prone to distortions, as independently confirmed by numerous users. An effective method for improving the quality of the Kinect system is modeling the sensor's systematic errors using bundle adjustment. In this paper, a method for modeling the intrinsic and extrinsic parameters of the infrared and colour cameras, and more importantly the distortions in the depth image, is presented. Through an integrated marker- and feature-based self-calibration, two Kinects were calibrated. A novel approach for modeling the depth systematic errors as a function of lens distortion and relative orientation parameters is shown to be effective. The results show improvements in geometric accuracy of up to 53% compared with uncalibrated point clouds captured using the popular software RGBDemo. Systematic depth discontinuities were also reduced, and in the check-plane analysis the noise of the Kinect point cloud was reduced by 17%.
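As a rough illustration of the kind of model described above (the paper's exact parameterization is not reproduced here), the sketch below combines a Brown-style radial lens distortion correction on normalized image coordinates with a hypothetical depth correction consisting of an offset/scale term and a radially varying term. All parameter names (`d0`, `d1`, `k1d`, `k2d`) are illustrative assumptions, not the paper's notation.

```python
def undistort_normalized(x, y, k1, k2):
    """Divisive approximation of the inverse Brown radial distortion model
    on normalized image coordinates (k1, k2: radial coefficients)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x / factor, y / factor

def correct_depth(d, x, y, d0, d1, k1d, k2d):
    """Hypothetical depth correction: a rangefinder offset/scale term plus a
    radially varying term; parameter names are illustrative, not the paper's."""
    r2 = x * x + y * y
    return d - (d0 + d1 * d) - (k1d * r2 + k2d * r2 * r2) * d
```

In a bundle adjustment, the coefficients of such a model would be estimated jointly with the camera poses rather than set by hand.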
Journal of Surveying Engineering-asce | 2012
Derek D. Lichti; Sonam Jamtsho; Sherif Ibrahim El-Halawany; Hervé Lahamy; Jacky C. K. Chow; Ting On Chan; Mamdouh El-Badry
Abstract. Range cameras offer great potential for the measurement of structural deformations because of their ability to directly measure video sequences of three-dimensional coordinates of entire surfaces, their compactness, and their relatively low cost compared with other active imaging technologies such as terrestrial laser scanners. Identified limitations of range cameras for high-precision metrology applications such as deformation measurement include the high (centimeter level) noise level and scene-dependent errors. This paper proposes models and methodologies to overcome these limitations and reports on the use of a SwissRanger SR4000 range camera for the measurement of deflections in concrete beams subjected to flexural load-testing. Results from three separate tests show that submillimeter precision and accuracy—assessed by comparison with estimates derived from terrestrial laser scanner data—can be achieved. The high-accuracy range camera results were realized by eliminating the systematic, scen...
Robotics | 2014
Jacky C. K. Chow; Derek D. Lichti; Jeroen D. Hol; Giovanni Bellusci; Henk Luinge
Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with lots of occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
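The point-to-plane ICP at the heart of such a pipeline can be illustrated with a minimal linearized alignment step. This sketch solves the standard small-angle point-to-plane least-squares problem in closed form; it is not the paper's implicit IEKF formulation, which folds the same geometric residual into a filter update.

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP step.

    Solves for a small rotation vector w and translation t minimizing
    sum_i ((p_i + w x p_i + t - q_i) . n_i)^2, where p_i are source points,
    q_i are matched destination points, and n_i are destination normals.
    """
    src, dst, normals = (np.asarray(a, float) for a in (src, dst, normals))
    A = np.hstack([np.cross(src, normals), normals])   # N x 6 Jacobian
    b = np.einsum('ij,ij->i', dst - src, normals)      # signed plane distances
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                                # (w, t)
```

In practice this step is iterated with re-matching until the estimated motion increment becomes negligible.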
Sensors | 2013
Jacky C. K. Chow; Derek D. Lichti; Craig L. Glennie; Preston J. Hartzell
Terrestrial laser scanners are sophisticated instruments that operate much like high-speed total stations. It has previously been shown that unmodelled systematic errors can exist in modern terrestrial laser scanners that deteriorate their geometric measurement precision and accuracy. Typically, signalised targets are used in point-based self-calibrations to identify and model the systematic errors. Although this method has proven its effectiveness, a large quantity of signalised targets is required, making the method labour-intensive and limiting its practicality. In recent years, feature-based self-calibration of aerial, mobile terrestrial, and static terrestrial laser scanning systems has been demonstrated. In this paper, the commonalities and differences between point-based and plane-based self-calibration (in terms of model identification and parameter correlation) are explored. The results of this research indicate that much of the knowledge from point-based self-calibration can be directly transferred to plane-based calibration and that the two calibration approaches are nearly equivalent. New network configurations, such as the inclusion of tilted scans, were also studied and prove to be an effective means for strengthening the self-calibration solution and for improving the recoverability of the horizontal collimation axis error for hybrid scanners, which has always posed a challenge in the past.
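A minimal sketch of the plane-based residual that drives such a self-calibration is shown below. In the full adjustment the scan poses, plane parameters, and instrument calibration parameters are all estimated jointly; here the pose and plane are assumed known so that only the misclosure computation is illustrated.

```python
import numpy as np

def plane_residuals(points_scanner, R, t, normal, d):
    """Signed misclosure of scanner points against the plane n . x = d,
    after transforming them into the world frame with rotation R and
    translation t. Zero residuals mean the points lie exactly on the plane."""
    pts_world = np.asarray(points_scanner, float) @ np.asarray(R, float).T + t
    return pts_world @ np.asarray(normal, float) - d
```

The point-based counterpart would instead compare transformed target coordinates against known target coordinates, which is where the two approaches differ in observability and parameter correlation.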
Sensors | 2014
Xiaojuan Qi; Derek D. Lichti; Mamdouh El-Badry; Jacky C. K. Chow; Kathleen Ang
The Microsoft Kinect is arguably the most popular RGB-D camera currently on the market, partially due to its low cost. It offers many advantages for the measurement of dynamic phenomena since it can directly measure three-dimensional coordinates of objects at video frame rate using a single sensor. This paper presents the results of an investigation into the development of a Microsoft Kinect-based system for measuring the deflection of reinforced concrete beams subjected to cyclic loads. New segmentation methods for object extraction from the Kinect's depth imagery and vertical displacement reconstruction algorithms have been developed and implemented to reconstruct the time-dependent displacement of concrete beams tested in laboratory conditions. The results demonstrate that the amplitude and frequency of the vertical displacements can be reconstructed with submillimetre and milliHz-level precision and accuracy, respectively.
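The paper's reconstruction algorithms are not reproduced here, but a simple FFT peak-pick illustrates how the amplitude and frequency of a cyclic deflection could be recovered from a displacement time series once the beam has been segmented from the depth imagery:

```python
import numpy as np

def dominant_amplitude_frequency(displacement, fs):
    """Pick the dominant sinusoidal amplitude (same units as the input)
    and frequency (Hz) from a displacement time series sampled at fs Hz,
    using the peak of the FFT magnitude spectrum."""
    x = np.asarray(displacement, float)
    x = x - x.mean()                       # remove the static offset
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    k = int(np.argmax(np.abs(spec)))
    amplitude = 2.0 * np.abs(spec[k]) / len(x)
    return amplitude, freqs[k]
```

For off-bin frequencies, windowing or sinusoid fitting would sharpen the estimates; the sketch assumes the loading frequency falls close to an FFT bin.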
Journal of Surveying Engineering-asce | 2014
Derek D. Lichti; Jacky C. K. Chow; Edson Aparecido Mitishita; Jorge Antonio Silva Centeno; Felipe Martins Marques da Silva; Roberto Arocha Barrios; Ilich Contreras
Abstract. The geometric calibration of time-of-flight range cameras is a necessary quality assurance measure performed to estimate the interior orientation parameters. Self-calibration from a network of range imagery of an array of signalized targets arranged in one or two planes can be used for this purpose. The latter configuration requires the addition of a parametric model for internal light scattering biases in the range observations to the background plane due to the presence of the foreground plane. In a previous study of MESA Imaging SwissRanger range cameras, such a model was developed and shown to be effective. A new parametric model is proposed here because the scattering error behavior is camera model dependent. The new model was tested on two pmdtechnologies range cameras, the CamCube 3.0 and CamBoard nano, and its effectiveness was demonstrated both graphically and statistically. The improvement gained in the root-mean square of the self-calibration range residuals of 22 and 32%, respectively,...
Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection | 2013
Jacky C. K. Chow; Derek D. Lichti
Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.
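An amplitude-dependent range error of the kind identified above could be calibrated, in its simplest form, by fitting a smooth function of amplitude to the observed range errors and subtracting it from raw ranges. The polynomial form below is one plausible choice; the paper's actual error model may differ.

```python
import numpy as np

def fit_amplitude_range_error(amplitude, range_error, degree=2):
    """Least-squares fit of a polynomial e(A) to observed range errors as a
    function of return-signal amplitude (an illustrative model form)."""
    return np.poly1d(np.polyfit(amplitude, range_error, degree))

def correct_range(observed_range, amplitude, model):
    """Subtract the modelled amplitude-dependent error from raw ranges."""
    return observed_range - model(amplitude)
```

In the paper's one-step adjustment, such coefficients would instead be estimated simultaneously with all other calibration parameters rather than fitted separately.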
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences | 2018
Jacky C. K. Chow; Ivan Detchev; Kathleen Ang; Kristian Morin; Karthik Mahadevan; Nicholas Louie
Abstract. Visual perception is regularly used by humans and robots for navigation. By either implicitly or explicitly mapping the environment, ego-motion can be determined and a path of actions can be planned. The process of mapping and navigation are delicately intertwined; therefore, improving one can often lead to an improvement of the other. Both processes are sensitive to the interior orientation parameters of the camera system and mathematically modelling these systematic errors can often improve the precision and accuracy of the overall solution. This paper presents an automatic camera calibration method suitable for any lens, without having prior knowledge about the sensor. Statistical inference is performed to map the environment and localize the camera simultaneously. K-nearest neighbour regression is used to model the geometric distortions of the images. A normal-angle lens Nikon camera and wide-angle lens GoPro camera were calibrated using the proposed method, as well as the conventional bundle adjustment with self-calibration method (for comparison). Results showed that the mapping error was reduced from an average of 14.9 mm to 1.2 mm (i.e. a 92 % improvement) and 66.6 mm to 1.5 mm (i.e. a 98 % improvement) using the proposed method for the Nikon and GoPro cameras, respectively. In contrast, the conventional approach achieved an average 3D error of 0.9 mm (i.e. 94 % improvement) and 6 mm (i.e. 91 % improvement) for the Nikon and GoPro cameras, respectively. Thus, the proposed method performs more consistently, irrespective of the lens/sensor used: it yields results that are comparable to the conventional approach for normal-angle lens cameras, and it has the additional benefit of improving calibration results for wide-angle lens cameras.
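The K-nearest-neighbour regression idea can be sketched as a toy non-parametric distortion model: the distortion at any pixel is predicted as the mean residual of its k nearest calibration observations. The class name, choice of k, and Euclidean distance metric here are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

class KNNDistortionModel:
    """Predict the image-space distortion (dx, dy) at a pixel as the mean
    residual of its k nearest calibration points (toy sketch)."""

    def __init__(self, image_points, residuals, k=3):
        self.pts = np.asarray(image_points, float)   # M x 2 pixel locations
        self.res = np.asarray(residuals, float)      # M x 2 observed residuals
        self.k = k

    def predict(self, queries):
        queries = np.atleast_2d(np.asarray(queries, float))
        out = np.empty((len(queries), self.res.shape[1]))
        for i, q in enumerate(queries):
            nearest = np.argsort(np.linalg.norm(self.pts - q, axis=1))[:self.k]
            out[i] = self.res[nearest].mean(axis=0)
        return out
```

Because no parametric lens model is imposed, such a regressor can absorb the large, irregular distortions of wide-angle lenses that a low-order Brown model struggles with, which is consistent with the GoPro result reported above.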
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences | 2017
Jacky C. K. Chow
Abstract. Sensor fusion of a MEMS IMU with a magnetometer is a popular system design, because such 9-DoF (degrees of freedom) systems are capable of achieving drift-free 3D orientation tracking. However, these systems are often vulnerable to ambient magnetic distortions and lack useful position information; in the absence of external position aiding (e.g. satellite/ultra-wideband positioning systems) the dead-reckoned position accuracy from a 9-DoF MEMS IMU deteriorates rapidly due to unmodelled errors. Positioning information is valuable in many satellite-denied geomatics applications (e.g. indoor navigation, location-based services, etc.). This paper proposes an improved 9-DoF IMU indoor pose tracking method using batch optimization. By adopting a robust in-situ user self-calibration approach to model the systematic errors of the accelerometer, gyroscope, and magnetometer simultaneously in a tightly-coupled post-processed least-squares framework, the accuracy of the estimated trajectory from a 9-DoF MEMS IMU can be improved. Through a combination of relative magnetic measurement updates and a robust weight function, the method is able to tolerate a high level of magnetic distortions. The proposed auto-calibration method was tested in-use under various heterogeneous magnetic field conditions to mimic a person walking with the sensor in their pocket, a person checking their phone, and a person walking with a smartwatch. In these experiments, the presented algorithm improved the in-situ dead-reckoning orientation accuracy by 79.8–89.5 % and the dead-reckoned positioning accuracy by 72.9–92.8 %, thus reducing the relative positioning error from metre-level to decimetre-level after ten seconds of integration, without making assumptions about the user’s dynamics.
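A robust weight function of the kind mentioned can be sketched as follows. The specific function used in the paper is not stated in this abstract; Huber-style weights are shown here as a common, representative choice for down-weighting magnetically disturbed epochs in a least-squares adjustment.

```python
import numpy as np

def huber_weights(residuals, delta):
    """Huber-style robust weights: unit weight inside the delta band,
    down-weighted as delta/|r| outside it, so strongly disturbed
    magnetometer measurements lose influence on the solution."""
    r = np.abs(np.asarray(residuals, float))
    w = np.ones_like(r)
    big = r > delta
    w[big] = delta / r[big]
    return w
```

In an iteratively reweighted least-squares scheme, these weights would be recomputed from the residuals at each iteration of the batch optimization.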
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences | 2017
Jacky C. K. Chow
Abstract. In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems) Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. 
The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however, no assumptions are required for the general motion of the sensor (e.g. static periods).
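The Gaussian Process ingredient of such a GP-SLAM formulation can be illustrated with a minimal posterior-mean prediction of a scalar field (e.g. magnetic field magnitude) at query positions. The full method jointly estimates the trajectory and the field map; the squared-exponential kernel and its hyperparameters below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of positions."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior_mean(X_train, y_train, X_query, noise=1e-6, **kernel_kw):
    """GP posterior mean of a scalar field at query positions, given noisy
    observations y_train made at positions X_train along the trajectory."""
    K = rbf_kernel(X_train, X_train, **kernel_kw) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train, **kernel_kw)
    return Ks @ np.linalg.solve(K, y_train)
```

Loop closure then amounts to recognizing that freshly observed field values match the GP's prediction at a previously mapped position, which supplies the position update that keeps the inertial dead reckoning drift-free.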