Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yuk Hin Chan is active.

Publication


Featured research published by Yuk Hin Chan.


Image and Vision Computing New Zealand | 2013

Multi-Kinect scene reconstruction: Calibration and depth inconsistencies

Roy Sirui Yang; Yuk Hin Chan; Rui Gong; Minh Nguyen; Alfonso Gastelum Strozzi; Patrice Delmas; Georgy L. Gimel'farb; Rachel Ababou

We investigated procedures for calibrating multiple Kinect sensors simultaneously. Using standard calibration algorithms, our multi-Kinect system is accurately registered. We proposed and implemented a multi-Kinect system that seamlessly renders a scene from a wide range of viewing angles and is capable of real-time operation. We also investigated the problem of inconsistent depth measurements between different Kinect units and arrived at the same conclusions as the state of the art regarding the characteristics of depth measurement errors in the Kinect sensor. To compensate for these errors during live acquisition and display with multi-Kinect systems, we introduced an offline ICP calibration of the multiple Kinect data. Our experimental results provide a robust way to properly align data from multiple Kinects.
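
The offline ICP calibration step is not detailed in the abstract; as a rough illustration only (a hypothetical sketch, not the authors' implementation), the code below estimates a rigid transform that aligns one Kinect's point cloud to another using a basic ICP loop built on NumPy and SciPy.

```python
# Minimal ICP sketch for aligning two Kinect point clouds (hypothetical example).
# Inputs are roughly overlapping N x 3 arrays of 3D points.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iters=30):
    """Iteratively match nearest neighbours and re-estimate the rigid transform."""
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    moved = source.copy()
    for _ in range(iters):
        _, idx = tree.query(moved)    # closest target point per source point
        R, t = best_rigid_transform(moved, target[idx])
        moved = moved @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```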


Image and Vision Computing New Zealand | 2013

Symmetric dynamic programming stereo using block matching guidance

Minh Nguyen; Yuk Hin Chan; Patrice Delmas; Georgy L. Gimel'farb

In this paper, three stereo matching algorithms are investigated: Block Matching Stereo (BMS), representing local area matching; Symmetric Dynamic Programming Stereo (SDPS), representing semi-global matching; and Graph Cuts Stereo (GCS), representing global matching. Both the local and semi-global methods are relatively fast and well suited to parallel implementation. The global GCS, on the other hand, achieves higher stereo matching accuracy but is computationally expensive. We therefore propose a technique that guides SDPS with a pre-computed BMS result, which both restricts and steers the dynamic programming search for the optimal profile relative to the BMS signals. The technique is not only parallelisable for faster processing but also self-repeatable, allowing the 3D reconstruction to be refined. Our experimental results show that the proposed technique is superior to the global GCS in both speed and accuracy. Besides producing relatively accurate results on the well-known Middlebury datasets, the method is also shown to be fast, robust, and reliable in reconstructing high-quality 3D scenes from real-life stereo images.
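
To illustrate the guidance idea, the sketch below (a hypothetical stand-in, not the paper's SDPS) runs a per-scanline dynamic programming stereo whose disparity search at each pixel is limited to a small band around a block-matching result supplied as guidance.

```python
# Scanline DP stereo restricted to a band around block-matching guidance (sketch).
# left, right: rectified grayscale images; bm_disp: integer guidance disparities.
import numpy as np

INVALID = 1e6   # large cost for disparities outside the guidance band

def scanline_dp_disparity(left, right, bm_disp, max_disp=64, band=3, smooth=0.1):
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    d_range = np.arange(max_disp + 1)
    penalty = smooth * np.abs(d_range[:, None] - d_range[None, :])
    for y in range(h):
        # pixelwise absolute-difference cost, huge outside the guidance band
        cost = np.full((w, max_disp + 1), INVALID)
        for x in range(w):
            lo = max(0, int(bm_disp[y, x]) - band)
            hi = min(max_disp, int(bm_disp[y, x]) + band)
            for d in range(lo, hi + 1):
                if x - d >= 0:
                    cost[x, d] = abs(int(left[y, x]) - int(right[y, x - d]))
        # forward pass with a linear smoothness penalty, then backtrack
        D = cost.copy()
        back = np.zeros((w, max_disp + 1), dtype=np.int32)
        for x in range(1, w):
            trans = D[x - 1][None, :] + penalty
            back[x] = trans.argmin(axis=1)
            D[x] = cost[x] + trans.min(axis=1)
        d = int(D[w - 1].argmin())
        for x in range(w - 1, -1, -1):
            disp[y, x] = d
            d = int(back[x, d])
    return disp
```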


International Symposium on Computational Models for Life Sciences | 2013

Contrast/offset-invariant generic low-order MGRF models of uniform textures

Ni Liu; Georgy L. Gimel'farb; Patrice Delmas; Yuk Hin Chan

Statistical properties of many textured objects in digital biomedical images are often nearly translation-invariant, except for sizeable spatially variant perceptive (contrast and offset) deviations due to different imaging conditions and/or contrast agents. To make the widely used translation-invariant Markov-Gibbs random field (MGRF) models of uniform textures more suitable for biomedical objects, we introduce, in the context of semi-supervised texture recognition, a new class of generic low-order MGRFs. These models account for ordinal relations between signals, rather than signal magnitudes, and are therefore also invariant to arbitrary perceptive signal deviations. Since the number of possible ordinal relations is considerably smaller than the number of signal co-occurrences, our earlier fast framework for learning generic 2nd-order MGRFs with multiple translation-invariant pixel/voxel interactions is easily extended to 4th- or even higher-order ordinal models. To explore the class introduced, the lear...
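
As a rough, hypothetical illustration of the ordinal idea (not the authors' learning framework): instead of counting co-occurrences of grey levels, one can count the ordinal relation (less / equal / greater) between a pixel and its translated neighbour for each interaction offset. Such counts are unchanged by any monotone contrast/offset transform of the image.

```python
# Per-offset histograms of ordinal relations between a pixel and its neighbour.
import numpy as np

def ordinal_histograms(image, offsets):
    """Return, for each offset (dy, dx), the frequencies of the relations <, =, >."""
    hists = {}
    h, w = image.shape
    for dy, dx in offsets:
        a = image[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        b = image[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
        rel = np.sign(a.astype(np.int64) - b.astype(np.int64))
        counts = np.array([(rel == -1).sum(), (rel == 0).sum(), (rel == 1).sum()])
        hists[(dy, dx)] = counts / counts.sum()
    return hists

# example: horizontal, vertical and diagonal interactions at distance 1
# stats = ordinal_histograms(img, [(0, 1), (1, 0), (1, 1)])
```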


New Zealand Chapter's International Conference on Computer-Human Interaction | 2012

Bimanual natural user interaction for 3D modelling application using stereo computer vision

Roy Sirui Yang; Anthony Lau; Yuk Hin Chan; Alfonso Gastelum Strozzi; Christof Lutteroth; Patrice Delmas

This paper presents a system that allows the user to perform 3D modelling and sculpting using hand postures and 3D hand movements. The system applies the concept of a Natural User Interface using computer vision techniques, enabling the user to operate 3D modelling software. The system's bimanual control uses left-hand postures to select control-mode commands, while the right hand controls movement. To evaluate the real-world performance of motion- and hand-posture-based control in 3D modelling, a usability test with 10 people was conducted. Participants were asked to perform test tasks that involved moving an object in 3D space, and performed the tasks multiple times while being timed, both with the mouse and with the 3D hand tracking system. The results indicated that participants completed the tasks more quickly when using the hand tracking system than when using the mouse. However, approximately half of the participants reported finding the mouse easier to use than the hand tracking system. Overall, participants reported that they enjoyed using the system.
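
The bimanual split (left hand picks the mode, right hand drives the motion) can be pictured with a small, purely hypothetical sketch; the posture names and transformations below are invented for illustration and are not from the paper.

```python
# Hypothetical mapping from a recognised left-hand posture to a control mode,
# with the tracked right-hand delta applied to the selected object.
from dataclasses import dataclass

MODES = {"fist": "translate", "open_palm": "rotate", "pinch": "scale"}

@dataclass
class ObjectState:
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0

def apply_bimanual_input(state, left_posture, right_delta):
    mode = MODES.get(left_posture)
    if mode == "translate":
        state.position = tuple(p + d for p, d in zip(state.position, right_delta))
    elif mode == "rotate":
        state.rotation = tuple(r + d for r, d in zip(state.rotation, right_delta))
    elif mode == "scale":
        state.scale *= 1.0 + right_delta[1]   # vertical motion scales the object
    return state
```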


Image and Vision Computing New Zealand | 2012

The Ngongotaha river UDPS experiment: low-cost underwater dynamic stereo photogrammetry

Yuk Hin Chan; Minh Nguyen; Alfonso Gastelum; S. Yang; Rui Gong; Ni Liu; Patrice Delmas; Georgy L. Gimel'farb; Stephane Bertin; Heide Friedrich

We propose to integrate the newest developments in stereo matching theory, affordable parallel processing capabilities (e.g. GPUs on consumer PC gaming/graphics cards) and statistical surface analysis to implement and test an in-situ Underwater Dynamic Stereo Photogrammetry (UDSP) system for civil engineering applications. The proposed UDSP system aims to provide underwater Digital Elevation Models (DEMs), i.e. two-dimensional discrete matrices of underwater elevation data. Experiments on river-bed stereo photogrammetry in the Ngongotaha Stream near Rotorua, using consumer-grade stereo cameras including the GoPro and Fujifilm W3, are used for through-water and underwater calibration and for stereo measurements of 32 pebbles on the river bed. The pebbles are measured and identified. Initial results highlight the need for specialised equipment for through-water and underwater photogrammetry experiments to limit the blurring caused by the water-plastic-air interfaces. Despite the poor optical quality of the images obtained, we were able to correlate pebble sizes derived from the calibrated stereo depth maps with the actual measurements.
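
As an illustrative sketch (not taken from the paper), a disparity map from a calibrated, rectified stereo pair can be converted to depth with the standard relation Z = f·B/d and then coarsely gridded into a DEM-like elevation matrix.

```python
# Disparity map -> depth map -> coarse elevation grid (hypothetical helper code).
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=1e-3):
    """Depth in metres per pixel; invalid (near-zero) disparities become NaN."""
    d = disparity.astype(np.float64)
    depth = np.full_like(d, np.nan)
    valid = d > min_disp
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

def depth_to_dem(depth, cell=4):
    """Very coarse DEM: median depth over non-overlapping cell x cell blocks."""
    h, w = depth.shape
    h, w = h - h % cell, w - w % cell
    blocks = depth[:h, :w].reshape(h // cell, cell, w // cell, cell)
    return np.nanmedian(blocks, axis=(1, 3))
```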


Computer Analysis of Images and Patterns | 2009

Accurate 3D Modelling by Fusion of Potentially Reliable Active Range and Passive Stereo Data

Yuk Hin Chan; Patrice Delmas; Georgy L. Gimel'farb; Robert Valkenburg

Possibilities of more accurate digital modelling of 3D scenes by fusing 3D range data from an active hand-held laser scene scanner developed at IRL with passive stereo data from stereo pairs of images of the scene collected during the scanning process are discussed. The complementary properties of the two data sources allow a 3D model to be improved by checking the reliability of the active range data and using it to adaptively guide the passive stereo reconstruction. Experiments show that this avenue of data fusion offers good prospects for error detection and correction.
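
A minimal, hypothetical sketch of the reliability-checking idea (not the authors' code): where range-derived and stereo-derived depth are both available, large disagreements flag samples that should not be trusted during fusion.

```python
# Flag pixels where active range depth and passive stereo depth disagree strongly.
import numpy as np

def flag_unreliable(range_depth, stereo_depth, rel_tol=0.05):
    """Boolean mask where the two depth estimates differ by more than rel_tol."""
    both = np.isfinite(range_depth) & np.isfinite(stereo_depth)
    unreliable = np.zeros(range_depth.shape, dtype=bool)
    unreliable[both] = (np.abs(range_depth[both] - stereo_depth[both])
                        > rel_tol * range_depth[both])
    return unreliable
```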


Journal of Visual Communication and Image Representation | 2012

An interactive 3D video system for human facial reconstruction and expression modeling

Alexander Woodward; Patrice Delmas; Yuk Hin Chan; Alfonso Gastelum Strozzi; Georgy L. Gimel'farb; Jorge Flores

A 3D facial reconstruction and expression modeling system which creates 3D video sequences of test subjects and facilitates interactive generation of novel facial expressions is described. Dynamic 3D video sequences are generated using computational binocular stereo matching with active illumination and are used for interactive expression modeling. An individual's 3D video set is annotated with control points associated with face subregions. Dragging a control point updates texture and depth only in the associated subregion, so that the user can generate new composite expressions unseen in the original source video sequences. Such interactive manipulation of dynamic 3D face reconstructions requires as little preparation of the test subject as possible. Dense depth data combined with video-based texture results in realistic and convincing facial animations, a feature lacking in conventional marker-based motion capture systems.
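
A rough illustration of the subregion editing idea (hypothetical, not the authors' code): dragging a control point blends new depth and texture into only the masked face subregion associated with that point, leaving the rest of the frame intact.

```python
# Blend updated depth/texture into a masked face subregion only.
import numpy as np

def update_subregion(depth, texture, new_depth, new_texture, region_mask, alpha=1.0):
    """Replace (or blend, 0 < alpha < 1) depth and texture inside region_mask."""
    depth = depth.copy()
    texture = texture.copy()
    m = region_mask.astype(bool)
    depth[m] = (1 - alpha) * depth[m] + alpha * new_depth[m]
    texture[m] = ((1 - alpha) * texture[m] + alpha * new_texture[m]).astype(texture.dtype)
    return depth, texture
```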


Image and Vision Computing New Zealand | 2008

On fusion of active range data and passive stereo data for 3D scene modelling

Yuk Hin Chan; Patrice Delmas; Georgy L. Gimel'farb; R. Valkenburg

The paper discusses initial steps towards efficient digital 3D modelling of a natural scene by fusing 3D range data from an active hand-held laser scanner with complementary passive stereo data from stereo pairs of images of the scene. The latter are formed by rectifying successive video frames captured by a calibrated built-in video camera of the scanner. The range data constrain the search for stereo correspondences and thus improve the accuracy of the stereo data on smooth continuous surfaces with uniform or repetitive textures, whereas the stereo data allow possible errors in the range data to be detected at surface discontinuities and low-reflectance areas. Experiments suggest that the data fusion offers good prospects for more accurate scene modelling with automatic error detection and correction.
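
As a hypothetical sketch of range-constrained matching (not the paper's implementation): the range scan predicts a disparity per pixel, and SAD matching only searches a small band around that prediction instead of the full disparity range.

```python
# SAD stereo matching whose per-pixel search range comes from range-data predictions.
import numpy as np

def constrained_sad_disparity(left, right, predicted_disp, band=3, half=2):
    """Rectified grayscale inputs; search only predicted_disp +/- band per pixel."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            d0 = int(predicted_disp[y, x])
            best_cost, best_d = np.inf, d0
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            for d in range(max(0, d0 - band), d0 + band + 1):
                if x - d - half < 0:
                    continue
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(np.int64) - cand.astype(np.int64)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```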


XIII Mexican Symposium on Medical Physics | 2014

Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

Leticia López; Alfonso Gastelum; Yuk Hin Chan; Patrice Delmas; Lilia Escorcia; Jorge Márquez

Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of each subject's face. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining depth-map information from three points of view, with each depth map obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analysed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defined specific subject indices according to the lengths, areas, ratios, e...
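
The measurement step can be pictured with a small sketch (hypothetical, not the paper's MATLAB script): given 3D landmark coordinates picked on the facial surface, compute inter-landmark distances and a ratio-style index. The coordinate values below are invented for illustration.

```python
# Inter-landmark distances and a simple craniofacial index from 3D landmarks.
import numpy as np

def distance(landmarks, a, b):
    """Euclidean distance in the units of the reconstruction (e.g. mm)."""
    return float(np.linalg.norm(np.asarray(landmarks[a]) - np.asarray(landmarks[b])))

def facial_index(landmarks):
    """Face height (nasion-gnathion) over bizygomatic width (zygion-zygion), x100."""
    height = distance(landmarks, "nasion", "gnathion")
    width = distance(landmarks, "zygion_left", "zygion_right")
    return 100.0 * height / width

landmarks = {"nasion": (0.0, 95.2, 12.1), "gnathion": (0.5, -18.7, 8.3),
             "zygion_left": (-64.0, 40.1, -5.2), "zygion_right": (63.1, 41.0, -4.8)}
print(facial_index(landmarks))
```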


Image and Vision Computing New Zealand | 2012

Context-driven composite stereo reconstruction

Minh Nguyen; Rui Gong; Yuk Hin Chan; Patrice Delmas; Georgy L. Gimel'farb

Across the menagerie of possible observed 3D scenes, no algorithm today for reconstructing a scene from a stereo pair of images is uniformly better than all others in both accuracy and processing speed. Generally, an appropriate stereo reconstruction algorithm should be selected according to the type, or context, of the scene, and in many cases different parts of the same scene are reconstructed most accurately by different algorithms. To explore this problem qualitatively, we collected a database of more than 1,500 stereo pairs of natural and artificial indoor and outdoor 3D scenes, arranged into 25 types, such as animals, bars, city roads, city trees, classrooms, coasts, corridors, etc. The images were processed with two algorithms, the 2D graph-cut stereo (2DGCS) and the 1D belief propagation stereo (1DBPS), with automatically estimated parameters. The obtained depth maps were visually evaluated and compared by a number of independent human observers. Although in the literature 2DGCS is usually considered more accurate than 1DBPS, its depth maps were preferred by the observers only for about 58% of the images and 15 of the 25 scene types. The fast and easily parallelised 1DBPS restores smooth continuous curved surfaces but with noisy object boundaries due to horizontal streaks, whereas the much slower and intrinsically sequential 2DGCS returns flattened depth maps with distinctive object boundaries. Based on these results, we implemented context recognition to examine an input scene and allocate the most suitable algorithm, and we propose combining 2DGCS and 1DBPS into a composite stereo reconstruction technique that is qualitatively superior to each individual algorithm, returning better results at good speed.
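
A hypothetical sketch of the context-driven composite idea (placeholder functions, not the paper's implementation): a scene-context label picks which of the two stereo algorithms to run, or the two depth maps are merged region by region.

```python
# Context-driven dispatch between two stereo solvers, plus a mask-based merge.
import numpy as np

# scene types for which one might prefer 2DGCS; this list is purely illustrative
PREFER_2DGCS = {"city roads", "classrooms", "corridors"}

def composite_stereo(left, right, scene_type, run_2dgcs, run_1dbps):
    """Dispatch to the algorithm assumed preferable for this scene context."""
    if scene_type in PREFER_2DGCS:
        return run_2dgcs(left, right)
    return run_1dbps(left, right)

def merge_by_mask(depth_gcs, depth_bps, boundary_mask):
    """Alternative composite: graph-cut depth near object boundaries,
    belief-propagation depth on smooth interior regions."""
    return np.where(boundary_mask, depth_gcs, depth_bps)
```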

Collaboration


Dive into Yuk Hin Chan's collaborations.

Top Co-Authors

Minh Nguyen
Auckland University of Technology

Rui Gong
University of Auckland

Anthony Lau
University of Auckland