Guozhen Xu
Agency for Science, Technology and Research
Publication
Featured research published by Guozhen Xu.
International Conference on Image Processing | 2009
Wei Xiong; Sim Heng Ong; Qi Tian; Guozhen Xu; Jiayin Zhou; Jiang Liu; S. K. Venkatesh
The construction of probabilistic liver atlases has received little attention in the past. Existing methods are based on landmarks and are sensitive to their choice and placement. We propose an iterative, landmark-free method based on dense volumes to construct linear unbiased diffeomorphic probabilistic atlases from liver CT images. The linear average of the transformed images is set as the common target space, followed by pairwise diffeomorphic registrations that warp all images to the target using a recently proposed efficient deformation approach during each iteration cycle. Iterative pairwise registrations are used directly to handle possible large deformations, without the need for an extra step to remove global deformations such as the affine transformations used in traditional methods. Compared with approaches that estimate the unbiased atlas and the transformations simultaneously in a groupwise manner, the current method is more efficient. The efficiency and convergence of our method have been demonstrated experimentally by validation on 25 liver CT data sets.
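A minimal sketch of the iterative atlas-construction loop described above, assuming a hypothetical `register_pairwise` routine stands in for the pairwise diffeomorphic registration (here an identity stub so the loop runs):

```python
import numpy as np

def register_pairwise(moving, target):
    """Identity stub standing in for a pairwise diffeomorphic registration;
    in practice this would be replaced by a real deformable registration
    routine that returns `moving` warped into the space of `target`."""
    return moving.copy()

def build_atlas(volumes, n_iters=5):
    """Iteratively estimate an atlas: warp every image to the current
    linear average, then re-average the warped results."""
    atlas = np.mean(volumes, axis=0)          # initial target: linear average
    for _ in range(n_iters):
        warped = [register_pairwise(v, atlas) for v in volumes]
        atlas = np.mean(warped, axis=0)       # updated common target space
    return atlas
```

The key point illustrated is that the linear average of the warped images becomes the target space for the next iteration, so no separate affine pre-alignment step is needed.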
Computer Vision and Pattern Recognition | 2015
Jimmy Addison Lee; Jun Cheng; Beng Hai Lee; Ee Ping Ong; Guozhen Xu; Damon Wing Kee Wong; Jiang Liu; Augustinus Laude; Tock Han Lim
Existing feature descriptor-based methods for retinal image registration are mainly based on the scale-invariant feature transform (SIFT) or the partial intensity invariant feature descriptor (PIIFD). Although these descriptors are widely used, they do not work well on multimodal images of unhealthy retinas with severe diseases. Additionally, the descriptors demand high dimensionality to adequately represent the features of interest, and the higher the dimensionality, the greater the consumption of resources (e.g., memory). To this end, this paper introduces a novel registration algorithm coined low-dimensional step pattern analysis (LoSPA), tailored to achieve low dimensionality while providing sufficient distinctiveness to effectively align unhealthy multimodal image pairs. The algorithm locates hypotheses of robust corner features based on connecting edges from the edge maps, mainly formed by vascular junctions. This detection is insensitive to intensity changes and produces uniformly distributed features with high repeatability across the image domain. The algorithm then describes the corner features in a rotation-invariant manner using step patterns. These customized step patterns are robust to non-linear intensity changes, making them well suited for multimodal retinal image registration. Apart from its low dimensionality, the LoSPA algorithm achieves roughly a two-fold higher success rate in multimodal registration on a dataset of severe retinal diseases compared with the best of the state-of-the-art algorithms.
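The following is a highly simplified illustration of the low-dimensional, rotation-invariant "step pattern" idea, not the published LoSPA descriptor: intensities sampled on a ring around a detected corner are binarised into steps and reduced to a canonical circular shift.

```python
import numpy as np

def step_descriptor(image, y, x, radius=8, n_samples=16):
    """Toy step-pattern descriptor (illustrative only): sample intensities on
    a circle around a corner, binarise them against their mean to form a
    'step' code, and take the lexicographically smallest circular shift so
    the code is rotation invariant."""
    angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    ys = np.clip((y + radius * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip((x + radius * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
    ring = image[ys, xs]
    steps = (ring > ring.mean()).astype(np.uint8)         # binary step pattern
    shifts = [tuple(np.roll(steps, k)) for k in range(n_samples)]
    return np.array(min(shifts), dtype=np.uint8)          # canonical rotation
```

With 16 samples the descriptor is only 16 bits, which conveys the low-dimensionality motivation even though the real LoSPA construction differs.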
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Ee Ping Ong; Jimmy Addison Lee; Guozhen Xu; Beng Hai Lee; Damon Wing Kee Wong
This paper presents a novel automatic quantitative method for assessing the performance of image registration algorithms designed to register retinal fundus images. To achieve automatic quantitative measurement, we propose the use of edges and an edge dissimilarity measure to determine the performance of retinal image registration algorithms. The input is a pair of retinal fundus images registered by any existing retinal image registration algorithm in the literature. To compute the edge dissimilarity score, we propose a measure that we call the "robustified Hausdorff distance". We show that the proposed approach is feasible by comparing it against visual evaluation results on images from the DRIVERA and G9 datasets.
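A sketch of a percentile-based (partial) Hausdorff distance between binary edge maps, a common robust variant shown here only as an illustration; the paper's exact "robustified Hausdorff distance" may be defined differently.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def robust_hausdorff(edges_a, edges_b, percentile=95):
    """Percentile-based Hausdorff distance between two boolean edge maps of
    a registered image pair; smaller values indicate closer edge agreement.
    Illustrative robust variant, not necessarily the paper's formula."""
    # Distance from every pixel to the nearest edge pixel of the other map.
    dist_to_b = distance_transform_edt(~edges_b)
    dist_to_a = distance_transform_edt(~edges_a)
    d_ab = np.percentile(dist_to_b[edges_a], percentile)   # A -> B
    d_ba = np.percentile(dist_to_a[edges_b], percentile)   # B -> A
    return max(d_ab, d_ba)
```

Using a percentile instead of the maximum makes the score tolerant of a few spurious edge pixels, which is the usual motivation for robustifying the Hausdorff distance.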
International Conference of the IEEE Engineering in Medicine and Biology Society | 2014
Jimmy Addison Lee; Beng Hai Lee; Guozhen Xu; Ee Ping Ong; Damon Wing Kee Wong; Jiang Liu; Tock Han Lim
This paper presents a novel approach for finding corner features across retinal fundus image pairs. Such images are relatively textureless and exhibit uneven shading, which renders state-of-the-art approaches such as SIFT ineffective: many of the detected features have low repeatability (< 10%), especially when the viewing angle difference between the corresponding images is large. Our approach finds blood vessels using a robust line fitting algorithm and locates corner features at the bends and intersections of the blood vessels. In our experiments, these corner features proved superior to state-of-the-art feature extraction methods (i.e., SIFT, SURF, Harris, Good Features To Track (GFTT), and FAST) with regard to repeatability and stability. On average, the approach detects close to 10% more repeatable features between two corresponding retinal images than the second-best method.
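As a rough illustration of locating corner candidates at vessel crossings (the paper additionally uses robust line fitting and vessel bends), one can count skeleton neighbours on a binary vessel skeleton, which is assumed here to come from a separate vessel segmentation step:

```python
import numpy as np
from scipy.ndimage import convolve

def vessel_junctions(skeleton):
    """Mark skeleton pixels with three or more skeleton neighbours as
    junction candidates; `skeleton` is a boolean vessel skeleton image.
    Illustrative only, not the paper's detection method."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skeleton.astype(np.uint8), kernel, mode='constant')
    junctions = skeleton & (neighbours >= 3)
    return np.argwhere(junctions)            # (row, col) coordinates
```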
Medical Image Computing and Computer-Assisted Intervention | 2015
Jimmy Addison Lee; Jun Cheng; Guozhen Xu; Ee Ping Ong; Beng Hai Lee; Damon Wing Kee Wong; Jiang Liu
Existing feature descriptor-based methods for retinal image registration are mainly based on the scale-invariant feature transform (SIFT) or the partial intensity invariant feature descriptor (PIIFD). Although these descriptors are widely used, they have not been applied to color fundus and optical coherence tomography (OCT) fundus image pairs. OCT fundus images are challenging to register as they are often degraded by speckle noise. The descriptors also demand high dimensionality to adequately represent the features of interest. To this end, this paper presents a registration algorithm coined low-dimensional step pattern analysis (LoSPA), tailored to achieve low dimensionality while providing sufficient distinctiveness to effectively register OCT fundus images with color fundus photographs. The algorithm locates hypotheses of robust corner features based on connecting edges from the edge maps, mainly formed by vascular junctions. It then describes the corner features in a rotation-invariant manner using step patterns, which are insensitive to intensity changes. In a comparative evaluation, LoSPA achieves a higher registration success rate than state-of-the-art algorithms.
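A generic nearest-neighbour matching step with a ratio test for low-dimensional binary descriptors, included only to illustrate how compact step-pattern codes might be matched across modalities; it is not the paper's matching or evaluation procedure.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching of binary descriptors (rows of uint8
    arrays) using Hamming distance with a Lowe-style ratio test.
    Returns a list of (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != d, axis=1)     # Hamming distances
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:           # keep distinctive matches
            matches.append((i, best))
    return matches
```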
International Conference of the IEEE Engineering in Medicine and Biology Society | 2015
Ee Ping Ong; Jimmy Addison Lee; Jun Cheng; Beng Hai Lee; Guozhen Xu; Augustinus Laude; Stephen Charn Beng Teoh; Tock Han Lim; Damon Wing Kee Wong; Jiang Liu
This paper presents a novel augmented reality assistance platform for eye laser surgery. The proposed system aims to assist eye doctors in pre-surgical planning and to provide guidance and protection during laser surgery. We developed algorithms to automatically register multimodal images, detect the macula and optic disc regions, and demarcate these as areas protected from laser surgery. The doctor can then plan the laser treatment before surgery using the registered images and segmented regions. During live surgery, the system automatically registers and tracks the slit lamp video frames on the registered retina images, issues appropriate warnings when the laser approaches protected areas, and disables the laser function when it points into a protected area. The proposed prototype can help doctors perform laser surgery faster and with confidence, without fear of unintentionally firing the laser into protected areas.
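An illustrative sketch of the warn/disable gating logic described above; the function name, mask representation, and warning margin are assumptions for illustration, not the system's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def laser_gate(protected_mask, aim_yx, warn_margin_px=20):
    """Given a boolean mask of protected regions (macula / optic disc) in the
    registered retina frame and the laser aim point mapped from the current
    slit-lamp frame, return 'disable', 'warn', or 'ok'."""
    dist_to_protected = distance_transform_edt(~protected_mask)
    d = dist_to_protected[aim_yx]
    if d == 0:
        return 'disable'                 # aim point is inside a protected area
    if d <= warn_margin_px:
        return 'warn'                    # aim point is close to a protected area
    return 'ok'
```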
Medical Image Computing and Computer-Assisted Intervention | 2015
Ee Ping Ong; Jimmy Addison Lee; Jun Cheng; Guozhen Xu; Beng Hai Lee; Augustinus Laude; Stephen Charn Beng Teoh; Tock Han Lim; Damon Wing Kee Wong; Jiang Liu
This paper presents a robust outlier elimination approach for multimodal retinal image registration. Our proposed scheme is based on Scale-Invariant Feature Transform (SIFT) feature extraction and Partial Intensity Invariant Feature Descriptors (PIIFD), combined with a novel outlier elimination approach that robustly removes incorrect putative matches to achieve better registration results. Our approach, which we refer to as the residual-scaled-weighted Least Trimmed Squares (RSW-LTS) method, enforces an affine transformation geometric constraint to solve the image registration problem when a very high percentage of the putatively matched feature points are incorrect. Our experiments on registering fundus-fluorescein angiographic image pairs show that the proposed scheme significantly outperforms the Harris-PIIFD scheme. We also show that the proposed RSW-LTS approach outperforms other outlier elimination approaches such as RANSAC (RANdom SAmple Consensus) and MSAC (M-estimator SAmple Consensus).
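A simplified least-trimmed-squares style affine fit, offered only as a stand-in to illustrate the general idea behind RSW-LTS; the residual scaling and weighting of the actual method are omitted.

```python
import numpy as np

def trimmed_affine_fit(src, dst, keep_frac=0.5, n_iters=10):
    """Repeatedly fit a 2D affine transform by least squares, then refit on
    the matches with the smallest residuals, so a large fraction of outlier
    matches can be tolerated. `src` and `dst` are (N, 2) arrays of
    putatively matched points; returns the 3x2 affine and the inlier indices."""
    n = len(src)
    keep = np.arange(n)
    for _ in range(n_iters):
        A = np.hstack([src[keep], np.ones((len(keep), 1))])   # rows [x, y, 1]
        params, *_ = np.linalg.lstsq(A, dst[keep], rcond=None)
        pred = np.hstack([src, np.ones((n, 1))]) @ params
        residuals = np.linalg.norm(pred - dst, axis=1)
        keep = np.argsort(residuals)[: int(keep_frac * n)]    # trim to best matches
    return params, keep
```

Trimming to the best-fitting subset at each iteration is what lets an LTS-style estimator enforce the affine constraint even when most putative matches are wrong.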
Ophthalmic Medical Image Analysis Third International Workshop | 2016
Ee Ping Ong; Jun Cheng; Ying Quan; Guozhen Xu; Damon Wing Kee Wong
Ophthalmic Medical Image Analysis Third International Workshop | 2016
Jun Cheng; Jimmy Addison Lee; Guozhen Xu; Ying Quan; Ee Ping Ong; Damon Wing Kee Wong
Investigative Ophthalmology & Visual Science | 2016
Damon Wing Kee Wong; Jimmy Addison Lee; Beng Hai Lee; Guozhen Xu; Augustinus Laude; Tock Han Lim