
Publication


Featured research published by Xiaohong W. Gao.


Journal of Visual Communication and Image Representation | 2006

Recognition of traffic signs based on their colour and shape features extracted using human vision models

Xiaohong W. Gao; Lubov N. Podladchikova; D. G. Shaposhnikov; Kunbin Hong; Natalia A. Shevtsova

Colour and shape are basic characteristics of traffic signs, used both by drivers and by artificial traffic sign recognition systems. However, these features have not been represented robustly in earlier recognition systems, especially under disturbed viewing conditions. In this study, this information is represented using a human vision colour appearance model together with a further development of an existing behavioural model of vision. The colour appearance model CIECAM97 is applied to extract colour information and to segment and classify traffic signs, whilst shape features are extracted with the FOSTS model, an extension of the behavioural model of vision. The recognition rate is very high for still images of signs under artificial transformations that imitate possible real-world distortions (up to 50% noise level, up to 50 m distance to the sign, and up to 5° of perspective disturbance). For British traffic signs (n = 98) obtained under various viewing conditions, the recognition rate is up to 95%.
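
As an illustrative sketch of the colour-segmentation idea only: common imaging libraries provide no CIECAM97 conversion, so the snippet below substitutes a simple HSV hue threshold for the red rims of warning signs. The thresholds and the use of OpenCV are assumptions for illustration, not the paper's method.

```python
# Sketch of colour-based traffic-sign segmentation. The paper uses the
# CIECAM97 colour appearance model; this stand-in thresholds hue in HSV
# space instead. All threshold values are illustrative assumptions.
import cv2
import numpy as np

def segment_red_regions(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of red-ish pixels (candidate sign regions)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    lower = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    # Morphological opening removes small speckles before shape analysis.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```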


British Journal of Ophthalmology | 2001

Computer algorithms for the automated measurement of retinal arteriolar diameters.

Neil Chapman; Nicholas Witt; Xiaohong W. Gao; Anil A. Bharath; Alice Stanton; S Thom; Alun D. Hughes

AIMS: Quantification of retinal vascular change is difficult, and manual measurements of vascular features are slow and subject to observer bias. These problems may be overcome using computer algorithms. Three automated methods and a manual method for measuring arteriolar diameters from digitised red-free retinal photographs were compared. METHODS: 60 diameters (in pixels) measured by manual identification of vessel edges in red-free retinal images were compared with diameters measured by (1) fitting vessel intensity profiles to a double Gaussian function by non-linear regression, (2) a standard edge detection algorithm (Sobel), and (3) determination of the points of maximum intensity variation by a sliding linear regression filter (SLRF). Method agreement was analysed using Bland–Altman plots, and the repeatability of each method was assessed. RESULTS: Diameter estimations obtained using the SLRF method were the least scattered, although the diameters obtained were approximately 3 pixels greater than those measured manually. The SLRF method was the most repeatable, the Gaussian method less so. The Sobel method was the least consistent, owing to frequent misinterpretation of the light reflex as the vessel edge. CONCLUSION: Of the three automated methods compared, the SLRF method was the most consistent (defined as producing diameter estimations with the least scatter) and the most repeatable for measuring retinal arteriolar diameter. Automated retinal vascular analysis may be useful in the assessment of cardiovascular and other diseases.
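
A minimal sketch of the SLRF idea, under assumptions: within a window sliding along a 1-D intensity profile across the vessel, fit a least-squares line and record its slope; the extrema of the slope mark the points of maximum intensity variation, i.e. the vessel edges. Window size and the dark-vessel-on-bright-background convention are illustrative choices, not the paper's settings.

```python
# Sketch of a sliding linear regression filter (SLRF) over a 1-D vessel
# intensity profile. Assumption: the vessel is darker than the background,
# so the steepest fall and steepest rise of the fitted slope bound it.
import numpy as np

def slrf_edges(profile: np.ndarray, window: int = 7) -> tuple[int, int]:
    """Return indices of the left and right vessel edges."""
    half = window // 2
    x = np.arange(window)
    slopes = np.zeros(len(profile))
    for i in range(half, len(profile) - half):
        seg = profile[i - half : i + half + 1]
        slopes[i] = np.polyfit(x, seg, 1)[0]  # slope of least-squares line
    left = int(np.argmin(slopes))   # steepest intensity fall
    right = int(np.argmax(slopes))  # steepest intensity rise
    return min(left, right), max(left, right)
```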


International Conference on Image Processing | 2001

A method of vessel tracking for vessel diameter measurement on retinal images

Xiaohong W. Gao; Anil A. Bharath; Alice Stanton; Alun D. Hughes; Neil Chapman; Simon Thom

A vessel tracking method has been developed for quantifying vessel diameters in retinal images. The method uses twin Gaussian functions to model the distribution of grey levels across a vessel cross-section; the diameter at that cross-section can then be calculated from the fitted functions. The variation of vessel diameter along the vessel's longitudinal axis is captured by a tracking technique based on the parameters of the modelled intensity distribution curves at every cross-section. This makes it possible to obtain an average diameter over any length of vessel and to derive further parameters for the diagnosis and study of vascular diseases.
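
A hedged sketch of the cross-section fitting step follows. The exact functional form, initial guesses, and the diameter definition below are assumptions made for illustration, in the spirit of the paper's twin-Gaussian grey-level model rather than its published equations.

```python
# Sketch of fitting a twin-Gaussian model to a vessel cross-section
# profile. Assumption: a dark vessel dips below a bright background level.
import numpy as np
from scipy.optimize import curve_fit

def twin_gaussian(x, a1, mu1, s1, a2, mu2, s2, base):
    g1 = a1 * np.exp(-((x - mu1) ** 2) / (2 * s1 ** 2))
    g2 = a2 * np.exp(-((x - mu2) ** 2) / (2 * s2 ** 2))
    return base - g1 - g2

def fit_cross_section(profile: np.ndarray) -> float:
    """Fit the model and return a diameter estimate in pixels."""
    x = np.arange(len(profile))
    c = len(profile) / 2
    amp = np.ptp(profile) / 2
    p0 = [amp, c - 2, 2.0, amp, c + 2, 2.0, profile.max()]
    params, _ = curve_fit(twin_gaussian, x, profile, p0=p0, maxfev=5000)
    _, mu1, s1, _, mu2, s2, _ = params
    # One plausible (assumed) diameter definition: distance between the
    # outer half-maximum points of the two Gaussians; 1.177 ≈ sqrt(2 ln 2).
    return abs(mu2 - mu1) + 1.177 * (abs(s1) + abs(s2))
```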


Computer Methods and Programs in Biomedicine | 2000

Quantification and characterisation of arteries in retinal images

Xiaohong W. Gao; Anil A. Bharath; Alice Stanton; Alun D. Hughes; Neil Chapman; Simon Thom

A computerised system is presented for the automatic quantification of blood vessel topography in retinal images. The system uses digital image processing techniques to provide more reliable and comprehensive information about the retinal vascular network. It applies strategies and algorithms for measuring vascular trees, including methods for locating the centre of a bifurcation, detecting vessel branches, estimating vessel diameter, and calculating the angular geometry at a bifurcation. The performance of the system is studied by comparison with manual measurements and by comparing measurements between red-free and fluorescein images. In general, an acceptable degree of accuracy and precision was seen for all measurements, although the system had difficulty with very noisy images and with small or especially tortuous blood vessels.
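
The angular-geometry step can be illustrated with a short sketch: given the detected bifurcation centre and one point on each daughter branch, the branching angle follows from the dot product of the two direction vectors. The function name and inputs are hypothetical, for illustration only.

```python
# Sketch of computing the angular geometry at a bifurcation: the angle
# between the two daughter vessels, measured at the bifurcation centre.
import numpy as np

def branching_angle(centre, p1, p2) -> float:
    """Angle in degrees between the daughter vessels at a bifurcation."""
    v1 = np.asarray(p1, float) - np.asarray(centre, float)
    v2 = np.asarray(p2, float) - np.asarray(centre, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```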


Computer Methods and Programs in Biomedicine | 2017

Classification of CT brain images based on deep learning networks

Xiaohong W. Gao; Rui Hui; Zengmin Tian

While computerised tomography (CT) may have been the first imaging tool used to study the human brain, it has not yet been incorporated into the clinical decision-making process for the diagnosis of Alzheimer's disease (AD). On the other hand, being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the application of deep learning techniques, in particular convolutional neural networks (CNNs), to the classification of CT brain images, aiming to provide supplementary information for the early diagnosis of Alzheimer's disease. To this end, CT images (N = 285) are grouped into three classes: AD, lesion (e.g. tumour) and normal ageing. In addition, given the large slice thickness of this collection along the depth (z) direction (~3-5 mm), a CNN architecture is established that integrates both 2D and 3D networks. The two networks are fused by averaging the softmax scores obtained from each: the 2D network consolidates images along the axial direction, while the 3D network processes segmented volumetric blocks. The classification accuracy rates of this architecture are 85.2%, 80% and 95.3% for the AD, lesion and normal classes respectively, with an average of 87.6%. This fused network outperforms the 2D-only CNN as well as a number of state-of-the-art hand-crafted approaches, which deliver accuracy rates of 86.3%, 85.6 ± 1.10%, 86.3 ± 1.04%, 85.2 ± 1.60% and 83.1 ± 0.35% for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper are a new 3D approach that applies deep learning to extract signature information from both 2D slices and 3D blocks of CT images, and an elaborated hand-crafted approach based on 3D KAZE.
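
The fusion rule stated in the abstract, averaging the softmax scores of the two networks, is simple enough to sketch directly. The two models, the class ordering, and the PyTorch framing are placeholders; only the averaging itself comes from the abstract.

```python
# Sketch of the described fusion step: average the softmax scores from a
# 2-D-slice network and a 3-D-block network, then take the argmax.
import torch

def fused_prediction(logits_2d: torch.Tensor, logits_3d: torch.Tensor) -> torch.Tensor:
    """Combine per-class scores from the 2D and 3D CNNs (shape: [batch, 3])."""
    p2d = torch.softmax(logits_2d, dim=1)
    p3d = torch.softmax(logits_3d, dim=1)
    fused = (p2d + p3d) / 2       # average of softmax scores, per the abstract
    return fused.argmax(dim=1)    # class order (AD, lesion, normal) is assumed
```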


Eurasip Journal on Image and Video Processing | 2008

Colour vision model-based approach for segmentation of traffic signs

Xiaohong W. Gao; Kunbin Hong; Peter J. Passmore; Lubov N. Podladchikova; D. G. Shaposhnikov

This paper presents a new approach to segmenting traffic signs from the rest of a scene via CIECAM, a colour appearance model. This approach not only puts CIECAM to practical use for the first time since it was standardised in 1998, but also introduces a new way of segmenting traffic signs that improves the accuracy of the colour-based approach. Comparisons with other CIE spaces, including CIELUV and CIELAB, and with the RGB colour space are also carried out. The results show that CIECAM performs better than the other three spaces, with accuracy rates of 94%, 90%, and 85% for sunny, cloudy, and rainy days respectively. The results also confirm that CIECAM predicts colour appearance in line with average observers.
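
For the comparison part of such a study, the three reference spaces are readily available in OpenCV, as the short sketch below shows; CIECAM has no off-the-shelf conversion there, so only the comparison spaces appear. This is an assumption-laden illustration of the experimental setup, not the paper's code.

```python
# Sketch of preparing the comparison colour spaces: convert one image into
# each candidate space before thresholding, so per-space segmentation
# accuracy can be compared. CIECAM itself would need a custom conversion.
import cv2

def candidate_spaces(bgr):
    return {
        "RGB": cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB),
        "CIELAB": cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB),
        "CIELUV": cv2.cvtColor(bgr, cv2.COLOR_BGR2LUV),
    }
```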


Medical Imaging and Informatics | 2008

Prototype System for Semantic Retrieval of Neurological PET Images

Stephen Batty; John A. Clark; Tim D. Fryer; Xiaohong W. Gao

Positron emission tomography (PET) is used within neurology to study the underlying biochemical basis of cognitive functioning. Owing to its inherent lack of anatomical information, its study in conjunction with image retrieval has been limited. Content-based image retrieval (CBIR) relies on visual features to quantify and classify images with a degree of domain-specific saliency. Numerous CBIR systems have been developed; semantic retrieval, however, has not been performed. This paper gives a detailed account of the framework of visual features and semantic information used within a prototype image retrieval system for neurological PET data. Images from patients diagnosed with different, known forms of dementia are studied and compared with controls. Image characteristics with medical saliency are isolated in a top-down manner, from the needs of the clinician to the explicit visual content. These features are represented via Gabor wavelets and the mean activity levels of specific anatomical regions. Preliminary results demonstrate that these representations are effective in reflecting image characteristics and subject diagnosis; consequently they are efficient indices within a semantic retrieval system.
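
A minimal sketch of the Gabor-wavelet feature step, assuming a bank of filters at a few orientations with pooled response statistics; the filter parameters and pooling choice are illustrative, not those of the prototype system.

```python
# Sketch of Gabor-wavelet feature extraction: filter a 2-D image slice at
# several orientations and pool each response into simple statistics.
import cv2
import numpy as np

def gabor_features(slice_2d: np.ndarray, orientations: int = 4) -> np.ndarray:
    feats = []
    for k in range(orientations):
        theta = k * np.pi / orientations
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5)
        resp = cv2.filter2D(slice_2d.astype(np.float32), cv2.CV_32F, kern)
        feats.extend([resp.mean(), resp.std()])  # pooled response statistics
    return np.array(feats)
```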


Information Fusion | 2017

A fused deep learning architecture for viewpoint classification of echocardiography

Xiaohong W. Gao; Wei Li; Martin J. Loomes; Lianyi Wang

This study extends the state of the art in deep learning convolutional neural networks (CNNs) to the classification of video images of echocardiography, aiming to assist clinicians in the diagnosis of heart diseases. Specifically, the architecture embraces hand-crafted features within a data-driven learning framework, incorporating both the spatial and temporal information carried by the video images of the moving heart and giving rise to two strands of two-dimensional CNN. In particular, the acceleration along the time direction at each point is calculated using a dense optical flow technique to represent temporal motion information. The two networks are then fused via linear integration of the class-score vectors obtained from each. This architecture achieves the best classification results for eight viewpoint categories of echo videos, with a 92.1% accuracy rate, whereas 89.5% is achieved using the single spatial CNN alone. When only the three primary locations are considered, an accuracy rate of 98% is realised. In addition, comparisons with a number of well-known hand-engineered approaches are performed, including 2D KAZE, 2D KAZE with optical flow, 3D KAZE, optical flow, 2D SIFT and 3D SIFT, which deliver accuracy rates of 89.4%, 84.3%, 87.9%, 79.4%, 83.8% and 73.8% respectively.
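
The temporal-stream input can be sketched as follows: compute dense optical flow between consecutive frame pairs and difference the flows once more to approximate per-pixel acceleration. Farnebäck flow and its parameters are stand-ins here; the abstract specifies dense optical flow but not the particular algorithm or settings.

```python
# Sketch of an acceleration field from three consecutive grayscale frames:
# flow(f0->f1) and flow(f1->f2), differenced to approximate acceleration.
import cv2
import numpy as np

def acceleration_field(f0, f1, f2):
    """Approximate per-pixel acceleration (H x W x 2) from three frames."""
    flow_args = dict(pyr_scale=0.5, levels=3, winsize=15, iterations=3,
                     poly_n=5, poly_sigma=1.2, flags=0)
    v01 = cv2.calcOpticalFlowFarneback(f0, f1, None, **flow_args)
    v12 = cv2.calcOpticalFlowFarneback(f1, f2, None, **flow_args)
    return v12 - v01  # change of velocity per frame ≈ acceleration
```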


MCBR-CDS'12: Proceedings of the Third MICCAI International Conference on Medical Content-Based Retrieval for Clinical Decision Support | 2012

The synergy of 3d SIFT and sparse codes for classification of viewpoints from echocardiogram videos

Yu Qian; Lianyi Wang; Chunyan Wang; Xiaohong W. Gao

Echocardiography plays an important part in diagnostic aid in cardiology. During an echocardiogram exam, images or image sequences are usually taken from different locations and in various directions in order to build a comprehensive view of the anatomical structure of the 3D moving heart. Automatic classification of echocardiograms by viewpoint constitutes an essential step in computer-aided diagnosis. The challenge remains the high noise-to-signal ratio of echocardiography, which leads to low-quality echocardiograms. In this paper, a new synergy of well-established algorithms is proposed to classify the view positions of echocardiograms. Bags of Words (BoW) are coupled with linear SVMs. Sparse coding is employed to train an echocardiogram video dictionary from a set of 3D SIFT descriptors of space-time interest points detected by a Cuboid detector. Max-pooled features at multiple scales are used to represent each echocardiogram video, and a linear multiclass SVM classifies the videos into eight views. The evaluation is carried out on a collection of 219 echocardiogram videos. Preliminary results show a 72% average accuracy rate (AAR) for classification over eight view angles and 90% over the three primary view locations.
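
The classification stage can be sketched under assumptions: sparse-code each video's local descriptors against a learned dictionary, max-pool the codes into one vector per video, and train a linear multiclass SVM. 3D SIFT extraction and dictionary learning are out of scope here, and all names and shapes are hypothetical.

```python
# Sketch of the sparse-coding + max-pooling + linear SVM pipeline.
# Assumes a dictionary of k atoms (rows, unit-normalised) over d-dim
# descriptors, learned elsewhere.
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.svm import LinearSVC

def encode_video(descriptors: np.ndarray, dictionary: np.ndarray) -> np.ndarray:
    """Sparse-code descriptors [n, d] against atoms [k, d]; max-pool to [k]."""
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="lasso_lars",
                        transform_alpha=0.1)
    codes = coder.transform(descriptors)  # [n, k] sparse codes
    return np.abs(codes).max(axis=0)      # max pooling over interest points

# Training (assumed usage):
#   X = np.stack([encode_video(d, D) for d in video_descriptor_sets])
#   clf = LinearSVC().fit(X, view_labels)   # eight view classes
```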


Archive | 2008

Medical imaging and informatics

Xiaohong W. Gao; Henning Müller; Martin J. Loomes; Richard Comley; Shuqian Luo


Collaboration


Dive into Xiaohong W. Gao's collaborations.

Top Co-Authors

D. G. Shaposhnikov, Southern Federal University
Yu Qian, Middlesex University
Rui Hui, Middlesex University
Tim D. Fryer, University of Cambridge