Liangxiu Han
Manchester Metropolitan University
Publications
Featured research published by Liangxiu Han.
Advanced Data Mining and Applications | 2009
Liangxiu Han; Jano van Hemert; Richard Baldock; Malcolm P. Atkinson
It is of high biomedical interest to identify gene interactions and networks that are associated with developmental and physiological functions in the mouse embryo. There are now large datasets with both spatial and ontological annotation of the spatio-temporal patterns of gene expression, which provide a powerful resource for discovering potential mechanisms of embryo organisation. Ontological annotation of gene expression consists of labelling images with terms from the anatomy ontology for mouse development. Currently, annotation is performed manually by domain experts, which is both time-consuming and costly. In this paper, we present a new data mining framework to automatically annotate gene expression patterns in images with anatomical terms. The framework integrates images stored in file systems with ontology terms stored in databases, and combines pattern recognition with image processing techniques to identify the anatomical components that exhibit gene expression patterns in images. The experimental results show that the framework performs well.
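The annotation idea above can be sketched as a label-transfer step: an unseen image inherits the anatomy-ontology terms shared by its most similar annotated images. This nearest-neighbour stand-in, with illustrative feature vectors and terms, is only a sketch of the pattern-recognition step, not the paper's actual framework.

```python
# Annotate an unseen image by transferring ontology terms from the most
# similar annotated images. Features and terms here are illustrative.

def annotate(features, labelled, k=2):
    """Return the terms shared by the k most similar labelled images.
    Similarity is measured as Euclidean distance between feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(labelled, key=lambda item: dist(item[0], features))[:k]
    term_sets = [set(terms) for _, terms in nearest]
    return set.intersection(*term_sets)

# Toy training set: (feature vector, anatomy-ontology terms)
labelled = [
    ((0.9, 0.1), {"telencephalon", "forebrain"}),
    ((0.8, 0.2), {"telencephalon"}),
    ((0.1, 0.9), {"limb", "handplate"}),
]
print(annotate((0.85, 0.15), labelled))  # {'telencephalon'}
```

In practice the framework would replace the raw feature vectors with descriptors extracted by its image-processing stage; the transfer logic stays the same.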
High Performance Distributed Computing | 2010
Chee Sun Liew; Malcolm P. Atkinson; Jano van Hemert; Liangxiu Han
Modern scientific collaborations have opened up the opportunity of solving complex problems that involve multi-disciplinary expertise and large-scale computational experiments. These experiments usually involve large amounts of data that are located in distributed data repositories running various software systems, and managed by different organisations. A common strategy to make the experiments more manageable is to execute the processing steps as a workflow. In this paper, we look into the implementation of fine-grained data-flow between computational elements in a scientific workflow as streams. We model the distributed computation as a directed acyclic graph where the nodes represent the processing elements that incrementally implement specific subtasks. The processing elements are connected in a pipelined streaming manner, which allows task executions to overlap. We further optimise the execution by splitting pipelines across processes and by introducing extra parallel streams. We identify performance metrics and design a measurement tool to evaluate each enactment. We conducted experiments to evaluate our optimisation strategies with a real-world problem in the Life Sciences: EURExpress-II. The paper presents our distributed data-handling model, the optimisation and instrumentation strategies, and the evaluation experiments. We demonstrate linear speedup and argue that this use of data-streaming to enable both overlapped pipelining and parallelised enactment is a generally applicable optimisation strategy.
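The overlapped pipeline described above can be sketched with threads and queues: each processing element consumes items from an input stream and emits results to an output stream, so downstream stages begin as soon as the first item arrives rather than waiting for a whole stage to finish. The stage functions and the sentinel protocol are illustrative assumptions, not the paper's implementation.

```python
import threading
import queue

SENTINEL = object()  # marks end-of-stream

def processing_element(func, inbox, outbox):
    """Consume items from inbox, apply func, emit to outbox.
    Because each PE runs in its own thread, stage executions overlap."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)  # propagate end-of-stream downstream
            break
        outbox.put(func(item))

# A three-stage pipeline: source -> double -> increment -> sink
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    threading.Thread(target=processing_element, args=(lambda x: x * 2, q1, q2)),
    threading.Thread(target=processing_element, args=(lambda x: x + 1, q2, q3)),
]
for t in stages:
    t.start()

for item in range(5):  # the source streams items incrementally
    q1.put(item)
q1.put(SENTINEL)

results = []
while True:
    item = q3.get()
    if item is SENTINEL:
        break
    results.append(item)
for t in stages:
    t.join()
print(results)  # [1, 3, 5, 7, 9]
```

Splitting a pipeline across processes or adding parallel streams, as the paper does, follows the same pattern with process-backed queues and multiple consumers per stream.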
Computerized Medical Imaging and Graphics | 2013
Muhammad Salman Haleem; Liangxiu Han; Jano van Hemert; Baihua Li
Glaucoma is a group of eye diseases that share common traits such as high eye pressure, damage to the Optic Nerve Head, and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods of pre-diagnosis of Glaucoma include measurement of Intra-Ocular Pressure (IOP) using a tonometer, pachymetry, and gonioscopy, all performed manually by clinicians. These tests are usually followed by an examination of the Optic Nerve Head (ONH) appearance to confirm the diagnosis. Diagnosis requires regular monitoring, which is costly and time-consuming, and its accuracy and reliability are limited by the domain knowledge of individual ophthalmologists. Automatic diagnosis of Glaucoma has therefore attracted considerable attention. This paper surveys the state of the art in automatic extraction of anatomical features from retinal images to assist early diagnosis of Glaucoma. We critically evaluate the existing automatic extraction methods based on features including the Optic Cup to Disc Ratio (CDR), Retinal Nerve Fibre Layer (RNFL), Peripapillary Atrophy (PPA), neuroretinal rim notching, and vasculature shift, adding value to efficient feature extraction for Glaucoma diagnosis.
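As a concrete example of one surveyed feature, the Cup to Disc Ratio can be computed once the optic cup and disc have been segmented. The vertical-height definition and the toy masks below are illustrative assumptions, not any specific method from the survey.

```python
import numpy as np

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical CDR: ratio of cup height to disc height, measured on
    binary segmentation masks (True where the structure is present).
    Clinically, a high CDR is commonly treated as a Glaucoma risk sign."""
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_height = cup_rows.max() - cup_rows.min() + 1
    disc_height = disc_rows.max() - disc_rows.min() + 1
    return cup_height / disc_height

# Toy example: a 10-row disc containing a 4-row cup
disc = np.zeros((20, 20), dtype=bool)
disc[5:15, 5:15] = True
cup = np.zeros((20, 20), dtype=bool)
cup[8:12, 8:12] = True
print(round(cup_to_disc_ratio(cup, disc), 2))  # 0.4
```

The surveyed methods differ mainly in how they produce the two masks; once segmentation is done, the ratio itself is this simple.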
Journal of Parallel and Distributed Computing | 2010
Liangxiu Han; Stephen Potter; George Beckett; Gavin J. Pringle; Stephen Welch; Sung-Han Koo; Gerhard Wickler; Asif Usmani; Jose L. Torero; Austin Tate
The FireGrid project aims to harness the potential of advanced forms of computation to support the response to large-scale emergencies (with an initial focus on the response to fires in the built environment). Computational models of physical phenomena are developed, and then deployed and computed on High Performance Computing resources, to infer incident conditions by assimilating live sensor data from an emergency in real time, or, in the case of predictive models, faster than real time. The results of these models are then interpreted by a knowledge-based reasoning scheme to provide decision support information in terms appropriate for the emergency responder. These models are accessed over a Grid from an agent-based system, of which the human responders form an integral part. This paper proposes a novel FireGrid architecture, describes the rationale behind this architecture, and presents the research results of its application to a large-scale fire experiment.
Future Generation Computer Systems | 2008
Liangxiu Han; Dave Berry
One of the open issues in grid computing is efficient resource discovery. In this paper, we propose a novel semantic-supported, agent-based, decentralized grid resource discovery mechanism. Without the overhead of negotiation, the algorithm allows individual resource agents to interact semantically with neighbouring agents based on local knowledge and to dynamically form a resource service chain to complete a task. The algorithm ensures the resource agents' ability to cooperate and coordinate in acquiring neighbour knowledge for flexible problem solving. The developed algorithm is evaluated by investigating the relationship between the success probability of resource discovery and semantic similarity under different factors. The experiments show that the algorithm can flexibly and dynamically discover resources and therefore provides a valuable addition to the field.
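The chain-forming idea can be sketched as follows: each agent knows only its neighbours, forwards a request to the semantically closest unvisited neighbour, and the sequence of visited agents forms the resource service chain. The Jaccard similarity measure and the agent layout below are illustrative assumptions, not the paper's algorithm.

```python
# A minimal sketch of decentralized, semantics-guided resource discovery.

def similarity(capabilities, request):
    """Jaccard similarity between an agent's capability terms and a request."""
    capabilities, request = set(capabilities), set(request)
    return len(capabilities & request) / len(capabilities | request)

def discover(agents, neighbours, start, request, max_hops=10):
    """Greedily forward the request to the most similar unvisited neighbour;
    the visited agents form the resource service chain."""
    chain, current = [start], start
    for _ in range(max_hops):
        if similarity(agents[current], request) == 1.0:
            return chain  # request fully matched by the chain's last agent
        candidates = [n for n in neighbours[current] if n not in chain]
        if not candidates:
            return None   # dead end: discovery failed
        current = max(candidates, key=lambda n: similarity(agents[n], request))
        chain.append(current)
    return None

agents = {
    "a": {"cpu"}, "b": {"cpu", "gpu"}, "c": {"cpu", "gpu", "storage"},
}
neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(discover(agents, neighbours, "a", {"cpu", "gpu", "storage"}))  # ['a', 'b', 'c']
```

Note that each step uses only the current agent's local neighbour knowledge, which is what makes the scheme decentralized.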
Active and Programmable Networks | 2009
Ivan Dedinski; Hermann de Meer; Liangxiu Han; Laurent Mathy; Dimitrios P. Pezaros; Joseph S. Sventek; X. Y. Zhan
P2P applications have emerged as killer applications due to their ability to construct highly dynamic overlay topologies with rapidly varying and unpredictable traffic dynamics, which can constitute a serious challenge even for significantly over-provisioned IP networks. As a result, ISPs are facing new, severe network management problems that are not guaranteed to be addressed by statically deployed network engineering mechanisms. As a first step towards a more complete solution to these problems, this paper proposes a P2P measurement, identification and optimisation architecture, designed to cope with the dynamicity and unpredictability of existing, well-known and future, unknown P2P systems. The purpose of this architecture is to provide ISPs with an effective and scalable approach to control and optimise the traffic produced by P2P applications in their networks. This can be achieved through a combination of different application-level and network-level programmable techniques, leading to a cross-layer identification and optimisation process. These techniques can be applied using Active Networking platforms, which are able to quickly and easily deploy architectural components on demand. This flexibility of the optimisation architecture is essential to address the rapid development of new P2P protocols and the variation of known protocols.
Proceedings of the Second International Workshop on Data-Aware Distributed Computing | 2009
Malcolm P. Atkinson; Jano van Hemert; Liangxiu Han; Ally Hume; Chee Sun Liew
This paper presents the rationale for a new architecture to support a significant increase in the scale of data integration and data mining. It proposes the composition into one framework of (1) data mining and (2) data access and integration. We name the combined activity DMI. It supports enactment of DMI processes across heterogeneous and distributed data resources and data mining services. It posits that a useful division can be made between the facilities established to support the definition of DMI processes and the computational infrastructure provided to enact DMI processes. Communication between those two divisions is restricted to requests submitted to gateway services in a canonical DMI language. Larger-scale processes are enabled by incremental refinement of DMI-process definitions often by recomposition of lower-level definitions. Autonomous evolution of data resources and services is supported by types and descriptions which will support detection of inconsistencies and semi-automatic insertion of adaptations. These architectural ideas are being evaluated in a feasibility study that involves an application scenario and representatives of the community.
Computational and Mathematical Methods in Medicine | 2015
Shuihua Wang; Mengmeng Chen; Yang Li; Yudong Zhang; Liangxiu Han; Jane Y. Wu; Sidan Du
Identification and detection of dendritic spines in neuron images are of high interest in the diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer's disease, Parkinson's disease, and autism). In this paper, we propose a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification, involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract the backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental results demonstrate that the proposed approach can accurately locate the dendrite and classify the spines into three categories with an accuracy of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines.
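To make the three predefined categories concrete, a simple rule-based stand-in can classify a spine from geometric measurements. The paper's classifier is the trained RMSNN; the features and thresholds below are illustrative assumptions only.

```python
# A heuristic stand-in for the three-way spine classification step.
# Assumed inputs: spine length, head diameter, and neck diameter (e.g. in
# micrometres); the 1.5 head/neck ratio and 1.0 length cut-offs are
# illustrative, not values from the paper.

def classify_spine(length, head_diameter, neck_diameter):
    if neck_diameter > 0 and head_diameter / neck_diameter > 1.5:
        return "mushroom"  # bulbous head on a narrow neck
    if length > 1.0:
        return "thin"      # long protrusion without a pronounced head
    return "stubby"        # short, neckless protrusion

print(classify_spine(1.2, 0.8, 0.3))  # mushroom
print(classify_spine(1.5, 0.3, 0.3))  # thin
print(classify_spine(0.5, 0.3, 0.3))  # stubby
```

A learned classifier such as the RMSNN replaces these hand-set thresholds with decision boundaries fitted to labelled spine images.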
Fire Safety Science | 2008
Rochan Upadhyay; Gavin J. Pringle; George Beckett; Stephen Potter; Liangxiu Han; Stephen Welch; Asif Usmani; Jose L. Torero
FireGrid is a concept that aims to leverage a number of modern technologies to aid fire emergency response. In this paper we provide a brief introduction to the FireGrid project. A number of different technologies, such as wireless sensor networks, grid-enabled High Performance Computing (HPC) implementations of fire models, and artificial intelligence tools, need to be integrated to build a modern fire emergency response system. We propose a system architecture that provides the framework for integrating these technologies, and we describe the components of the generic FireGrid system architecture in detail. Finally, we present a small-scale demonstration experiment, which has been completed, to highlight the concept and application of the FireGrid system to an actual fire. Although our proposed system architecture provides a versatile framework for integration, a number of new and interesting research problems need to be solved before actual deployment of the system. We outline some of the challenges involved, which require significant interdisciplinary collaboration.
Parallel Computing | 2011
Liangxiu Han; Chee Sun Liew; Jano van Hemert; Malcolm P. Atkinson
To facilitate data mining and integration (DMI) processes in a generic way, we investigate a parallel pipeline streaming model. We model a DMI task as a streaming data-flow graph: a directed acyclic graph (DAG) of Processing Elements (PEs). The composition mechanism links PEs via data streams, which may be in memory, buffered via disks, or inter-computer data-flows. This makes it possible to build arbitrary DAGs with pipelining and both data and task parallelism, which provides room for performance enhancement. We have applied this approach to a real DMI case in the life sciences and implemented a prototype. To demonstrate the feasibility of the modelled DMI task and assess the efficiency of the prototype, we have also built a performance evaluation model. The experimental results show that a linear speedup has been achieved as the number of distributed computing nodes increases in this case study.
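The linear-speedup claim rests on the standard speedup and efficiency metrics, which can be computed from measured enactment times as below. The timings are illustrative placeholders, not the paper's measurements.

```python
def speedup_table(times_by_nodes):
    """Compute speedup S(n) = T(1)/T(n) and efficiency E(n) = S(n)/n
    from enactment times measured at different node counts."""
    t1 = times_by_nodes[1]
    return {n: (t1 / t, t1 / t / n) for n, t in sorted(times_by_nodes.items())}

# Illustrative enactment times in seconds (not measured values)
measured = {1: 120.0, 2: 61.0, 4: 31.0, 8: 16.0}
for n, (s, e) in speedup_table(measured).items():
    print(f"{n} nodes: speedup {s:.2f}, efficiency {e:.2f}")
```

Linear speedup corresponds to efficiency staying near 1.0 as nodes are added; pipelined streaming helps sustain this by keeping every node busy while data flows through.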