Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Maozhen Li is active.

Publication


Featured research published by Maozhen Li.


IEEE Transactions on Neural Networks | 2006

Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays

Zidong Wang; Yurong Liu; Maozhen Li; Xiaohui Liu

In this letter, the global asymptotic stability analysis problem is considered for a class of stochastic Cohen-Grossberg neural networks with mixed time delays, which include both discrete and distributed time delays. Based on a Lyapunov-Krasovskii functional and stochastic stability analysis theory, a linear matrix inequality (LMI) approach is developed to derive several sufficient conditions guaranteeing global asymptotic convergence of the equilibrium point in the mean square. It is shown that the addressed stochastic Cohen-Grossberg neural networks with mixed delays are globally asymptotically stable in the mean square if two LMIs are feasible, and the feasibility of these LMIs can be readily checked with the MATLAB LMI toolbox. It is also pointed out that the main results include some existing results as special cases. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.
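As a rough illustration only (the paper's actual LMIs involve the Cohen-Grossberg amplification and behaviour functions together with both delay types, and are not reproduced here), a generic stochastic delayed system and Lyapunov-Krasovskii functional of the kind used in such analyses look as follows; all symbols are illustrative assumptions rather than the paper's notation:

```latex
% Generic sketch, not the paper's exact conditions.
\begin{align*}
  \mathrm{d}x(t) &= -a\big(x(t)\big)\Big[b\big(x(t)\big) - A f\big(x(t)\big)
      - B f\big(x(t-\tau)\big)
      - C \textstyle\int_{t-\sigma}^{t} f\big(x(s)\big)\,\mathrm{d}s\Big]\mathrm{d}t
      + \rho\big(t, x(t), x(t-\tau)\big)\,\mathrm{d}w(t),\\
  V(x_t) &= x^{\top}(t) P x(t)
      + \int_{t-\tau}^{t} x^{\top}(s) Q x(s)\,\mathrm{d}s
      + \int_{-\sigma}^{0}\!\int_{t+\theta}^{t} f^{\top}\big(x(s)\big) R f\big(x(s)\big)\,\mathrm{d}s\,\mathrm{d}\theta .
\end{align*}
```

Mean-square asymptotic stability follows if there exist $P>0$, $Q>0$, $R>0$ such that $\mathbb{E}\,\mathcal{L}V(x_t) \le -\lambda\,\mathbb{E}\|x(t)\|^{2}$ for some $\lambda>0$, a requirement that can be recast as the feasibility of LMIs and checked numerically, for example with the MATLAB LMI toolbox mentioned in the abstract.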


Fuzzy Systems and Knowledge Discovery | 2010

MRSim: A discrete event based MapReduce simulator

Suhel Hammoud; Maozhen Li; Yang Liu; Nasullah Khalid Alham; Zelong Liu

Recently, the MapReduce programming model has become popular for large-scale data-intensive distributed applications due to its efficiency, simplicity, and ease of use. Hadoop, an implementation of MapReduce, is one of the most popular tools for many programmers because of its ability to hide the details of parallel programming from users. However, work on simulating the Hadoop environment is still in its infancy. Although a large number of tools are available for simulating distributed environments, only a few simulators specifically target the MapReduce environment. Based on the testing we performed, the usability of these simulators is not satisfactory because their simplified designs limit the simulation of jobs with varied configurations. We have designed and implemented a MapReduce simulator based on discrete event simulation, called MRSim, which accurately simulates the Hadoop environment. On the one hand, the simulator allows us to measure the scalability of MapReduce-based applications easily and quickly; on the other hand, it captures the effects of different Hadoop configurations on the behavior of MapReduce-based applications in terms of job completion times and hardware utilization.
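To illustrate the discrete-event idea behind such a simulator (this is a minimal sketch, not MRSim's code; task durations and slot counts are made-up parameters), a priority queue of completion events can drive a toy map-then-reduce job:

```python
import heapq

def simulate_job(num_map_tasks, num_reduce_tasks, map_slots, reduce_slots,
                 map_time=10.0, reduce_time=20.0):
    """Toy discrete-event simulation of one MapReduce job.

    Events are (finish_time, task_kind) tuples kept in a priority queue;
    a task starts as soon as a slot of the right kind is free and, for
    reduces, only after all map tasks have finished (no overlap modelled).
    """
    events, clock = [], 0.0
    pending_maps, pending_reduces = num_map_tasks, num_reduce_tasks
    free_map, free_reduce = map_slots, reduce_slots
    maps_done = 0

    def schedule(kind, duration):
        heapq.heappush(events, (clock + duration, kind))

    # launch the first wave of map tasks
    while pending_maps and free_map:
        pending_maps -= 1
        free_map -= 1
        schedule("map", map_time)

    while events:
        clock, kind = heapq.heappop(events)
        if kind == "map":
            maps_done += 1
            free_map += 1
            if pending_maps:                      # next map wave
                pending_maps -= 1
                free_map -= 1
                schedule("map", map_time)
        else:
            free_reduce += 1
        if maps_done == num_map_tasks:            # start/continue reduce phase
            while pending_reduces and free_reduce:
                pending_reduces -= 1
                free_reduce -= 1
                schedule("reduce", reduce_time)
    return clock

# 100 maps on 8 map slots, then 4 reduces on 4 reduce slots -> 150.0 seconds
print(simulate_job(num_map_tasks=100, num_reduce_tasks=4,
                   map_slots=8, reduce_slots=4))
```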


ACM Computing Surveys | 2012

A survey of emerging approaches to spam filtering

Godwin Caruana; Maozhen Li

From being merely an annoying feature of the electronic mail era, spam has evolved into an expensive, resource- and time-consuming problem. In this survey, we focus on emerging approaches to spam filtering built on recent developments in computing technologies. These include peer-to-peer computing, grid computing, the Semantic Web, and social networks. We also address a number of perspectives related to personalization and privacy in spam filtering. We conclude that, while important advances have been made in spam filtering in recent years, high-performance approaches remain to be explored due to the large scale of the problem.


Future Generation Computer Systems | 2013

HSim: A MapReduce simulator in enabling Cloud Computing

Yang Liu; Maozhen Li; Nasullah Khalid Alham; Suhel Hammoud

MapReduce is an enabling technology in support of Cloud Computing. Hadoop, a MapReduce implementation, has been widely used in developing MapReduce applications. This paper presents HSim, a MapReduce simulator built on top of Hadoop. HSim models a large number of parameters that can affect the behaviors of MapReduce nodes, and thus it can be used to tune the performance of a MapReduce cluster. HSim is validated with both benchmark results and user-customized MapReduce applications.
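As a sketch of how a parameter-driven simulator configuration can yield a rough completion-time estimate (all parameter names and values below are invented for illustration and are not HSim's actual interface):

```python
from dataclasses import dataclass
from math import ceil

@dataclass
class NodeSpec:
    """Invented example parameters; a simulator like HSim models many more
    (CPU, disk, network, buffer sizes, and so on)."""
    map_slots: int = 2
    reduce_slots: int = 2
    read_mb_per_s: float = 80.0
    cpu_mb_per_s: float = 40.0

def rough_completion_time(input_mb, num_nodes, num_reducers, spec=NodeSpec()):
    """Back-of-the-envelope job time: map waves limited by slots, then reduces."""
    split_mb = 64.0                                   # HDFS-style split size
    num_maps = ceil(input_mb / split_mb)
    map_task_s = split_mb / spec.read_mb_per_s + split_mb / spec.cpu_mb_per_s
    map_waves = ceil(num_maps / (num_nodes * spec.map_slots))
    reduce_task_s = (input_mb / num_reducers) / spec.cpu_mb_per_s
    reduce_waves = ceil(num_reducers / (num_nodes * spec.reduce_slots))
    return map_waves * map_task_s + reduce_waves * reduce_task_s

print(round(rough_completion_time(input_mb=10_000, num_nodes=10, num_reducers=20), 1))
```

A discrete-event simulator refines this kind of closed-form estimate by replaying individual task starts and finishes, as sketched for MRSim above.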


IEEE Transactions on Knowledge and Data Engineering | 2008

Grid Service Discovery with Rough Sets

Maozhen Li; Bin Yu; Omer Farooq Rana; Zidong Wang

The computational grid is rapidly evolving into a service-oriented computing infrastructure that facilitates resource sharing and large-scale problem solving over the Internet. Service discovery becomes an issue of vital importance in utilizing grid facilities. This paper presents ROSSE, a rough sets based search engine for grid service discovery. Building on rough set theory, ROSSE is novel in its capability to deal with the uncertainty of properties when matching services. In this way, ROSSE can discover the services that are most relevant to a service query from a functional point of view. Since functionally matched services may have distinct nonfunctional properties related to quality of service (QoS), ROSSE introduces a QoS model to further filter matched services by their QoS values to maximize user satisfaction in service discovery. ROSSE is evaluated in terms of accuracy and efficiency in the discovery of computing services.
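The following toy sketch illustrates the rough-set intuition of lower and upper approximations applied to service matching; the service names, properties, and the simple "uncertain property" rule are invented for illustration and do not reproduce ROSSE's actual algorithm:

```python
# Toy illustration of rough-set style matching (invented data, not ROSSE itself).
services = {
    "render-A": {"cpu", "gpu", "mpi"},
    "render-B": {"cpu", "gpu"},
    "solve-C":  {"cpu", "mpi"},
}
query = {"cpu", "gpu"}

# A query property is treated as "uncertain" if some advertisements omit it:
# its absence may mean "not applicable" rather than "not provided".
uncertain = {p for p in query if any(p not in props for props in services.values())}

# Lower approximation: services that definitely satisfy every query property.
lower = {s for s, props in services.items() if query <= props}

# Upper approximation: services whose unmatched query properties are all
# uncertain, so they possibly satisfy the query.
upper = {s for s, props in services.items() if (query - props) <= uncertain}

print("definite matches:", lower)          # {'render-A', 'render-B'}
print("possible matches:", upper - lower)  # {'solve-C'}
```

Functionally matched candidates would then be ranked or filtered further by their QoS values, as the abstract describes.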


Fuzzy Systems and Knowledge Discovery | 2011

A MapReduce based parallel SVM for large scale spam filtering

Godwin Caruana; Maozhen Li; Man Qi

Spam continues to inflict increasing damage. Various approaches, including Support Vector Machine (SVM) based techniques, have been proposed for spam classification. However, SVM training is a computationally intensive process. This paper presents a parallel SVM algorithm for scalable spam filtering. By distributing, processing, and optimizing subsets of the training data across multiple participating nodes, the distributed SVM reduces the training time significantly. Ontology-based concepts are also employed to minimize the accuracy degradation incurred when distributing the training data among the SVM classifiers.
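A minimal sketch of the partition-and-train idea is shown below, assuming scikit-learn and NumPy are available; the majority vote is a stand-in for the paper's MapReduce-based combination step, and the ontology-based partitioning is not reproduced:

```python
# Sketch only: per-partition linear SVMs combined by voting.
import numpy as np
from sklearn.svm import LinearSVC

def train_partitions(X, y, num_nodes):
    """Split the training set and fit one linear SVM per (simulated) node."""
    models = []
    for X_part, y_part in zip(np.array_split(X, num_nodes),
                              np.array_split(y, num_nodes)):
        models.append(LinearSVC().fit(X_part, y_part))
    return models

def predict_vote(models, X):
    """Majority vote over the per-partition classifiers (spam = 1, ham = 0)."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy usage with random features standing in for e-mail feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)
models = train_partitions(X, y, num_nodes=4)
print("accuracy:", (predict_vote(models, X) == y).mean())
```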


Computers & Mathematics with Applications | 2011

A MapReduce-based distributed SVM algorithm for automatic image annotation

Nasullah Khalid Alham; Maozhen Li; Yang Liu; Suhel Hammoud

Machine learning techniques have facilitated image retrieval by automatically classifying and annotating images with keywords. Among them, Support Vector Machines (SVMs) have been used extensively due to their generalization properties. However, SVM training is a notably computationally intensive process, especially when the training dataset is large. This paper presents MRSMO, a MapReduce-based distributed SVM algorithm for automatic image annotation. The performance of the MRSMO algorithm is evaluated in an experimental environment. By partitioning the training dataset into smaller subsets and optimizing the partitioned subsets across a cluster of computers, the MRSMO algorithm reduces the training time significantly while maintaining a high level of accuracy in both binary and multiclass classifications.


IEEE Transactions on Circuits and Systems II: Express Briefs | 2011

Probability-Dependent Gain-Scheduled Filtering for Stochastic Systems With Missing Measurements

Guoliang Wei; Zidong Wang; Bo Shen; Maozhen Li

This brief addresses the gain-scheduled filtering problem for a class of discrete-time systems with missing measurements, nonlinear disturbances, and external stochastic noise. The missing-measurement phenomenon is assumed to occur in a random way, and the missing probability is time-varying with securable upper and lower bounds that can be measured in real time. The multiplicative noise is a state-dependent scalar Gaussian white-noise sequence with known variance. The addressed gain-scheduled filtering problem is concerned with the design of a filter such that, for the admissible random missing measurements, nonlinear parameters, and external noise disturbances, the error dynamics is exponentially mean-square stable. The desired filter is equipped with time-varying gains based primarily on the time-varying missing probability and is therefore less conservative than the traditional filter with fixed gains. It is shown that the filter parameters can be derived in terms of the measurable probability via the semidefinite program method.
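For background, a standard way to model such randomly missing measurements is sketched below; the notation is generic and illustrative, not necessarily the paper's:

```latex
% Generic missing-measurement model (illustrative notation only).
\begin{align*}
  x_{k+1} &= A x_k + f(x_k) + D x_k\,\omega_k, \\
  y_k     &= \gamma_k\, C x_k + v_k,
\end{align*}
```

where $\omega_k$ is a zero-mean scalar Gaussian white-noise sequence with known variance, $v_k$ is the external disturbance, and $\gamma_k \in \{0,1\}$ is a Bernoulli variable with $\operatorname{Prob}\{\gamma_k = 1\} = p_k$, the time-varying missing probability, assumed to lie between known lower and upper bounds and to be measurable in real time. A gain-scheduled filter then uses gains that depend on $p_k$, for example $K(p_k) = K_0 + p_k K_1$, rather than a single fixed gain, which is what makes it less conservative than a fixed-gain design.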


Future Generation Computer Systems | 2004

SGrid: a service-oriented model for the Semantic grid

Maozhen Li; P. van Santen; David W. Walker; Omer Farooq Rana; Mark Baker

This paper presents SGrid, a service-oriented model for the Semantic Grid. Each Grid service in SGrid is a Web service with certain domain knowledge. A Web-services-oriented wrapper generator has been implemented to automatically wrap legacy codes as Grid services exposed as Web services. Each wrapped Grid service is supplemented with domain ontology and registered with a Semantic Grid Service Ontology Repository using a Semantic Services Register. Using the wrapper generator, a finite element based computational fluid dynamics (CFD) code has been wrapped as a Grid service, which can be published, discovered, and reused in SGrid.
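A toy sketch of the wrap-and-register idea follows; the function names, registry, and solver path are invented for this illustration, whereas SGrid generates real Web service interfaces and uses an ontology repository rather than an in-memory dictionary:

```python
# Toy illustration of wrapping a legacy code and registering it with a domain tag.
import subprocess

SERVICE_REGISTRY = {}   # stand-in for the Semantic Grid Service Ontology Repository

def wrap_legacy_code(name, executable, ontology_term):
    """Expose a legacy command-line code as a callable 'service' and register it."""
    def service(*args):
        result = subprocess.run([executable, *map(str, args)],
                                capture_output=True, text=True, check=True)
        return result.stdout
    SERVICE_REGISTRY[name] = {"invoke": service, "domain": ontology_term}
    return service

def discover(ontology_term):
    """Look up registered services by their domain annotation."""
    return [n for n, meta in SERVICE_REGISTRY.items() if meta["domain"] == ontology_term]

# Example: wrapping a hypothetical finite-element CFD solver binary.
wrap_legacy_code("cfd-solver", "/opt/legacy/cfd_solve", ontology_term="fluid-dynamics")
print(discover("fluid-dynamics"))   # ['cfd-solver']
```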


IEEE Transactions on Parallel and Distributed Systems | 2016

Hadoop Performance Modeling for Job Estimation and Resource Provisioning

Mukhtaj Khan; Yong Jin; Maozhen Li; Yang Xiang; Changjun Jiang

MapReduce has become a major computing model for data-intensive applications. Hadoop, an open source implementation of MapReduce, has been adopted by a rapidly growing user community. Cloud computing service providers such as Amazon EC2 offer Hadoop users the opportunity to lease a certain amount of resources and pay for their use. However, a key challenge is that cloud service providers do not have a resource provisioning mechanism to satisfy user jobs with deadline requirements. Currently, it is solely the user's responsibility to estimate the amount of resources required to run a job in the cloud. This paper presents a Hadoop job performance model that accurately estimates job completion time and further provisions the amount of resources required for a job to be completed within a deadline. The proposed model builds on historical job execution records and employs the Locally Weighted Linear Regression (LWLR) technique to estimate the execution time of a job. Furthermore, it employs the Lagrange multiplier technique for resource provisioning to satisfy jobs with deadline requirements. The proposed model is initially evaluated on an in-house Hadoop cluster and subsequently evaluated in the Amazon EC2 Cloud. Experimental results show that the accuracy of the proposed model in job execution estimation is in the range of 94.97 to 95.51 percent, and jobs are completed within the required deadlines when the resource provisioning scheme of the proposed model is followed.
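A minimal locally weighted linear regression, the estimation technique named in the abstract, is sketched below on invented job-history data; the paper's full model also derives resource provisioning via Lagrange multipliers, which is not shown here:

```python
# Minimal LWLR sketch on made-up (input size, runtime) history.
import numpy as np

def lwlr_predict(x_query, X, y, tau=5.0):
    """Predict y at x_query by fitting a line weighted towards nearby samples."""
    Xb = np.column_stack([np.ones_like(X), X])            # add intercept column
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))     # Gaussian proximity weights
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)   # weighted normal equations
    return theta[0] + theta[1] * x_query

# Invented history: input size in GB vs. observed job completion time in seconds.
sizes   = np.array([ 2.,  4.,   8.,  16.,  32.,  64.])
runtime = np.array([55., 80., 140., 260., 500., 990.])
print(round(lwlr_predict(24.0, sizes, runtime), 1))        # estimate for a 24 GB job
```

Because the fit is re-weighted around each query point, the estimator follows local trends in the job history rather than forcing a single global line.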

Collaboration


Dive into Maozhen Li's collaborations.

Top Co-Authors

Man Qi

Canterbury Christ Church University

Sadaqat Jan

Brunel University London

Suhel Hammoud

Brunel University London

Mahesh Ponraj

Brunel University London

Mukhtaj Khan

Brunel University London
