Network


Latest external collaborations at the country level.

Hotspot


Research topics where Zhonghang Xia is active.

Publication


Featured research published by Zhonghang Xia.


International Performance Computing and Communications Conference | 2009

Achieving high performance web applications by service and database replications at edge servers

Wei Hao; Jicheng Fu; I-Ling Yen; Zhonghang Xia

Edge server replication is an effective solution for achieving high performance in dynamic web applications such as web services. Many web services involve frequent accesses to large-scale backend databases, yet current database replication techniques are not directly applicable to edge server architectures: no existing algorithm dynamically and automatically selects the tables to replicate, and most solutions ignore both the potentially limited disk capacity at edge servers and the need for each server to serve many application sites. In this paper, we present a novel weighted-table-graph based database replication approach for edge servers that addresses these problems. Every step in our approach is based on quantitative computation, so it produces accurate results. Experimental studies show that our database replication approach significantly improves web system performance in terms of client response latency, web application server offloading, and network bandwidth savings.
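
To illustrate the flavor of replica selection under a disk budget, here is a minimal sketch. It assumes per-table access weights and sizes as inputs and uses a simple greedy benefit-density heuristic; the paper's weighted-table-graph model is richer than this, and all names below are hypothetical.

```python
# Hypothetical sketch of table selection for edge-server replication.
# Tables are ranked by access weight per unit size and replicated greedily
# until the edge server's disk budget is exhausted.

def select_tables(tables, disk_budget):
    """tables: list of (name, access_weight, size_mb).
    Returns the table names chosen for replication."""
    ranked = sorted(tables, key=lambda t: t[1] / t[2], reverse=True)
    chosen, used = [], 0.0
    for name, weight, size in ranked:
        if used + size <= disk_budget:
            chosen.append(name)
            used += size
    return chosen

# Example: replicate the hottest tables into a 500 MB edge cache.
catalog = [("orders", 120.0, 300), ("users", 80.0, 150), ("logs", 5.0, 400)]
print(select_tables(catalog, disk_budget=500))  # -> ['users', 'orders']
```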


Neural Networks | 2011

Design of a multiple kernel learning algorithm for LS-SVM by convex programming

Ling Jian; Zhonghang Xia; Xijun Liang; Chuanhou Gao

As a kernel-based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is effective for selecting a single kernel and the regularization parameter; however, it incurs heavy computational cost and does not extend flexibly to multiple kernels. In this paper, we address multiple kernel learning for LS-SVM by formulating it as a semidefinite program (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which yields an automatic model selection process. Extensive experimental validations are performed and analyzed.
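
For concreteness, the sketch below trains an LS-SVM on a fixed convex combination of kernel matrices by solving the standard LS-SVM linear system. The SDP step that learns the combination weights and the regularization parameter is the paper's contribution and is not reproduced here; the weights and widths below are assumptions.

```python
# Minimal LS-SVM sketch with a fixed convex combination of kernels.
# The paper optimizes the kernel weights (and gamma) via SDP; here they
# are simply given.
import numpy as np

def lssvm_train(kernels, mu, y, gamma):
    """Solve the LS-SVM dual linear system for the combined kernel
    K = sum_k mu_k * K_k (mu on the simplex)."""
    K = sum(m * Kk for m, Kk in zip(mu, kernels))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return alpha, b  # decision: sign(sum_i alpha_i * K(x_i, x) + b)

# Toy usage with two RBF kernels of different widths on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.sign(X[:, 0])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
kernels = [np.exp(-d2 / s) for s in (1.0, 10.0)]
alpha, b = lssvm_train(kernels, mu=[0.5, 0.5], y=y, gamma=10.0)
```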


ACM Southeast Regional Conference | 2006

Support vector machines for collaborative filtering

Zhonghang Xia; Yulin Dong; Guangming Xing

Support Vector Machines (SVMs) have proven effective in many areas, such as text categorization. Although recommendation systems share many similarities with text categorization, SVM performance in recommendation systems is unsatisfactory because of the sparsity of the user-item matrix. In this paper, we propose a heuristic method that improves the predictive accuracy of SVMs by repeatedly correcting the missing values in the user-item matrix. We compare its performance against other algorithms; the experimental studies show that our heuristic method achieves the highest accuracy.
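
A minimal sketch of the repeated-correction idea, assuming binary (0/1) ratings: impute the user-item matrix, train an SVM per target user on the items that user rated, overwrite the missing entries with SVM predictions, and repeat. The feature construction and stopping rule are simplifications, not the paper's exact procedure.

```python
# Hedged sketch of iterative missing-value correction for SVM-based CF.
import numpy as np
from sklearn.svm import SVC

def iterative_svm_cf(R, n_rounds=3):
    """R: user-item matrix with np.nan for missing ratings (0/1 likes).
    Returns a densified copy of R."""
    filled = np.where(np.isnan(R), 0.5, R)  # crude initial imputation
    for _ in range(n_rounds):
        for u in range(R.shape[0]):
            rated = ~np.isnan(R[u])
            if rated.all() or len(np.unique(R[u][rated])) < 2:
                continue  # nothing to predict, or degenerate labels
            # An item's features: every other user's (imputed) rating of it.
            others = np.delete(filled, u, axis=0).T
            clf = SVC(kernel="rbf").fit(others[rated], R[u][rated])
            filled[u, ~rated] = clf.predict(others[~rated])
    return filled

R = np.array([[1, 0, np.nan], [1, np.nan, 0], [np.nan, 0, 1]], dtype=float)
print(iterative_svm_cf(R))
```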


IEEE Transactions on Automation Science and Engineering | 2012

Constructing Multiple Kernel Learning Framework for Blast Furnace Automation

Ling Jian; Chuanhou Gao; Zhonghang Xia

This paper constructs a reproducing kernel Hilbert space framework for multiple kernel learning, which provides clear insight into why multiple kernel support vector machines (SVMs) outperform single kernel SVMs. These results serve as a fundamental guide for explaining the superiority of multiple kernel over single kernel learning. The constructed multiple kernel learning algorithms are then applied to model a nonlinear blast furnace system based only on its input-output signals. The experimental results not only confirm the superiority of the multiple kernel learning algorithms but also indicate that multiple kernel SVM is a highly competitive data-driven modeling method for the blast furnace system and can give operators reliable indications for taking control actions.
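
The sketch below illustrates the input-output modeling setup only: lagged (NARX-style) features built from the signals, fitted with kernel ridge regression on a fixed convex combination of kernels. The paper uses multiple kernel SVMs with learned weights; the signal names, lag depth, and weights here are assumptions.

```python
# Hedged sketch of data-driven modeling from furnace input-output signals.
import numpy as np

def lagged_features(u, y, lags=3):
    """Stack past inputs u and outputs y to predict the next output."""
    rows = [np.concatenate((u[t - lags:t], y[t - lags:t]))
            for t in range(lags, len(y))]
    return np.array(rows), y[lags:]

def mk_ridge_fit(X, y, widths=(1.0, 5.0), mu=(0.5, 0.5), lam=1e-2):
    """Kernel ridge regression on a fixed combination of RBF kernels."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = sum(m * np.exp(-d2 / w) for m, w in zip(mu, widths))
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return alpha  # predict: sum_i alpha_i * K(x_i, x_new)

# Toy signals standing in for, e.g., blast volume (input) and silicon
# content (output); real furnace measurements would replace these.
t = np.linspace(0, 10, 200)
u = np.sin(t)
y = np.roll(u, 2) * 0.8 + 0.05 * np.random.default_rng(1).normal(size=200)
X, target = lagged_features(u, y)
alpha = mk_ridge_fit(X, target)
```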


IEEE Transactions on Parallel and Distributed Systems | 2005

A distributed admission control model for QoS assurance in large-scale media delivery systems

Zhonghang Xia; Wei Hao; I-Ling Yen; Peng Li

Conventional admission control models incur several performance penalties. First, admission control computation can overload a server that is already heavily loaded. Also, in large-scale media systems with geographically distributed server clusters, performing admission control at each cluster can result in long response latency if a client request is denied at one site and must be forwarded to another. Furthermore, in prefix caching, initial frames cached at the proxy are delivered to the client before the admission decision is made; if the media server is heavily loaded and ultimately has to deny the request, forwarding a large number of initial frames wastes critical network resources. In this paper, a novel distributed admission control model is presented. We use proxy servers to perform the admission control tasks, with each proxy hosting an agent to coordinate the effort. Agents reserve media servers' disk bandwidth and make admission decisions autonomously based on their allocated disk bandwidth. We develop an effective game-theoretic framework to achieve fairness in the bandwidth allocation among the agents. To improve overall bandwidth utilization, we also consider an aggressive admission control policy in which each agent may admit more requests than its allocated bandwidth allows. The distributed approach solves the problems stated above for conventional admission control models. Experimental studies show that our algorithms significantly reduce response latency and media server load.
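
A minimal sketch of the proxy-side mechanism: each agent admits requests locally against its allocated disk bandwidth, with an optional over-admission factor for the aggressive policy. The equal split and the 10% factor below are assumptions; the paper derives the allocation from its game-theoretic fairness framework.

```python
# Hedged sketch of autonomous admission control at proxy agents.

class ProxyAgent:
    def __init__(self, allocated_bw, aggressive=1.0):
        self.allocated_bw = allocated_bw  # reserved media-server disk bandwidth
        self.aggressive = aggressive      # >1.0 admits beyond the allocation
        self.used_bw = 0.0

    def admit(self, stream_bw):
        """Decide locally, with no round trip to the media server."""
        if self.used_bw + stream_bw <= self.allocated_bw * self.aggressive:
            self.used_bw += stream_bw
            return True
        return False

    def release(self, stream_bw):
        self.used_bw = max(0.0, self.used_bw - stream_bw)

# Three proxies sharing 900 units of server disk bandwidth equally.
agents = [ProxyAgent(900.0 / 3, aggressive=1.1) for _ in range(3)]
print(agents[0].admit(250.0))  # True: 250 <= 300 * 1.1
```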


Electronic Commerce Research | 2007

Preference update for e-commerce applications: Model, language, and processing

Peng Li; Manghui Tu; I-Ling Yen; Zhonghang Xia

In e-commerce application systems, customers are likely to issue requests based on out-of-date information, which greatly increases transaction failure rates. In this paper, we present a preference update model to address this problem. A preference update is an extended SQL update statement in which a user can request a desired number of target data items by specifying multiple preferences. Moreover, preference updates allow criteria to be extracted easily from a set of concurrent requests, so optimal data-assignment decisions can be made. We propose a group evaluation strategy for preference update processing in a multidatabase environment. The experimental results show that group evaluation can effectively increase customer satisfaction at acceptable cost.
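
To make the idea concrete, here is a hypothetical rendering of a preference update and a toy group-evaluation step. The extended-SQL surface syntax and the greedy assignment are illustrative assumptions, not the paper's exact grammar or algorithm.

```python
# Hypothetical preference update: the customer asks for 2 seats,
# preferring aisle over window.
request = """
UPDATE seats SET owner = 'alice'
WHERE flight = 'KY101' AND owner IS NULL
PREFERRING position = 'aisle' PRIOR TO position = 'window'
LIMIT 2
"""

# Group evaluation sketch: gather concurrent requests, then assign scarce
# rows so higher-ranked preferences are satisfied first across the group.
def group_assign(requests, inventory):
    """requests: list of (customer, [pref1, pref2, ...], count);
    inventory: dict preference -> available count."""
    result = {}
    for customer, prefs, count in requests:
        got = []
        for p in prefs:
            while count > 0 and inventory.get(p, 0) > 0:
                inventory[p] -= 1
                got.append(p)
                count -= 1
        result[customer] = got
    return result

print(group_assign([("alice", ["aisle", "window"], 2),
                    ("bob", ["aisle"], 1)],
                   {"aisle": 2, "window": 3}))
```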


Journal of Proteome Research | 2013

A Novel Algorithm for Validating Peptide Identification from a Shotgun Proteomics Search Engine

Ling Jian; Xinnan Niu; Zhonghang Xia; Parimal Samir; C. Sumanasekera; Z. Mu; Jennifer L. Jennings; K. L. Hoek; T. Allos; L. M. Howard; Kathryn M. Edwards; P. A. Weil; Andrew J. Link

Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomic analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to peptides by a search engine that compares the experimental MS/MS data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, search engines often fail to distinguish correct from incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate, using a minimal number of scoring outputs from the SEQUEST search engine. The algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data-refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized De-Noise according to the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared with other methods used to process the peptide-sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can easily be implemented with other search engines.
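
A hedged sketch of the three-step flow on peptide-spectrum matches (PSMs) follows. The feature names (XCorr, deltaCn) are typical SEQUEST outputs used here as assumptions, the trained SVM stands in for the paper's decision function, and the fully-tryptic test is a simplification of its proteolytic-pattern step.

```python
# Illustrative De-Noise-style pipeline, not the published implementation.
import numpy as np
from sklearn.svm import LinearSVC

def denoise(psms, labels):
    """psms: list of dicts with 'xcorr', 'delta_cn', and 'peptide' (with
    flanking residues, e.g. 'K.ELVISLIVESK.R'); labels: 1 for target-like
    PSMs, 0 for decoy-like ones, used to train the decision function."""
    # Step 1: data cleaning - drop obviously unusable matches.
    idx = [i for i, p in enumerate(psms) if p["xcorr"] > 0.5]
    X = np.array([[psms[i]["xcorr"], psms[i]["delta_cn"]] for i in idx])
    y = np.asarray(labels)[idx]
    # Step 2: SVM-based decision function on the scoring attributes.
    clf = LinearSVC().fit(X, y)
    kept = [psms[i] for i, ok in zip(idx, clf.predict(X)) if ok]
    # Step 3: proteolytic-pattern refinement - keep fully tryptic peptides
    # (preceded by K/R or a protein terminus, ending in K/R).
    def tryptic(pep):
        prev, core, _ = pep.split(".")
        return prev in ("K", "R", "-") and core[-1] in ("K", "R")
    return [p for p in kept if tryptic(p["peptide"])]
```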


ACM Southeast Regional Conference | 2007

X2R: a system for managing XML documents and key constraints using RDBMS

Guangming Xing; Zhonghang Xia; Douglas Ayers

We describe X2R, an XML document management system that supports efficient storage, retrieval, and key constraints for XML documents. The system is based on a mapping algorithm that translates a DTD to a relational schema; based on this mapping, node groups are range-indexed and shredded into the database. Queries execute several times faster than with existing methods in the literature, and space usage is significantly reduced. We also add key support by propagating key constraints on XML documents to keys in the relational schema. The system was designed for efficient access to the vast amount of environmental data that is widely available in XML format.
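
The sketch below shows the range-indexing idea behind shredding: each node gets a (start, end) interval so ancestor-descendant queries become simple range predicates in the RDBMS. The row layout is hypothetical; X2R derives its actual relational schema from the DTD.

```python
# Interval labeling of XML nodes for range-indexed shredding.
import xml.etree.ElementTree as ET

def shred(root):
    """Return rows (tag, start, end, depth) with pre/post interval labels."""
    rows, clock = [], 0
    def visit(elem, depth):
        nonlocal clock
        clock += 1
        start = clock
        slot = len(rows)
        rows.append(None)  # reserve a row; fill once 'end' is known
        for child in elem:
            visit(child, depth + 1)
        clock += 1
        rows[slot] = (elem.tag, start, clock, depth)
    visit(root, 0)
    return rows

doc = ET.fromstring("<site><river><name>Green</name></river></site>")
for tag, s, e, d in shred(doc):
    print(tag, s, e, d)
# Node Y is a descendant of X iff X.start < Y.start AND Y.end < X.end,
# which maps directly onto a BETWEEN predicate over one relational table.
```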


International Conference on Image Processing | 2008

Detecting image points of diverse imbalance

Qi Li; Zhonghang Xia

An imbalance-oriented selection scheme was recently proposed to select good candidate interest points [1]. In this paper, we propose a method to quantify the local diversity of imbalance at an image point, which yields a new interest-strength assignment scheme. We test the proposed approach with repeatability evaluation and stereo matching, and obtain promising results.
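
As a heavily hedged illustration of the flavor of such a measure (the paper's actual definition is not reproduced here), one plausible reading is: split a pixel's neighborhood in half along several orientations, measure the intensity imbalance of each split, and take the spread of those imbalances as the interest strength.

```python
# Illustrative only: one possible "diversity of imbalance" measure.
import numpy as np

def imbalance_diversity(img, y, x, r=3):
    """Spread of half-neighborhood intensity imbalances over orientations
    at pixel (y, x); assumes the pixel is at least r from the border."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    scores = []
    for theta in np.linspace(0, np.pi, 8, endpoint=False):
        side = ys * np.cos(theta) + xs * np.sin(theta) > 0  # half-plane mask
        scores.append(abs(patch[side].mean() - patch[~side].mean()))
    return np.std(scores)  # high spread -> diverse imbalance
```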


ACM Symposium on Applied Computing | 2007

Building automatic mapping between XML documents using approximate tree matching

Guangming Xing; Zhonghang Xia; Andrew Ernest

The eXtensible Markup Language (XML) is becoming the standard format for data exchange on the Internet, providing interoperability among Web applications, so efficient algorithms and tools for manipulating the XML documents that are ubiquitous on the Web are important. In this paper, we present a novel system for automating the transformation of XML documents based on structural mapping, under the restriction that the leaf text is identical in the source and target documents. First, a tree edit distance algorithm finds the mapping between a pair of source and target documents; introducing tree partitioning significantly improves the efficiency of this matching algorithm. Second, template rules for the transformation are inferred from the mapping by generalization. Third, a template-matching component processes new documents. Experimental studies show that our methods are very promising and can be widely applied to Web document cleaning, information filtering, and other applications.
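
The sketch below covers the first step only: computing a tree edit distance between two XML structures, here via the third-party `zss` package (Zhang-Shasha). The paper's partition-based speedup, mapping extraction, and template-rule inference are not reproduced.

```python
# Tree edit distance between two XML document structures.
import xml.etree.ElementTree as ET
from zss import Node, simple_distance  # pip install zss

def to_zss(elem):
    """Convert an ElementTree node into a zss tree of tag names."""
    node = Node(elem.tag)
    for child in elem:
        node.addkid(to_zss(child))
    return node

src = ET.fromstring("<doc><title/><body><p/><p/></body></doc>")
tgt = ET.fromstring("<doc><head><title/></head><body><p/></body></doc>")
print(simple_distance(to_zss(src), to_zss(tgt)))  # edit operations needed
```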

Collaboration


An overview of Zhonghang Xia's collaborations.

Top Co-Authors

Xijun Liang (China University of Petroleum)
I-Ling Yen (University of Texas at Dallas)
Ling Jian (China University of Petroleum)
Guangming Xing (Western Kentucky University)
Manghui Tu (Purdue University Calumet)
Peng Li (University of Texas at Dallas)
Qi Li (Western Kentucky University)
Wei Hao (Northern Kentucky University)