Chunfeng Yuan
Nanjing University
Publication
Featured research published by Chunfeng Yuan.
Journal of Parallel and Distributed Computing | 2014
Rong Gu; Xiaoliang Yang; Jinshuang Yan; Yuanhao Sun; Bing Wang; Chunfeng Yuan; Yihua Huang
As a widely used parallel computing framework for big data processing today, the Hadoop MapReduce framework puts more emphasis on high throughput of data than on low latency of job execution. However, more and more big data applications developed with MapReduce now require quick response times. As a result, improving the performance of MapReduce jobs, especially short jobs, is of great practical significance and has attracted increasing attention from both academia and industry. Many efforts have been made to improve the performance of Hadoop at the job-scheduling or job-parameter-optimization level. In this paper, we explore an approach that improves the performance of the Hadoop MapReduce framework by optimizing the job and task execution mechanism. First, by analyzing the job and task execution mechanism of the MapReduce framework, we reveal two critical limitations on job execution performance. We then propose two major optimizations to the MapReduce job and task execution mechanisms: first, we optimize the setup and cleanup tasks of a MapReduce job to reduce the time cost during the initialization and termination stages of the job; second, instead of adopting the loose heartbeat-based communication mechanism to transmit all messages between the JobTracker and TaskTrackers, we introduce an instant messaging communication mechanism to accelerate performance-sensitive task scheduling and execution. Finally, we implement SHadoop, an optimized and fully compatible version of Hadoop that aims to shorten the execution time of MapReduce jobs, especially short jobs. Experimental results show that, compared to standard Hadoop, SHadoop achieves a stable performance improvement of around 25% on average across comprehensive benchmarks without losing scalability or speedup. Our optimization work has passed a production-level test at Intel and has been integrated into the Intel Distributed Hadoop (IDH). To the best of our knowledge, this work is the first effort to optimize the execution mechanism inside the map/reduce tasks of a job. Its advantage is that it can complement job scheduling optimizations to further improve job execution performance.
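The abstract contrasts heartbeat-based message delivery with instant (push) notification. The toy Python sketch below illustrates why that matters for short jobs: with polling, an event is only observed on the next heartbeat, so delivery latency can approach a full heartbeat interval, whereas an event-driven listener is woken immediately. This is an illustration only; all names are made up and none of it is Hadoop or SHadoop code.

```python
# Illustrative sketch: heartbeat-style polling vs. instant (push) notification
# for delivering a "task completed" event. Not Hadoop/SHadoop code.
import threading
import time

def heartbeat_listener(event_box, interval_s, result):
    """Poll the shared event_box every `interval_s` seconds (heartbeat model)."""
    while event_box["message"] is None:
        time.sleep(interval_s)              # wait for the next heartbeat
    result["latency"] = time.time() - event_box["sent_at"]

def instant_listener(event, event_box, result):
    """Block on an event and wake up as soon as the message is pushed."""
    event.wait()                            # woken immediately when the message arrives
    result["latency"] = time.time() - event_box["sent_at"]

def run(model):
    event_box = {"message": None, "sent_at": None}
    result = {}
    event = threading.Event()
    if model == "heartbeat":
        t = threading.Thread(target=heartbeat_listener, args=(event_box, 3.0, result))
    else:
        t = threading.Thread(target=instant_listener, args=(event, event_box, result))
    t.start()
    time.sleep(0.5)                         # the "JobTracker" produces an event a bit later
    event_box["sent_at"] = time.time()
    event_box["message"] = "task_completed"
    event.set()
    t.join()
    return result["latency"]

if __name__ == "__main__":
    print(f"heartbeat delivery latency: {run('heartbeat'):.2f}s")  # up to one interval
    print(f"instant   delivery latency: {run('instant'):.4f}s")    # near zero
```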
International Parallel and Distributed Processing Symposium | 2014
Hongjian Qiu; Rong Gu; Chunfeng Yuan; Yihua Huang
Frequent itemset mining (FIM) is one of the most important techniques for extracting knowledge from data in many real-world applications. The Apriori algorithm is the most widely used algorithm for mining frequent itemsets from a transactional dataset. However, the FIM process is both data-intensive and computing-intensive: on one hand, large-scale datasets are common in data mining today; on the other hand, to generate valid results the algorithm has to scan the dataset iteratively many times. These factors make FIM very time-consuming over big data. Parallel and distributed computing is an effective and widely used strategy for speeding up algorithms over large-scale datasets. However, the existing parallel Apriori algorithms implemented with the MapReduce model are not efficient enough for iterative computation. In this paper, we propose YAFIM (Yet Another Frequent Itemset Mining), a parallel Apriori algorithm based on the Spark RDD framework, an in-memory parallel computing model specially designed to support iterative algorithms and interactive data mining. Experimental results show that, compared with algorithms implemented with MapReduce, YAFIM achieves an 18× speedup on average across various benchmarks. In particular, we applied YAFIM to a real-world medical application to explore relationships in medicine, where it outperforms the MapReduce method by around 25×.
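To make the "cached transaction RDD plus broadcast candidates" idea concrete, here is a minimal PySpark sketch of Apriori in that style. It is illustrative only: the dataset, threshold, and structure are assumptions, not the YAFIM implementation.

```python
# Minimal PySpark Apriori sketch: the transaction RDD stays cached in memory
# and candidate itemsets are broadcast to the workers for support counting.
from pyspark import SparkContext

sc = SparkContext(appName="apriori-sketch")
min_support = 2

# Cached transaction RDD; each transaction is a frozenset of items.
transactions = sc.parallelize([
    {"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"}, {"a", "b", "c", "e"},
]).map(frozenset).cache()

# Pass 1: frequent 1-itemsets.
frequent = (transactions.flatMap(lambda t: [(frozenset([i]), 1) for i in t])
                        .reduceByKey(lambda x, y: x + y)
                        .filter(lambda kv: kv[1] >= min_support)
                        .keys().collect())

k = 2
while frequent:
    # Candidate generation on the driver, then broadcast to every worker.
    candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
    bc = sc.broadcast(candidates)
    # Support counting: each worker checks which broadcast candidates it contains.
    frequent = (transactions.flatMap(lambda t: [(c, 1) for c in bc.value if c <= t])
                            .reduceByKey(lambda x, y: x + y)
                            .filter(lambda kv: kv[1] >= min_support)
                            .keys().collect())
    print(f"frequent {k}-itemsets:", [sorted(i) for i in frequent])
    k += 1
```

Because the transaction RDD is cached once and reused in every pass, the repeated scans that make MapReduce-based Apriori slow become in-memory operations, which is the point the abstract makes.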
International Conference on Cloud and Green Computing | 2012
Jinshuang Yan; Xiaoliang Yang; Rong Gu; Chunfeng Yuan; Yihua Huang
Hadoop MapReduce is a widely used parallel computing framework for solving data-intensive problems. To process large-scale datasets, the fundamental design of standard Hadoop places more emphasis on high data throughput than on job execution performance. This causes a performance limitation when Hadoop MapReduce is used to execute short jobs that require quick responses. To speed up the execution of short jobs, this paper proposes optimization methods that improve the execution performance of MapReduce jobs. We make three major optimizations: first, we reduce the time cost during the initialization and termination stages of a job by optimizing its setup and cleanup tasks; second, we replace the pull-model task assignment mechanism with a push model; third, we replace the heartbeat-based communication mechanism with an instant messaging mechanism for event notifications between the JobTracker and TaskTrackers. Experimental results show that the job execution performance of our improved version of Hadoop is about 23% faster on average than standard Hadoop for our test application.
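The second optimization replaces pull-model task assignment with a push model. The toy sketch below shows the difference: a pull worker only asks for work on its next heartbeat, while a push worker is handed a task the moment the master enqueues it. It is a hypothetical illustration, not the paper's scheduler.

```python
# Toy pull-vs-push task assignment: the "task" carries its creation time so we
# can measure how long it waits before a worker picks it up.
import queue
import threading
import time

HEARTBEAT_INTERVAL = 3.0

def pull_worker(task_q, done):
    while True:
        time.sleep(HEARTBEAT_INTERVAL)      # only ask for work on a heartbeat
        try:
            task = task_q.get_nowait()
        except queue.Empty:
            continue
        done.append(time.time() - task)     # assignment latency
        return

def push_worker(task_q, done):
    task = task_q.get()                     # blocks; woken as soon as a task is pushed
    done.append(time.time() - task)

def measure(worker_fn):
    task_q, done = queue.Queue(), []
    t = threading.Thread(target=worker_fn, args=(task_q, done))
    t.start()
    time.sleep(0.5)                         # a short job's map task becomes ready
    task_q.put(time.time())
    t.join()
    return done[0]

if __name__ == "__main__":
    print(f"pull (heartbeat) assignment latency: {measure(pull_worker):.2f}s")
    print(f"push             assignment latency: {measure(push_worker):.4f}s")
```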
International Parallel and Distributed Processing Symposium | 2014
Lei Jin; Zhaokang Wang; Rong Gu; Chunfeng Yuan; Yihua Huang
As a new area of machine learning research, deep learning has attracted a lot of attention from the research community and may bring human beings to a higher cognitive level of data. Its unsupervised pre-training step can find high-dimensional representations or abstract features that work much better than those of the principal component analysis (PCA) method. However, deep learning faces problems when applied to large-scale data because of the intensive computation required by its many levels of training; sequential deep learning algorithms usually cannot finish the computation in an acceptable time. In this paper, we propose a parallel many-core algorithm for Intel Xeon Phi many-core systems that speeds up the unsupervised training of the Sparse Autoencoder and the Restricted Boltzmann Machine (RBM). Using the sequential training algorithm as a baseline, we adopt several optimization methods to parallelize the algorithm. The experimental results show that our fully optimized algorithm gains a more-than-300-fold speedup for the parallelized Sparse Autoencoder on the Intel Xeon Phi coprocessor compared with the original sequential algorithm. We also ran the fully optimized code on both the Intel Xeon Phi coprocessor and an expensive Intel Xeon CPU; for this application, the Xeon Phi version is 7 to 10 times faster than the Xeon CPU version. In addition, we compared our fully optimized code on the Intel Xeon Phi with a Matlab implementation running on a single Intel Xeon CPU; the Xeon Phi version runs 16 times faster. These results suggest that, compared to GPUs, the Intel Xeon Phi offers an efficient yet more general-purpose way to parallelize deep learning algorithms, and that it achieves higher speed and better parallelism than the Intel Xeon CPU.
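The kernel being parallelized here is, at its core, batched dense matrix arithmetic. The NumPy sketch below shows one batched training step of a sparse autoencoder so the structure of that computation is visible; vectorization over the mini-batch stands in for the many-core parallelism. Hyperparameters, shapes, and names are assumptions for illustration, not the paper's code.

```python
# One batched gradient step of a sparse autoencoder (illustrative sketch).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_step(X, W1, b1, W2, b2, lr=0.1, rho=0.05, beta=3.0):
    """One gradient step on a mini-batch X of shape (n_samples, n_visible)."""
    m = X.shape[0]
    # Forward pass over the whole batch: one big matrix multiply per layer.
    H = sigmoid(X @ W1 + b1)              # hidden activations, (m, n_hidden)
    Y = sigmoid(H @ W2 + b2)              # reconstruction,     (m, n_visible)
    # Sparsity penalty: push the mean hidden activation towards rho.
    rho_hat = H.mean(axis=0)
    sparsity_grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    # Backward pass.
    delta_out = (Y - X) * Y * (1 - Y)
    delta_hid = (delta_out @ W2.T + sparsity_grad) * H * (1 - H)
    W2 -= lr * (H.T @ delta_out) / m
    b2 -= lr * delta_out.mean(axis=0)
    W1 -= lr * (X.T @ delta_hid) / m
    b1 -= lr * delta_hid.mean(axis=0)
    return float(np.mean((Y - X) ** 2))

rng = np.random.default_rng(0)
n_visible, n_hidden = 64, 16
W1 = rng.normal(scale=0.1, size=(n_visible, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_visible)); b2 = np.zeros(n_visible)
X = rng.random((256, n_visible))
for epoch in range(5):
    print("reconstruction error:", sparse_autoencoder_step(X, W1, b1, W2, b2))
```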
International Parallel and Distributed Processing Symposium | 2015
Rong Gu; Shanyong Wang; Fangfang Wang; Chunfeng Yuan; Yihua Huang
In the era of big data, the volume of semantic data is growing rapidly. Large-scale semantic data contains a lot of significant but often implicit information that has to be derived by reasoning, and semantic data reasoning is a challenging process. On one hand, traditional single-node reasoning systems can hardly cope with such large amounts of data because of resource limitations. On the other hand, existing large-scale reasoning systems are not very efficient or scalable because of the complexity of the reasoning process. In this paper, we propose Cichlid, an efficient distributed reasoning engine for the widely used RDFS and OWL Horst rule sets. Cichlid is built on top of Spark and implements parallel reasoning algorithms with the Spark RDD programming model. We optimize the parallel RDFS reasoning algorithm in three aspects: the data partition model, the execution order of reasoning rules, and the removal of duplicate data. For the parallel OWL reasoning process, we optimize its most time-consuming parts, including large-scale data joins, transitive closure computation, and equivalence relation computation. In addition to these optimizations at the reasoning algorithm level, we also optimize the internal Spark execution mechanism by proposing an off-heap memory storage mechanism for RDDs; this system-level optimization patch has been accepted and integrated into Apache Spark 1.0. The experimental results show that Cichlid is around 10 times faster on average than state-of-the-art distributed reasoning systems on both large-scale synthetic and real-world benchmarks. The proposed reasoning algorithms and engine also achieve excellent scalability and fault tolerance.
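To illustrate the flavor of rule-based RDFS reasoning on RDDs, here is a minimal PySpark sketch of one rule (rdfs9: from (s, rdf:type, x) and (x, rdfs:subClassOf, y) derive (s, rdf:type, y)). Because schema triples are usually small, the subClassOf closure is computed on the driver and broadcast, which is one common form of the data-partition optimization the abstract mentions. The data and structure are illustrative assumptions, not the Cichlid implementation.

```python
# One RDFS rule (rdfs9) applied in parallel over instance triples.
from pyspark import SparkContext

sc = SparkContext(appName="rdfs-sketch")

triples = sc.parallelize([
    ("alice",   "rdf:type",        "Student"),
    ("bob",     "rdf:type",        "Professor"),
    ("Student", "rdfs:subClassOf", "Person"),
    ("Person",  "rdfs:subClassOf", "Agent"),
]).cache()

# The schema part (subClassOf) is tiny: compute its transitive closure on the
# driver and broadcast it to all workers.
sub_class = {}
for s, p, o in triples.filter(lambda t: t[1] == "rdfs:subClassOf").collect():
    sub_class.setdefault(s, set()).add(o)
changed = True
while changed:
    changed = False
    for s, supers in list(sub_class.items()):
        new = set().union(*(sub_class.get(x, set()) for x in supers)) - supers
        if new:
            supers |= new
            changed = True
bc = sc.broadcast(sub_class)

# rdfs9 applied in parallel over the instance triples.
inferred = (triples.filter(lambda t: t[1] == "rdf:type")
                   .flatMap(lambda t: [(t[0], "rdf:type", c)
                                       for c in bc.value.get(t[2], set())])
                   .distinct())
print(sorted(inferred.collect()))
# [('alice', 'rdf:type', 'Agent'), ('alice', 'rdf:type', 'Person')]
```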
International Symposium on Parallel Architectures, Algorithms and Programming | 2011
Xiaoliang Yang; Yulong Liu; Chunfeng Yuan; Yihua Huang
Sequence alignment is of great importance in biology research, and BLAST is a sequence alignment tool used extensively by researchers. However, the continuously increasing amount of sequence data to be processed presents many challenges to it. This paper gives a simple and effective approach to parallelizing BLAST using the MapReduce technique. The resulting MapReduce-BLAST shows very good performance and scales nearly linearly with the database size and query length. This results from both the power of MapReduce and the inherently parallel characteristics of the BLAST algorithm. Sequence alignment algorithms based on techniques similar to BLAST's seed-and-extend approach are well suited to parallelization with MapReduce.
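A common way to realize this pattern is a Hadoop Streaming mapper that runs BLAST against a locally stored database segment and lets the reducer merge hits per query. The sketch below is hypothetical: it assumes each input line names a pre-distributed database segment and that the NCBI `blastn` binary and a query file are available on every node; it is not the paper's implementation.

```python
#!/usr/bin/env python3
# Hypothetical Hadoop Streaming mapper: run BLAST on one database segment,
# emit (query id, hit line) pairs for a reducer to merge and re-rank.
import subprocess
import sys

QUERY_FILE = "/tmp/query.fa"   # assumed to be shipped to every node

def main():
    for line in sys.stdin:
        db_segment = line.strip()
        if not db_segment:
            continue
        # Align the query against this node's local database segment.
        out = subprocess.run(
            ["blastn", "-db", db_segment, "-query", QUERY_FILE, "-outfmt", "6"],
            capture_output=True, text=True, check=True).stdout
        for hit in out.splitlines():
            fields = hit.split("\t")
            # Key by query id so the reducer sees all of a query's hits together.
            print(f"{fields[0]}\t{hit}")

if __name__ == "__main__":
    main()
```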
Knowledge-Based Systems | 2015
Shengsheng Shi; Chengfei Liu; Yi Shen; Chunfeng Yuan; Yihua Huang
A Web database typically responds to a query with a Web page that encodes the query results into semi-structured data objects using HTML tags. We call such data objects Web data records, or simply data records. Mining Web data records is very important for many applications, e.g., meta search and comparison shopping. This paper proposes a new, effective approach called AutoRM, which mines data records from a single Web page automatically. AutoRM involves three major steps: (1) constructing the DOM tree of the given Web page; (2) mining all sets of adjacent similar C-Records (Candidate data Records) from the constructed DOM tree; and (3) mining the actual data records from the C-Records. In many Web pages, similar data records are embedded in larger, adjacent similar objects. Existing approaches typically identify such objects as data records; AutoRM instead views them as C-Records and mines the actual data records from them. One key issue in mining similar data records is detecting the boundary of each data record. Existing approaches typically rely on brittle assumptions to handle this issue; by making more robust assumptions, AutoRM tends to detect data record boundaries more accurately. Experimental results show that AutoRM is highly effective and outperforms state-of-the-art approaches.
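The "adjacent similar objects" idea can be shown in a few lines: build a DOM tree, describe each child subtree by its tag sequence, and group runs of adjacent siblings whose signatures are similar. This sketch illustrates the general technique only, not the AutoRM algorithm itself; the page and threshold are made up.

```python
# Group adjacent, structurally similar sibling subtrees (candidate records).
import difflib
import xml.etree.ElementTree as ET

PAGE = """<div id="results">
  <table><tr><td>ad banner</td></tr></table>
  <div class="item"><h3>Record 1</h3><p>price</p><p>desc</p></div>
  <div class="item"><h3>Record 2</h3><p>price</p><p>desc</p></div>
  <div class="item"><h3>Record 3</h3><p>price</p></div>
  <div class="footer"><a>next page</a></div>
</div>"""

def signature(node):
    """Flatten a subtree into its pre-order tag sequence."""
    return [node.tag] + [t for child in node for t in signature(child)]

def similar(a, b, threshold=0.6):
    return difflib.SequenceMatcher(None, signature(a), signature(b)).ratio() >= threshold

root = ET.fromstring(PAGE)
children = list(root)

# Collect maximal runs of adjacent, pairwise-similar siblings.
groups, current = [], [children[0]]
for prev, node in zip(children, children[1:]):
    if similar(prev, node):
        current.append(node)
    else:
        groups.append(current)
        current = [node]
groups.append(current)

for g in groups:
    if len(g) > 1:   # a run of similar siblings marks a candidate record region
        print("candidate records:", [list(n.itertext())[0] for n in g])
```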
International Symposium on Parallel Architectures, Algorithms and Programming | 2011
Tao Xiao; Chunfeng Yuan; Yihua Huang
Many algorithms have been proposed in past decades to efficiently mine frequent itemsets in transaction databases, including the SON algorithm proposed by Savasere, Omiecinski and Navathe. This paper introduces the SON algorithm, explains why it is well suited to parallelization, and illustrates how to adapt it to the MapReduce paradigm. We then propose a parallelized SON algorithm, PSON, and implement it in Hadoop. Our study suggests that PSON can mine frequent itemsets from a very large database with good performance. The experimental results show that the time cost of frequent itemset mining increases almost linearly with the size of the dataset and decreases approximately linearly with the number of cluster nodes. We conclude that PSON solves the frequent itemset mining problem over massive datasets with good scalability and speedup.
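SON parallelizes naturally because it has two clean phases: mine locally frequent itemsets inside each data chunk (map), take the union of those local results as the global candidates, and then count each candidate's true support over all chunks (a second map/reduce pass). The plain-Python sketch below walks through both phases; the dataset and threshold are toy assumptions, and the functions only simulate the MapReduce phases rather than reproduce PSON.

```python
# Two-phase SON sketch: local frequent itemsets per chunk, then exact counting.
from collections import Counter
from itertools import combinations

transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"},
                {"a", "b"}, {"b", "d"}, {"a", "b", "c"}, {"c", "d"}]
min_support = 4                                  # global support threshold
chunks = [transactions[:4], transactions[4:]]    # simulate HDFS splits

def local_frequent(chunk, global_threshold, total_size):
    """Map phase: mine one chunk with a proportionally scaled threshold."""
    threshold = global_threshold * len(chunk) / total_size
    counts = Counter(frozenset(c)
                     for t in chunk
                     for k in range(1, len(t) + 1)
                     for c in combinations(sorted(t), k))
    return {itemset for itemset, n in counts.items() if n >= threshold}

# Phase 1: the union of locally frequent itemsets is the global candidate set.
candidates = set().union(*(local_frequent(c, min_support, len(transactions))
                           for c in chunks))

# Phase 2: count every candidate's exact support across all chunks.
support = Counter(c for t in transactions for c in candidates if c <= t)
frequent = {c: n for c, n in support.items() if n >= min_support}
print({tuple(sorted(c)): n for c, n in frequent.items()})
```

The key property that makes this correct is that any globally frequent itemset must be locally frequent in at least one chunk, so phase 1 never misses a true answer.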
Web Information Systems Engineering | 2004
Zhiyu Liu; Guihai Chen; Chunfeng Yuan; Sanglu Lu; Chengzhong Xu
A fundamental problem confronting structured peer-to-peer systems that use DHT technologies to map data onto nodes is how the network performs when a large percentage of nodes join and fail frequently and simultaneously. A careful examination of some typical peer-to-peer networks contributes a lot to choosing and using a suitable topology for particular applications. This paper analyzes the performance of Chord [7] and Koorde [2] and finds the crash point of each network through simulation experiments.
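A toy version of such a crash-point experiment is easy to simulate: build a Chord-like ring with finger tables, fail a growing fraction of nodes at once, and measure how many lookups still succeed when routing simply gives up on a crashed next hop. The sketch below uses that simplification; parameters and routing details are illustrative assumptions, not the paper's simulator.

```python
# Toy crash-point simulation for a Chord-like ring.
import random

random.seed(1)
M = 10                                  # identifier bits: ring of size 2^M
NUM_NODES, LOOKUPS = 128, 500
RING = 2 ** M

node_ids = sorted(random.sample(range(RING), NUM_NODES))

def successor(ids, key):
    """First node id clockwise from key on the identifier circle."""
    for n in ids:
        if n >= key:
            return n
    return ids[0]

# Finger tables built once on the full (pre-crash) ring.
fingers = {n: [successor(node_ids, (n + 2 ** i) % RING) for i in range(M)]
           for n in node_ids}

def lookup(alive, start, key):
    """Greedy Chord-style routing; fails if it must hop to a crashed node."""
    target = successor(sorted(alive), key)
    current = start
    for _ in range(2 * M):              # generous hop bound
        if current == target:
            return True
        # Closest finger that makes clockwise progress without passing the key.
        best = None
        for f in fingers[current]:
            if (f - current) % RING <= (key - current) % RING:
                if best is None or (f - current) % RING > (best - current) % RING:
                    best = f
        if best is None:
            best = fingers[current][0]  # final hop: the immediate successor
        if best not in alive:
            return False                # next hop crashed: the lookup fails
        current = best
    return current == target

for failure_rate in (0.0, 0.2, 0.4, 0.6, 0.8):
    alive = set(random.sample(node_ids, int(NUM_NODES * (1 - failure_rate))))
    ok = sum(lookup(alive, random.choice(sorted(alive)), random.randrange(RING))
             for _ in range(LOOKUPS))
    print(f"{failure_rate:.0%} nodes failed -> {ok / LOOKUPS:.1%} lookups succeed")
```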
Advanced Data Mining and Applications | 2014
Wei Ge; Yihua Huang; Di Zhao; Shengmei Luo; Chunfeng Yuan; Wenhui Zhou; Yun Tang; Juan Zhou
We are now entering the era of big data. HBase organizes data as key-value pairs and supports fast queries on rowkeys, but queries on non-rowkey columns are a blind spot of HBase. The main topic of this paper is providing high-performance query capability on non-rowkey columns. We propose an effective secondary index model and implement the prototype system CinHBa. Furthermore, a novel caching policy, the Hotscore algorithm, is introduced in CinHBa to cache the hottest index data in memory and improve query performance. Experimental evaluation shows that the query response time of CinHBa is far less than that of native HBase without a secondary index on 10M records. Besides that, CinHBa exhibits good data scalability.
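The core idea, a secondary index mapping a non-rowkey column value to the rowkeys that contain it, plus a small frequency-based cache of the hottest index entries, can be sketched in a few lines. This is a minimal in-memory illustration under assumed names; it does not reproduce CinHBa's on-HBase index layout or the details of its Hotscore policy.

```python
# Minimal secondary index with a hot-score cache over its entries.
from collections import defaultdict

class SecondaryIndex:
    def __init__(self, cache_capacity=2):
        self.index = defaultdict(set)       # column value -> rowkeys (the "index table")
        self.cache = {}                     # hot subset of the index kept in memory
        self.hot_score = defaultdict(int)   # per-value query frequency
        self.cache_capacity = cache_capacity

    def put(self, rowkey, value):
        self.index[value].add(rowkey)
        if value in self.cache:             # keep cached entries consistent
            self.cache[value].add(rowkey)

    def query(self, value):
        """Look up rowkeys by a non-rowkey column value, caching hot values."""
        self.hot_score[value] += 1
        if value in self.cache:
            return self.cache[value]        # served from memory
        rowkeys = self.index.get(value, set())  # otherwise read the index table
        # Admit this value into the cache if it is hotter than the coldest entry.
        if len(self.cache) < self.cache_capacity:
            self.cache[value] = set(rowkeys)
        else:
            coldest = min(self.cache, key=lambda v: self.hot_score[v])
            if self.hot_score[value] > self.hot_score[coldest]:
                del self.cache[coldest]
                self.cache[value] = set(rowkeys)
        return rowkeys

idx = SecondaryIndex()
for rowkey, city in [("r1", "Nanjing"), ("r2", "Beijing"), ("r3", "Nanjing")]:
    idx.put(rowkey, city)
for _ in range(3):
    print(idx.query("Nanjing"))             # repeated queries keep "Nanjing" hot
```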