Rong Gu
Nanjing University
Publication
Featured research published by Rong Gu.
Journal of Parallel and Distributed Computing | 2014
Rong Gu; Xiaoliang Yang; Jinshuang Yan; Yuanhao Sun; Bing Wang; Chunfeng Yuan; Yihua Huang
As a widely used parallel computing framework for big data processing today, the Hadoop MapReduce framework puts more emphasis on high throughput of data than on low latency of job execution. However, more and more big data applications developed with MapReduce now require quick response times. As a result, improving the performance of MapReduce jobs, especially short jobs, is of great practical significance and has attracted increasing attention from both academia and industry. Many efforts have been made to improve Hadoop's performance at the job scheduling or job parameter optimization level. In this paper, we explore an approach to improving the performance of the Hadoop MapReduce framework by optimizing the job and task execution mechanism. First, by analyzing the job and task execution mechanism of the MapReduce framework, we reveal two critical limitations on job execution performance. We then propose two major optimizations to the MapReduce job and task execution mechanisms: first, we optimize the setup and cleanup tasks of a MapReduce job to reduce the time spent in the initialization and termination stages of the job; second, instead of adopting the loose heartbeat-based communication mechanism to transmit all messages between the JobTracker and TaskTrackers, we introduce an instant messaging communication mechanism to accelerate performance-sensitive task scheduling and execution. Finally, we implement SHadoop, an optimized and fully compatible version of Hadoop that aims at shortening the execution time of MapReduce jobs, especially short jobs. Experimental results show that, compared to standard Hadoop, SHadoop achieves a stable performance improvement of around 25% on average on comprehensive benchmarks without losing scalability or speedup. Our optimization work has passed a production-level test at Intel and has been integrated into the Intel Distributed Hadoop (IDH). To the best of our knowledge, this work is the first effort to optimize the execution mechanism inside the map/reduce tasks of a job. Its advantage is that it can complement job scheduling optimizations to further improve job execution performance.
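The key change described above is replacing heartbeat polling with event-driven delivery between the JobTracker and TaskTrackers. The following Scala sketch is purely illustrative (not SHadoop's code; all names are invented) and contrasts the two messaging models:

```scala
// Hypothetical sketch: heartbeat polling vs. event-driven push for task events.
import java.util.concurrent.LinkedBlockingQueue

case class TaskEvent(taskId: String, status: String)

object MessagingModels {
  // Heartbeat model: the scheduler learns about events only on the next poll,
  // so every state change pays up to one heartbeat interval of extra latency.
  def heartbeatLoop(poll: () => Seq[TaskEvent], intervalMs: Long): Unit = {
    while (true) {
      poll().foreach(e => println(s"[heartbeat] ${e.taskId} -> ${e.status}"))
      Thread.sleep(intervalMs) // events wait here, hurting short jobs most
    }
  }

  // Instant-messaging model: each event is pushed as it happens and the
  // scheduler reacts immediately, removing the fixed polling delay.
  private val events = new LinkedBlockingQueue[TaskEvent]()
  def push(e: TaskEvent): Unit = events.put(e)
  def instantLoop(): Unit = {
    while (true) {
      val e = events.take() // blocks until an event arrives, no fixed interval
      println(s"[instant] ${e.taskId} -> ${e.status}")
    }
  }
}
```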
International Parallel and Distributed Processing Symposium | 2014
Hongjian Qiu; Rong Gu; Chunfeng Yuan; Yihua Huang
Frequent itemset mining (FIM) is one of the most important techniques for extracting knowledge from data in many real-world applications. The Apriori algorithm is a widely used algorithm for mining frequent itemsets from a transactional dataset. However, the FIM process is both data-intensive and computing-intensive: on one hand, large-scale datasets are now the norm in data mining; on the other hand, the algorithm must scan the dataset iteratively many times to generate valid results. This makes FIM very time-consuming over big data. Parallel and distributed computing is an effective and widely used strategy for speeding up algorithms on large-scale datasets. However, existing parallel Apriori algorithms implemented with the MapReduce model are not efficient enough for iterative computation. In this paper, we propose YAFIM (Yet Another Frequent Itemset Mining), a parallel Apriori algorithm based on the Spark RDD framework, an in-memory parallel computing model specially designed to support iterative algorithms and interactive data mining. Experimental results show that, compared with algorithms implemented with MapReduce, YAFIM achieves an 18× speedup on average across various benchmarks. In particular, we apply YAFIM to a real-world medical application to explore relationships among medicines; it outperforms the MapReduce-based method by around 25 times.
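As an illustration of the RDD-based Apriori idea (a minimal sketch, not YAFIM's implementation; dataset, threshold, and names are invented), one candidate-counting pass can be written with Spark RDD operations as follows:

```scala
// Illustrative Apriori pass on Spark RDDs: candidate k-itemsets are broadcast,
// each transaction counts the candidates it contains, and counts are aggregated.
import org.apache.spark.{SparkConf, SparkContext}

object AprioriPassSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("apriori-pass").setMaster("local[*]"))
    val transactions = sc.parallelize(Seq(
      Set("a", "b", "c"), Set("a", "c"), Set("b", "c"), Set("a", "b", "c")))
    val minSupport = 2

    // Candidate 2-itemsets (in a full run these are generated from frequent 1-itemsets).
    val candidates = Seq(Set("a", "b"), Set("a", "c"), Set("b", "c"))
    val bcCandidates = sc.broadcast(candidates)

    val frequent = transactions
      .flatMap(t => bcCandidates.value.filter(_.subsetOf(t)).map(c => (c, 1)))
      .reduceByKey(_ + _)
      .filter { case (_, count) => count >= minSupport }

    frequent.collect().foreach { case (itemset, count) =>
      println(s"${itemset.mkString(",")} -> $count")
    }
    sc.stop()
  }
}
```

Because the candidate set stays in memory across iterations, repeated passes avoid the per-iteration disk I/O that makes MapReduce-based Apriori slow.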
International Conference on Cloud and Green Computing | 2012
Jinshuang Yan; Xiaoliang Yang; Rong Gu; Chunfeng Yuan; Yihua Huang
Hadoop MapReduce is a widely used parallel computing framework for solving data-intensive problems. To process large-scale datasets, the fundamental design of standard Hadoop places more emphasis on high data throughput than on job execution performance. This causes a performance limitation when Hadoop MapReduce is used to execute short jobs that require quick responses. To speed up the execution of short jobs, this paper proposes optimization methods to improve the execution performance of MapReduce jobs. We make three major optimizations: first, we reduce the time spent in the initialization and termination stages of a job by optimizing its setup and cleanup tasks; second, we replace the pull-model task assignment mechanism with a push model; third, we replace the heartbeat-based communication mechanism with an instant messaging communication mechanism for event notifications between the JobTracker and TaskTrackers. Experimental results show that the job execution performance of our improved version of Hadoop is about 23% faster on average than standard Hadoop for our test application.
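A rough sketch of the push-model task assignment mentioned above (our own illustration, not the paper's code; all class names are invented) could look like this:

```scala
// Push-model task assignment: instead of workers pulling tasks on their next
// heartbeat, the master pushes a task to an idle worker as soon as one exists.
import scala.collection.mutable

case class Task(id: Int)
trait Worker { def assign(t: Task): Unit }

class PushScheduler {
  private val pendingTasks = mutable.Queue[Task]()
  private val idleWorkers  = mutable.Queue[Worker]()

  // Called when a job submits work; dispatches immediately if a worker is idle.
  def submit(t: Task): Unit = synchronized {
    if (idleWorkers.nonEmpty) idleWorkers.dequeue().assign(t)
    else pendingTasks.enqueue(t)
  }

  // Called when a worker finishes; it receives new work without waiting to poll.
  def workerIdle(w: Worker): Unit = synchronized {
    if (pendingTasks.nonEmpty) w.assign(pendingTasks.dequeue())
    else idleWorkers.enqueue(w)
  }
}
```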
International Conference on Big Data | 2013
Rong Gu; Furao Shen; Yihua Huang
Artificial neural networks (ANNs) have been successfully applied in a variety of pattern recognition and data mining applications. However, training ANNs on large-scale datasets is both data-intensive and computation-intensive, so large-scale ANNs are often used with reservation because of the time-consuming training required to reach high precision. In this paper, we present cNeural, a customized parallel computing platform for accelerating the training of large-scale neural networks with the backpropagation algorithm. Unlike many existing parallel neural network training systems that work on thousands of training samples, cNeural is designed for fast training on large-scale datasets with millions of training samples. To achieve this goal, cNeural first adopts HBase for large-scale training dataset storage and parallel loading. Second, it provides a parallel in-memory computing framework for fast iterative training. Third, it uses a compact, event-driven messaging communication model instead of a heartbeat polling model for instant message delivery. Experimental results show that the overhead of data loading and message communication in cNeural is very low and that cNeural is around 50 times faster than a solution based on Hadoop MapReduce. It also achieves nearly linear scalability and excellent load balancing.
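The data-parallel training pattern behind a platform like cNeural can be illustrated with a minimal sketch (not cNeural's code; the single linear unit and all names below are simplifications of full backpropagation):

```scala
// Data-parallel gradient computation: samples are split into partitions, each
// partition computes a local gradient, and the master averages them per step.
object DataParallelBackprop {
  type Sample = (Array[Double], Double) // (features, label)

  // Squared-error gradient for one linear unit; stands in for the per-sample
  // backpropagation step of a full multi-layer network.
  def localGradient(weights: Array[Double], part: Seq[Sample]): Array[Double] = {
    val grad = new Array[Double](weights.length)
    for ((x, y) <- part) {
      val pred = weights.zip(x).map { case (w, xi) => w * xi }.sum
      val err  = pred - y
      for (i <- weights.indices) grad(i) += err * x(i)
    }
    grad
  }

  def step(weights: Array[Double], partitions: Seq[Seq[Sample]], lr: Double): Array[Double] = {
    // In a system like cNeural, each partition's gradient is computed on a
    // different worker in memory; here they are computed sequentially.
    val total = partitions.map(p => localGradient(weights, p))
      .reduce((a, b) => a.zip(b).map { case (x, y) => x + y })
    val n = partitions.map(_.size).sum.toDouble
    weights.zip(total).map { case (w, g) => w - lr * g / n }
  }
}
```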
International Parallel and Distributed Processing Symposium | 2014
Lei Jin; Zhaokang Wang; Rong Gu; Chunfeng Yuan; Yihua Huang
As a new area of machine learning research, deep learning has attracted a lot of attention from the research community. It may bring human understanding of data to a higher cognitive level. Its unsupervised pre-training step allows us to find high-dimensional representations or abstract features that work much better than those produced by principal component analysis (PCA). However, it faces problems when applied to large-scale data because of the intensive computation required by the many levels of its training process; sequential deep learning algorithms usually cannot finish the computation in an acceptable time. In this paper, we propose a parallel algorithm for Intel Xeon Phi many-core systems to speed up the unsupervised training of the Sparse Autoencoder and the Restricted Boltzmann Machine (RBM). Using the sequential training algorithm as a baseline, we adopt several optimization methods to parallelize the algorithm. The experimental results show that our fully optimized algorithm gains a more than 300-fold speedup for the parallelized Sparse Autoencoder compared with the original sequential algorithm on the Intel Xeon Phi coprocessor. We also ran the fully optimized code on both the Intel Xeon Phi coprocessor and an expensive Intel Xeon CPU; on this application, the Intel Xeon Phi version is 7 to 10 times faster than the Intel Xeon CPU. In addition, we compared our fully optimized code on the Intel Xeon Phi with a Matlab implementation running on a single Intel Xeon CPU; the Intel Xeon Phi version runs 16 times faster than the Matlab implementation. The results also suggest that the Intel Xeon Phi offers an efficient but more general-purpose way to parallelize deep learning algorithms compared to GPUs, and that it achieves faster speed with better parallelism than the Intel Xeon CPU.
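The computation being parallelized is dominated by dense matrix products in the pre-training passes. A conceptual Scala sketch of one sparse autoencoder reconstruction pass is shown below (the paper's implementation targets the Xeon Phi with many-core and vectorization optimizations, not Scala; this only indicates where the hot loops are):

```scala
// One forward-and-reconstruct pass of a sparse autoencoder. The two
// matrix-vector products are the hot spots that a many-core version
// spreads across cores and vector units.
object AutoencoderPass {
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  // x: input vector, w1: hidden-by-input weights, w2: input-by-hidden weights
  def reconstruct(x: Array[Double],
                  w1: Array[Array[Double]],
                  w2: Array[Array[Double]]): Array[Double] = {
    val hidden = w1.map(row => sigmoid(row.zip(x).map { case (w, xi) => w * xi }.sum))
    w2.map(row => sigmoid(row.zip(hidden).map { case (w, hi) => w * hi }.sum))
  }
}
```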
International Parallel and Distributed Processing Symposium | 2015
Rong Gu; Shanyong Wang; Fangfang Wang; Chunfeng Yuan; Yihua Huang
In the era of big data, the volume of semantic data is growing rapidly. Large-scale semantic data contains a lot of significant but often implicit information that must be derived by reasoning, and semantic data reasoning is a challenging process. On one hand, traditional single-node reasoning systems can hardly cope with such a large amount of data due to resource limitations. On the other hand, existing large-scale reasoning systems are not very efficient or scalable due to the complexity of the reasoning process. In this paper, we propose Cichlid, an efficient distributed reasoning engine for the widely used RDFS and OWL Horst rule sets. Cichlid is built on top of Spark and implements parallel reasoning algorithms with the Spark RDD programming model. We further optimize the parallel RDFS reasoning algorithm in three aspects: the data partition model, the execution order of reasoning rules, and the removal of duplicate data. For the parallel OWL reasoning process, we optimize the most time-consuming parts, including large-scale data joins, transitive closure computation, and equivalence relation computation. In addition to these optimizations at the reasoning-algorithm level, we also optimize the internal Spark execution mechanism by proposing an off-heap memory storage mechanism for RDDs; this system-level optimization patch has been accepted and integrated into Apache Spark 1.0. Experimental results show that Cichlid is around 10 times faster on average than state-of-the-art distributed reasoning systems on both large-scale synthetic and real-world benchmarks. The proposed reasoning algorithms and engine also achieve excellent scalability and fault tolerance.
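A minimal sketch of how one RDFS rule maps onto Spark RDD operations (our illustration under the common broadcast-schema assumption, not Cichlid's actual code; the data is invented) is shown below for rule rdfs9, which propagates rdf:type along rdfs:subClassOf:

```scala
// rdfs9: (s rdf:type C1) and (C1 rdfs:subClassOf C2) => (s rdf:type C2).
// The small schema triples are broadcast, so the rule becomes a map over the
// large instance-triple RDD instead of a shuffle join.
import org.apache.spark.{SparkConf, SparkContext}

object Rdfs9Sketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rdfs9").setMaster("local[*]"))

    // (subject, class) pairs from rdf:type triples
    val typeTriples = sc.parallelize(Seq(("alice", "GradStudent"), ("bob", "Professor")))
    // subClassOf closure, assumed small enough to broadcast
    val subClassOf = Map("GradStudent" -> Seq("Student", "Person"), "Professor" -> Seq("Person"))
    val bcSchema = sc.broadcast(subClassOf)

    val derived = typeTriples.flatMap { case (s, c) =>
      bcSchema.value.getOrElse(c, Seq.empty).map(superClass => (s, superClass))
    }.distinct() // duplicate elimination, one of the costs the paper optimizes

    derived.collect().foreach { case (s, c) => println(s"$s rdf:type $c") }
    sc.stop()
  }
}
```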
International Conference on Big Data | 2014
Rong Gu; Wei Hu; Yihua Huang
In the big data era, the ever-increasing RDF data has reached a scale of billions of triples, posing obstacles and challenges to single-node RDF data stores. As a result, many distributed RDF stores have emerged in the Semantic Web community recently. However, currently published systems are either not efficient enough or fail to achieve flexible scalability. In this paper, we propose Rainbow, a scalable and efficient distributed RDF triple store. The RDF data indexing scheme in Rainbow is a hybrid one designed from a statistical analysis of the user query space. To better support the hybrid indexing scheme, Rainbow adopts a distributed and hierarchical storage architecture that uses HBase as the scalable persistent storage and combines it with a distributed in-memory store to speed up query processing. The in-memory RDF data is partitioned with consistent hashing to achieve dynamic scalability. Experiments show that Rainbow outperforms typical existing distributed RDF triple stores, with excellent scalability and fault tolerance.
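The consistent-hashing placement used for the in-memory tier can be sketched as follows (a generic ring with virtual nodes; node names, replica count, and hash function are our own assumptions, not Rainbow's code):

```scala
// Consistent-hashing ring: each key maps to the first virtual node clockwise,
// so adding or removing a server moves only a small fraction of the keys.
import java.util.TreeMap

class ConsistentHashRing(nodes: Seq[String], replicas: Int = 100) {
  private val ring = new TreeMap[Int, String]()
  nodes.foreach(addNode)

  private def hash(key: String): Int = key.hashCode & Int.MaxValue

  def addNode(node: String): Unit =
    (0 until replicas).foreach(i => ring.put(hash(s"$node#$i"), node))

  def removeNode(node: String): Unit =
    (0 until replicas).foreach(i => ring.remove(hash(s"$node#$i")))

  // Look up which server should hold a given triple key.
  def nodeFor(key: String): String = {
    val entry = ring.ceilingEntry(hash(key))
    if (entry != null) entry.getValue else ring.firstEntry().getValue
  }
}
```

For example, `new ConsistentHashRing(Seq("mem-1", "mem-2", "mem-3")).nodeFor("subject42")` returns the server responsible for that key; adding "mem-4" later redistributes only about a quarter of the keys.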
International Conference on Advanced Cloud and Big Data | 2013
Wenhui Zhou; Chunfeng Yuan; Rong Gu; Yihua Huang
Large-scale approximate k-nearest neighbor search is an important and very useful technique for many multimedia retrieval applications. Most existing search algorithms use centralized indexing approaches and thus cannot meet the needs of searching over large-scale datasets. This paper proposes an efficient, distributed approximate k-nearest neighbor search algorithm over a billion high-dimensional visual descriptors. We propose a randomized partitioning strategy and then design a two-layer distributed indexing scheme based on a neighborhood graph for large-scale k-nearest neighbor search. The experimental results show that our method achieves excellent performance and scalability.
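The basic partition-then-merge pattern behind distributed k-nearest neighbor search can be sketched as below (a brute-force illustration only; the paper's method additionally uses randomized partitioning and a two-layer neighborhood-graph index, and the data here is invented):

```scala
// Each partition keeps only its k best candidates, then the candidates are
// merged globally, so the full dataset never has to be shuffled or collected.
import org.apache.spark.{SparkConf, SparkContext}

object DistributedKnnSketch {
  def euclidean(a: Array[Double], b: Array[Double]): Double =
    math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("knn-sketch").setMaster("local[*]"))
    val k = 3
    val bcQuery = sc.broadcast(Array(0.1, 0.2))

    val points = sc.parallelize(Seq(
      Array(0.0, 0.0), Array(0.1, 0.1), Array(0.5, 0.4), Array(0.9, 0.8), Array(0.2, 0.2)))

    val topK = points
      .map(p => (euclidean(p, bcQuery.value), p))
      .mapPartitions(it => it.toSeq.sortBy(_._1).take(k).iterator) // local top-k per partition
      .sortBy(_._1)                                                // merge surviving candidates
      .take(k)

    topK.foreach { case (d, p) => println(f"dist=$d%.3f point=${p.mkString(",")}") }
    sc.stop()
  }
}
```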
International Conference on Big Data | 2015
Rong Gu; Yun Tang; Zhaokang Wang; Shuai Wang; Xusen Yin; Chunfeng Yuan; Yihua Huang
Matrix computation is the core of many massive data-intensive analytical applications such as social network mining, recommendation systems, and natural language processing. Due to its importance, matrix computation has been widely studied for many years. In the big data era, as the scale of matrices grows, traditional single-node matrix computation systems can hardly cope with such large data and computation, and existing distributed matrix computation solutions are still not efficient enough or suffer from poor fault tolerance and usability. In this paper, we propose Marlin, an efficient distributed matrix computation library built on top of Spark. Marlin contains several distributed matrix operation algorithms and provides high-level matrix computation primitives for users. In Marlin, we propose three distributed matrix multiplication algorithms for different situations and, based on these, design an adaptive model that chooses the best approach for a given problem. Moreover, to improve computation performance, instead of using Spark naively, we put forward several optimizations, including taking advantage of a native linear algebra library, reducing shuffle communication, and increasing parallelism. Experimental results show that Marlin is over an order of magnitude faster than R (a widely used statistical computing system) and than existing distributed matrix operation algorithms based on MapReduce. Moreover, Marlin achieves performance comparable to the specialized MPI-based matrix multiplication algorithm SUMMA while using a general dataflow engine and gaining common dataflow features such as scalability and fault tolerance.
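One standard distributed approach in this space is block-partitioned matrix multiplication over RDDs. The following is a simplified sketch of that pattern (not Marlin's implementation; blocking, data, and names are illustrative):

```scala
// Block matrix multiplication: A-blocks and B-blocks are joined on their shared
// dimension index, multiplied locally, and partial products per output block
// are summed with reduceByKey.
import org.apache.spark.{SparkConf, SparkContext}

object BlockMatMulSketch {
  type Block = Array[Array[Double]]

  def multiply(a: Block, b: Block): Block =
    Array.tabulate(a.length, b(0).length) { (i, j) =>
      (0 until b.length).map(k => a(i)(k) * b(k)(j)).sum
    }

  def add(a: Block, b: Block): Block =
    Array.tabulate(a.length, a(0).length) { (i, j) => a(i)(j) + b(i)(j) }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("block-matmul").setMaster("local[*]"))
    // ((rowBlock, colBlock), block) entries for A and B; 1x1 blocking for brevity
    val aBlocks = sc.parallelize(Seq(((0, 0), Array(Array(1.0, 2.0), Array(3.0, 4.0)))))
    val bBlocks = sc.parallelize(Seq(((0, 0), Array(Array(5.0, 6.0), Array(7.0, 8.0)))))

    val cBlocks = aBlocks.map { case ((i, k), blk) => (k, (i, blk)) }
      .join(bBlocks.map { case ((k, j), blk) => (k, (j, blk)) })
      .map { case (_, ((i, aBlk), (j, bBlk))) => ((i, j), multiply(aBlk, bBlk)) }
      .reduceByKey(add)

    cBlocks.collect().foreach { case ((i, j), blk) =>
      println(s"C($i,$j) = ${blk.map(_.mkString(" ")).mkString(" | ")}")
    }
    sc.stop()
  }
}
```

In a library such as Marlin, the local `multiply` call is the step that can be delegated to a native linear algebra library, and the choice of blocking determines the shuffle cost that the adaptive model trades off.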
International Conference on Algorithms and Architectures for Parallel Processing | 2015
Zhaokang Wang; Shiqing Fan; Rong Gu; Chunfeng Yuan; Yihua Huang
R is a widely used statistical programming language in the data science community. In the big data era, however, R faces challenges from large-scale data analysis tasks: it lacks distributed linear algebra computation in its local interactive shell. In this paper, we propose iPLAR, a system that runs in the interactive R environment, wraps a high-performance parallel linear algebra library, and provides a group of easy-to-use interfaces. iPLAR adopts a client-server model to decouple the interactive shell from the ScaLAPACK/MPI distributed computing backend. In addition, it provides R users with a group of parallelism-transparent interfaces similar to the native R linear algebra interfaces. We evaluate the efficiency of iPLAR with representative basic matrix operations and two widely used machine learning algorithms. Experimental results show that iPLAR achieves near-linear data scalability and extends the interactive processing capability of R to large problem scales.
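The client-server decoupling can be sketched with a handle-based protocol like the one below (an illustration written in Scala; iPLAR's client is R and its backend is ScaLAPACK/MPI, and all names here are invented):

```scala
// The interactive front end holds only lightweight handles and ships named
// operations to a long-lived compute backend, so large distributed matrices
// never pass through the shell unless explicitly fetched.
case class MatrixHandle(id: Long, rows: Int, cols: Int)

sealed trait Request
case class Create(rows: Int, cols: Int)               extends Request
case class Multiply(a: MatrixHandle, b: MatrixHandle) extends Request
case class Fetch(a: MatrixHandle)                     extends Request

class ComputeBackend {
  private var nextId = 0L
  private val store = scala.collection.mutable.Map[Long, Array[Array[Double]]]()

  def handle(req: Request): Any = req match {
    case Create(r, c) =>
      nextId += 1; store(nextId) = Array.ofDim[Double](r, c); MatrixHandle(nextId, r, c)
    case Multiply(a, b) => // a real backend would run the distributed multiply here
      nextId += 1; store(nextId) = Array.ofDim[Double](a.rows, b.cols); MatrixHandle(nextId, a.rows, b.cols)
    case Fetch(a) => store(a.id) // only an explicit fetch moves data to the client
  }
}
```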