Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Qinghua Lu is active.

Publication


Featured research published by Qinghua Lu.


Science in China Series F: Information Sciences | 2014

An OSGi-based flexible and adaptive pervasive cloud infrastructure

Weishan Zhang; Licheng Chen; Xin Liu; Qinghua Lu; Peiying Zhang; Su Yang

Different computing paradigms, such as cloud, pervasive, and mobile computing, are converging. This convergence creates unprecedented complexities: a huge number of computing devices, the need for flexibility and adaptation of the service infrastructure (infrastructure elasticity) to fit the dynamics of large smart-city applications, and expectations of powerful computing and storage capabilities on handheld devices. A supporting infrastructure is therefore needed that can flexibly switch services at run time and can enhance the capabilities of small devices through component/service migration. In this paper, we propose an elastic Open Service Gateway initiative (OSGi)-based pervasive cloud (OSGi-PC) infrastructure that exploits both the computing capabilities of the cloud and the component flexibility of OSGi. OSGi-PC provides flexible management of component migrations between small devices and more powerful nodes, which remains a critical challenge for enabling mobile clouds. We have evaluated OSGi-PC in terms of performance for adaptive service provision, power consumption during service adaptation, and performance and power consumption for component migrations in different scenarios; the results show the usability of OSGi-PC.
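The migration decision the abstract describes can be illustrated with a small sketch. This is not the OSGi-PC implementation (which works at the level of OSGi bundles); it is a hypothetical planner that offloads the heaviest components from an overloaded small device to powerful nodes with spare capacity. All names and numbers are invented for illustration.

```python
# Hypothetical sketch of an offloading planner: move components off a small
# device when its load exceeds capacity, choosing targets with enough headroom.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    cpu_demand: float   # fraction of one core
    mem_demand: float   # MB

@dataclass
class Node:
    name: str
    cpu_free: float
    mem_free: float

def plan_migrations(device_load, capacity, components, powerful_nodes):
    """Return (component, target node) pairs that bring the device under capacity."""
    plan = []
    load = device_load
    # migrate the heaviest components first until the device fits its capacity
    for comp in sorted(components, key=lambda c: c.cpu_demand, reverse=True):
        if load <= capacity:
            break
        target = next((n for n in powerful_nodes
                       if n.cpu_free >= comp.cpu_demand
                       and n.mem_free >= comp.mem_demand), None)
        if target:
            plan.append((comp.name, target.name))
            target.cpu_free -= comp.cpu_demand
            target.mem_free -= comp.mem_demand
            load -= comp.cpu_demand
    return plan
```

In OSGi-PC the migrated unit would be an OSGi bundle moved over the network; here the planner only decides what moves where.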


IEEE Software | 2016

Building Pipelines for Heterogeneous Execution Environments for Big Data Processing

Dongyao Wu; Liming Zhu; Xiwei Xu; Sherif Sakr; Daniel Sun; Qinghua Lu

Many real-world data-analysis scenarios require pipelining and integrating multiple (big) data-processing and data-analytics jobs, which often execute in heterogeneous environments such as MapReduce, Spark, or R, Python, or Bash scripts. Such a pipeline requires much glue code to move data across environments, and maintaining and evolving these pipelines is difficult. Pipeline frameworks that try to solve such problems are usually built on top of a single environment and might require rewriting the original jobs against a new API or paradigm. The Pipeline61 framework supports building data pipelines that span heterogeneous execution environments. It reuses the existing code of jobs already deployed in different environments and provides version control and dependency management to address typical software-engineering issues. A real-world case study shows its effectiveness. This article is part of a special issue on Software Engineering for Big Data Systems.
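The core idea, wrapping jobs from different environments behind one uniform stage interface so they compose without per-pair glue code, can be sketched as follows. This is not Pipeline61's actual API (which also provides version control and dependency management); the `Stage` class and the lambdas are invented for illustration.

```python
# Hypothetical sketch: a uniform Stage interface over heterogeneous jobs.
# A real framework would dispatch each stage to Spark, MapReduce, or a shell;
# here every "environment" is simulated by a plain Python function.

class Stage:
    def __init__(self, name, env, fn):
        self.name, self.env, self.fn = name, env, fn

    def run(self, data):
        # dispatch point: in a real system, `env` selects the execution backend
        return self.fn(data)

def run_pipeline(stages, data):
    # each stage consumes the previous stage's output, regardless of its backend
    for stage in stages:
        data = stage.run(data)
    return data

pipeline = [
    Stage("clean",  "python", lambda rows: [r.strip() for r in rows]),
    Stage("filter", "spark",  lambda rows: [r for r in rows if r]),
    Stage("count",  "bash",   lambda rows: len(rows)),
]
```

The point is that composing stages needs no knowledge of which environment each one runs in; that is exactly the glue code the framework eliminates.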


ubiquitous computing | 2015

A video cloud platform combining online and offline cloud computing technologies

Weishan Zhang; Liang Xu; Pengcheng Duan; Wenjuan Gong; Qinghua Lu; Su Yang

With the rapid growth of video data from sources such as security and transportation surveillance, there are requirements for both online real-time analysis and offline batch processing of large-scale video data. Existing video-processing systems fall short of addressing many challenges in large-scale video processing, for example, performance, data storage, and fault tolerance. Emerging cloud computing and big data techniques shed light on intelligent processing of large-scale video data. This paper proposes a general cloud-based architecture and platform, named BiF (Batch processing Integrated with Fast processing), that provides a robust solution for intelligent analysis and storage of video data. We have implemented the BiF architecture using Hadoop and Storm, typical offline batch-processing and online real-time processing cloud platforms, respectively. The proposed architecture can handle continual surveillance video data effectively: real-time analysis, batch processing, distributed storage, and cloud services are seamlessly integrated to meet the requirements of video-data processing and management. The evaluations show that the proposed approach is efficient in terms of performance, storage, and fault tolerance.
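The batch-plus-fast integration can be illustrated with a minimal sketch: queries are answered by merging a complete but slowly recomputed batch view (Hadoop's role in the paper) with a small real-time view over events that arrived since the last batch run (Storm's role). The event schema and counts below are invented for illustration.

```python
# Minimal sketch of batch/fast integration: a merged query over two views.
from collections import Counter

def batch_view(events):
    # recomputed periodically over all historical data (the "batch" side)
    return Counter(e["label"] for e in events)

def realtime_view(stream):
    # updated per event since the last batch run (the "fast" side)
    return Counter(e["label"] for e in stream)

def query(batch, realtime, label):
    # a merged answer covers both historical and just-arrived data
    return batch.get(label, 0) + realtime.get(label, 0)
```

In the platform itself the two views would live in distributed storage and be maintained by Hadoop jobs and Storm topologies, respectively; the merge-at-query-time idea is the same.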


IEEE Software | 2016

A Deep-Intelligence Framework for Online Video Processing

Weishan Zhang; Liang Xu; Zhongwei Li; Qinghua Lu; Yan Liu

Video data has become the largest source of big data. Owing to video data's complexity, velocity, and volume, public-security and other surveillance applications require efficient, intelligent runtime video processing. To address these challenges, the proposed framework combines two cloud-computing technologies: Storm stream processing and Hadoop batch processing. It uses deep learning to realize deep intelligence that can help reveal knowledge hidden in video data. An implementation of this framework combines five architecture styles: service-oriented architecture, publish-subscribe, the Shared Data pattern, MapReduce, and a layered architecture. Evaluations of performance, scalability, and fault tolerance showed the framework's effectiveness. This article is part of a special issue on Software Engineering for Big Data Systems.
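One of the five styles named above is publish-subscribe; a minimal in-process sketch shows how analysis results could be routed to interested consumers without the producer knowing who they are. The broker, topic names, and event fields are invented for illustration and are not the paper's implementation.

```python
# Hypothetical in-process publish-subscribe broker: producers publish events
# to topics; subscribers registered on a topic receive each event.

class Broker:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers.get(topic, []):
            handler(event)

broker = Broker()
alerts = []
broker.subscribe("video/anomaly", alerts.append)   # e.g. a security dashboard
broker.publish("video/anomaly", {"camera": 3, "score": 0.97})
```

In the framework this decoupling is what lets stream-processing components feed dashboards and storage independently.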


international conference on cloud computing | 2013

Incorporating Uncertainty into In-Cloud Application Deployment Decisions for Availability

Qinghua Lu; Xiwei Xu; Liming Zhu; Len Bass; Zhanwen Li; Sherif Sakr; Paul L. Bannerman; Anna Liu

Cloud consumers have a variety of deployment-related techniques, such as auto-scaling policies and recovery strategies, for dealing with uncertainties in the cloud. Uncertainties can be characterized as stochastic (such as failures, disasters, and workload spikes) or subjective (such as the choice among deployment options), and cloud consumers must consider both. Analytic support for consumers in selecting appropriate techniques and setting the required parameters in the face of different types of uncertainty is currently limited. In this paper, we propose a set of application-availability analysis models that capture subjective uncertainties in addition to stochastic ones. We built and validated the models using industry best practices on deployment and actual commercial products for disaster recovery and live migration. Our results show that the models permit more informed and quantitative availability analysis than industry best practices under a wide range of scenarios.
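The flavor of such analysis can be illustrated with a toy Monte Carlo availability estimate, not the paper's models: stochastic uncertainty enters as random failure times, and subjective uncertainty as the choice between two deployment options whose failure/recovery parameters are assumed here for illustration.

```python
# Illustrative Monte Carlo availability estimate (assumed parameters, not the
# paper's models): failures arrive with exponential inter-arrival times (MTBF),
# each outage lasts MTTR hours, and availability is averaged over many runs.
import random

def simulate_availability(mtbf_h, mttr_h, horizon_h, runs=2000, seed=7):
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < horizon_h:
            ttf = rng.expovariate(1.0 / mtbf_h)      # time to next failure
            up += min(ttf, horizon_h - t)            # uptime until failure/horizon
            t += ttf + mttr_h                        # outage lasts MTTR hours
        up_total += up / horizon_h
    return up_total / runs

# Subjective choice between deployment options (hypothetical numbers):
single   = simulate_availability(mtbf_h=500, mttr_h=2.0, horizon_h=720)
multi_az = simulate_availability(mtbf_h=500, mttr_h=0.1, horizon_h=720)
```

A faster-recovering option (e.g. failover across zones) yields higher simulated availability over a one-month horizon; the paper's models quantify such comparisons far more rigorously.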


integrated network management | 2011

Support for concurrent adaptation of multiple Web service compositions to maximize business metrics

Qinghua Lu; Vladimir Tosic

Runtime adaptation of Web service compositions can often be done in several ways, so it is necessary to decide which adaptation approach to take. While many research projects have studied runtime adaptation of Web service compositions or business processes, this paper presents our solutions for maximizing business metrics when several Web service composition instances must be adapted at the same time. We specify all necessary information about possible adaptations and their business metrics as policies in our WS-Policy4MASC language and model the optimization problem in the constraint programming language MiniZinc. Into our MiniZnMASC middleware we integrated new algorithms that determine how to adapt each Web service composition instance so that the total business value is maximized while satisfying all given constraints (e.g., on resource limitations). Experiments with the MiniZnMASC prototype showed that our solutions are feasible, functionally correct, and business-beneficial, with low performance overhead and linear scalability.
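The paper models the optimization in MiniZinc; the shape of the problem can be shown with a tiny brute-force Python analogue. The adaptation names, values, and costs are invented for illustration: pick one adaptation per composition instance to maximize total business value under a shared resource limit.

```python
# Brute-force analogue (illustrative only) of the MiniZinc optimization:
# choose one adaptation per instance, maximize total business value,
# respect a shared resource limit.
from itertools import product

# per instance: list of (adaptation name, business value, resource cost)
options = [
    [("retry", 10, 1), ("switch-provider", 25, 3)],   # instance 1
    [("skip", 5, 0), ("compensate", 30, 4)],          # instance 2
]
RESOURCE_LIMIT = 5

def best_plan(options, limit):
    best = (None, float("-inf"))
    for combo in product(*options):                  # one option per instance
        cost = sum(o[2] for o in combo)
        value = sum(o[1] for o in combo)
        if cost <= limit and value > best[1]:
            best = (tuple(o[0] for o in combo), value)
    return best
```

A constraint solver like MiniZinc explores this space far more efficiently than enumeration, which is why the paper's approach scales linearly where brute force would not.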


autonomic and trusted computing | 2010

MiniMASC: A Framework for Diverse Autonomic Adaptations of Web Service Compositions

Qinghua Lu; Vladimir Tosic

When various technical and business changes related to Web service compositions occur, there is often a need for runtime adaptation with minimal human intervention. While many research projects address particular types of such adaptation, much more research is needed on decision making for diverse autonomic adaptations, particularly to maximize business (as opposed to technical) metrics. Our MiniMASC middleware is a framework for diverse autonomic adaptations of Web service compositions, focusing on supporting such advanced decision-making algorithms. MiniMASC is relatively simple, lightweight, modular, and extensible. It uses the WS-Policy4MASC policy language, which can describe all information necessary for different types of adaptation. After presenting our classification of the types of decision making in adaptation of Web service compositions, this paper discusses how MiniMASC (with WS-Policy4MASC) can be used to implement algorithms that maximize business metrics. Our tests show that the implementation of MiniMASC has satisfactory performance and scales well.


Software - Practice and Experience | 2017

Resource requests prediction in the cloud computing environment with a deep belief network

Weishan Zhang; Pengcheng Duan; Laurence T. Yang; Feng Xia; Zhongwei Li; Qinghua Lu; Wenjuan Gong; Su Yang

Accurate prediction of resource requests is essential to achieve optimal job scheduling and load balancing in cloud computing. Existing prediction approaches fall short of satisfactory accuracy because of the high variance of cloud metrics. We propose a deep belief network (DBN)-based approach to predict cloud resource requests. We design a set of experiments to find the most influential factors for prediction accuracy and the best DBN parameter set for optimal performance. The innovation of the proposed approach is that it introduces analysis of variance and orthogonal experimental design techniques into the parameter learning of the DBN. The proposed approach achieves high accuracy, with a mean squared error in the range [10^-6, 10^-5], approximately a 72% reduction compared with a traditional autoregressive integrated moving average (ARIMA) predictor, and has better prediction accuracy than a state-of-the-art fractal modeling approach.
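The evaluation setup, predicting the next resource request from a history window and scoring with mean squared error, can be sketched without the DBN itself. The window-mean predictor below is a naive stand-in baseline (not the paper's model or its ARIMA baseline), and the series values are invented for illustration.

```python
# Sketch of the evaluation loop: a naive sliding-window predictor scored by
# mean squared error, the metric the paper reports for its DBN vs. ARIMA.

def window_mean_predict(series, w):
    # predict each point as the mean of the previous w observations
    return [sum(series[i - w:i]) / w for i in range(w, len(series))]

def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)

series = [4.0, 5.0, 6.0, 5.0, 4.0, 5.0, 6.0, 5.0]   # e.g. CPU requests per hour
pred = window_mean_predict(series, 2)
error = mse(pred, series[2:])
```

The paper's contribution is replacing the predictor in this loop with a DBN whose parameters are tuned via analysis of variance and orthogonal experimental design, driving the MSE down to the [10^-6, 10^-5] range it reports.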


IEEE Transactions on Emerging Topics in Computing | 2016

Non-Intrusive Anomaly Detection With Streaming Performance Metrics and Logs for DevOps in Public Clouds: A Case Study in AWS

Daniel Sun; Min Fu; Liming Zhu; Guoqiang Li; Qinghua Lu

Public clouds are a style of computing platform in which scalable and elastic IT-enabled capabilities are provided as a service to external customers using Internet technologies. Using public cloud services can reduce costs and increase the choice of technologies, but it also implies limited system information for users. Anomaly detection at the user's end therefore has to be non-intrusive, which makes it difficult, particularly during DevOps operations, because the impacts of anomalies and of these operations are often indistinguishable. In this paper, our work is specific to a successful public cloud, Amazon Web Services, and a representative DevOps operation, rolling upgrade, for which we report an anomaly detection approach that is effective. Our approach requires only the metrics data and logs officially supplied by most public clouds. We use a support vector machine (SVM) to train multiple classifiers from monitored data for different system environments; the log information indicates the most suitable classifier. Moreover, our detection finds anomalies over every time interval, called a window, so the features include not only indicative performance metrics but also the entropy and the moving average of the metrics data in each window. Our experimental evaluation systematically demonstrates the effectiveness of the approach.
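The per-window feature extraction the abstract describes, moving average plus entropy of the metric values in each window, can be sketched directly. The binning scheme below is an assumption for illustration; the paper does not specify how the entropy is computed.

```python
# Sketch of per-window SVM features: the moving average of each window and the
# entropy of its (binned) metric values. Binning into 4 buckets is an assumption.
import math

def moving_average(window):
    return sum(window) / len(window)

def entropy(window, bins=4):
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.0                       # constant window carries no surprise
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in window:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    probs = [c / len(window) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)

def window_features(metrics, size):
    # one (moving average, entropy) pair per non-overlapping window
    return [(moving_average(metrics[i:i + size]), entropy(metrics[i:i + size]))
            for i in range(0, len(metrics) - size + 1, size)]
```

A steady metric yields zero entropy while an erratic one yields high entropy, which is what makes entropy a useful anomaly signal alongside the raw averages.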


international conference on cloud computing | 2013

Improving Availability of Cloud-Based Applications through Deployment Choices

Jim Zhanwen Li; Qinghua Lu; Liming Zhu; Len Bass; Xiwei Xu; Sherif Sakr; Paul L. Bannerman; Anna Liu

Deployment choices are critical in determining the availability of applications running in a cloud, but choosing a good deployment of software application components onto virtual machines is challenging because of the potential sharing of components among applications and potential interference from multi-tenancy. This paper presents an approach for improving the availability guarantees of software applications by optimizing the availability, performance, and monetary-cost trade-offs of different deployment choices. Our approach explicitly considers different classes of application requests during the decision process. The results of our experimental evaluation show that the approach can effectively improve availability guarantees with little or negligible increase in the performance and monetary cost of the deployment choice.
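The trade-off search can be illustrated in miniature: among candidate deployments, pick the cheapest one that still meets availability and latency targets. The candidates and all their numbers below are made up for illustration and are not from the paper's evaluation.

```python
# Illustrative trade-off search over hypothetical deployment candidates:
# cheapest option that meets both the availability and the latency target.

candidates = [
    # (name,           availability, p99 latency ms, $/month)
    ("1-vm",           0.990,        80,             100),
    ("2-vm-same-az",   0.997,        85,             190),
    ("2-vm-multi-az",  0.9995,       95,             210),
    ("3-vm-multi-az",  0.9999,       90,             320),
]

def choose(candidates, min_avail, max_latency):
    feasible = [c for c in candidates
                if c[1] >= min_avail and c[2] <= max_latency]
    return min(feasible, key=lambda c: c[3], default=None)
```

The paper's approach additionally models component sharing, multi-tenancy interference, and per-request-class behavior, which this one-line filter ignores.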

Collaboration


Dive into Qinghua Lu's collaboration network.

Top Co-Authors

Weishan Zhang (China University of Petroleum)

Liming Zhu (Commonwealth Scientific and Industrial Research Organisation)

Xiwei Xu (Commonwealth Scientific and Industrial Research Organisation)

Zhongwei Li (China University of Petroleum)

Xin Liu (China University of Petroleum)

Pengcheng Duan (China University of Petroleum)

Sherif Sakr (King Saud bin Abdulaziz University for Health Sciences)