Publication


Featured research published by Wei Lo.


International Conference on Web Services | 2012

Collaborative Web Service QoS Prediction with Location-Based Regularization

Wei Lo; Jianwei Yin; Shuiguang Deng; Ying Li; Zhaohui Wu

Predicting Quality of Service (QoS) values is important because they are widely used across the Service-Oriented Computing (SOC) research domain. Previous work on this problem does not carefully consider the influence of user location information, which we argue contributes to prediction accuracy given the nature of the Web service invocation process. In this paper, we propose a novel collaborative QoS prediction framework with location-based regularization (LBR). We first elaborate the popular Matrix Factorization (MF) model for missing-value prediction. Then, exploiting the local connectivity between Web service users, we incorporate geographical information to identify neighborhoods. Considering different neighborhood situations, we systematically design two location-based regularization terms, LBR1 and LBR2. Finally, we combine these regularization terms with the classic MF framework to build two unified models. Experimental analysis on a large-scale real-world QoS dataset shows that our methods improve prediction accuracy by 23.7% over other state-of-the-art algorithms in general cases.
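The idea the abstract describes can be sketched as a standard MF objective with an extra term pulling each user's latent vector toward those of its geographic neighbors. The following is a minimal NumPy sketch, not the paper's implementation; the toy QoS matrix, the neighborhood map, and all hyperparameters are invented for illustration.

```python
import numpy as np

# Toy user-service QoS matrix (e.g. response times); 0 marks a missing entry.
# Both the data and the neighborhood map below are invented for illustration.
R = np.array([
    [0.5, 1.2, 0.0, 0.8],
    [0.6, 0.0, 2.1, 0.9],
    [0.0, 1.1, 2.0, 0.0],
])
mask = R > 0

# Hypothetical geographic neighborhoods: users 0 and 1 share a region.
neighbors = {0: [1], 1: [0], 2: []}

rng = np.random.default_rng(0)
k, lam, alpha, lr = 2, 0.01, 0.02, 0.03   # latent dims, reg. weights, step size
U = rng.normal(scale=0.1, size=(R.shape[0], k))
V = rng.normal(scale=0.1, size=(R.shape[1], k))

for _ in range(3000):
    E = mask * (U @ V.T - R)              # error on observed entries only
    gU = E @ V + lam * U
    gV = E.T @ U + lam * V
    # Location-based regularization: pull each user's latent vector
    # toward those of its geographic neighbors.
    for u, ns in neighbors.items():
        for n in ns:
            gU[u] += alpha * (U[u] - U[n])
    U -= lr * gU
    V -= lr * gV

pred = U @ V.T                            # predictions fill the missing entries
```

After fitting, `pred` approximates the observed entries and supplies estimates for the missing ones; the neighbor term keeps co-located users' factors close.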


IEEE International Conference on Services Computing | 2012

An Extended Matrix Factorization Approach for QoS Prediction in Service Selection

Wei Lo; Jianwei Yin; Shuiguang Deng; Ying Li; Zhaohui Wu

With the growing adoption of Web services on the World Wide Web, QoS-based service selection is becoming an important issue. A common assumption in previous research is that the QoS information available to the current user is complete and accurate. In reality, however, many QoS values are missing from historical records. To avoid expensive Web service invocations, this paper proposes an extended Matrix Factorization (EMF) framework with relational regularization to predict missing QoS values. We first elaborate the Matrix Factorization (MF) model from a general perspective. To collect the wisdom of crowds precisely, we employ different similarity measurements on the user side and the service side to identify neighborhoods. We then systematically design two novel relational regularization terms within a neighborhood. Finally, we combine both terms into a unified MF framework to predict the missing QoS values. To validate our methods, we conduct experiments on real Web service data. The empirical analysis shows that our approaches outperform other state-of-the-art methods in QoS prediction accuracy.
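The neighborhood-identification step, finding users with similar invocation histories from sparse QoS records, is commonly done with Pearson correlation over co-observed entries. A minimal sketch under that assumption follows; the toy data and the co-observation threshold are illustrative, not from the paper.

```python
import numpy as np

# Toy historical QoS records: rows are users, columns are services,
# np.nan marks a missing value. Data are invented for illustration.
R = np.array([
    [0.5, 1.2, np.nan, 0.8],
    [0.6, 1.3, 2.1,    0.9],
    [2.0, np.nan, 0.4, 2.2],
])

def pcc(a, b):
    """Pearson correlation over entries that both users have observed."""
    both = ~np.isnan(a) & ~np.isnan(b)
    if both.sum() < 3:                    # too few co-observed entries
        return 0.0
    x, y = a[both] - a[both].mean(), b[both] - b[both].mean()
    denom = np.sqrt((x * x).sum() * (y * y).sum())
    return float((x * y).sum() / denom) if denom > 0 else 0.0

def top_k_neighbors(R, u, k=1):
    """Rank the other users by similarity to user u."""
    sims = [(v, pcc(R[u], R[v])) for v in range(len(R)) if v != u]
    sims.sort(key=lambda t: t[1], reverse=True)
    return sims[:k]
```

For example, `top_k_neighbors(R, 0)` picks user 1, whose co-observed QoS values track user 0's almost exactly; the same measurement applied to `R.T` gives service-side neighbors.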


Engineering Applications of Artificial Intelligence | 2015

Efficient web service QoS prediction using local neighborhood matrix factorization

Wei Lo; Jianwei Yin; Ying Li; Zhaohui Wu

In the era of Big Data, companies worldwide are actively deploying web services in both intranet and internet environments. Quality-of-Service (QoS), a fundamental aspect of web services, has thus attracted considerable attention in industry and academia. The study of sufficient QoS data keeps advancing the state of the art in the Service-Oriented Computing (SOC) area. To collect large amounts of such data in practice, QoS prediction applications are designed and built. Nevertheless, generating accurate results with high productivity remains a main challenge for existing frameworks. In this paper, we propose LoNMF, a Local Neighborhood Matrix Factorization application that incorporates domain knowledge into a modern Artificial Intelligence (AI) technique to tackle this challenge. LoNMF first applies a two-level selection mechanism that identifies a set of highly relevant local neighbors for the target user. It then integrates geographical information to build an extended Matrix Factorization (MF) approach for personalized QoS prediction. Finally, it iteratively generates results by utilizing hints from previous rounds of computation, a gradient boosting strategy that directly accelerates the solving process. Experimental evidence on large-scale real-world QoS data shows that LoNMF is scalable and consistently outperforms other state-of-the-art applications in prediction accuracy and efficiency.


Web Information Systems Engineering | 2013

Personalized Location-Aware QoS Prediction for Web Services Using Probabilistic Matrix Factorization

Yueshen Xu; Jianwei Yin; Wei Lo; Zhaohui Wu

With the extensive adoption of Web services, QoS prediction is critical to Web service selection and recommendation. However, the geographical information of users, one of the important factors influencing QoS values, has largely been ignored by prior work. In this paper, we first explicate how the Probabilistic Matrix Factorization (PMF) model can be employed to learn predicted QoS values. Then, by identifying user neighbors on the basis of geographical location, we take into consideration the effect of neighbors' experience of Web service invocation. Specifically, we propose two models based on PMF, L-PMF and WL-PMF, which integrate the feature vectors of neighbors into the learning process of latent user feature vectors. Finally, extensive experiments conducted on a real-world dataset demonstrate that our models consistently outperform other well-known approaches.


International Conference on Service-Oriented Computing | 2013

A Unified Framework of QoS-Based Web Service Recommendation with Neighborhood-Extended Matrix Factorization

Yueshen Xu; Jianwei Yin; Wei Lo

QoS-based Web service recommendation is a widely employed way of selecting candidate services that provide better QoS to users. In this paper, building on the Matrix Factorization (MF) model, we first propose the service neighborhood-extended MF model (SN-EMF) and the user neighborhood-extended MF model (UN-EMF), which respectively integrate two types of valuable information: the historical QoS records of the k most similar neighbors, and geographical location information. Then, by aggregating their results, we unify the two models into a unified service recommendation framework (U-EMF), which can be divided into an offline part and an online part. We also discuss strategies for model selection and simplification. Finally, we conduct extensive experiments showing the effectiveness of our three models, especially the relatively large performance improvement of the U-EMF model.


International Congress on Big Data | 2016

From Big Data to Great Services

Jianwei Yin; Yan Tang; Wei Lo; Zhaohui Wu

Big Data is increasingly adopted by a wide range of service industries to improve the quality and value of their services, e.g., inventory that matches supply and demand well, and pricing that reflects market needs. Customers benefit from the higher quality of service enabled by Big Data. Service providers gain higher profits from more precise cost control and accurate knowledge of customer needs. In this paper, we define the next generation of high-quality services as Great Services, characterized by 4P Quality-of-Service (QoS) dimensions: Panorama, Penetration, Prediction and Personalization, which go much further than current services. The transformation of Big Data into Great Services would be difficult and expensive without methodical techniques and software tools. We call the intermediate step Deep Knowledge, which is generated from Big Data (with the 4V challenges: Volume, Velocity, Variety, and Veracity) and used in the creation of Great Services. Deep Knowledge is distinguished from traditional Big Data by 4C properties (Complexity, Cross-domain, Customization, and Convergence). To achieve the 4P QoS dimensions of Great Services, we need Deep Knowledge with 4C properties. In this paper, we describe an informal characterization of Great Services with 4P QoS dimensions, illustrated with examples, and outline the techniques and tools that facilitate the transformation of Big Data into Deep Knowledge with 4C properties, and then the use of Deep Knowledge in Great Services.


Symposium on Reliable Distributed Systems | 2015

MICS: Mingling Chained Storage Combining Replication and Erasure Coding

Yan Tang; Jianwei Yin; Wei Lo; Ying Li; Shuiguang Deng; Kexiong Dong; Calton Pu

High reliability, low space cost, and efficient read/write performance are all desirable properties for cloud storage systems. Due to inherent conflicts, however, simultaneously achieving optimality on all of these properties is unrealistic. Since reliable storage is an indispensable prerequisite for highly available services, a tradeoff must be made between space and read/write efficiency when a storage scheme is designed. N-way Replication and Erasure Coding, two extensively used storage schemes with high reliability, adopt opposite strategies on this tradeoff. However, the unbalanced tradeoff designs of both schemes confine their effectiveness to limited types of workloads and system requirements. To mitigate this applicability penalty, we propose MICS, a MIngling Chained Storage scheme that combines the structural and functional advantages of both N-way replication and erasure coding. Qualitatively, MICS provides efficient read/write performance and high reliability at reasonably low space cost. MICS stores each object in two forms: a full copy and a certain number of erasure-coded segments. We establish dedicated read/write protocols for MICS that leverage these unique structural advantages. Moreover, MICS provides high read/write efficiency with Pipelined Random-Access Memory (PRAM) consistency to guarantee reasonable semantics for service users. Evaluation results demonstrate that under the same fault tolerance and consistency level, MICS outperforms N-way replication and pure erasure coding in I/O throughput by up to 34.1% and 51.3% respectively. Furthermore, MICS shows superior performance stability over diverse workload conditions: the standard deviation of its throughput is 70.1% and 29.3% smaller than those of the other two schemes.
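The hybrid layout the abstract describes, a full copy plus erasure-coded segments, can be illustrated with the simplest possible erasure code: a single XOR parity over k data chunks, which tolerates the loss of any one chunk. This sketch is illustrative only; it is not MICS's actual coding scheme or chained protocol.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 3):
    """Split data into k equal chunks (zero-padded) plus one XOR parity chunk."""
    size = -(-len(data) // k)                       # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return chunks + [reduce(xor, chunks)]           # parity tolerates one lost chunk

def recover(chunks, lost: int) -> bytes:
    """Rebuild the chunk at index `lost` by XOR-ing all the other chunks."""
    return reduce(xor, (c for i, c in enumerate(chunks) if i != lost))
```

A hybrid scheme in this spirit serves reads from the full copy for speed, while the coded segments provide fault tolerance at far lower space cost than additional replicas.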


International Conference on Web Services | 2015

Accelerated Sparse Learning on Tag Annotation for Web Service Discovery

Wei Lo; Jianwei Yin; Zhaohui Wu

Learning latent features of Web services greatly boosts the ability of search engines to discover relevant services. Information extracted from the Web Service Description Language (WSDL) documents of services alone is less effective due to the limited use of data sources. Recently, a number of works have indicated that incorporating service tags, textual symbols that provide additional contextual and semantic information, helps to enhance the process of service discovery. However, large numbers of relevant tags for Web services are difficult to obtain in practice. In this paper, we propose a Web service Tag (WT) Learning system to address this issue. The WT Learning system adopts sparse learning techniques to capture the structure of the high-dimensional textual information extracted from WSDL documents and tags. Meanwhile, our system implements an Alternating Direction Method of Multipliers (ADMM) strategy, which accelerates the solving process in Big Data environments. Extensive experiments are conducted on a real-world dataset consisting of 24,569 Web services. The results demonstrate the effectiveness of the WT Learning system. Specifically, our system outperforms other state-of-the-art frameworks in tag classification and recommendation tasks, with 29.6% and 27.1% performance gains respectively.
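ADMM is a generic splitting method; a canonical sparse-learning instance is the lasso, where the quadratic subproblem has a closed form and the l1 subproblem reduces to soft thresholding. The sketch below shows that pattern on the lasso and is not the WT Learning system's actual formulation.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=300):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1, via the split x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    inv = np.linalg.inv(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(iters):
        x = inv @ (Atb + rho * (z - u))          # quadratic subproblem, closed form
        z = soft_threshold(x + u, lam / rho)     # l1 subproblem
        u = u + x - z                            # dual update
    return z
```

The cached matrix factor is what makes ADMM attractive at scale: each iteration is a matrix-vector product plus an elementwise threshold, and the subproblems also decompose across machines.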


High Performance Computing and Communications | 2015

SAUD: Semantics-Aware and Utility-Driven Deduplication Framework for Primary Storage

Yan Tang; Jianwei Yin; Wei Lo

Data deduplication is an efficient technology for reducing the storage cost of cloud storage systems, especially now that massive volumes of data have become the norm in the era of Big Data. Primary storage, the layer that interacts directly with service users, has reaped the benefits of deduplication technologies due to its expensive manufacturing cost. However, since primary storage is constantly accessed by users, its workloads are mostly latency-sensitive. This workload characteristic makes it challenging to develop deduplication schemes for primary storage that are both performance- and space-efficient. Existing deduplication schemes for primary storage pay little attention to achieving desirable space savings while keeping the inherent performance penalty small. In this paper, we propose SAUD, a Semantics-Aware and Utility-Driven deduplication framework that provides high space savings with a minor performance penalty for primary storage. SAUD delivers performance-oriented deduplication by quantitatively leveraging the file-level semantics of primary storage, calculating a deduplication priority for files with diverse semantics to guide deduplication. Moreover, SAUD operates in a selective-on mode, dynamically regulating the deduplication process based on real-time workload and system status to further reduce the side effects on system performance. Comprehensive evaluations show that SAUD outperforms all other comparative schemes in system read performance by an average of 54.6%, while achieving 82.1% of the space efficiency of the most space-oriented scheme, whose read performance falls behind SAUD's by as much as 80.1%.
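The basic mechanism underlying such frameworks is content-addressed deduplication: each unique chunk is stored once under its hash, and files become lists of chunk references. A minimal sketch follows, illustrative only and not SAUD's semantics-aware policy engine; the tiny fixed chunk size is purely for demonstration.

```python
import hashlib

class DedupStore:
    """Store each unique chunk once, keyed by its SHA-256 digest."""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}      # digest -> chunk bytes (stored once)
        self.files = {}       # file name -> ordered list of digests

    def write(self, name, data: bytes):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            d = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(d, chunk)   # duplicate chunks cost nothing
            digests.append(d)
        self.files[name] = digests

    def read(self, name) -> bytes:
        return b"".join(self.chunks[d] for d in self.files[name])
```

The hashing and index lookups on every write are the latency cost a primary-storage framework must manage, which is why schemes of this kind deduplicate selectively rather than always-on.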


IEEE Transactions on Computers | 2017

ASSER: An Efficient, Reliable, and Cost-Effective Storage Scheme for Object-Based Cloud Storage Systems

Jianwei Yin; Yan Tang; Shuiguang Deng; Ying Li; Wei Lo; Kexiong Dong; Albert Y. Zomaya; Calton Pu

High reliability, efficient I/O performance, and flexible consistency provided at low storage cost are all desirable properties of cloud storage systems. Due to inherent conflicts, however, simultaneously achieving optimality on all of these properties is impractical. N-way Replication and Erasure Coding, two extensively applied storage schemes with high reliability, adopt opposite and unbalanced strategies in the tradeoff among these properties, considerably restricting their effectiveness over a wide range of workloads. To address this obstacle, we propose a novel storage scheme called ASSER, an ASSembling chain of Erasure coding and Replication. ASSER stores each object in two parts: a full copy and a certain number of erasure-coded segments. We establish dedicated read/write protocols for ASSER that leverage its unique structural advantages. On the basis of these elementary protocols, we implement sequential and PRAM (Pipelined-RAM) consistency to make ASSER feasible for services with different performance/consistency requirements. Evaluation results demonstrate that under the same fault tolerance and consistency level, ASSER outperforms N-way replication and pure erasure coding in I/O throughput under diverse system and workload configurations, with superior performance stability. More importantly, ASSER delivers stably efficient I/O performance at much lower storage cost than the other comparatives.

Collaboration


Dive into Wei Lo's collaboration.

Top Co-Authors

Calton Pu

Georgia Institute of Technology


Naixue Xiong

Colorado Technical University
