Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Xiong Luo is active.

Publication


Featured research published by Xiong Luo.


Future Generation Computer Systems | 2016

A kernel machine-based secure data sensing and fusion scheme in wireless sensor networks for the cyber-physical systems

Xiong Luo; Dandan Zhang; Laurence T. Yang; Ji Liu; Xiaohui Chang; Huansheng Ning

Wireless sensor networks (WSNs), as one of the key technologies for delivering sensor-related data, drive the progress of cyber-physical systems (CPSs) in bridging the gap between the cyber world and the physical world. It is thus desirable to explore how to utilize intelligence properly by developing effective schemes in WSNs to support the data sensing and fusion of CPSs. This paper serves that purpose by proposing a prediction-based data sensing and fusion scheme that reduces data transmission and maintains the required coverage level of sensors in a WSN while guaranteeing data confidentiality. The proposed scheme, called GM-KRLS, features the combined use of the grey model (GM), kernel recursive least squares (KRLS), and the Blowfish algorithm (BA). During the data sensing and fusion process, GM is responsible for the initial prediction of the next period's data from a small number of data items, while KRLS is used to make the initial predicted value approximate its true value with high accuracy. KRLS, an improved kernel machine learning algorithm, can adaptively adjust its coefficients with every input, making the predicted value closer to the actual value. BA is used for data encoding and decoding during the transmission process, owing to its successful application across a wide range of domains. The proposed secure data sensing and fusion scheme GM-KRLS thus provides high prediction accuracy, low communication overhead, good scalability, and confidentiality. To verify the effectiveness and reasonableness of our proposed approach, we conduct simulations on actual data sets collected from sensors in the Intel Berkeley research lab. The simulation results show that the proposed scheme can significantly reduce redundant transmissions while maintaining high prediction accuracy.
Highlights:
- A novel data sensing and fusion scheme, GM-KRLS, is proposed in WSNs for CPSs.
- GM-KRLS develops a prediction mechanism to reduce redundant transmissions in WSNs.
- GM-KRLS improves prediction accuracy with a kernel machine learning algorithm.
- The Blowfish algorithm is employed to guarantee confidentiality in the scheme.
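The grey-model stage above can be sketched in a few lines: a minimal GM(1,1) one-step predictor in Python, assuming a short window of raw sensor readings (the window values below are hypothetical). The KRLS correction and Blowfish encryption stages of GM-KRLS are not shown.

```python
import numpy as np

def gm11_predict(x0):
    """One-step-ahead prediction with the GM(1,1) grey model.

    x0: 1-D array of recent raw readings (a small window, e.g. 4-8 values).
    Returns the predicted next raw value.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                      # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background values (consecutive means)
    B = np.column_stack([-z1, np.ones(n - 1)])
    y = x0[1:]
    # Least-squares estimate of the development coefficient a and grey input b.
    (a, b), *_ = np.linalg.lstsq(B, y, rcond=None)
    # Time-response function on the accumulated series, then difference back.
    x1_hat_next = (x0[0] - b / a) * np.exp(-a * n) + b / a
    x1_hat_curr = (x0[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_hat_next - x1_hat_curr

# Hypothetical sensor-temperature window.
readings = [30.1, 30.4, 30.8, 31.1, 31.5]
print(round(gm11_predict(readings), 2))
```

On a slowly rising series like this, the prediction continues the trend; in GM-KRLS this initial estimate would then be refined by KRLS.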


IEEE Transactions on Industrial Informatics | 2017

Fog Computing Based Face Identification and Resolution Scheme in Internet of Things

Pengfei Hu; Huansheng Ning; Tie Qiu; Yanfei Zhang; Xiong Luo

Identification and resolution technologies are the prerequisite for realizing identity consistency in the physical–cyber space mapping of the Internet of Things (IoT). The face, as a distinctive noncoded and unstructured identifier, has special advantages in identification applications. With the increase of face-identification-based applications, the requirements for computation, communication, and storage capability are becoming higher and higher. To solve this problem, we propose a fog computing based face identification and resolution scheme. A face identifier is first generated by the identification system model to identify an individual. Then, a fog computing based resolution framework is proposed to efficiently resolve the individual's identity. Some computing overhead is offloaded from the cloud to network edge devices in order to improve processing efficiency and reduce network transmission. Finally, a prototype system based on the local binary patterns (LBP) identifier is implemented to evaluate the scheme. Experimental results show that the scheme can effectively save bandwidth and improve the efficiency of face identification and resolution.
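The LBP descriptor underlying the prototype's identifier can be sketched compactly: each interior pixel gets an 8-bit code from thresholding its 3x3 neighbors against the center value. A minimal dense LBP in Python (illustrative only; not the paper's full identifier-generation pipeline):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: each interior pixel receives an
    8-bit code built from comparing its neighbors to the center value."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                     # center values (interior pixels)
    # Neighbor offsets in a fixed clockwise order; each gets one bit.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

A flat region yields code 255 (all neighbors tie the center), while a bright isolated pixel yields 0; histograms of such codes over image blocks form the face descriptor.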


Sensors | 2017

Efficient DV-HOP Localization for Wireless Cyber-Physical Social Sensing System: A Correntropy-Based Neural Network Learning Scheme

Yang Xu; Xiong Luo; Weiping Wang; Wenbing Zhao

Integrating wireless sensor networks (WSNs) into emerging computing paradigms such as cyber-physical social sensing (CPSS) has attracted growing interest, and a WSN can serve as a social network while receiving more attention from the social computing research field. The localization of sensor nodes has thus become an essential requirement for many applications over WSNs, and the localization accuracy of unknown nodes strongly affects the performance of a WSN. The received signal strength indication (RSSI), a typical range-based algorithm for positioning sensor nodes in a WSN, can achieve accurate localization with little hardware cost, but it is sensitive to environmental noise. Conversely, the original distance vector hop (DV-HOP) algorithm, an important range-free localization algorithm, is simple, inexpensive, and insensitive to environmental factors, but it performs poorly when anchor nodes are scarce. Motivated by this, various DV-HOP schemes improved with RSSI have been introduced, and we present a new neural network (NN)-based node localization scheme, named RHOP-ELM-RCC, through the use of DV-HOP, RSSI, and a regularized correntropy criterion (RCC)-based extreme learning machine (ELM) algorithm (ELM-RCC). Firstly, the proposed scheme employs both RSSI and DV-HOP to evaluate the distances between nodes, enhancing the accuracy of distance estimation at a reasonable cost. Then, with the help of ELM, featuring fast learning speed, good generalization performance, and minimal human intervention, a single hidden layer feedforward network (SLFN) based on ELM-RCC is used to carry out the optimization task of locating the unknown nodes. Since RSSI may be influenced by environmental noise and may introduce estimation error, the RCC, instead of the noise-sensitive mean square error (MSE) criterion, is exploited in the ELM, making the estimation more robust against outliers. Additionally, the least squares estimation (LSE) in the ELM is replaced by the half-quadratic optimization technique. Simulation results show that our proposed scheme outperforms traditional localization schemes.
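The ELM at the core of such a scheme trains an SLFN in closed form: the hidden-layer weights are random and fixed, so only the output weights need solving. A minimal ridge-regularized sketch in Python (plain least squares rather than the paper's RCC/half-quadratic variant; the toy regression task is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50, reg=1e-3):
    """ELM training of a single-hidden-layer feedforward network:
    random fixed input weights, output weights solved in closed form."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer feature map
    # Ridge-regularized least squares for the output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical 1-D regression task: y = sin(x) on [0, 2*pi].
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
max_err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
print(max_err)
```

ELM-RCC keeps this architecture but swaps the quadratic loss for the correntropy criterion, solved by half-quadratic optimization, to tolerate outlying RSSI measurements.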


Neurocomputing | 2016

Regression and classification using extreme learning machine based on L1-norm and L2-norm

Xiong Luo; Xiaohui Chang; Xiaojuan Ban

The extreme learning machine (ELM) is a very simple machine learning algorithm that can achieve good generalization performance at extremely fast speed, so it has practical significance for data analysis in real-world applications. However, it is normally implemented under the empirical risk minimization scheme and may tend to generate a large-scale, over-fitting model. In this paper, an ELM model based on L1-norm and L2-norm regularizations is proposed to handle regression and multi-class classification problems in a unified framework. The proposed model, called L1-L2-ELM, combines the grouping-effect benefits of the L2 penalty with the tendency towards sparse solutions of the L1 penalty, so it can control the complexity of the network and prevent over-fitting. To solve the mixed-penalty problem, the separate elastic net algorithm and the Bayesian information criterion (BIC) are adopted to find the optimal model for each response variable. We test the L1-L2-ELM algorithm on one artificial case and nine benchmark data sets to evaluate its performance. Simulation results show that the proposed algorithm outperforms the original ELM as well as other advanced ELM algorithms in terms of prediction accuracy, and it is more robust in both regression and classification applications.
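The mixed L1/L2 penalty at the heart of L1-L2-ELM is the elastic-net objective. A minimal proximal-gradient (ISTA) solver for it, demonstrated on a hypothetical sparse regression problem; in L1-L2-ELM the design matrix would be the ELM hidden-layer output H, and the paper's separate elastic net plus BIC model selection is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_ista(H, y, l1=5.0, l2=1.0, n_iter=3000):
    """Minimize 0.5*||H b - y||^2 + l1*||b||_1 + 0.5*l2*||b||^2 by
    gradient steps on the smooth part followed by soft-thresholding."""
    step = 1.0 / (np.linalg.norm(H, 2) ** 2 + l2)  # 1 / Lipschitz constant
    beta = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ beta - y) + l2 * beta
        beta = soft_threshold(beta - step * grad, step * l1)
    return beta

# Hypothetical sparse problem: only features 0 and 3 matter.
H = rng.normal(size=(100, 10))
y = 3.0 * H[:, 0] - 2.0 * H[:, 3] + 0.1 * rng.normal(size=100)
beta = elastic_net_ista(H, y)
print(np.round(beta, 3))
```

The L1 part zeroes out the irrelevant coefficients exactly, while the L2 part keeps the solve well-conditioned, which is the grouping/sparsity trade-off the abstract describes.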


Computer Networks | 2016

A large-scale web QoS prediction scheme for the Industrial Internet of Things based on a kernel machine learning algorithm

Xiong Luo; Ji Liu; Dandan Zhang; Xiaohui Chang

Cloud computing plays an essential role in enabling practical applications based on the Industrial Internet of Things (IIoT), so the quality of such services directly impacts the usability of IIoT applications. To select or recommend the best web and cloud based services, one method is to mine the vast data that are pertinent to the quality of service (QoS) of such services. To enable dynamic discovery and composition of web services, one can use a set of well-defined QoS criteria to describe and distinguish functionally similar web services. In general, QoS is a nonfunctional performance index of web services, and it may be user-dependent. Hence, to fully assess the QoS of all available web services, a user would normally have to invoke every one of them. This implies that the QoS values for services that the user has not invoked will be missing, which is virtually inevitable when the number of available web services is large, because invoking every single service would be prohibitively expensive. This issue is typically resolved by employing prediction algorithms to estimate the missing QoS values. In this paper, a data-driven scheme for predicting the missing QoS values in the IIoT based on the kernel least mean square (KLMS) algorithm is proposed. During the data prediction process, the Pearson correlation coefficient (PCC) is first introduced to find the relevant QoS values from similar service users and web service items for each known QoS entry. Next, KLMS is used to analyze the hidden relationships between all the known QoS data and the corresponding QoS data with the highest similarities. We can therefore apply the derived coefficients to the prediction of missing web service QoS values. An extensive performance study based on a public data set is conducted to verify the prediction accuracy of our proposed scheme. The data set covers 200 distributed service users and 500 web service items, with a total of 1,858,260 intermediate data values. The experimental results show that our proposed KLMS-based prediction scheme achieves better prediction accuracy than traditional approaches.


Journal of the Franklin Institute - Engineering and Applied Mathematics | 2017

Towards enhancing stacked extreme learning machine with sparse autoencoder by correntropy

Xiong Luo; Yang Xu; Weiping Wang; Manman Yuan; Xiaojuan Ban; Yueqin Zhu; Wenbing Zhao

The stacked extreme learning machine (S-ELM) is an advanced deep learning framework. It passes the 'reduced' outputs of the previous layer to the current layer, instead of directly propagating the previous outputs to the next layer as in traditional deep learning. S-ELM can address some large and complex data problems with high accuracy and a relatively low memory requirement. However, there is still room to improve the time complexity and robustness of S-ELM. In this article, we propose an enhanced S-ELM that replaces the original principal component analysis (PCA) technique used in the algorithm with correntropy-optimized temporal PCA (CTPCA), which is robust to outliers and significantly improves the training speed. The CTPCA-based S-ELM then performs better than S-ELM in both accuracy and learning speed when dealing with data sets disturbed by outliers. Furthermore, after integrating the extreme learning machine (ELM) sparse autoencoder (AE) method into the CTPCA-based S-ELM, the learning accuracy is further improved at the cost of a little more training time. Meanwhile, sparser and more compact feature information is available by using the ELM sparse AE with more computational effort. Simulation results on several benchmark data sets verify the effectiveness of our proposed methods.
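The intuition behind replacing plain PCA with a correntropy-optimized variant can be illustrated with a reweighted-PCA sketch: samples with large reconstruction error receive Gaussian-kernel (correntropy-style) weights near zero, so outliers barely influence the fitted subspace. This is an assumption-laden illustration, not the paper's exact CTPCA recursion, and the data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def reweighted_pca(X, n_comp=1, sigma=1.0, n_iter=10):
    """Robust PCA sketch: alternate between fitting a weighted PCA
    subspace and recomputing Gaussian-kernel sample weights from the
    out-of-subspace reconstruction error."""
    w = np.ones(len(X))
    for _ in range(n_iter):
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu
        cov = (w[:, None] * Xc).T @ Xc / w.sum()
        _, V = np.linalg.eigh(cov)
        V = V[:, -n_comp:]                     # top principal directions
        resid = Xc - (Xc @ V) @ V.T            # residual outside the subspace
        err2 = np.sum(resid ** 2, axis=1)
        w = np.exp(-err2 / (2 * sigma ** 2))   # correntropy-style weights
    return mu, V

# Hypothetical data: a 1-D subspace along the x-axis plus a few outliers.
inliers = np.column_stack([rng.normal(0, 3, 200), rng.normal(0, 0.05, 200)])
outliers = np.column_stack([rng.normal(0, 0.3, 15), 8 + rng.normal(0, 0.3, 15)])
X = np.vstack([inliers, outliers])
mu, V = reweighted_pca(X)
print(np.abs(V[:, -1] @ np.array([1.0, 0.0])))
```

After a couple of iterations the outliers' weights collapse to essentially zero, and the recovered direction aligns with the true subspace despite them.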


Entropy | 2017

A Quantized Kernel Learning Algorithm Using a Minimum Kernel Risk-Sensitive Loss Criterion and Bilateral Gradient Technique

Xiong Luo; Jing Deng; Weiping Wang; Jenq-Haur Wang; Wenbing Zhao

Recently, inspired by correntropy, the kernel risk-sensitive loss (KRSL) has emerged as a novel nonlinear similarity measure defined in kernel space that achieves better computing performance. After applying the KRSL to adaptive filtering, the corresponding minimum kernel risk-sensitive loss (MKRSL) algorithm was developed. However, MKRSL, like traditional kernel adaptive filter (KAF) methods, generates a growing radial basis function (RBF) network. In response to that limitation, this article uses the online vector quantization (VQ) technique to propose a novel KAF algorithm, named quantized MKRSL (QMKRSL), that curbs the growth of the RBF network structure. Compared with other quantized methods, e.g., the quantized kernel least mean square (QKLMS) and quantized kernel maximum correntropy (QKMC) algorithms, the efficient performance surface makes QMKRSL converge faster and filter more accurately, while maintaining robustness to outliers. Moreover, considering that QMKRSL with the traditional gradient descent method may fail to make full use of the hidden information between the input and output spaces, we also propose an intensified QMKRSL using a bilateral gradient technique, named QMKRSL_BG, in an effort to further improve filtering accuracy. Short-term chaotic time-series prediction experiments demonstrate the satisfactory performance of our algorithms.
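The growth-curbing effect of online VQ can be sketched independently of the KRSL criterion: before adding a new RBF unit, find the nearest existing center, and if it falls inside the quantization radius, update that center's coefficient instead of growing the network. A minimal Python sketch applied to an LMS-style update (hypothetical data; not the full QMKRSL algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss(x, C, sigma=0.4):
    return np.exp(-np.sum((C - x) ** 2, axis=1) / (2 * sigma ** 2))

class QuantizedKAF:
    """Online-VQ kernel adaptive filter sketch: an input within eps of an
    existing center reuses that center instead of adding a new unit."""
    def __init__(self, eta=0.5, eps=0.15, sigma=0.4):
        self.eta, self.eps, self.sigma = eta, eps, sigma
        self.C = np.empty((0, 1))
        self.a = np.empty(0)

    def predict(self, x):
        if len(self.a) == 0:
            return 0.0
        return float(self.a @ gauss(x, self.C, self.sigma))

    def update(self, x, d):
        x = np.atleast_1d(np.asarray(x, float))
        e = d - self.predict(x)
        if len(self.a):
            dists = np.linalg.norm(self.C - x, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= self.eps:          # quantize: reuse nearest center
                self.a[j] += self.eta * e
                return e
        self.C = np.vstack([self.C, x])       # otherwise grow the network
        self.a = np.append(self.a, self.eta * e)
        return e

# Hypothetical stream: 300 samples of d = sin(2x) over [0, 3].
model = QuantizedKAF()
errors = []
for _ in range(300):
    x = rng.uniform(0.0, 3.0)
    errors.append(abs(model.update(x, np.sin(2.0 * x))))
print(len(model.a), np.mean(errors[-50:]))
```

After 300 samples the network holds only as many centers as fit in the domain at spacing eps (at most 21 here), rather than 300, which is exactly the compactness the VQ step buys.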


Ubiquitous Computing | 2016

A laguerre neural network-based ADP learning scheme with its application to tracking control in the Internet of Things

Xiong Luo; Yixuan Lv; Mi Zhou; Weiping Wang; Wenbing Zhao

Sensory data have become widely available in large volume and variety due to the increasing presence and adoption of the Internet of Things. Such data can be tremendously useful if they are processed properly in a timely fashion, and they could play a key role in the coordination of industrial production. It is thus desirable to explore an effective and efficient scheme to support data tracking and monitoring. This paper proposes a novel automatic learning scheme to improve tracking efficiency while maintaining or improving tracking accuracy. A core strategy in the proposed scheme is the design of Laguerre neural network (LaNN)-based approximate dynamic programming (ADP). As a traditional optimal learning strategy, ADP is a popular approach for data processing, and its two important components, the action neural network (NN) and the critic NN, have a big impact on its performance. In this paper, a LaNN is employed as the implementation of the action NN in ADP, considering the approximation capability of Laguerre polynomials. In addition, this LaNN-based ADP is integrated into an online parameter-tuning framework to optimize the parameters of the characteristic model used to trace the data in the tracking control system. Meanwhile, this article provides an associated Lyapunov convergence analysis to guarantee a uniform ultimate boundedness property for the tracking errors in the proposed approach. Furthermore, the proposed LaNN-based ADP online parameter-tuning scheme is validated on a temperature dynamic tracking control task. The simulation results demonstrate that the scheme has satisfactory learning performance over time.
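The Laguerre polynomials that give the action network its approximation capability satisfy a three-term recurrence, which is all a hidden layer needs in order to evaluate them. A small basis-evaluation sketch (the ADP training loop and critic network are not shown):

```python
import numpy as np

def laguerre_basis(x, degree):
    """Evaluate Laguerre polynomials L_0..L_degree at x using the
    recurrence (k+1) L_{k+1}(x) = (2k+1-x) L_k(x) - k L_{k-1}(x).

    Returns an array of shape (degree+1, len(x))."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    L = [np.ones_like(x), 1.0 - x]          # L_0 = 1, L_1 = 1 - x
    for k in range(1, degree):
        L.append(((2 * k + 1 - x) * L[k] - k * L[k - 1]) / (k + 1))
    return np.stack(L[: degree + 1])

B = laguerre_basis(np.array([0.0, 2.0]), 3)
print(B)
```

A Laguerre-network hidden layer would feed such basis values through trainable output weights, so the recurrence above is the only nonlinearity the network must compute.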


Future Generation Computer Systems | 2018

A unified face identification and resolution scheme using cloud computing in Internet of Things

Pengfei Hu; Huansheng Ning; Tie Qiu; Yue Xu; Xiong Luo; Arun Kumar Sangaiah

In the Internet of Things (IoT), the identification and resolution of physical objects are crucial for authenticating an object's identity, controlling service access, and establishing trust between the object and a cloud service. With the development of computer vision and pattern recognition technologies, the face has been used as a high-security identification and identity authentication method and has been deployed in various applications. Face identification can ensure consistency between an individual in physical space and his/her identity in cyber space during the physical–cyber space mapping. However, the face is a noncoded and unstructured identifier, and with the increase of applications in the current big data environment, this characteristic results in growing demands for computation power and storage capacity. In this paper, we propose a face identification and resolution scheme based on cloud computing to solve this problem. The face identification and resolution system model is presented to introduce the processes of face identifier generation and matching. Then, a parallel matching mechanism and a cloud computing-based resolution framework are proposed to efficiently resolve face images, control personal data access, and acquire an individual's identity information. The scheme makes full use of the advantages of cloud computing to effectively improve computation power and storage capacity. Experimental results from a prototype system indicate that the proposed scheme is practically feasible and can provide an efficient face identification and resolution service.


IEEE Transactions on Human-Machine Systems | 2017

A Human-Centered Activity Tracking System: Toward a Healthier Workplace

Wenbing Zhao; Roanna Lun; Connor Gordon; Abou-Bakar M. Fofana; Deborah D. Espy; M. Ann Reinthal; Beth Ekelman; Glenn Goodman; Joan Niederriter; Xiong Luo

Lost productivity from lower back injuries in workplaces costs billions of U.S. dollars per year, and a significant fraction of such workplace injuries result from workers not following best practices. In this paper, we present the design, implementation, and evaluation of a novel computer-vision-based system that aims to increase workers' compliance with best practices. The system consists of inexpensive programmable depth sensors, wearable devices, and smartphones. It is designed to track the activities of consented workers using the depth sensors, alert them discreetly on detection of noncompliant activities, and produce cumulative reports on their performance. Essentially, the system provides a valuable set of services for both workers and administrators toward a healthier and, therefore, more productive workplace. This study advances the state of the art in the following ways: 1) a set of mechanisms that enable nonintrusive, privacy-aware, selective tracking of consented workers in the presence of people who should not be tracked; 2) a single sign-on worker identification mechanism; 3) a method that provides real-time detection of noncompliant activities; and 4) a usability study that provides invaluable feedback regarding system design and deployment, as well as future areas of improvement.

Collaboration


Dive into Xiong Luo's collaboration.

Top Co-Authors

Wenbing Zhao (Cleveland State University)
Weiping Wang (University of Science and Technology Beijing)
Manman Yuan (University of Science and Technology Beijing)
Lixiang Li (Beijing University of Posts and Telecommunications)
Chaomin Luo (University of Detroit Mercy)
Linlin Liu (Beijing University of Technology)
Xiaojuan Ban (University of Science and Technology Beijing)
Jürgen Kurths (Potsdam Institute for Climate Impact Research)