Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Qingshan Liu is active.

Publication


Featured research published by Qingshan Liu.


IEEE Transactions on Neural Networks | 2008

A One-Layer Recurrent Neural Network With a Discontinuous Hard-Limiting Activation Function for Quadratic Programming

Qingshan Liu; Jun Wang

In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.
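The paper's exact network model is not reproduced here, but the core idea, a dynamical system whose equilibrium is the optimum of an equality-constrained quadratic program, can be illustrated with a minimal NumPy sketch. The dynamics below (a null-space gradient flow plus a feasibility-attraction term) and the problem data Q, c, A, b are illustrative assumptions, not the model from the paper.

import numpy as np

# Equality-constrained QP: minimize 0.5*x'Qx + c'x  subject to  Ax = b.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Orthogonal projector onto the null space of A.
P = np.eye(2) - A.T @ np.linalg.inv(A @ A.T) @ A

# Gradient flow on the feasible affine set plus attraction toward Ax = b:
#   dx/dt = -P(Qx + c) + A'(AA')^{-1}(b - Ax)
# At equilibrium, Ax = b and Qx + c lies in range(A'), i.e., KKT holds.
x = np.zeros(2)
dt = 0.01
for _ in range(5000):
    x = x + dt * (-P @ (Q @ x + c) + A.T @ np.linalg.solve(A @ A.T, b - A @ x))

# Compare with the solution of the KKT system.
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
kkt = np.linalg.solve(K, np.concatenate([-c, b]))
print("flow:", x, " KKT:", kkt[:2])

Both printouts should agree at (0.4, 0.6), since Q is strictly convex on the constraint set, matching the convergence condition stated in the abstract.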


IEEE Transactions on Neural Networks and Learning Systems | 2013

A One-Layer Projection Neural Network for Nonsmooth Optimization Subject to Linear Equalities and Bound Constraints

Qingshan Liu; Jun Wang

This paper presents a one-layer projection neural network for solving nonsmooth optimization problems with generalized convex objective functions subject to linear equalities and bound constraints. The proposed neural network is designed based on two projection operators: one for the linear equality constraints and one for the bound constraints. The objective function in the optimization problem can be any nonsmooth function that is not restricted to be globally convex but is required to be convex (or pseudoconvex) on the set defined by the equality constraints. Compared with existing recurrent neural networks for nonsmooth optimization, the proposed model has no design parameter, which makes it more convenient to design and implement. It is proved that the output variables of the proposed neural network are globally convergent to the optimal solutions provided that the objective function is at least pseudoconvex. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
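As a loose stand-in for the network dynamics (which this sketch does not reproduce), the same class of problem can be attacked with a projected subgradient iteration on an exact-penalty reformulation. The problem data and the penalty weight rho below are made-up illustrations.

import numpy as np

# Nonsmooth problem: minimize |x1 - 2| + |x2 + 1|
#                    subject to x1 + x2 = 1,  x in [-1, 1]^2.
# Optimal solution: x* = (1, 0) with objective value 2.
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
lo, hi = -1.0, 1.0
rho = 5.0  # exact-penalty weight; must exceed the optimal multiplier's magnitude

def subgrad(x):
    # A subgradient of f(x) + rho * |Ax - b|_1.
    g = np.array([np.sign(x[0] - 2.0), np.sign(x[1] + 1.0)])
    return g + rho * A.T @ np.sign(A @ x - b)

x = np.zeros(2)
for k in range(1, 20001):
    x = np.clip(x - (0.5 / np.sqrt(k)) * subgrad(x), lo, hi)  # project onto box

print("x ~", x, " objective ~", abs(x[0] - 2) + abs(x[1] + 1))

The iterate approaches (1, 0); the diminishing step sizes are what tame the chatter that the discontinuous sign terms would otherwise cause.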


Neural Networks | 2012

A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization

Qingshan Liu; Zhishan Guo; Jun Wang

In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with existing neural networks for optimization (e.g., projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The state variables of the proposed neural network are guaranteed to converge to an optimal solution as long as the design parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application to dynamic portfolio optimization is discussed.
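A small self-contained sketch of the kind of problem involved: a linear-fractional objective is pseudoconvex (though not convex) wherever its denominator is positive, and even a plain projected gradient flow over a box finds the global minimizer. The dynamics and problem data are assumptions for illustration, not the model from the paper.

import numpy as np

# Pseudoconvex (linear-fractional) objective over a box:
#   minimize f(x) = (2*x1 + x2 + 2) / (x1 + x2 + 4)  subject to x in [0, 1]^2.
# The denominator is positive on the box, so f is pseudoconvex there;
# the minimum is attained at the vertex (0, 0) with f = 0.5.

def grad_f(x):
    n = 2 * x[0] + x[1] + 2.0
    d = x[0] + x[1] + 4.0
    return (np.array([2.0, 1.0]) * d - n * np.array([1.0, 1.0])) / d**2

# Projected gradient flow: dx/dt = P_box(x - grad_f(x)) - x, Euler-discretized.
x = np.array([0.5, 0.5])
dt = 0.05
for _ in range(2000):
    x = x + dt * (np.clip(x - grad_f(x), 0.0, 1.0) - x)

n = 2 * x[0] + x[1] + 2.0
d = x[0] + x[1] + 4.0
print("x ~", x, " f(x) ~", n / d)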


IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) | 2011

A One-Layer Recurrent Neural Network for Constrained Nonsmooth Optimization

Qingshan Liu; Jun Wang

This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.
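The claim that convergence hinges on a single design parameter exceeding a derived lower bound can be illustrated with a toy exact-penalty problem (not the paper's model): below the threshold the penalized minimizer is infeasible, above it the constrained optimum is recovered. The problem data and both parameter values are assumptions.

import numpy as np

# minimize |x1 - 3| + 2*|x2|  subject to  x1 + x2 = 2.   Optimum: x* = (2, 0).
# The exact penalty F(x) = f(x) + sigma*|x1 + x2 - 2| is exact only for
# sigma > 1 (the optimal Lagrange multiplier is 1), mirroring the
# "design parameter larger than a derived lower bound" condition.

def subgrad(x, sigma):
    g = np.array([np.sign(x[0] - 3.0), 2.0 * np.sign(x[1])])
    return g + sigma * np.sign(x[0] + x[1] - 2.0) * np.array([1.0, 1.0])

for sigma in (0.5, 2.0):
    x = np.zeros(2)
    for k in range(1, 40001):
        x = x - (0.5 / np.sqrt(k)) * subgrad(x, sigma)
    print(f"sigma={sigma}: x ~ {np.round(x, 2)}, |x1+x2-2| ~ {abs(x[0]+x[1]-2):.2f}")

With sigma = 0.5 the iterate settles near the infeasible point (3, 0); with sigma = 2 it settles near the constrained optimum (2, 0).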


IEEE Transactions on Neural Networks | 2011

Finite-Time Convergent Recurrent Neural Network With a Hard-Limiting Activation Function for Constrained Optimization With Piecewise-Linear Objective Functions

Qingshan Liu; Jun Wang

This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
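One of the applications mentioned, least absolute deviation, has a familiar one-dimensional special case: minimizing sum_i |x - b_i| yields the median. A sign-based (hard-limiting) flow reaches it in finite time because the state moves at a speed bounded away from zero until the subgradient sum flips. The sketch below Euler-integrates such a flow; it is an illustrative assumption, not the paper's network.

import numpy as np

# 1-D least absolute deviation: minimize sum_i |x - b_i|  ->  x* = median(b).
b = np.array([1.0, 2.0, 7.0, 9.0, 4.0])

# Hard-limiting (sign) dynamics: dx/dt = -sum_i sign(x - b_i).
# The state travels at integer speed until it reaches the median, so the
# continuous-time flow converges in finite time; the Euler discretization
# chatters within one step size of the optimum.
x, dt = 0.0, 0.001
for _ in range(20000):
    x -= dt * np.sum(np.sign(x - b))

print("flow ~", round(x, 3), " median =", np.median(b))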


IEEE Transactions on Automatic Control | 2015

A Second-Order Multi-Agent Network for Bound-Constrained Distributed Optimization

Qingshan Liu; Jun Wang

This technical note presents a second-order multi-agent network for distributed optimization with a sum of convex objective functions subject to bound constraints. In the multi-agent network, the agents are connected locally as an undirected graph and know only their own objectives and constraints. The multi-agent network is proved to reach consensus at the optimal solution under mild assumptions. Moreover, the consensus analysis of the multi-agent network is converted to the convergence analysis of a dynamical system, which is carried out using the Lyapunov method. Compared with existing multi-agent networks for optimization, the second-order multi-agent network herein is capable of solving more general constrained distributed optimization problems. Simulation results on two numerical examples are presented to substantiate the performance and characteristics of the multi-agent network.
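The note's model is a second-order continuous-time network; as a rough discrete-time stand-in for the same setting (local objectives, neighbor-only communication, bound constraints), the sketch below runs a projected consensus-plus-gradient iteration on a 3-agent path graph. The objectives, graph, weights, and step sizes are all illustrative assumptions.

import numpy as np

# Three agents on a path graph 1-2-3; agent i privately holds
# f_i(x) = (x - a_i)^2 and the shared bound constraint x in [0, 3].
# The sum of f_i is minimized at mean(a) = 4; the bound makes x* = 3.
a = np.array([1.0, 4.0, 7.0])
lo, hi = 0.0, 3.0

# Doubly stochastic mixing matrix (Metropolis weights for the path graph).
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

x = np.zeros(3)  # each agent's local estimate
for k in range(1, 5001):
    grad = 2.0 * (x - a)                           # local gradients only
    x = np.clip(W @ x - (1.0 / k) * grad, lo, hi)  # mix, step, project

print("agent estimates ~", np.round(x, 3))  # all close to 3.0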


IEEE Transactions on Neural Networks | 2011

A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Subject to Linear Equality Constraints

Zhishan Guo; Qingshan Liu; Jun Wang

In this paper, a one-layer recurrent neural network is presented for solving pseudoconvex optimization problems subject to linear equality constraints. The global convergence of the neural network can be guaranteed even when the objective function is only pseudoconvex. Finite-time convergence of the state to the feasible region defined by the equality constraints is also proved. In addition, global exponential convergence is proved when the objective function is strongly pseudoconvex on the feasible region. Simulation results on illustrative examples and an application to chemical process data reconciliation are provided to demonstrate the effectiveness and characteristics of the neural network.
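A minimal sketch of the setting, not the paper's network: a pseudoconvex-but-nonconvex objective is restricted to the affine set {x : Ax = b} by a change of variables, and plain gradient flow runs on the reduced variable. The objective, constraint, and parametrization are assumptions.

import numpy as np

# f(x) = -exp(-||x||^2) is an increasing transform of the convex ||x||^2,
# hence pseudoconvex but not convex.
# Constraint: x1 + x2 = 1. Minimizer on the line: x* = (0.5, 0.5).

x0 = np.array([1.0, 0.0])                  # a feasible point
Z = np.array([1.0, -1.0]) / np.sqrt(2.0)   # basis of the constraint's null space

def grad_f(x):
    return 2.0 * x * np.exp(-x @ x)        # gradient of -exp(-||x||^2)

# Gradient flow on the reduced coordinate t, where x = x0 + Z*t stays feasible.
t, dt = 0.0, 0.1
for _ in range(2000):
    x = x0 + Z * t
    t -= dt * (Z @ grad_f(x))

print("x ~", x0 + Z * t, " (expected (0.5, 0.5))")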


Neural Computation | 2008

A one-layer recurrent neural network with a discontinuous activation function for linear programming

Qingshan Liu; Jun Wang

A one-layer recurrent neural network with a discontinuous activation function is proposed for linear programming. The number of neurons in the neural network is equal to that of decision variables in the linear programming problem. It is proven that the neural network with a sufficiently high gain is globally convergent to the optimal solution. Its application to linear assignment is discussed to demonstrate the utility of the neural network. Several simulation examples are given to show the effectiveness and characteristics of the neural network.
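For flavor, here is a toy LP handled by the same high-gain idea, with a penalty weight playing the role of the network's gain. This is a generic projected subgradient sketch with made-up data, not the paper's model.

import numpy as np

# LP: minimize -x1 - 2*x2  subject to  x1 + x2 = 1,  x >= 0.
# Solution: x* = (0, 1) with value -2; the optimal multiplier is 2,
# so a "gain" sigma > 2 makes the penalty exact.
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
sigma = 5.0

x = np.zeros(2)
for k in range(1, 20001):
    g = c + sigma * A.T @ np.sign(A @ x - b)          # subgradient of c'x + sigma*|Ax-b|
    x = np.maximum(x - (0.5 / np.sqrt(k)) * g, 0.0)   # project onto x >= 0

print("x ~", np.round(x, 2), " value ~", c @ x)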


IEEE Transactions on Neural Networks | 2005

A delayed neural network for solving linear projection equations and its analysis

Qingshan Liu; Jinde Cao; Youshen Xia

In this paper, we present a delayed neural network approach to solving linear projection equations. The Lyapunov-Krasovskii theory for functional differential equations and the linear matrix inequality (LMI) approach are employed to analyze the global asymptotic stability and global exponential stability of the delayed neural network. Theoretical results and illustrative examples show that, compared with the existing linear projection neural network, the delayed neural network can effectively solve a class of linear projection equations and some quadratic programming problems.
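A rough simulation sketch with assumed dynamics, not necessarily the paper's exact model: a delay differential equation dx/dt = -x(t) + P_Omega(x(t - tau) - (M x(t - tau) + q)), Euler-integrated with a history buffer. Because the map inside P_Omega is a contraction for the M chosen below, the fixed point, which solves the projection equation, is reached for any delay.

import numpy as np

# Linear projection equation: find x with x = P_Omega(x - (Mx + q)),
# where P_Omega clips to the box [-5, 5]^2.
M = np.array([[1.5, 0.2], [0.2, 1.5]])
q = np.array([-1.0, 1.0])

def f(x):
    return np.clip(x - (M @ x + q), -5.0, 5.0)

# Delayed network dx/dt = -x(t) + f(x(t - tau)), Euler with a history buffer.
dt, tau = 0.01, 0.5
d = int(tau / dt)                     # delay measured in steps
hist = [np.zeros(2)] * (d + 1)        # constant initial history
for _ in range(5000):
    x = hist[-1]
    hist.append(x + dt * (-x + f(hist[-1 - d])))

# The fixed point lies inside the box, so it solves Mx = -q.
print("delayed net ~", np.round(hist[-1], 3), " exact:", np.linalg.solve(M, -q))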


IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) | 2010

A Recurrent Neural Network Based on Projection Operator for Extended General Variational Inequalities

Qingshan Liu; Jinde Cao

Based on the projection operator, a recurrent neural network is proposed for solving extended general variational inequalities (EGVIs). Sufficient conditions are provided to ensure the global convergence of the proposed neural network based on Lyapunov methods. Compared with existing neural networks for variational inequalities, the proposed neural network is a modified version of the general projection neural network in the literature and is capable of solving EGVI problems. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed neural network.
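The base case of this construction, a projection network for a standard variational inequality VI(F, Omega), can be sketched directly; the EGVI generalization is not reproduced here, and the mapping F and set Omega below are assumptions.

import numpy as np

# VI(F, Omega): find x* in Omega with F(x*)'(y - x*) >= 0 for all y in Omega.
# Here Omega = [0, 2]^2 and F(x) = (x1 - 1, x2 + 0.5), a monotone mapping.
# Componentwise: x1* = 1 (F1 = 0) and x2* = 0 (boundary, F2 > 0), so x* = (1, 0).

def F(x):
    return np.array([x[0] - 1.0, x[1] + 0.5])

def P(x):  # projection onto the box Omega
    return np.clip(x, 0.0, 2.0)

# Projection neural network: dx/dt = P(x - F(x)) - x, Euler-integrated.
x = np.array([2.0, 2.0])
dt = 0.05
for _ in range(2000):
    x = x + dt * (P(x - F(x)) - x)

print("x ~", np.round(x, 3), " (expected (1, 0))")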

Collaboration


Dive into Qingshan Liu's collaborations.

Top Co-Authors

Jun Wang, City University of Hong Kong
Shaofu Yang, The Chinese University of Hong Kong
Jiang Xiong, Chongqing Three Gorges University
Bingrong Xu, Huazhong University of Science and Technology
Wei Zhang, Chongqing Three Gorges University
Chuanlin Zhang, Shanghai University of Electric Power
Long Cheng, Chinese Academy of Sciences
Xingchen Xu, Huazhong University of Science and Technology