
Publication


Featured research published by Zhong Tang.


IEEE Transactions on Control Systems Technology | 2005

The effect of time delays on the stability of load balancing algorithms for parallel computations

John Chiasson; Zhong Tang; J. Ghanem; Chaouki T. Abdallah; J.D. Birdwell; Majeed M. Hayat; H. Jerez

A deterministic dynamic nonlinear time-delay system is developed to model load balancing in a cluster of computer nodes used for parallel computations. The model is shown to be self-consistent in that the queue lengths cannot go negative and the total number of tasks in all the queues and the network is conserved (i.e., load balancing can neither create nor lose tasks). Further, it is shown that using the proposed load balancing algorithms, the system is stable in the sense of Lyapunov. Experimental results are presented and compared with the predicted results from the analytical model. In particular, simulations of the models are compared with an experimental implementation of the load balancing algorithm on a distributed computing network.
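
A minimal discrete-time sketch can make the two invariants in this abstract concrete. The update rule, gain, delay handling, and routing choice below are illustrative assumptions, not the model from the paper.

# Minimal discrete-time sketch of a delayed load-balancing loop (illustrative only;
# gain, delay, and routing rule are assumptions, not the paper's model). It shows
# the two invariants stated above: queue lengths never go negative and the total
# number of tasks is conserved by balancing.
from collections import deque

N, DELAY, GAIN, STEPS = 4, 3, 0.5, 200
queues = [100, 10, 40, 0]                        # initial task counts per node
history = deque([list(queues)] * (DELAY + 1))    # each node sees DELAY-old state
initial_total = sum(queues)

for _ in range(STEPS):
    observed = history[0]                        # delayed view of the network
    new_queues = list(queues)
    average = sum(observed) / N
    for i in range(N):
        excess = queues[i] - average
        if excess > 0:
            j = observed.index(min(observed))    # ship excess to shortest observed queue
            amount = min(int(GAIN * excess), new_queues[i])   # cannot go negative
            new_queues[i] -= amount
            new_queues[j] += amount              # transfer shown as instantaneous here
    queues = new_queues
    history.append(list(queues))
    history.popleft()

assert min(queues) >= 0 and sum(queues) == initial_total
print(queues)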


Archive | 2004

Dynamic Time Delay Models for Load Balancing. Part I: Deterministic Models

J. Douglas Birdwell; John Chiasson; Zhong Tang; Chaouki T. Abdallah; Majeed M. Hayat; Tse-Wei Wang

Parallel computer architectures utilize a set of computational elements (CE) to achieve performance that is not attainable on a single processor, or CE, computer. A common architecture is a cluster of otherwise independent computers communicating through a shared network. To make use of parallel computing resources, problems must be broken down into smaller units that can be solved individually by each CE while exchanging information with CEs solving other problems.


International Journal of Systems Science | 2003

Linear time delay model for studying load balancing instabilities in parallel computations

Chaouki T. Abdallah; N. Alluri; J.D. Birdwell; John Chiasson; V. Chupryna; Zhong Tang; Tse-Wei Wang

A linear time-delay system is proposed to model load balancing in a cluster of computer nodes used for parallel computations. The linear model is analysed for stability in terms of the delays in the transfer of information between nodes and the gains in the load balancing algorithm. This model is compared with an experimental implementation of the algorithm on a parallel computer network.
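
In schematic form, a linear time-delay model of this kind is a delay-differential equation whose stability is governed by a transcendental characteristic equation. The matrices below are placeholders standing in for whatever encodes the gains and the network topology; they are not taken from the paper.

\dot{x}(t) = A\,x(t) + B\,x(t-\tau), \qquad \det\!\bigl(sI - A - B\,e^{-s\tau}\bigr) = 0

Stability requires every root s of the characteristic equation to lie in the open left half-plane; the gains enter through A and B, while the delay \tau enters through the factor e^{-s\tau}, which is how the interplay of gains and delays determines whether the balancing loop is stable.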


Conference on Decision and Control | 2003

The effect of time delays in the stability of load balancing algorithms for parallel computations

J.D. Birdwell; John Chiasson; Chaouki T. Abdallah; Zhong Tang; Nivedita Alluri; Tse-Wei Wang

Deterministic dynamic nonlinear time-delay systems are developed to model load balancing in a cluster of computer nodes used for parallel computations. The model is shown to be self-consistent in that the queue lengths cannot go negative and the total number of tasks in all the queues is conserved (i.e., load balancing can neither create nor lose tasks). Further, it is shown that using the proposed load balancing algorithms, the system is stable. Experimental results are presented and compared with the predicted results from the analytical model. In particular, simulations of the models are compared with an experimental implementation of the load balancing algorithm on a parallel computer network.


IFAC Proceedings Volumes | 2001

Load balancing instabilities due to time delays in parallel computations

Chaouki T. Abdallah; J. Douglas Birdwell; John Chiasson; Victor Chupryna; Zhong Tang; Tse-Wei Wang

A deterministic dynamic linear time-delay model of load balancing in a cluster of nodes used for parallel computations is presented. The model is analyzed for stability in terms of the delays in the transfer of information between nodes and the gains in the load balancing algorithm.


American Control Conference | 2006

Resource-constrained load balancing controller for a parallel database

J.D. Birdwell; Zhong Tang; John Chiasson; Chaouki T. Abdallah; Majeed M. Hayat

The critical features of the load balancing problem are the delayed receipt of information and transferred load. Load distribution and task processing contend for the same resources on each computational element. This paper documents experimental results using a previously reported deterministic dynamic nonlinear system for load balancing in a cluster of computer nodes used for parallel computations in the presence of time delays and resource constraints. The model accounts for the trade-off between using processor resources to process tasks and the advantage of distributing the load evenly between the nodes to reduce overall processing time. The control law is implemented as a distributed closed-loop controller to balance the load at each node using not only local estimates of the queue sizes of other nodes, but also estimates of the number of tasks in transit to each node. Experimental results using a parallel DNA database show the superiority of the controller based on anticipated work loads over a controller based on local work loads.
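
A hypothetical sketch of one node-local balancing step in the spirit described above is given next; the function name, arguments, and gain value are illustrative, not the paper's interface. The point is that the node balances against its anticipated load (queued tasks plus tasks already in transit to each node) rather than against stale local counts alone.

# Hypothetical node-local control step (illustrative names, not the paper's API).
def control_step(my_id, my_queue, queue_estimates, in_transit_to, gain=0.3):
    """Return {destination_node: tasks_to_send} for one balancing step."""
    n = len(queue_estimates)
    # Anticipated load = observed queue estimate + tasks already on the wire to that node.
    anticipated = [queue_estimates[j] + in_transit_to[j] for j in range(n)]
    average = sum(anticipated) / n
    excess = anticipated[my_id] - average
    if excess <= 0:
        return {}                                   # at or below average: keep processing
    deficits = {j: average - anticipated[j]
                for j in range(n) if anticipated[j] < average}
    total_deficit = sum(deficits.values())
    budget = min(int(gain * excess), my_queue)      # never send more than is actually held
    return {j: int(budget * d / total_deficit) for j, d in deficits.items()}

# Example: node 0 holds 120 tasks, sees delayed estimates of the other queues,
# and knows how many tasks it has already shipped toward each node.
print(control_step(0, 120, [120, 30, 60], [0, 10, 0]))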


Conference on Decision and Control | 2004

A time delay model for load balancing with processor resource constraints

Zhong Tang; J.D. Birdwell; John Chiasson; Chaouki T. Abdallah; Majeed M. Hayat

A deterministic dynamic nonlinear time-delay system is developed to model load balancing in a cluster of computer nodes used for parallel computations. This model refines a model previously proposed by the authors to account for the fact that the load balancing operation involves processor time which cannot be used to process tasks. Consequently, there is a trade-off between using processor time/network bandwidth and the advantage of distributing the load evenly between the nodes to reduce overall processing time. The new model is shown to be self-consistent in that the queue lengths cannot become negative and the total number of tasks in all the queues is conserved (i.e., load balancing can neither create nor lose tasks). It is shown that the proposed model is (Lyapunov) stable for any input, but not necessarily asymptotically stable. Experimental results are presented and compared with the predicted results from the analytical model. In particular, simulations of the models are compared with an experimental implementation of the load balancing algorithm on a parallel computer network.
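
A toy two-node example (not the paper's proof) illustrates why conservation of tasks leads to Lyapunov stability without asymptotic stability:

\dot{x}_1 = -k\,(x_1 - x_2), \qquad \dot{x}_2 = -k\,(x_2 - x_1), \qquad k > 0.

Here \tfrac{d}{dt}(x_1 + x_2) = 0, so the total load is conserved and every point on the line x_1 = x_2 is an equilibrium. Trajectories converge to the balanced point \bigl(\tfrac{x_1(0)+x_2(0)}{2}, \tfrac{x_1(0)+x_2(0)}{2}\bigr), which depends on the initial condition, so no single equilibrium attracts a neighbourhood: each equilibrium is stable in the sense of Lyapunov but not asymptotically stable.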


Lecture Notes in Control and Information Sciences | 2007

Modeling and Closed Loop Control for Resource-Constrained Load Balancing with Time Delays in Parallel Computations

Zhong Tang; John White; John Chiasson; J. Douglas Birdwell

Load balancing for parallel computations is modeled as a deterministic dynamic nonlinear time-delay system. This model accounts for the trade-off between using processor time/network bandwidth and the advantage of distributing the load evenly between the nodes to reduce overall processing time. A distributed closed-loop controller is presented to balance load dynamically at each node by using not only the local estimate of the work load of other nodes, but also measurements of the amount of work load in transit. To handle the time-varying delays arising in closed-loop load balancing, a discrete event simulation based on OPNET Modeler is presented and compared with the experiments. Results indicate good agreement between the nonlinear time-delay model and the experiments on a parallel computer network. Moreover, both simulations and experiments show a dramatic increase in performance obtained using the proposed closed-loop controller.


Conference on Decision and Control | 2004

Implementation of the load balancing algorithm over a local area network and the internet

J. Ghanem; Chaouki T. Abdallah; Majeed M. Hayat; Sagar Dhakal; J.D. Birdwell; John Chiasson; Zhong Tang

In this paper, experimental evaluation of the load balancing algorithm in real environments is presented. We emphasize the effects of delays on the exchange of information among nodes, and the constraints these effects impose on the design of a load balancing strategy. Two testbeds in two different real environments have been built; the first implementation was over a local area network whereas the second was over PlanetLab. The results show the effect of network delays and variances in the task processing time on choosing adequate gain values for the load balancing algorithm.


IFAC Proceedings Volumes | 2003

The effect of feedback gains on the performance of a load balancing network with time delays

J.D. Birdwell; John Chiasson; Zhong Tang; Chaouki T. Abdallah; Majeed M. Hayat

A deterministic dynamic nonlinear time-delay system has recently been developed to model load balancing in a cluster of computer nodes used for parallel computations. Further, it was shown that using the proposed load balancing algorithms, the system is stable independent of the feedback gains. However, the performance of the system is dependent on the feedback gains. Experimental results are presented to show that the value of the feedback gain can be chosen to achieve a non-sluggish load balancing response that does not cause the tasks to oscillate between nodes.
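
A textbook scalar analogue (not the paper's multi-node model) shows how a feedback gain trades sluggishness against oscillation in the presence of a delay. The scalar delayed feedback loop

\dot{x}(t) = -K\,x(t-\tau), \qquad K > 0,

is asymptotically stable if and only if K\tau < \pi/2. For small K\tau the response is slow (a sluggish balancing action); as K\tau approaches \pi/2 the response becomes increasingly oscillatory; and beyond \pi/2 the oscillations grow, which is the qualitative behaviour of tasks bouncing between nodes that the gain is chosen to avoid.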

Collaboration


Dive into Zhong Tang's collaborations.

Top Co-Authors

Tse-Wei Wang

University of Tennessee


N. Alluri

University of Tennessee


John White

University of Tennessee


Sagar Dhakal

United States Naval Research Laboratory
