
Server Scalability Using Kubernetes


Abstract


An enterprise that has implemented virtualization can consolidate multiple servers onto fewer host machines and benefit from reduced space, power, and administrative requirements. Containerization, often described as lightweight virtualization, shares the host operating system's resources and therefore imposes significantly less overhead. Kubernetes is commonly used to deploy and scale application containers automatically, and the scalability of these containers can be configured in Kubernetes through several supporting parameters. Exploiting this scalability is expected to improve performance and server response time for users without reducing server utilization. This research focuses on applying scalability in Kubernetes and evaluating its performance in handling an increasing number of concurrent users accessing academic data. The experiment employed three computers: one as the master node and two as worker nodes. Simulations were performed by an application that generates multiple user behaviors accessing various microservice URLs. Two scenarios were designed to evaluate the CPU load on a single server and on multiple servers; on multiple servers, server scalability was enabled to serve the user requests. Applying scalability to the containers on multiple servers reduces the CPU usage per pod because the load is distributed across containers spread over many worker nodes. Besides CPU load, this research also measured the server's response time to user requests. Response time on multiple servers is longer than on a single server because of the overhead of scaling containers.
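The paper does not publish its configuration listings, so the following is only a minimal sketch of how CPU-based scaling of the kind described above could be requested through the Kubernetes Python client; the deployment name "academic-api", the namespace, and the replica and CPU thresholds are assumptions made for illustration.

    # Illustrative sketch only: deployment name, namespace, and thresholds are assumed.
    from kubernetes import client, config

    config.load_kube_config()  # load cluster credentials (e.g., from the master node)

    # Autoscale the (hypothetical) "academic-api" Deployment on average pod CPU usage.
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="academic-api-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="academic-api"),
            min_replicas=1,
            max_replicas=6,
            target_cpu_utilization_percentage=50,  # add replicas when pod CPU exceeds 50%
        ),
    )

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)

With an autoscaler like this in place, Kubernetes adds pod replicas across the worker nodes as average CPU utilization rises, which corresponds to the load-distribution behavior reported in the abstract.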

Pages 1-4
DOI 10.1109/TIMES-iCON47539.2019.9024501
Language English
Journal 2019 4th Technology Innovation Management and Engineering Science International Conference (TIMES-iCON)
