Sarbjeet Singh
Panjab University, Chandigarh
Publications
Featured research published by Sarbjeet Singh.
International Journal of Computer Applications | 2013
Lovejit Singh; Sarbjeet Singh
Cloud computing refers to a paradigm whereby services are offered via the Internet using a pay-as-you-go model. Services are deployed in data centers, and the pool of data centers is collectively referred to as the “Cloud”. Data centers make use of scheduling techniques to optimally allocate resources to various jobs. Different scenarios require different scheduling algorithms; the selection of a particular algorithm depends upon factors such as the parameter to be optimized (cost or time), the quality of service to be provided, and the information available about the job. Workflow applications are applications whose sub-tasks must be executed in a particular order to complete the whole task. These sub-tasks have parent-child relationships: a parent task must be executed before its child tasks. Workflow scheduling algorithms are therefore expected to preserve the dependency constraints implied by this structure, and resources are allocated to the sub-tasks with these constraints taken into account. In this paper, various workflow scheduling algorithms are surveyed. Some algorithms optimize cost, some optimize time, and others focus on reliability, availability, energy efficiency, load balancing, or a combination of these parameters. A lot of work has already been done in the area of workflow scheduling, but we feel there is still considerable scope for applying other optimization techniques, such as intelligent water drops, to schedule workflow applications.
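The dependency-preserving behaviour described above can be illustrated with a short sketch. The following Python snippet is not from the paper; the task model, data structures, and greedy earliest-available resource choice are illustrative assumptions. It schedules a DAG of tasks in topological order, starting each task only after all of its parents have finished:

```python
from collections import deque

def schedule_workflow(tasks, deps, runtimes, n_resources):
    """Greedy list scheduling that preserves parent-child constraints.

    tasks:     list of task ids
    deps:      dict mapping task -> set of parent tasks
    runtimes:  dict mapping task -> execution time
    """
    children = {t: [] for t in tasks}
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    for t, parents in deps.items():
        for p in parents:
            children[p].append(t)

    ready = deque(t for t in tasks if indegree[t] == 0)
    resource_free = [0.0] * n_resources   # time each resource becomes idle
    finish = {}                           # task -> finish time
    plan = {}                             # task -> (resource, start time)

    while ready:
        t = ready.popleft()
        # A task may start only after its latest-finishing parent.
        earliest = max((finish[p] for p in deps.get(t, ())), default=0.0)
        r = min(range(n_resources), key=lambda i: max(resource_free[i], earliest))
        start = max(resource_free[r], earliest)
        finish[t] = start + runtimes[t]
        resource_free[r] = finish[t]
        plan[t] = (r, start)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return plan, max(finish.values())

# Example: t2 and t3 depend on t1; two identical resources
deps = {"t2": {"t1"}, "t3": {"t1"}}
plan, makespan = schedule_workflow(["t1", "t2", "t3"], deps,
                                   {"t1": 2, "t2": 3, "t3": 1}, 2)
print(plan, makespan)  # t2 and t3 run in parallel after t1; makespan 5.0
```

Each surveyed algorithm can be seen as refining the two choices made here: the order in which ready tasks are picked and the resource each one is mapped to.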
Advances in Computing and Communications | 2013
Nitish Chopra; Sarbjeet Singh
Cloud computing provides on-demand resources for compute and storage requirements. A private cloud is a cost-effective option for executing workflow applications, but when its resources are not enough to meet an application's storage and compute requirements, public clouds are the remaining option. While public clouds charge users on a pay-per-use basis, private clouds are owned by users and can be utilized at no charge. When a public cloud and a private cloud are merged, we get a hybrid cloud. In a hybrid cloud, task scheduling is a complex process because jobs can be allocated resources from either the private cloud or the public cloud. Deadline-based scheduling is the main focus in many workflow applications. The proposed algorithm optimizes cost by deciding which resources should be leased from the public cloud to complete the workflow execution within the deadline. In the proposed work, we have developed a level-based scheduling algorithm that executes tasks level by level and uses the concept of sub-deadlines, which helps in finding the most cost-effective resources on the public cloud while still completing workflow execution within the deadline. A performance analysis and a comparison of the proposed algorithm with the min-min approach are also presented.
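The sub-deadline idea can be sketched briefly. The function below is a minimal illustration, not the paper's algorithm; the proportional split rule and all names are assumptions. It divides an overall deadline among the levels of a workflow DAG in proportion to the longest task at each level, since tasks within a level can run in parallel:

```python
def level_sub_deadlines(levels, runtimes, deadline):
    """Split an overall deadline into cumulative per-level sub-deadlines.

    levels:   list of lists; levels[i] holds the task ids at DAG depth i
    runtimes: dict task -> estimated execution time
    Assumption: each level's share is proportional to its longest task.
    """
    level_cost = [max(runtimes[t] for t in lvl) for lvl in levels]
    total = sum(level_cost)
    sub, elapsed = [], 0.0
    for cost in level_cost:
        elapsed += deadline * cost / total   # cumulative sub-deadline
        sub.append(elapsed)
    return sub

# Example: 3 levels, overall deadline of 100 time units
levels = [["t1"], ["t2", "t3"], ["t4"]]
runtimes = {"t1": 4, "t2": 6, "t3": 2, "t4": 10}
print(level_sub_deadlines(levels, runtimes, 100))  # [20.0, 50.0, 100.0]
```

A level that finishes ahead of its sub-deadline leaves slack for later levels, which is what makes cheaper, slower public-cloud resources usable without missing the overall deadline.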
2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT) | 2013
Nitish Chopra; Sarbjeet Singh
Cloud computing nowadays plays a major role in storing and processing huge tasks, with options for scalability. Deadline-based scheduling is the main focus when processing tasks using the available resources. A private cloud is owned by an organization and its resources are free to the user, whereas public clouds charge users using a pay-as-you-go model. When the private cloud is not enough for processing user tasks, resources can be acquired from the public cloud. The combination of a public cloud and a private cloud gives rise to a hybrid cloud. In hybrid clouds, task scheduling is a complex process because tasks can be allocated resources of either the private cloud or the public cloud. This paper presents an algorithm that decides which resources should be leased from the public cloud to complete the workflow execution within the deadline and at minimum monetary cost to the user. A hybrid scheduling algorithm has been proposed which uses a new concept of sub-deadlines for rescheduling and allocation of resources in the public cloud. The algorithm helps in finding the most cost-effective resources on the public cloud while completing workflow execution within the deadline. Three rescheduling policies have been evaluated in this paper. For performance analysis, we have compared the HEFT (Heterogeneous Earliest Finish Time) based hybrid scheduling algorithm with the greedy and min-min approaches. Results show that the proposed algorithm saves a large amount of cost compared to the greedy and min-min approaches and completes all tasks within the deadline.
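HEFT orders tasks by their upward rank before mapping each one to the resource giving the earliest finish time. The snippet below sketches only the standard HEFT ranking step, not the paper's full hybrid algorithm; all data structures are assumptions:

```python
def upward_ranks(tasks, children, avg_runtime, avg_comm):
    """Compute HEFT upward ranks:
    rank_u(t) = w(t) + max over children c of (comm(t, c) + rank_u(c)).
    Tasks are then scheduled in decreasing rank order.

    children:    dict task -> list of child tasks
    avg_runtime: dict task -> average execution cost across resources
    avg_comm:    dict (task, child) -> average data-transfer cost
    """
    rank = {}

    def rank_u(t):
        if t not in rank:
            rank[t] = avg_runtime[t] + max(
                (avg_comm[(t, c)] + rank_u(c) for c in children.get(t, ())),
                default=0.0,
            )
        return rank[t]

    for t in tasks:
        rank_u(t)
    return sorted(tasks, key=rank.get, reverse=True)

children = {"t1": ["t2", "t3"]}
w = {"t1": 3.0, "t2": 2.0, "t3": 4.0}
comm = {("t1", "t2"): 1.0, ("t1", "t3"): 0.5}
print(upward_ranks(["t1", "t2", "t3"], children, w, comm))  # ['t1', 't3', 't2']
```

The rescheduling policies evaluated in the paper then decide what happens when a task's sub-deadline is at risk, e.g. by moving it onto a leased public-cloud resource.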
International Journal of Computer Applications | 2010
Sarbjeet Singh; Manjit Thapa; Sukhvinder Singh; Gurpreet Singh
This paper surveys reusability of software components, which assist developers in the development of software. Reusability of software is an important prerequisite for cost- and time-optimized software development, and work in software reuse focuses on reusing artifacts. The paper discusses reusability concepts for component-based systems, explores several existing metrics, for both white-box and black-box components, that measure reusability directly or indirectly, and presents the special requirements on software in this domain. Reusability is about building a library of frequently used components, thus allowing new programs to be assembled quickly from existing components. Component-Based Systems (CBS) have now become a generalized approach for application development. Keywords: tools of reusability, components of reuse, reusability metrics.
Future Generation Computer Systems | 2017
Sarbjeet Singh; Jagpreet Sidhu
This paper addresses the problem of determining the trustworthiness of Cloud Service Providers (CSPs) in a cloud environment. In the current work, trustworthiness is defined as the degree of compliance of a CSP to the promised quantitative QoS parameters defined in the Service Level Agreement (SLA). Due to the large number of CSPs offering similar kinds of services in the cloud environment, it has become a challenging task for Cloud Clients (CCs) to identify and differentiate between trustworthy and untrustworthy CSPs. At present, there is no trust evaluation system that allows CCs to evaluate the trustworthiness of CSPs on the basis of their adherence to the SLA, i.e., the compliance provided by the CSPs to CCs as per the SLAs. This paper proposes a Compliance-based Multi-dimensional Trust Evaluation System (CMTES) that enables CCs to determine the trustworthiness of a CSP from different perspectives, as trust is a subjective concept. Such a system is of great help to CCs who want to choose, from a pool of CSPs, a CSP satisfying their desired QoS requirements. The framework enables us to evaluate the trustworthiness of a CSP from the CC's perspective, the Cloud Auditor's perspective, the Cloud Broker's perspective and the Peers' perspective. Experimental results show that the CMTES is effective and stable in differentiating trustworthy and untrustworthy CSPs. The CMTES has been validated with the help of synthetic data, due to the lack of a standardized dataset, and its applicability has been demonstrated through a case study involving the use of real cloud data.
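The core idea, trust as measured SLA compliance, can be sketched compactly. In the snippet below (an illustrative sketch, not the CMTES formulas; the parameter names, the capping rule, and the perspective weights are all assumptions), per-parameter compliance ratios are aggregated with perspective-specific weights:

```python
def compliance(promised, delivered, higher_is_better=True):
    """Per-parameter compliance ratio, capped at 1.0.
    For 'lower is better' parameters (e.g., latency) the ratio is inverted.
    """
    ratio = delivered / promised if higher_is_better else promised / delivered
    return min(ratio, 1.0)

def trust_score(records, weights):
    """Weighted aggregate of compliance ratios across QoS parameters.

    records: dict param -> (promised, delivered, higher_is_better)
    weights: dict param -> weight from one perspective (client, auditor, ...)
    """
    total = sum(weights.values())
    return sum(weights[p] * compliance(*records[p]) for p in records) / total

records = {
    "uptime_pct": (99.9, 99.5, True),
    "latency_ms": (100.0, 120.0, False),
}
# A client may weight uptime heavily; an auditor might weight latency more.
print(round(trust_score(records, {"uptime_pct": 0.7, "latency_ms": 0.3}), 3))
```

Evaluating the same compliance records under different weight profiles is one simple way to obtain the per-perspective trust values the abstract describes.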
International Conference on Security in Computer Networks and Distributed Systems | 2014
Lovejit Singh; Sarbjeet Singh
Cloud computing refers to applications and services offered over the Internet using a pay-as-you-go model. The services are offered from data centers all over the world, which jointly are referred to as the “Cloud”. The data centers use scheduling techniques to effectively allocate virtual machines to cloud applications. Cloud applications in areas such as business enterprise, bioinformatics and astronomy need workflow processing, in which tasks are executed based on data dependencies. Cloud users impose QoS constraints while executing their workflow applications on the cloud. The QoS parameters are defined in the SLA (Service Level Agreement) document, which is signed between the cloud user and the cloud provider. In this paper, a genetic algorithm is proposed that schedules workflow applications in an unreliable cloud environment while meeting user-defined QoS constraints. The budget-constrained, time-minimizing genetic algorithm reduces the failure rate and makespan of workflow applications by allocating to them resources which are reliable and whose cost of execution stays within the user's budget. The performance of the genetic algorithm has been compared with the max-min and min-min scheduling algorithms in an unreliable cloud environment.
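The genetic-algorithm skeleton behind such a scheduler is easy to sketch. The toy version below is a minimal illustration, not the paper's algorithm: the load-based makespan proxy ignores task dependencies and reliability, and all names and parameter values are assumptions. It evolves task-to-resource mappings and penalizes chromosomes that exceed the budget:

```python
import random

def evolve(n_tasks, runtimes, costs, budget, n_resources,
           pop_size=40, generations=200, mut_rate=0.1):
    """Tiny GA for mapping tasks to resources under a budget.

    runtimes[t][r]: time of task t on resource r
    costs[t][r]:    monetary cost of task t on resource r
    """
    def fitness(chrom):
        load = [0.0] * n_resources
        cost = 0.0
        for t, r in enumerate(chrom):
            load[r] += runtimes[t][r]
            cost += costs[t][r]
        penalty = 1e6 * max(0.0, cost - budget)   # budget violation penalty
        return max(load) + penalty                # lower is better

    pop = [[random.randrange(n_resources) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)    # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:        # mutate one gene
                child[random.randrange(n_tasks)] = random.randrange(n_resources)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)

runtimes = [[2, 1], [3, 4], [2, 2]]   # 3 tasks, 2 resources
costs = [[1, 3], [2, 1], [1, 2]]
print(evolve(3, runtimes, costs, budget=5, n_resources=2))
```

A workflow-aware version would replace the fitness function with a DAG schedule evaluation and could add a failure-rate term, as the paper's unreliable-environment setting requires.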
Grid Computing | 2017
Jagpreet Sidhu; Sarbjeet Singh
Trust is one of the most challenging issues in the emerging cloud computing era. Over the past few years, numerous cloud service providers have emerged, providing similar kinds of services. It has become incredibly complex for cloud clients to make a distinction among multiple cloud service providers offering similar kinds of services. Cloud clients need trustworthy service providers who provide services exactly as per the SLA and do not deviate from their promises. Although there have been significant efforts to build trust between service providers and clients by providing data, storage and network security, relatively few efforts have been made in the field of trustworthiness determination by monitoring the compliance of offered services as per the SLA. This paper presents the design of a trust evaluation framework that uses a compliance monitoring mechanism to determine the trustworthiness of service providers. The compliance values are computed and then processed using a technique known as the Improved Technique for Order of Preference by Similarity to Ideal Solution (Improved TOPSIS) to obtain trust in the service providers. A case study based approach has been followed to demonstrate the usability and the applicability of the proposed framework. Experiments have been performed using real cloud data extracted from the Cloud Harmony reports. The experimental results make it clear that the proposed framework can be used in real cloud environments to determine the trustworthiness of service providers through real-time monitoring of their services.
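TOPSIS itself follows a fixed recipe: normalize the decision matrix, weight it, find the ideal and anti-ideal alternatives, and rank by relative closeness. The snippet below implements the classical method; the paper's Improved TOPSIS refines it, and those refinements, along with the sample data here, are not reproduced from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    """Classical TOPSIS ranking.

    matrix:  rows = providers, columns = compliance values per criterion
    weights: weight per criterion
    benefit: True if higher is better for that criterion
    Returns closeness to the ideal solution; higher means more trustworthy.
    """
    n_cols = len(matrix[0])
    # Vector-normalize each column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_cols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_cols)] for row in matrix]
    # Ideal (best) and anti-ideal (worst) values per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to ideal
        d_neg = math.dist(row, worst)   # distance to anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Two providers; criteria: uptime compliance (benefit), response time (cost)
print(topsis([[0.99, 120], [0.95, 80]], [0.6, 0.4], [True, False]))
```

Running the example ranks the second provider higher: its faster response time outweighs its slightly lower uptime under these weights.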
International Conference on Recent Advances in Engineering Computational Sciences | 2014
Sarbjeet Singh; Deepak Chand
Establishing trust between service providers and service consumers is an essential factor for the widespread use and popularity of any distributed environment which offers services to users through a pay-per-use model. On a cloud, virtually any kind of service can be offered and accessed from anywhere, at any time. Services available through a cloud are scalable, reliable and more cost-effective. Determining trust in a service provider is a complicated and error-prone task because trust means different things to different users, and trust depends on several parameters linked with the service provider's behavior, such as honesty, attitude, reputation, accuracy, and the reliability of the services provided. In this paper we propose a trust evaluation mechanism which calculates final trust in a service provider based on the user's past experiences with the service provider and on the recommendations of friends and third parties. The mechanism has been simulated, and its applicability, flexibility and robustness have been demonstrated through a typical cloud computing scenario. The simulation results show that the mechanism is workable and can be applied to calculate final trust in a service provider in a cloud environment.
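Blending direct experience with recommendations is the heart of such a mechanism. The function below is an illustrative sketch only; the weights and the plain averaging of recommendations are assumptions, not the paper's scheme:

```python
def final_trust(direct, friend_recs, third_party_recs, w=(0.5, 0.3, 0.2)):
    """Blend direct experience with recommendations into one trust value.

    direct:           trust in [0, 1] from the user's own past interactions
    friend_recs:      list of trust values recommended by friends
    third_party_recs: list of trust values from third parties
    w:                illustrative weights (direct, friends, third parties);
                      the paper's exact weighting is not reproduced here
    """
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    w_direct, w_friend, w_third = w
    return (w_direct * direct
            + w_friend * avg(friend_recs)
            + w_third * avg(third_party_recs))

print(final_trust(0.8, [0.7, 0.9], [0.6]))  # 0.5*0.8 + 0.3*0.8 + 0.2*0.6 = 0.76
```

Weighting direct experience above recommendations reflects the usual design choice that first-hand evidence is harder to manipulate than second-hand reports.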
Future Generation Computer Systems | 2016
Navneet Kaur Gill; Sarbjeet Singh
In cloud computing, it is important to maintain high data availability and high system performance. To meet these requirements, the concept of replication is used. As the number of replicas of a data file increases, data availability and performance also increase, but at the same time, the cost of creating and maintaining new replicas increases as well. To enjoy the maximum benefits of replication, it is essential to optimize the cost of replication. Cloud systems are heterogeneous in nature, as different data centers have different policies and different hardware and software configurations. As a result, the replicas of a data file placed at different data centers have different availabilities and replication costs associated with them. In this paper, a dynamic, cost-aware, optimized data replication strategy is proposed that identifies the minimum number of replicas required to ensure the desired availability. The concept of the knapsack problem has been used to optimize the cost of replication and to re-replicate replicas from higher-cost data centers to lower-cost data centers, without compromising data availability. Mathematical descriptions and illustrations have been provided for the different phases of the proposed strategy, keeping in mind the heterogeneous nature of the system. The proposed strategy has been simulated using the CloudSim toolkit. The experimental results indicate that the strategy is effective in optimizing the cost of replication and increasing data availability.
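One way to see the knapsack connection is sketched below; this is an illustrative formulation, not the paper's exact model, and the site data, integer cost units, and log-transform trick are assumptions. Treating each candidate data center as a knapsack item with value -ln(1 - availability) makes the combined availability of the chosen set additive in log space, so a standard 0/1 knapsack maximizes availability under a cost budget:

```python
import math

def place_replicas(sites, budget):
    """Select replica sites via 0/1 knapsack.

    sites:  list of (name, availability, cost) per candidate data center
    budget: total replication cost allowed (integer cost units)
    Combined availability of chosen sites is 1 - prod(1 - a_i); summing
    -ln(1 - a_i) values is equivalent, which makes it a knapsack problem.
    """
    values = [-math.log(1.0 - a) for _, a, _ in sites]
    costs = [c for _, _, c in sites]
    # dp[b] = (best value, chosen site names) within budget b
    dp = [(0.0, []) for _ in range(budget + 1)]
    for i in range(len(sites)):
        for b in range(budget, costs[i] - 1, -1):  # reverse loop: 0/1 knapsack
            cand = dp[b - costs[i]][0] + values[i]
            if cand > dp[b][0]:
                dp[b] = (cand, dp[b - costs[i]][1] + [sites[i][0]])
    best_value, chosen = dp[budget]
    return chosen, 1.0 - math.exp(-best_value)     # combined availability

sites = [("dc1", 0.90, 3), ("dc2", 0.95, 5), ("dc3", 0.80, 2)]
print(place_replicas(sites, 5))  # picks dc1 + dc3: availability 0.98 at cost 5
```

Re-replication fits the same frame: dropping a replica at a high-cost site and re-running the selection finds a cheaper set that still meets the availability target.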
International Journal of Computer Applications | 2010
Sarbjeet Singh; Sukhvinder Singh; Gurpreet Singh
Software reusability is the likelihood that a segment of source code can be used again to add new functionalities with slight or no modification. Reusable modules and classes reduce implementation time, increase the likelihood that prior testing and use have eliminated bugs, and localize code modifications when a change in implementation is required. Subroutines or functions are the simplest form of reuse. Chunks of code are regularly organized into layers using modules or namespaces. Proponents claim that objects and software components offer a more advanced form of reusability, although it has been tough to objectively measure and define levels or scores of reusability. Reusability implies some explicit management of build, packaging, distribution, installation, configuration, deployment, maintenance and upgrade issues. If these issues are not considered, software may appear to be reusable from a design point of view but will not be reused in practice. This paper presents an empirical study of the software reuse activity by expert designers in the context of object-oriented design. Our study focuses on the following three aspects of reuse: (1) the interaction between design processes, e.g. constructing a problem representation and searching for and evaluating solutions, and reuse processes, i.e. retrieving and using previous solutions; (2) the mental processes involved in reuse, e.g. example-based retrieval or bottom-up versus top-down expanding of the solution; and (3) the mental representations constructed throughout the reuse activity, e.g. dynamic versus static representations.