Hema Srikanth
IBM
Publications
Featured research published by Hema Srikanth.
International Symposium on Software Reliability Engineering | 2010
Sean Banerjee; Hema Srikanth; Bojan Cukic
Software as a Service (SaaS) has gained momentum in the past few years, and businesses have increasingly moved to the SaaS model for their IT solutions. SaaS is a model in which software is delivered to customers as a service over the web. Under this model, service providers must ensure that services are available and reliable for end users at all times, which puts significant pressure on providers to adopt the right test processes and methodologies to avoid violating the provisions of Service Level Agreements (SLAs). There is a lack of research on approaches tailored to reliability analysis of SaaS suites. In this paper, we extend traditional approaches to reliability analysis of web servers and propose methods tailored to assessing the workload and reliability of SaaS applications. In addition, we show the importance of data filtration when assessing SaaS reliability from log files. Finally, we discuss the suitability of reliability measures with respect to their relevance in the context of SLAs.
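The data-filtration step the abstract emphasizes can be illustrated with a minimal sketch: before estimating SaaS reliability from web-server logs, benign entries (e.g. monitoring probes) are removed so that only genuine user-facing failures drive the estimate. The log format, status-code rule, and endpoint names below are illustrative assumptions, not the paper's actual pipeline.

```python
# Assumed monitoring-noise endpoints whose entries should not
# count toward user-facing reliability.
BENIGN_PATHS = {"/health", "/ping"}

def failure_rate(entries):
    """Fraction of non-benign requests that returned a server error."""
    relevant = [e for e in entries if e["path"] not in BENIGN_PATHS]
    if not relevant:
        return 0.0
    return sum(e["status"] >= 500 for e in relevant) / len(relevant)

log = [
    {"path": "/health", "status": 500},  # probe failure: filtered out
    {"path": "/app",    "status": 200},
    {"path": "/app",    "status": 503},
]
print(failure_rate(log))  # 0.5 (would be ~0.67 without filtering)
```

Without filtering, the failed health probe would inflate the estimated failure rate even though no end user was affected.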
International Symposium on Software Reliability Engineering | 2009
Hema Srikanth; Myra B. Cohen; Xiao Qu
System testing of configurable software is an expensive and resource-constrained process. Insufficient testing often leads to escaped faults in the field, where failures impact customers and are costly to repair. Prior work has shown that it is possible to efficiently sample configurations for testing using combinatorial interaction testing, and to prioritize these configurations to increase the rate of early fault detection. The underlying assumption to date has been that there is no added complexity to configuring a system-level environment over a user-configurable one; i.e., the time required to set up and test each individual configuration is nominal. In this paper, we examine prioritization of system-configurable software driven not only by fault detection but also by the cost of configuration and the setup time that moving between different configurations incurs. We present a case study on two releases of an enterprise software system using failures reported in the field. We examine the most effective prioritization technique and conclude that (1) using the failure history of configurations can improve the early fault detection rate, but that (2) we must consider the fault detection rate over time, not by the number of configurations tested. It is better to test related configurations that incur minimal setup time than to test fewer, more diverse configurations.
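The cost-cognizant ordering idea can be sketched as a greedy loop: from the current configuration, repeatedly pick the candidate that maximizes historical fault count per unit of switching cost, where cost is approximated by the number of option settings that must change. The configurations, fault counts, and cost model below are invented illustrations, not the study's actual data or technique details.

```python
def switch_cost(current, nxt):
    """Setup-time proxy: number of options whose value changes (min 1)."""
    return sum(current[k] != nxt[k] for k in current) or 1

def greedy_order(configs, faults, current):
    """Order configurations by fault history per unit of setup cost."""
    order, remaining = [], set(configs)
    while remaining:
        best = max(remaining,
                   key=lambda c: faults[c] / switch_cost(current, configs[c]))
        order.append(best)
        remaining.remove(best)
        current = configs[best]  # next switch is measured from here
    return order

configs = {
    "c1": {"os": "linux", "db": "mysql"},
    "c2": {"os": "linux", "db": "db2"},
    "c3": {"os": "win",   "db": "db2"},
}
faults = {"c1": 3, "c2": 2, "c3": 5}       # failures seen per configuration
baseline = {"os": "linux", "db": "mysql"}  # currently deployed setup
print(greedy_order(configs, faults, baseline))  # ['c1', 'c3', 'c2']
```

Note how c3, despite having the most historical failures, is deferred until its two-option switch is cheap relative to the remaining alternatives.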
International Conference on Software Maintenance | 2011
Hema Srikanth; Myra B. Cohen
Many organizations are moving towards a business model of Software as a Service (SaaS), where customers select and pay for services dynamically via the web. In SaaS, service providers face the challenge of delivering and maintaining high-quality software solutions that must continue to work under an enormous number of scenarios; customers can easily subscribe and unsubscribe from services at any point. To date, there has been little research on unique approaches to regression test methodologies for testing in a SaaS environment. In this paper, we present an industrial case study of a regression testing approach to improve test effectiveness and efficiency in SaaS. We model service-level use cases from field failures as abstract events and then generate sequences of these for testing to provide broad coverage of the possible use cases. In subsequent releases of the system, we prioritize the tests to improve time to detection of faults in the modified system. We have applied our technique to two releases of a large industrial enterprise-level SaaS application and demonstrate that using our approach (1) we could have uncovered escaped faults prior to the system release in both versions of the system; (2) using a priority order, we could have improved the efficiency of testing in the first version; and (3) prioritization based on failure history from the first version increases the fault detection rate in the new version, suggesting a correlation between the important sequences across versions that can be leveraged for regression testing.
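The event-sequence idea above can be sketched minimally: service-level use cases are abstracted into events, candidate event pairs are enumerated for broad coverage, and pairs observed in field failures are tested first. The event names and failure history below are invented for illustration, not the study's actual model.

```python
from collections import Counter
from itertools import product

# Assumed abstract service events for a subscription-based SaaS system.
EVENTS = ["subscribe", "configure", "use", "unsubscribe"]

failure_history = [          # assumed event pairs seen in field failures
    ("unsubscribe", "use"),
    ("subscribe", "configure"),
    ("unsubscribe", "use"),
]
weight = Counter(failure_history)

# Enumerate all ordered event pairs for broad use-case coverage,
# then test the failure-prone pairs first.
pairs = [p for p in product(EVENTS, repeat=2) if p[0] != p[1]]
ordered = sorted(pairs, key=lambda p: -weight[p])
print(ordered[0])  # ('unsubscribe', 'use')
```

A real implementation would generate longer sequences and track failure history per release, but the ranking principle is the same.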
Journal of Systems and Software | 2012
Hema Srikanth; Sean Banerjee
Software testing is an expensive process consuming at least 50% of the total development cost. Among the types of testing, system testing is the most expensive and complex. Companies are frequently faced with budgetary constraints, which may limit their ability to effectively complete testing efforts before delivering a software product. We build upon prior test case prioritization research and present a system-level approach to test case prioritization called Prioritization of Requirements for Test (PORT). PORT prioritizes system test cases based on four factors for each requirement: customer priority, implementation complexity, fault proneness, and requirements volatility. Test cases for requirements with higher priority based upon a weighted average of these factors are executed earlier in system test. An academic feasibility study and three post hoc industrial studies were conducted. Results indicate that PORT can be used to improve the rate of failure detection when compared with a random and operational profile-driven random approach. Furthermore, we investigated the contribution of the prioritization factors towards the improved rate of failure detection and found customer priority was the most significant contributor. Tool support is provided for the PORT scheme which allows for automatic collection of the four factor values and the resultant test case prioritization.
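The PORT weighting described above can be sketched as follows: each requirement receives a value computed as a weighted average of customer priority (CP), implementation complexity (IC), fault proneness (FP), and requirements volatility (RV), and test cases inherit their requirement's value and run highest-first. The weights and scores below are illustrative assumptions, not the paper's calibrated values.

```python
# Assumed factor weights (must sum to 1 for a weighted average).
WEIGHTS = {"CP": 0.4, "IC": 0.2, "FP": 0.3, "RV": 0.1}

def pfv(req):
    """Prioritization factor value: weighted average of the four factors."""
    return sum(WEIGHTS[f] * req[f] for f in WEIGHTS)

requirements = {                      # illustrative 1-10 factor scores
    "R1": {"CP": 9, "IC": 3, "FP": 8, "RV": 2},
    "R2": {"CP": 4, "IC": 7, "FP": 5, "RV": 6},
}
tests = [("T1", "R2"), ("T2", "R1"), ("T3", "R1")]  # (test, requirement)

ordered = sorted(tests, key=lambda t: -pfv(requirements[t[1]]))
print([name for name, _ in ordered])  # ['T2', 'T3', 'T1']
```

R1 scores 6.8 against R2's 5.1, so R1's tests run first; per the study's finding, the CP weight dominates the ranking.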
Information & Software Technology | 2016
Hema Srikanth; Charitha Hettiarachchi; Hyunsook Do
Context: Software testing is an expensive and time-consuming process. Software engineering teams are often forced to terminate their testing efforts due to budgetary and time constraints, which inevitably leads to long-term issues with quality and customer satisfaction. Test case prioritization (TCP) has been shown to improve test effectiveness.
Objective: The results of our prior work on requirements-based test prioritization showed an improved rate of fault detection on industrial projects; customer priority (CP) and fault proneness (FP) were the biggest contributing factors to test effectiveness. The objective of this paper is to further investigate these two factors and apply prioritization based on them in a different domain: an enterprise-level cloud application. We aim to provide an effective prioritization scheme that practitioners can implement with minimum effort. A further objective is to compare the results and benefits of these two factors with two risk-based prioritization approaches that extract risks from the system requirements categories.
Method: Our approach involved analyzing and assigning values to each requirement based on two important factors, CP and FP, so that the test cases for high-value requirements are prioritized earlier for execution. We also proposed two requirements-based TCP approaches that use risk information of the system.
Results: Our results indicate that the use of CP and FP can improve the effectiveness of TCP. The results also show that risk-based prioritization can be effective in improving TCP.
Conclusion: We performed an experiment on an enterprise cloud application to measure the fault detection rate of different test suites prioritized based on CP, FP, and risks. The results show that all approaches outperform the random prioritization approach that is prevalent in industry. Furthermore, the proposed approaches can easily be adopted in industry to address schedule and budget constraints during the testing phase.
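The risk-based variant mentioned above can be sketched as a risk-exposure ranking: each requirement category carries an exposure (estimated failure likelihood times impact), and test cases run in decreasing exposure of the category they cover. The categories, scales, and numbers below are invented for illustration, not the paper's risk model.

```python
risk = {                    # category: (likelihood 1-5, impact 1-5), assumed
    "billing":   (4, 5),
    "reporting": (2, 3),
    "ui":        (3, 2),
}
# Risk exposure = likelihood x impact, per category.
exposure = {cat: lik * imp for cat, (lik, imp) in risk.items()}

tests = [("T1", "ui"), ("T2", "billing"), ("T3", "reporting")]
ordered = [name for name, _ in
           sorted(tests, key=lambda t: -exposure[t[1]])]
print(ordered)  # ['T2', 'T1', 'T3']
```

Billing's exposure of 20 dominates, so its test runs first; the ui/reporting tie (6 each) is broken by the stable sort's original order.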
International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing | 2011
Sean Banerjee; Hema Srikanth; Bojan Cukic
The paradigm shift from traditional on-premise software to a service-based model has gained significant momentum in the past decade. One such concept, Software as a Service (SaaS), delivers the functionality of traditional on-premise software as a service over the web. While a defect or malfunction in a traditional on-premise application may affect a single user, the affected user base in a SaaS application may span the entire group of customers serviced by the provider. The physical disconnect between end users and SaaS applications puts the onus on service providers to deliver highly dependable systems that are available and reliable at all times. In this paper, we explore the general challenges faced in delivering and analyzing highly dependable service-based systems. We quantify the challenges of dependability assessment using a commercial case study. Furthermore, we explore one facet of dependability assessment: handling log entries that are not necessarily related to dependability. We provide a novel approach to log filtering and show that the removal of benign log entries leads to more realistic system dependability analysis. We also show the need to merge multiple types of SaaS logs to support effective analysis.
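The log-merging step mentioned above can be sketched with the standard library: several independently time-sorted SaaS log streams (a web-server log and an application log here) are merged into a single timeline before dependability analysis. The record shapes and contents are assumed for illustration.

```python
import heapq

# Each stream is already sorted by timestamp (first tuple element),
# so heapq.merge interleaves them lazily into one chronological view.
web_log = [(1, "web", "GET /app 200"), (4, "web", "GET /app 503")]
app_log = [(2, "app", "login ok"),     (3, "app", "db timeout")]

merged = list(heapq.merge(web_log, app_log))
print([src for _, src, _ in merged])  # ['web', 'app', 'app', 'web']
```

Correlating the application's `db timeout` with the later web-layer `503` is only possible once the streams share one timeline, which is the motivation for merging before analysis.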
Archive | 2009
Hema Srikanth; Gary Denner; Mette Friedel Margareta Hammer; Steve R. Murray
Archive | 2007
Hema Srikanth; Bryan D. Osenbach; Jeffrey B. Sloyer
Archive | 2008
Sean Callanan; Patrick J. O'Sullivan; Hema Srikanth; Carol S. Zimmet
Archive | 2009
Colm Farrell; Liam Harpur; Patrick J. O'Sullivan; Fred Raguillat; Hema Srikanth