Parminder Flora
BlackBerry Limited
Publication
Featured research published by Parminder Flora.
International Conference on Software Maintenance | 2009
Zhen Ming Jiang; Ahmed E. Hassan; Gilbert Hamann; Parminder Flora
The goal of a load test is to uncover functional and performance problems of a system under load. Performance problems refer to situations where a system suffers from unexpectedly high response time or low throughput. It is difficult to detect performance problems in a load test due to the absence of formally defined performance objectives and the large amount of data that must be examined. In this paper, we present an approach which automatically analyzes the execution logs of a load test for performance problems. We first derive the system's performance baseline from previous runs. Then we perform an in-depth performance comparison against the derived performance baseline. Case studies show that our approach produces few false alarms (with a precision of 77%) and scales well to large industrial systems.
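The baseline-comparison idea described above can be illustrated with a small Python sketch. This is not the authors' tool: it assumes per-scenario response times have already been parsed out of the execution logs, and the 20% tolerance and the scenario names in the usage example are illustrative assumptions.

# Minimal sketch: flag scenarios whose response time in a new load test
# deviates notably from a baseline derived from previous runs.
# Assumes response times (in ms) were already extracted from the logs.
from statistics import median

def flag_regressions(baseline_runs, new_run, tolerance=0.20):
    """baseline_runs: {scenario: [times from past runs]}, new_run: {scenario: [times]}."""
    flagged = {}
    for scenario, new_times in new_run.items():
        base_times = baseline_runs.get(scenario)
        if not base_times:
            continue  # no baseline available for this scenario
        base_med, new_med = median(base_times), median(new_times)
        # flag if the new median is more than `tolerance` worse than the baseline
        if new_med > base_med * (1 + tolerance):
            flagged[scenario] = (base_med, new_med)
    return flagged

baseline = {"login": [120, 130, 125], "checkout": [300, 310, 305]}
new = {"login": [122, 128, 131], "checkout": [420, 415, 430]}
print(flag_regressions(baseline, new))  # only "checkout" should be flagged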
International Conference on Software Maintenance | 2008
Zhen Ming Jiang; Ahmed E. Hassan; Gilbert Hamann; Parminder Flora
Many software applications must provide services to hundreds or thousands of users concurrently. These applications must be load tested to ensure that they can function correctly under high load. Problems in load testing stem from the load environment, the load generators, and the application under test, and they must be identified and addressed to ensure that load testing results are correct. It is difficult to detect problems in a load test due to the large amount of data that must be examined. Current industrial practice mainly involves time-consuming manual checks that, for example, grep the logs of the application for error messages. In this paper, we present an approach which mines the execution logs of an application to uncover its dominant behavior (i.e., execution sequences) and flags anomalies (i.e., deviations) from the dominant behavior. Using a case study of two open source and two large enterprise software applications, we show that our approach can automatically identify problems in a load test. Our approach flags < 0.01% of the log lines for closer analysis by domain experts. The flagged lines indicate load testing problems with a relatively small number of false alarms. Our approach scales well for large applications and is currently used daily in practice.
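A minimal Python sketch of the dominant-behavior idea follows. It is not the published technique; the "thread_id event" log format and the 0.1% rarity threshold are assumptions made purely for illustration.

# Recover event sequences per thread/request from a log, treat the most frequent
# sequences as the dominant behavior, and flag rare sequences as deviations.
from collections import defaultdict, Counter

def flag_anomalous_sequences(log_lines, rarity=0.001):
    sequences = defaultdict(list)
    for line in log_lines:
        parts = line.strip().split(" ", 1)   # assumed "thread_id event ..." format
        if len(parts) == 2:
            sequences[parts[0]].append(parts[1])
    counts = Counter(tuple(seq) for seq in sequences.values())
    total = sum(counts.values())
    # sequences seen in fewer than `rarity` of all threads deviate from the dominant behavior
    return [list(seq) for seq, n in counts.items() if n / total < rarity]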
International Conference on Quality Software | 2010
King Chun Foo; Zhen Ming Jiang; Bram Adams; Ahmed E. Hassan; Ying Zou; Parminder Flora
Performance regression testing detects performance regressions in a system under load. Such regressions refer to situations where software performance degrades compared to previous releases, although the new version behaves correctly. In current practice, performance analysts must manually analyze performance regression testing data to uncover performance regressions. This process is both time-consuming and error-prone due to the large volume of metrics collected, the absence of formal performance objectives, and the subjectivity of individual performance analysts. In this paper, we present an automated approach to detect potential performance regressions in a performance regression test. Our approach compares the results of a new test against correlations among performance metrics pre-computed from performance regression testing repositories. Case studies show that our approach scales well to large industrial systems and detects performance problems that are often overlooked by performance analysts.
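The correlation-comparison idea can be sketched as follows. This is an illustrative approximation rather than the authors' implementation; Pearson correlation and the 0.4 shift threshold are assumptions.

# Learn pairwise correlations between performance counters from past regression
# tests, then flag counter pairs whose correlation shifts noticeably in the new run.
import numpy as np

def correlation_shifts(baseline, new_run, threshold=0.4):
    """baseline, new_run: dicts mapping counter name -> list of samples of equal length."""
    names = sorted(set(baseline) & set(new_run))
    base = np.corrcoef([baseline[n] for n in names])
    new = np.corrcoef([new_run[n] for n in names])
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(base[i, j] - new[i, j]) > threshold:
                flagged.append((names[i], names[j], base[i, j], new[i, j]))
    return flagged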
International Conference on Performance Engineering | 2012
Thanh H. D. Nguyen; Bram Adams; Zhen Ming Jiang; Ahmed E. Hassan; Mohamed N. Nasser; Parminder Flora
The goal of performance regression testing is to check for performance regressions in a new version of a software system. Performance regression testing is an important phase in the software development process, yet it is very time-consuming and there is usually little time allocated for it. A typical test run outputs thousands of performance counters, which testers usually have to inspect manually to identify performance regressions. In this paper, we propose an approach to analyze performance counters across test runs using a statistical process control technique called control charts. We evaluate our approach using historical data of a large software team as well as an open-source software project. The results show that our approach can accurately identify performance regressions in both software systems. Feedback from practitioners is very promising due to the simplicity and ease of explanation of the results.
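A minimal control-chart sketch follows, assuming a single counter and one baseline run; the 3-sigma limits and the 10% violation-ratio increase are conventional control-chart choices, not necessarily the exact parameters used in the paper.

# Derive control limits from a baseline test run and flag the new run when the share
# of counter samples outside those limits (the violation ratio) grows noticeably.
from statistics import mean, stdev

def violation_ratio(samples, lcl, ucl):
    return sum(1 for x in samples if x < lcl or x > ucl) / len(samples)

def detect_regression(baseline_samples, new_samples, sigma=3, delta=0.10):
    centre = mean(baseline_samples)
    spread = stdev(baseline_samples)
    lcl, ucl = centre - sigma * spread, centre + sigma * spread
    old = violation_ratio(baseline_samples, lcl, ucl)
    new = violation_ratio(new_samples, lcl, ucl)
    return (new - old) > delta   # True means the counter likely regressed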
International Conference on Software Engineering | 2014
Tse-Hsun Chen; Weiyi Shang; Zhen Ming Jiang; Ahmed E. Hassan; Mohamed N. Nasser; Parminder Flora
Object-Relational Mapping (ORM) provides developers with a conceptual abstraction for mapping application code to the underlying databases. ORM is widely used in industry due to its convenience, permitting developers to focus on developing the business logic without worrying too much about the database access details. However, developers often write ORM code without considering its impact on database performance, leading to transactions that time out or hang in large-scale systems. Unfortunately, there is little support to help developers automatically detect suboptimal database accesses. In this paper, we propose an automated framework to detect ORM performance anti-patterns. Our framework automatically flags performance anti-patterns in the source code. Furthermore, as there could be hundreds or even thousands of instances of anti-patterns, our framework provides support to prioritize performance bug fixes based on a statistically rigorous performance assessment. We have successfully evaluated our framework on two open source systems and one large-scale industrial system. Our case studies show that our framework can detect new and known real-world performance bugs and that fixing the detected performance anti-patterns can improve the system response time by up to 98%.
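The sketch below illustrates only the prioritization step, i.e., a statistically rigorous check of whether fixing a flagged anti-pattern is worthwhile. The Wilcoxon rank-sum test and Cohen's d used here are common choices and an assumption on my part, not necessarily the statistics used by the framework itself.

# Compare response times with and without a suspected anti-pattern and keep
# fixes whose effect is both statistically significant and practically large.
import numpy as np
from scipy.stats import mannwhitneyu

def worth_fixing(times_with_antipattern, times_after_fix, alpha=0.05, min_effect=0.8):
    a = np.asarray(times_with_antipattern, dtype=float)
    b = np.asarray(times_after_fix, dtype=float)
    _, p = mannwhitneyu(a, b, alternative="greater")   # is the anti-pattern version slower?
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    effect = (a.mean() - b.mean()) / pooled_sd          # Cohen's d
    return p < alpha and effect >= min_effect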
Workshop on Software and Performance | 2008
Dharmesh Thakkar; Ahmed E. Hassan; Gilbert Hamann; Parminder Flora
Techniques for performance modeling are broadly classified into measurement-based, analytical, and simulation-based techniques. Measurement-based performance modeling is commonly adopted in practice. It requires the execution of a large number of performance tests to build accurate performance models, and these tests must be repeated for every release or build of an application, a time-consuming and error-prone manual process. In this paper, we present a framework for the systematic and automated building of measurement-based performance models. The framework is based on our experience in performance modeling of two large applications: the DVD Store application by Dell and another large enterprise application. We use the Dell DVD Store application as a running example to demonstrate the various steps in our framework. We present the benefits and shortcomings of our framework and discuss the expected reduction in effort from adopting it.
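As a rough illustration of measurement-based modeling, the sketch below fits a curve through measured (load, response time) points from automated test runs and uses it to predict response time at untested load levels. The quadratic fit and the numbers are illustrative assumptions, not data or models from the paper.

# Fit a simple least-squares model to measurements collected by automated test runs.
import numpy as np

loads = np.array([10, 20, 40, 80, 160])           # concurrent users per test run
resp_ms = np.array([110, 118, 140, 210, 450])     # measured mean response times (ms)
model = np.polyfit(loads, resp_ms, deg=2)         # quadratic measurement-based model

predict = np.poly1d(model)
print(round(predict(120)))   # estimated response time at an untested load level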
Journal of Software: Evolution and Process | 2014
Weiyi Shang; Zhen Ming Jiang; Bram Adams; Ahmed E. Hassan; Michael W. Godfrey; Mohamed N. Nasser; Parminder Flora
A great deal of research in software engineering focuses on understanding the dynamic nature of software systems. Such research makes use of automated instrumentation and profiling techniques after the fact, i.e., without considering domain knowledge. In this paper, we turn our attention to another source of dynamic information: the Communicated Information (CI) about the execution of a software system. Major examples of CI are execution logs and system events. They are generated from statements that are inserted intentionally by domain experts (e.g., developers or administrators) to convey crucial points of interest. The accessibility and domain-driven nature of the CI make it a valuable source for studying the evolution of a software system. In a case study on one large open source and one industrial software system, we explore the concept of CI and its evolution by mining the execution logs of these systems. Our study illustrates the need for better traceability techniques between the CI and the Log Processing Apps that analyze it. In particular, we find that the CI changes at a rather high rate across versions, leading to fragile Log Processing Apps. 40% to 60% of these changes can be avoided, and the impact of 15% to 50% of the changes can be controlled through the use of robust analysis techniques by Log Processing Apps. We also find that Log Processing Apps that track implementation-level CI (e.g., performance analysis) are more fragile than Log Processing Apps that track domain-level CI (e.g., workload modeling), because the implementation-level CI is often short-lived.
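The core measurement behind such a study can be sketched as follows: abstract execution-log lines into templates by masking dynamic values, then diff the template sets of two versions. The regexes and the change-rate definition are illustrative assumptions, not the paper's exact methodology.

# Abstract log lines into templates and measure how much of the communicated
# information changed between two versions of a system.
import re

def to_template(line):
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)   # pointers / ids
    line = re.sub(r"\b\d+\b", "<NUM>", line)              # numeric values
    return line.strip()

def ci_change_rate(logs_v1, logs_v2):
    t1 = {to_template(l) for l in logs_v1}
    t2 = {to_template(l) for l in logs_v2}
    changed = len(t1 ^ t2)                 # templates added or removed between versions
    return changed / max(len(t1 | t2), 1)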
Conference on Software Maintenance and Reengineering | 2010
Haroon Malik; Zhen Ming Jiang; Bram Adams; Ahmed E. Hassan; Parminder Flora; Gilbert Hamann
Load testing is crucial to uncover functional and performance bugs in large-scale systems. Load tests generate vast amounts of performance data, which need to be compared and analyzed across tests in limited time. This helps performance analysts to understand the resource usage of an application and to find out whether an application is meeting its performance goals. The biggest challenge for performance analysts is to identify the few important performance counters in the highly redundant performance data. In this paper, we employ a statistical technique, Principal Component Analysis (PCA), to reduce the large volume of performance counter data to a smaller, more meaningful, and manageable set. Furthermore, our methodology automates the process of comparing the important counters across load tests to identify performance gains and losses. A case study on load test data of a large enterprise application shows that our methodology can effectively guide performance analysts to identify and compare top performance counters across tests in limited time.
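A minimal PCA sketch of the counter-reduction step follows. It is not the authors' exact methodology; the number of components and the ranking by summed absolute loadings are assumptions chosen for illustration.

# Rank performance counters by how strongly they load on the first few principal
# components of the counter data collected during a load test.
import numpy as np

def top_counters(counters, n_components=3, top_k=5):
    """counters: dict mapping counter name -> time series (equal lengths assumed)."""
    names = sorted(counters)
    X = np.array([counters[n] for n in names], dtype=float).T   # samples x counters
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)          # standardize each counter
    _, _, vt = np.linalg.svd(X, full_matrices=False)            # rows of vt are components
    loadings = np.abs(vt[:n_components]).sum(axis=0)            # importance per counter
    ranked = sorted(zip(names, loadings), key=lambda p: -p[1])
    return [name for name, _ in ranked[:top_k]]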
International Conference on Software Engineering | 2008
Ahmed E. Hassan; Daryl Joseph Martin; Parminder Flora; Paul Mansfield; Dave Dietz
Large customers commonly request on-site capacity testing before upgrading to a new version of a mission-critical telecom application. Customers fear that the new version cannot handle their current workload. These on-site engagements are costly and time-consuming; they prolong the upgrade cycle for products and reduce the revenue stream of rapidly growing companies. We present an industrial case study of a lightweight, simple approach for customizing the operational profile for a particular deployment. The approach flags sequences of repeated events out of millions of events in execution logs. A performance engineer can identify noteworthy usage scenarios using these flagged sequences, and the identified scenarios are used to customize the operational profile. Using a customized profile for performance testing alleviates customers' concerns about the performance of a new version of an application and results in more realistic performance and reliability estimates. The simplicity of our approach ensures that customers can easily grasp the results of our analysis compared to more complex analysis approaches. We demonstrate the feasibility and applicability of our approach by customizing the operational profile of an enterprise telecom application.
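The flagging step can be approximated with a simple n-gram count over the event stream recovered from the logs, as in the sketch below. The window size, event names, and the frequency-only ranking are illustrative assumptions rather than the approach's exact heuristics.

# Report the most frequently repeated event sequences so a performance engineer
# can map them to usage scenarios for the customized operational profile.
from collections import Counter

def frequent_sequences(events, window=3, top_k=10):
    grams = Counter(tuple(events[i:i + window]) for i in range(len(events) - window + 1))
    return grams.most_common(top_k)

events = ["login", "search", "view", "login", "search", "view", "logout"]
print(frequent_sequences(events, window=2, top_k=3))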
Working Conference on Reverse Engineering | 2011
Weiyi Shang; Zhen Ming Jiang; Bram Adams; Ahmed E. Hassan; Michael W. Godfrey; Mohamed N. Nasser; Parminder Flora
A great deal of research in software engineering focuses on understanding the dynamic nature of software systems. Such research makes use of automated instrumentation and profiling techniques after the fact, i.e., without considering domain knowledge. In this paper, we turn our attention to another source of dynamic information: the Communicated Information (CI) about the execution of a software system. Major examples of CI are execution logs and system events. They are generated from statements that are inserted intentionally by domain experts (e.g., developers or administrators) to convey crucial points of interest. The accessibility and domain-driven nature of the CI make it a valuable source for studying the evolution of a software system. In a case study on one large open source and one industrial software system, we explore the concept of CI and its evolution by mining the execution logs of these systems. Our study illustrates the need for better traceability techniques between the CI and the Log Processing Apps that analyze it. In particular, we find that the CI changes at a rather high rate across versions, leading to fragile Log Processing Apps. 40% to 60% of these changes can be avoided, and the impact of 15% to 50% of the changes can be controlled through the use of robust analysis techniques by Log Processing Apps. We also find that Log Processing Apps that track implementation-level CI (e.g., performance analysis) are more fragile than Log Processing Apps that track domain-level CI (e.g., workload modeling), because the implementation-level CI is often short-lived.