Wes Masri
American University of Beirut
Publication
Featured research published by Wes Masri.
International Conference on Software Engineering | 2003
Andy Podgurski; David Leon; Patrick Francis; Wes Masri; Melinda Minch; Jiayang Sun; Bin Wang
This paper proposes automated support for classifying reported software failures in order to facilitate prioritizing them and diagnosing their causes. A classification strategy is presented that involves the use of supervised and unsupervised pattern classification and multivariate visualization. These techniques are applied to profiles of failed executions in order to group together failures with the same or similar causes. The resulting classification is then used to assess the frequency and severity of failures caused by particular defects and to help diagnose those defects. The results of applying the proposed classification strategy to failures of three large subject programs are reported. These results indicate that the strategy can be effective.
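The grouping step lends itself to a small illustration. The following is a minimal sketch, not the paper's actual pipeline (which combined supervised and unsupervised classification with multivariate visualization): failing executions are represented as hypothetical profile vectors of element counts, and runs within a distance threshold of an existing group's representative are grouped as suspected shared causes.

```java
import java.util.*;

// Sketch only: group failing executions whose profile vectors are close,
// as a stand-in for the unsupervised clustering step. Profiles here are
// hypothetical counts of executed program elements.
public class FailureGrouper {

    // Euclidean distance between two execution profiles.
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Greedy single-pass grouping: each failure joins the first existing
    // group whose representative is closer than the threshold.
    static List<List<double[]>> group(List<double[]> failures, double threshold) {
        List<List<double[]>> groups = new ArrayList<>();
        for (double[] f : failures) {
            List<double[]> home = null;
            for (List<double[]> g : groups) {
                if (distance(g.get(0), f) < threshold) { home = g; break; }
            }
            if (home == null) { home = new ArrayList<>(); groups.add(home); }
            home.add(f);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<double[]> failures = List.of(
            new double[]{5, 0, 2},   // likely same cause ...
            new double[]{5, 1, 2},
            new double[]{0, 9, 7});  // ... likely a different cause
        System.out.println(group(failures, 3.0).size() + " suspected causes");
    }
}
```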
IEEE Transactions on Software Engineering | 2007
Wes Masri; Andy Podgurski; David Leon
Some software defects trigger failures only when certain local or nonlocal program interactions occur. Such interactions are modeled by the closely related concepts of information flows, program dependences, and program slices. The latter concepts underlie a variety of proposed test data adequacy criteria, and they form a potentially important basis for filtering existing test cases. We report the results of an empirical study of several test case filtering techniques that are based on exercising information flows. Both coverage-based and profile-distribution-based filtering techniques are considered. They are compared to filtering techniques based on exercising simpler program elements, such as basic blocks, branches, function calls, and call pairs, with respect to their effectiveness for revealing defects.
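As a rough illustration of the coverage-based variant, the sketch below greedily keeps only tests that exercise program elements not already covered. The element names and test IDs are hypothetical, and the paper's profile-distribution-based techniques are not represented here.

```java
import java.util.*;

// Sketch of coverage-based filtering: each test is represented by the set
// of program elements (blocks, branches, call pairs, or information flows)
// it exercises; a test is kept only if it adds uncovered elements.
public class CoverageFilter {

    static List<String> filter(Map<String, Set<String>> coverageByTest) {
        Set<String> covered = new HashSet<>();
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : coverageByTest.entrySet()) {
            // Keep the test only if it exercises something new.
            if (!covered.containsAll(e.getValue())) {
                kept.add(e.getKey());
                covered.addAll(e.getValue());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> cov = new LinkedHashMap<>();
        cov.put("t1", Set.of("b1", "b2"));
        cov.put("t2", Set.of("b1"));           // redundant under coverage
        cov.put("t3", Set.of("b2", "flow:x->y"));
        System.out.println(filter(cov));        // [t1, t3]
    }
}
```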
International Symposium on Software Reliability Engineering | 2004
Wes Masri; Andy Podgurski; David Leon
A new approach to dynamic information flow analysis is presented that can be used to detect and debug insecure flows in programs. It can be applied offline to validate and debug a program against an information flow policy, or, when fast response is not critical, it can be applied online to prevent illegal flows in deployed programs. Since dynamic analysis alone is inherently unable to detect implicit information flows, our approach incorporates a static preprocessing phase that permits detection of most implicit flows at runtime, in addition to explicit ones. To support interactive debugging of insecure flows, it also incorporates a new forward computing algorithm for dynamic slicing, which is more precise than previous forward computing algorithms and is not restricted to programs with structured control flow. A prototype tool implementing the proposed approach has been developed for Java byte code programs. Case studies in which this tool was applied to several subject programs are described.
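The forward-computing idea can be sketched compactly. The following handles data dependences only, with hypothetical statement IDs and variable names; the paper's algorithm additionally tracks dynamic control dependences and implicit flows, which are omitted here.

```java
import java.util.*;

// Minimal sketch of forward-computing dynamic slicing: after each executed
// statement, the slice of the defined variable is the statement itself
// plus the slices of all variables it used. Slices are thus available
// online, without a backward traversal over a stored trace.
public class ForwardSlicer {
    private final Map<String, Set<Integer>> slices = new HashMap<>();

    // Record execution of statement `id`, which defines `def` using `uses`.
    void execute(int id, String def, String... uses) {
        Set<Integer> s = new HashSet<>();
        s.add(id);
        for (String u : uses) {
            s.addAll(slices.getOrDefault(u, Set.of()));
        }
        slices.put(def, s);
    }

    Set<Integer> sliceOf(String var) {
        return slices.getOrDefault(var, Set.of());
    }

    public static void main(String[] args) {
        ForwardSlicer fs = new ForwardSlicer();
        fs.execute(1, "a");            // a = input()
        fs.execute(2, "b");            // b = input()
        fs.execute(3, "c", "a");       // c = a + 1
        fs.execute(4, "d", "c", "b");  // d = c * b
        // Prints the statements d dynamically depends on: 1, 2, 3, 4.
        System.out.println(fs.sliceOf("d"));
    }
}
```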
International Conference on Software Testing, Verification, and Validation | 2010
Wes Masri; Rawad Abou Assi
Researchers have argued that for a failure to be observed, the following three conditions must be met: 1) the defect is executed, 2) the program has transitioned into an infectious state, and 3) the infection has propagated to the output. Coincidental correctness arises when the program produces the correct output while conditions 1) and 2) are met but not 3). In previous work, we showed that coincidental correctness is prevalent and demonstrated that it is a safety-reducing factor for coverage-based fault localization. This work aims at cleansing test suites from coincidental correctness to enhance fault localization. Specifically, given a test suite in which each test has been classified as failing or passing, we present three variations of a technique that identifies the subset of passing tests that are likely to be coincidentally correct. We evaluated the effectiveness of our techniques by empirically quantifying the following: 1) how accurately they identified the coincidentally correct tests, 2) how much they improved the effectiveness of coverage-based fault localization, and 3) how much coverage decreased as a result of applying them. Using our better-performing technique and configuration, the safety and precision of fault localization were improved for 88% and 61% of the programs, respectively.
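One plausible variation of such a cleansing step can be sketched as follows: elements covered by every failing test are treated as failure-correlated, and passing tests that cover all of them are flagged as likely coincidentally correct. This is an illustrative approximation, not the exact techniques evaluated in the paper.

```java
import java.util.*;

// Hedged sketch: flag passing tests whose coverage includes every element
// exercised by all failing runs. Test IDs and element names are hypothetical.
public class CCCleanser {

    static List<String> likelyCC(Map<String, Set<String>> passing,
                                 Collection<Set<String>> failing) {
        // Intersect the coverage of all failing runs.
        Set<String> correlated = null;
        for (Set<String> cov : failing) {
            if (correlated == null) correlated = new HashSet<>(cov);
            else correlated.retainAll(cov);
        }
        List<String> flagged = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : passing.entrySet()) {
            if (e.getValue().containsAll(correlated)) flagged.add(e.getKey());
        }
        return flagged;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> pass = new LinkedHashMap<>();
        pass.put("p1", Set.of("s1", "s2", "s9"));  // covers the suspect elements
        pass.put("p2", Set.of("s3"));
        List<Set<String>> fail =
            List.of(Set.of("s1", "s9"), Set.of("s1", "s9", "s4"));
        System.out.println(likelyCC(pass, fail));   // [p1]
    }
}
```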
International Symposium on Software Testing and Analysis | 2009
Wes Masri; Rawad Abou-Assi; Marwa El-Ghali; Nour Al-Fatairi
Coverage-based fault localization techniques typically assign a suspiciousness rank to the statements in a program following an analysis of the coverage of certain types of program elements by the failing and passing runs. The effectiveness of existing techniques has been limited even though researchers have explored various suspiciousness metrics, ranking strategies, and types of program elements. This work aims at identifying the factors that impair coverage-based fault localization. Specifically, we conducted an empirical study in which we assessed the prevalence of the following scenarios: 1) the condition for failure is met but the program does not fail; 2) the faulty statement is executed but the program does not fail; 3) the failure is correlated with a combination of more than one program element, possibly of different types; and 4) a large number of program elements occurred in all failing runs but in no passing runs. The study was conducted using 148 seeded versions of ten Java programs, which included three releases of NanoXML and seven programs from the Siemens test suite that were converted to Java. The results showed that most of the above scenarios occur frequently.
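For concreteness, the sketch below computes a standard coverage-based suspiciousness metric (Tarantula); it is shown as a representative of the metrics this line of work studies, not necessarily one evaluated in this particular paper.

```java
// Tarantula suspiciousness: a statement covered by many failing runs and
// few passing runs receives a score close to 1.
public class Tarantula {

    // failedCov/passedCov: failing/passing runs that cover the statement;
    // totalFailed/totalPassed: sizes of the failing and passing test sets.
    static double suspiciousness(int failedCov, int totalFailed,
                                 int passedCov, int totalPassed) {
        double failRatio = (double) failedCov / totalFailed;
        double passRatio = (double) passedCov / totalPassed;
        if (failRatio + passRatio == 0) return 0;
        return failRatio / (failRatio + passRatio);
    }

    public static void main(String[] args) {
        // Covered by all 3 failing runs and 1 of 7 passing runs.
        System.out.println(suspiciousness(3, 3, 1, 7)); // ~0.875
    }
}
```

Scenarios 1) and 2) above are exactly the coincidental-correctness situations that distort such scores: a covered-but-passing run inflates passRatio for the faulty statement and drags its rank down.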
ACM Transactions on Software Engineering and Methodology | 2014
Wes Masri; Rawad Abou Assi
Researchers have argued that for a failure to be observed, the following three conditions must be met: C_R = the defect was reached; C_I = the program has transitioned into an infectious state; and C_P = the infection has propagated to the output. Coincidental Correctness (CC) arises when the program produces the correct output while condition C_R is met but not C_P. We recognize two forms of coincidental correctness, weak and strong. In weak CC, C_R is met and C_I may or may not be met, whereas in strong CC, both C_R and C_I are met. In this work we first show that CC is prevalent in both of its forms and demonstrate that it is a safety-reducing factor for Coverage-Based Fault Localization (CBFL). We then propose two techniques for cleansing test suites from coincidental correctness to enhance CBFL, given that the test cases have already been classified as failing or passing. We evaluated the effectiveness of our techniques by empirically quantifying their accuracy in identifying weak CC tests. The results were promising; for example, the better-performing technique, using 105 test suites and statement coverage, exhibited 9% false negatives, 30% false positives, and neither false negatives nor false positives in 14.3% of the test suites. Using 73 test suites and more complex coverage, the corresponding numbers were 12%, 19%, and 15%.
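The definitions translate directly into a small classifier, sketched below. Strong CC is a special case of weak CC, so the classifier reports the strongest form that applies; the boolean condition flags are assumed inputs that real tooling would have to infer from execution data.

```java
// Direct encoding of the definitions above: given whether the defect was
// reached (cR), the state became infectious (cI), and the infection
// propagated to the output (cP), classify a passing execution.
public class CCKind {
    enum Kind { NOT_CC, WEAK_CC, STRONG_CC }

    static Kind classify(boolean cR, boolean cI, boolean cP) {
        if (!cR || cP) return Kind.NOT_CC; // CC requires cR met and cP not met
        return cI ? Kind.STRONG_CC : Kind.WEAK_CC;
    }

    public static void main(String[] args) {
        System.out.println(classify(true, false, false));  // WEAK_CC
        System.out.println(classify(true, true, false));   // STRONG_CC
        System.out.println(classify(false, false, false)); // NOT_CC
    }
}
```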
ACM Transactions on Software Engineering and Methodology | 2009
Wes Masri; Andy Podgurski
Dynamic information flow analysis (DIFA) was devised to enable the flow of information among variables in an executing program to be monitored and possibly regulated. It is related to techniques like dynamic slicing and dynamic impact analysis. To better understand the basis for DIFA, we conducted an empirical study in which we measured the strength of information flows identified by DIFA, using information theoretic and correlation-based methods. The results indicate that in most cases the occurrence of a chain of dynamic program dependences between two variables does not indicate a measurable information flow between them. We also explored the relationship between the strength of an information flow and the length of the corresponding dependence chain, and we obtained results indicating that no consistent relationship exists between the length of an information flow and its strength. Finally, we investigated whether data dependence and control dependence make equal or unequal contributions to flow strength. The results indicate that flows due to data dependences alone are stronger, on average, than flows due to control dependences alone. We present the details of our study and consider the implications of the results for applications of DIFA and related techniques.
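A minimal version of the information-theoretic measurement is sketched below: mutual information between the observed values of a source and a target variable across executions, with hand-made samples. A chain of dependences with near-zero mutual information is exactly the situation the study reports as common.

```java
import java.util.*;

// Sketch: empirical mutual information (in bits) between two observed
// value sequences, estimated from their empirical distributions.
public class FlowStrength {

    static double mutualInformation(int[] xs, int[] ys) {
        int n = xs.length;
        Map<Integer, Double> px = new HashMap<>(), py = new HashMap<>();
        Map<List<Integer>, Double> pxy = new HashMap<>();
        for (int i = 0; i < n; i++) {
            px.merge(xs[i], 1.0 / n, Double::sum);
            py.merge(ys[i], 1.0 / n, Double::sum);
            pxy.merge(List.of(xs[i], ys[i]), 1.0 / n, Double::sum);
        }
        double mi = 0;
        for (Map.Entry<List<Integer>, Double> e : pxy.entrySet()) {
            double joint = e.getValue();
            double marg = px.get(e.getKey().get(0)) * py.get(e.getKey().get(1));
            mi += joint * (Math.log(joint / marg) / Math.log(2));
        }
        return mi;
    }

    public static void main(String[] args) {
        // Target copies the source: one full bit of information flows.
        System.out.println(mutualInformation(new int[]{0,1,0,1}, new int[]{0,1,0,1}));
        // Target is constant: no measurable flow, whatever the dependence chain.
        System.out.println(mutualInformation(new int[]{0,1,0,1}, new int[]{7,7,7,7}));
    }
}
```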
Information & Software Technology | 2009
Wes Masri; Andy Podgurski
A new approach to dynamic information flow analysis (DIFA) is presented, and its applications to intrusion detection, software testing and program debugging are discussed. The approach is based on a new forward-computing algorithm that enables online analysis when fast response is not critical. A new forward-computing algorithm for dynamic slicing is also presented, which is more precise than previous forward-computing algorithms and is not restricted to programs with structured control flow. The DIFA and slicing algorithms both rely on a new, precise direct dynamic control dependence algorithm, which requires only constant time per program action. The correctness of this algorithm depends on special, graph-theoretic properties of control dependence, which are established here. A tool called DynFlow is described that implements the proposed approach in order to support analysis of Java byte code programs, and two case studies are presented to illustrate how DynFlow can be used to detect and debug insecure flows. Finally, since dynamic analysis alone is inherently unable to detect implicit information flows, an extension to our approach is described that enables it to detect most implicit information flows at runtime.
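The commonly described stack-based way to compute direct dynamic control dependence can be sketched as follows. This illustrates the general idea only, not the paper's exact algorithm or its graph-theoretic correctness argument, and the pops are amortized (rather than strictly) constant time per action.

```java
import java.util.*;

// Sketch of direct dynamic control dependence (DDCD) tracking: each
// executed branch predicate is pushed with its immediate postdominator
// (known statically); when control reaches that postdominator, the
// predicate's influence region has ended and it is popped. The stack top
// is then the predicate the current action directly depends on.
public class ControlDepTracker {
    // Each frame: {predicateId, immediatePostdominatorId}
    private final Deque<int[]> stack = new ArrayDeque<>();

    // Called for every executed action; returns the predicate it is
    // directly dynamically control dependent on, or -1 for none.
    int reach(int stmtId) {
        while (!stack.isEmpty() && stack.peek()[1] == stmtId) {
            stack.pop();
        }
        return stack.isEmpty() ? -1 : stack.peek()[0];
    }

    // Called when a branch predicate executes; ipd is its immediate
    // postdominator in the control flow graph.
    void branch(int predId, int ipd) {
        stack.push(new int[]{predId, ipd});
    }

    public static void main(String[] args) {
        ControlDepTracker t = new ControlDepTracker();
        t.branch(10, 40);                 // if at stmt 10 joins at stmt 40
        System.out.println(t.reach(20));  // 10: inside the if-region
        System.out.println(t.reach(40));  // -1: region ended at the join
    }
}
```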
ACM SIGSOFT Software Engineering Notes | 2005
Wes Masri; Andy Podgurski
This paper presents a new approach to using dynamic information flow analysis to detect attacks against application software. The approach can be used to reveal and, under some conditions, to prevent attacks that violate a specified information flow policy or exhibit a known information flow signature. When used in conjunction with automatic cluster analysis, the approach can also reveal novel attacks that exhibit unusual patterns of information flows. A set of prototype tools implementing the approach has been developed for Java byte code programs. Case studies in which this approach was applied to several subject programs are described.
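The policy side of the approach can be illustrated minimally: each observed dynamic flow is reduced to a (source, sink) pair of security labels and matched against forbidden pairs. The labels and API below are hypothetical, not DynFlow's actual representation.

```java
import java.util.*;

// Sketch of information flow policy checking: the policy is a set of
// forbidden (source, sink) label pairs; a monitored execution reports the
// flows it observed, and each is checked against the policy.
public class FlowPolicy {
    private final Set<String> forbidden = new HashSet<>();

    void forbid(String source, String sink) {
        forbidden.add(source + "->" + sink);
    }

    boolean violates(String source, String sink) {
        return forbidden.contains(source + "->" + sink);
    }

    public static void main(String[] args) {
        FlowPolicy policy = new FlowPolicy();
        policy.forbid("password", "network");
        System.out.println(policy.violates("password", "network")); // true
        System.out.println(policy.violates("password", "log"));     // false
    }
}
```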
Computers & Security | 2008
Wes Masri; Andy Podgurski
This paper presents a new approach to detecting software security failures, whose primary goal is facilitating identification and repair of security vulnerabilities rather than permitting online response to attacks. The approach is based on online capture of executions and offline execution replay, profiling, and analysis. It employs fine-grained dynamic information flow analysis in conjunction with anomaly detection. This approach, which we call information flow anomaly detection, is capable of detecting a variety of security failures, including both ones that involve violations of confidentiality or integrity requirements and ones that do not. A prototype tool called DynFlow implementing the approach has been developed for use with Java byte code programs. To illustrate the potential of the approach, it is applied to detect security failures of four open source systems. Also, its effectiveness is compared to the effectiveness of an approach to anomaly detection that is based on analyzing method call stacks.
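Under one deliberately simple model, information flow anomaly detection reduces to a set difference against a baseline of normal runs, as sketched below; the paper's profiling and analysis are considerably richer, and the flow names here are hypothetical.

```java
import java.util.*;

// Sketch: learn the (source, sink) flows exercised by a baseline of normal
// executions, then flag a replayed execution whose profile contains flows
// never seen in the baseline.
public class FlowAnomalyDetector {
    private final Set<String> baseline = new HashSet<>();

    void train(Set<String> flows) {
        baseline.addAll(flows);
    }

    // Returns the novel flows; a non-empty result marks the run anomalous.
    Set<String> novelFlows(Set<String> flows) {
        Set<String> novel = new HashSet<>(flows);
        novel.removeAll(baseline);
        return novel;
    }

    public static void main(String[] args) {
        FlowAnomalyDetector d = new FlowAnomalyDetector();
        d.train(Set.of("input->parser", "parser->response"));
        // [input->shell]: a flow absent from training, so a suspected failure.
        System.out.println(d.novelFlows(Set.of("input->parser", "input->shell")));
    }
}
```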