Muath Alkhalaf
University of California, Santa Barbara
Publications
Featured research published by Muath Alkhalaf.
Tools and Algorithms for the Construction and Analysis of Systems | 2010
Fang Yu; Muath Alkhalaf; Tevfik Bultan
Stranger is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in PHP applications. Stranger uses symbolic forward and backward reachability analyses to compute the possible values that the string expressions can take during program execution. Stranger can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that characterize all malicious inputs that can be used to generate attacks.
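As a rough illustration of the forward phase, the Python sketch below uses explicit finite sets of strings as a stand-in for the automata that Stranger actually manipulates; the inputs, the concatenation at the sink, and the attack pattern are invented for this example.

# Toy sketch of the forward analysis: finite string sets stand in for DFAs.
# The inputs and attack pattern below are illustrative only.

def concat(xs, ys):
    """Abstract concatenation over finite string sets (DFA concatenation in Stranger)."""
    return {x + y for x in xs for y in ys}

def matches_attack(s, pattern="<script"):
    """Attack pattern: any string containing an opening script tag."""
    return pattern in s.lower()

# Forward reachability: possible values of each string expression at each point.
user_inputs = {"alice", "<SCRIPT>alert(1)</SCRIPT>"}    # over-approximated inputs
greeting    = concat({"Hello, "}, user_inputs)          # e.g. $out = "Hello, " . $_GET['name']

# Intersect the sink's values with the attack pattern.
attack_strings = {s for s in greeting if matches_attack(s)}
if attack_strings:
    print("potentially vulnerable; example attack string:", next(iter(attack_strings)))
else:
    print("no value at the sink matches the attack pattern")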
IEEE Computer | 2010
Sylvain Hallé; Tevfik Bultan; Graham Hughes; Muath Alkhalaf; Roger Villemaire
Asynchronous JavaScript and XML (Ajax) is a collection of technologies used to develop rich and interactive Web applications. A typical Ajax client runs locally in the user's Web browser and refreshes its interface on the fly in response to user input. Using this method with the AWS-ECS allowed us to automatically generate test sequences and detect, in less than three minutes of testing, two deviations of the service implementation from the online documentation provided. We also provide a framework that allows runtime monitoring of both client- and server-side contract constraints with minimal modification to existing Ajax application code. Experiments with the Amazon E-Commerce Service demonstrate the advantages of using a model-based approach for the runtime testing and monitoring of Web applications.
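The monitoring side of such a framework can be pictured as a small finite-state machine over message names; the Python sketch below uses a made-up contract ("ItemSearch must happen before CartCreate") rather than the actual AWS-ECS constraints checked in the paper.

# Minimal runtime-monitor sketch: a finite-state machine over message names.
# The contract table below is hypothetical, for illustration only.

CONTRACT = {
    ("start", "ItemSearch"): "searched",
    ("searched", "ItemSearch"): "searched",
    ("searched", "CartCreate"): "in_cart",
    ("in_cart", "CartAdd"): "in_cart",
}

def monitor(messages):
    state = "start"
    for msg in messages:
        nxt = CONTRACT.get((state, msg))
        if nxt is None:
            return f"contract violation: '{msg}' not allowed in state '{state}'"
        state = nxt
    return "sequence conforms to the contract"

print(monitor(["ItemSearch", "CartCreate", "CartAdd"]))  # conforms
print(monitor(["CartAdd"]))                              # violation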
International Conference on Software Engineering | 2011
Fang Yu; Muath Alkhalaf; Tevfik Bultan
We present automata-based static string analysis techniques that automatically generate sanitization statements for patching vulnerable web applications. Our approach consists of three phases: Given an attack pattern we first conduct a vulnerability analysis to identify if strings that match the attack pattern can reach the security-sensitive functions. Next, we compute vulnerability signatures that characterize all input strings that can exploit the discovered vulnerability. Given the vulnerability signatures, we then construct sanitization statements that 1) check if a given input matches the vulnerability signature and 2) modify the input in a minimal way so that the modified input does not match the vulnerability signature. Our approach is capable of generating relational vulnerability signatures (and corresponding sanitization statements) for vulnerabilities that are due to more than one input.
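In spirit, a synthesized sanitization statement wraps the vulnerability signature in a check-and-edit step. The Python sketch below uses an invented signature (inputs containing '<' or '>') and a deletion-based edit; the actual technique derives both the signature and the edit from automata.

import re

# Hypothetical vulnerability signature: inputs containing '<' or '>' can reach
# the sink unescaped. The synthesized sanitizer (sketch) checks the signature
# and minimally edits matching inputs by deleting the offending characters.
VULN_SIGNATURE = re.compile(r"[<>]")

def synthesized_sanitizer(user_input: str) -> str:
    if VULN_SIGNATURE.search(user_input):          # 1) input matches the signature
        return VULN_SIGNATURE.sub("", user_input)  # 2) minimal modification: drop '<' and '>'
    return user_input                              # already safe: pass through unchanged

print(synthesized_sanitizer("<script>alert(1)</script>"))  # scriptalert(1)/script
print(synthesized_sanitizer("plain text"))                  # plain text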
Automated Software Engineering | 2009
Fang Yu; Muath Alkhalaf; Tevfik Bultan
Given a program and an attack pattern (specified as a regular expression), we automatically generate string-based vulnerability signatures, i.e., a characterization that includes all malicious inputs that can be used to generate attacks. We use an automata-based string analysis framework. Using forward reachability analysis we compute an over-approximation of all possible values that string variables can take at each program point. Intersecting these with the attack pattern yields the potential attack strings if the program is vulnerable. Using backward analysis we compute an over-approximation of all possible inputs that can generate those attack strings. In addition to identifying existing vulnerabilities and their causes, these vulnerability signatures can be used to filter out malicious inputs. Our approach extends the prior work on automata-based string analysis by providing a backward symbolic analysis that includes a symbolic pre-image computation for deterministic finite automata on common string manipulating functions such as concatenation and replacement.
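A toy version of one backward step, the pre-image of concatenation with a known suffix, can be written over finite string sets as follows; the suffix and attack strings are illustrative, and the real analysis performs this computation symbolically on DFAs.

# Toy pre-image sketch for concatenation, with finite sets standing in for DFAs.
# Given attack strings T at the sink and z = x + SUFFIX, compute the values of x
# that can produce a string in T (one backward-analysis step).

SUFFIX = "'"  # e.g. the program appends a quote before the string reaches the sink

def preimage_concat_suffix(targets, suffix):
    """All x such that x + suffix is in targets."""
    return {t[: -len(suffix)] for t in targets if t.endswith(suffix)}

attack_strings = {"' OR '1'='1'", "admin'"}
print(preimage_concat_suffix(attack_strings, SUFFIX))
# over-approximated inputs that can generate an attack string at the sink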
Formal Methods | 2014
Fang Yu; Muath Alkhalaf; Tevfik Bultan; Oscar H. Ibarra
Verifying string manipulating programs is a crucial problem in computer security. String operations are used extensively within web applications to manipulate user input, and their erroneous use is the most common cause of security vulnerabilities in web applications. We present an automata-based approach for symbolic analysis of string manipulating programs. We use deterministic finite automata (DFAs) to represent possible values of string variables. Using forward reachability analysis we compute an over-approximation of all possible values that string variables can take at each program point. Intersecting these with a given attack pattern yields the potential attack strings if the program is vulnerable. Based on the presented techniques, we have implemented Stranger, an automata-based string analysis tool for detecting string-related security vulnerabilities in PHP applications. We evaluated Stranger on several open-source Web applications including one with 350,000+ lines of code. Stranger is able to detect known/unknown vulnerabilities, and, after inserting proper sanitization routines, prove the absence of vulnerabilities with respect to given attack patterns.
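The final emptiness check behind "prove the absence of vulnerabilities" can be sketched as follows, again with a finite set standing in for the sink's DFA; the XSS attack pattern and the sanitizer are invented for illustration.

import re

# Sketch of the "prove absence" check: if the language at the sink (here, a toy
# finite set after sanitization) has empty intersection with the attack pattern,
# the program is reported free of that attack. Names are illustrative.

ATTACK = re.compile(r"<script", re.IGNORECASE)

def sink_language(raw_inputs, sanitize):
    return {sanitize(s) for s in raw_inputs}

raw = {"bob", "<script>alert(1)</script>"}
sanitized = sink_language(raw, lambda s: s.replace("<", "&lt;"))

if any(ATTACK.search(s) for s in sanitized):
    print("vulnerable: attack pattern reachable at the sink")
else:
    print("proved free of the attack pattern (for this toy abstraction)")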
International Conference on Service-Oriented Computing | 2009
Sylvain Hallé; Graham Hughes; Tevfik Bultan; Muath Alkhalaf
Interface grammars are a formalism for expressing constraints on sequences of messages exchanged between two components. In this paper, we extend interface grammars with an automated translation of the XML Schema definitions present in WSDL documents into interface grammar rules. Given an interface grammar, we can then automatically generate either 1) a parser, to check that a sequence of messages generated by a web service client is correct with respect to the interface specification, or 2) a sentence generator producing compliant message sequences, to check that the web service responds to them according to the interface specification. By doing so, we can validate and generate both messages and sequences of messages in a uniform manner; moreover, we can express constraints where message structure and control flow cannot be handled separately.
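A miniature interface grammar together with the two generated artifacts, a sentence generator and a parser, might look like the Python sketch below; the grammar and operation names are made up and are not the AWS-ECS rules used in the paper.

import random

# Tiny interface-grammar sketch: rules map a nonterminal to alternative
# sequences of message names or nonterminals. The grammar is illustrative only.
GRAMMAR = {
    "Session": [["ItemSearch", "Cart"], ["ItemSearch"]],
    "Cart":    [["CartCreate", "CartAdd"], ["CartCreate"]],
}

def generate(symbol="Session"):
    """Sentence generator: derive one compliant message sequence."""
    if symbol not in GRAMMAR:                 # terminal: a concrete message
        return [symbol]
    out = []
    for part in random.choice(GRAMMAR[symbol]):
        out.extend(generate(part))
    return out

def accepts(seq, symbol="Session"):
    """Parser sketch: does the whole sequence derive from the start symbol?"""
    def match(sym, i):
        if sym not in GRAMMAR:
            return [i + 1] if i < len(seq) and seq[i] == sym else []
        ends = []
        for alt in GRAMMAR[sym]:
            positions = [i]
            for part in alt:
                positions = [k for j in positions for k in match(part, j)]
            ends.extend(positions)
        return ends
    return len(seq) in match(symbol, 0)

print(generate())                                        # e.g. ['ItemSearch', 'CartCreate']
print(accepts(["ItemSearch", "CartCreate", "CartAdd"]))  # True
print(accepts(["CartAdd"]))                              # False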
International Symposium on Software Testing and Analysis | 2012
Muath Alkhalaf; Shauvik Roy Choudhary; Mattia Fazzini; Tevfik Bultan; Alessandro Orso; Christopher Kruegel
Since web applications are easily accessible, and often store a large amount of sensitive user information, they are a common target for attackers. In particular, attacks that focus on input validation vulnerabilities are extremely effective and dangerous. To address this problem, we developed ViewPoints, a technique that can identify erroneous or insufficient validation and sanitization of user inputs by automatically discovering inconsistencies between client- and server-side input validation functions. Developers typically perform redundant input validation in both the front-end (client) and the back-end (server) components of a web application. Client-side validation is used to improve the responsiveness of the application, as it allows for responding without communicating with the server, whereas server-side validation is necessary for security reasons, as malicious users can easily circumvent client-side checks. ViewPoints (1) automatically extracts client- and server-side input validation functions, (2) models them as deterministic finite automata (DFAs), and (3) compares client- and server-side DFAs to identify and report the inconsistencies between the two sets of checks. Our initial evaluation of the technique is promising: when applied to a set of real-world web applications, ViewPoints was able to automatically identify a large number of inconsistencies in their input validation functions.
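The comparison step can be approximated in a few lines by probing two validators on a small input set, as in the Python sketch below; the real tool builds and differences DFAs instead of sampling, and the validators shown are invented.

import re

# Sketch of the inconsistency check, using bounded enumeration instead of the
# DFA difference construction used by ViewPoints. Validators below are made up.
client_ok = re.compile(r"^[0-9]{1,5}$").fullmatch        # front-end: 1-5 digits
server_ok = re.compile(r"^[0-9]{1,3}$").fullmatch        # back-end: 1-3 digits

candidates = ["7", "123", "12345", "abc", ""]            # small probe set

for s in candidates:
    c, v = bool(client_ok(s)), bool(server_ok(s))
    if c != v:
        side = "client accepts, server rejects" if c else "server accepts, client rejects"
        print(f"inconsistency on {s!r}: {side}")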
International Symposium on Software Testing and Analysis | 2014
Muath Alkhalaf; Abdulbaki Aydin; Tevfik Bultan
Correct validation and sanitization of user input is crucial in web applications for avoiding security vulnerabilities and erroneous application behavior. We present an automated differential repair technique for input validation and sanitization functions. Differential repair can be used within an application to repair client and server-side code with respect to each other, or across applications in order to strengthen the validation and sanitization checks. Given a reference and a target function, our differential repair technique strengthens the validation and sanitization operations in the target function based on the reference function. It does this by synthesizing three patches: a validation, a length, and a sanitization patch. Our automated patch synthesis algorithms are based on forward and backward symbolic string analyses that use automata as a symbolic representation. Composition of the three automatically synthesized patches with the original target function results in the repaired function, which provides stronger validation and sanitization than both the target and the reference functions.
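The composition of the three patches with the original target function can be pictured as follows; the concrete checks (reject '<', cap the length at 8, strip quotes) are invented stand-ins for patches that the technique would synthesize automatically from the reference function.

import re

# Sketch of composing the three patches with the target function. The reference
# behavior encoded here is illustrative; the paper derives it from automata.

def target(s):                      # weak target sanitizer: only trims whitespace
    return s.strip()

def validation_patch(s):            # reject inputs the reference would reject
    if "<" in s:
        raise ValueError("rejected by validation patch")
    return s

def length_patch(s, limit=8):       # enforce the reference's length restriction
    return s[:limit]

def sanitization_patch(s):          # apply the reference's stronger sanitization
    return re.sub(r"['\"]", "", s)

def repaired(s):                    # repaired function = patches composed with target
    return sanitization_patch(length_patch(validation_patch(target(s))))

print(repaired("  o'hara  "))       # ohara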
International Symposium on Software Testing and Analysis | 2008
Graham Hughes; Tevfik Bultan; Muath Alkhalaf
Web services provide a promising framework for developing interoperable software components that interact with each other across organizational boundaries. For this framework to be successful, the client and the server for a service have to interact with each other based on the published service interface specification. If either the client or the server deviate from the interface specification, the client-server interaction will lead to errors. We present a framework for checking interface conformance for web services. Given an interface specification, we automatically generate web service server stubs (for client verification) and drivers (for server verification) and then use these stubs and drivers to check the conformance of the client and server to the interface specification. We implemented this framework by using interface grammars as the interface specification language. We developed an interface compiler that automatically generates a stub or a driver from a given interface grammar. We conducted a case study by applying these techniques to the Amazon E-Commerce Service.
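A generated server stub for client verification behaves roughly like the Python sketch below: it returns canned responses and raises an error when the client's call sequence deviates from the interface. The allowed-call table and operation names are illustrative only.

# Sketch of a generated server stub: it stands in for the real service,
# answers with canned responses, and flags calls that violate the interface.

ALLOWED_AFTER = {None: {"ItemSearch"}, "ItemSearch": {"ItemSearch", "CartCreate"},
                 "CartCreate": {"CartAdd"}, "CartAdd": {"CartAdd"}}

class ServerStub:
    def __init__(self):
        self.last = None

    def call(self, operation):
        if operation not in ALLOWED_AFTER[self.last]:
            raise AssertionError(
                f"client violates the interface: {operation!r} after {self.last!r}")
        self.last = operation
        return {"operation": operation, "status": "ok"}   # canned response

stub = ServerStub()
stub.call("ItemSearch")
stub.call("CartCreate")
stub.call("CartAdd")        # conforming client call sequence so far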
International Conference on Software Engineering | 2012
Muath Alkhalaf; Tevfik Bultan; Jose L. Gallegos
Client-side computation in web applications is becoming increasingly common due to the popularity of powerful client-side programming languages such as JavaScript. Client-side computation is commonly used to improve an application's responsiveness by validating user inputs before they are sent to the server. In this paper, we present an analysis technique for checking if a client-side input validation function conforms to a given policy. In our approach, input validation policies are expressed using two regular expressions, one specifying the maximum policy (the upper bound for the set of inputs that should be allowed) and the other specifying the minimum policy (the lower bound for the set of inputs that should be allowed). Using our analysis we can identify two types of errors: 1) the input validation function accepts an input that is not permitted by the maximum policy, or 2) the input validation function rejects an input that is permitted by the minimum policy. We implemented our analysis using dynamic slicing to automatically extract input validation functions from web applications and automata-based string analysis to analyze the extracted functions. Our experiments demonstrate that our approach is effective in finding errors in input validation functions that we collected from real-world applications and from tutorials and books for teaching JavaScript.
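The two error checks can be sketched by probing a validator against the two policies, as below; the policies, the validator, and the probe inputs are invented, and the paper's analysis uses automata-based reasoning rather than sampling.

import re

# Sketch of the two policy checks via bounded sampling, not the automata-based
# string analysis in the paper. The policies and validator below are made up.
MAX_POLICY = re.compile(r"^[0-9-]{0,12}$").fullmatch       # upper bound: digits and dashes
MIN_POLICY = re.compile(r"^[0-9]{3}-[0-9]{4}$").fullmatch  # lower bound: NNN-NNNN

def validator(s):                                          # extracted client-side check
    return len(s) <= 12 and s.replace("-", "").isdigit() or s == "N/A"

for s in ["555-1234", "N/A", "12345678", "555 1234"]:
    if validator(s) and not MAX_POLICY(s):
        print(f"error 1: accepts {s!r}, which the maximum policy forbids")
    if not validator(s) and MIN_POLICY(s):
        print(f"error 2: rejects {s!r}, which the minimum policy requires to be accepted")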