V. Benjamin Livshits
Microsoft
Publications
Featured research published by V. Benjamin Livshits.
ieee symposium on security and privacy | 2010
Leo A. Meyerovich; V. Benjamin Livshits
Much of the power of the modern Web comes from the ability of a Web page to combine content and JavaScript code from disparate servers on the same page. While the ability to create such mash-ups is attractive for both the user and the developer because of extra functionality, code inclusion effectively opens the hosting site up to attacks and poor programming practices within every JavaScript library or API it chooses to use. In other words, expressiveness comes at the price of losing control. To regain control, it is therefore valuable to provide means for the hosting page to restrict the behavior of the code that the page may include. This paper presents ConScript, a client-side advice implementation for security, built on top of Internet Explorer 8. ConScript allows the hosting page to express fine-grained application-specific security policies that are enforced at runtime. In addition to presenting 17 widely-ranging security and reliability policies that ConScript enables, we also show how policies can be generated automatically through static analysis of server-side code or runtime analysis of client-side code. We also present a type system that helps ensure correctness of ConScript policies. To show the practicality of ConScript in a range of settings, we compare the overhead of ConScript enforcement and conclude that it is significantly lower than that of other systems proposed in the literature, both on micro-benchmarks as well as on large, widely-used applications such as MSN, GMail, Google Maps, and Live Desktop.
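The "around advice" idea at the heart of ConScript can be illustrated in plain JavaScript. This is a minimal sketch, not ConScript's actual browser-level API: the `around` helper and the `timers` object are hypothetical stand-ins showing how a policy interposes on a security-relevant function before it runs.

```javascript
// Toy "around advice": wrap a method so a policy sees every call
// and decides whether to let the original proceed.
function around(obj, name, policy) {
  const original = obj[name];
  obj[name] = function (...args) {
    return policy(original.bind(this), args);
  };
}

// Hypothetical stand-in for a browser timer API.
const timers = {
  setTimeout: (fn, ms) => (typeof fn === "function" ? "scheduled" : "ran-string"),
};

// Example policy: forbid eval-like string arguments to setTimeout.
around(timers, "setTimeout", (proceed, args) => {
  if (typeof args[0] === "string") {
    throw new Error("policy violation: string argument to setTimeout");
  }
  return proceed(...args);
});
```

A function argument passes through to the original; a string argument is rejected before it can reach the implementation, which is the shape of several of the paper's 17 policies.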
programming language design and implementation | 2009
V. Benjamin Livshits; Aditya V. Nori; Sriram K. Rajamani; Anindya Banerjee
The last several years have seen a proliferation of static and runtime analysis tools for finding security violations that are caused by explicit information flow in programs. Much of this interest has been caused by the increase in the number of vulnerabilities such as cross-site scripting and SQL injection. In fact, these explicit information flow vulnerabilities commonly found in Web applications now outnumber vulnerabilities such as buffer overruns common in type-unsafe languages such as C and C++. Tools checking for these vulnerabilities require a specification to operate. In most cases the task of providing such a specification is delegated to the user. Moreover, the efficacy of these tools is only as good as the specification. Unfortunately, writing a comprehensive specification presents a major challenge: parts of the specification are easy to miss, leading to missed vulnerabilities; similarly, incorrect specifications may lead to false positives. This paper proposes Merlin, a new approach for automatically inferring explicit information flow specifications from program code. Such specifications greatly reduce manual labor and improve the quality of results produced by tools that check for security violations caused by explicit information flow. Beginning with a data propagation graph, which represents interprocedural flow of information in the program, Merlin aims to automatically infer an information flow specification. Merlin models information flow paths in the propagation graph using probabilistic constraints. A naive modeling requires an exponential number of constraints, one per path in the propagation graph. For scalability, we approximate these path constraints using constraints on chosen triples of nodes, resulting in a cubic number of constraints. We characterize this approximation as a probabilistic abstraction, using the theory of probabilistic refinement developed by McIver and Morgan.
We solve the resulting system of probabilistic constraints using factor graphs, which are a well-known structure for performing probabilistic inference. We experimentally validate the Merlin approach by applying it to 10 large business-critical Web applications that have been analyzed with CAT.NET, a state-of-the-art static analysis tool for .NET. We find a total of 167 new confirmed specifications, which result in a total of 322 additional vulnerabilities across the 10 benchmarks. More accurate specifications also reduce the false positive rate: in our experiments, Merlin-inferred specifications result in 13 false positives being removed; this constitutes a 15% reduction in the CAT.NET false positive rate on these 10 programs. The final false positive rate for CAT.NET after applying Merlin in our experiments drops to under 1%.
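The intuition behind Merlin's triple constraints can be sketched without the probabilistic machinery. This toy (graph and scoring entirely hypothetical, not Merlin's factor-graph model) just enumerates (source, node, sink) triples in a small propagation graph: an intermediate node that sits on many source-to-sink flows is a natural sanitizer candidate, and the triple enumeration is what keeps the constraint count cubic rather than exponential.

```javascript
// Count, for each intermediate node w, the number of triples
// (source, w, sink) connected by edges source -> w -> sink.
function sanitizerCandidates(edges, sources, sinks) {
  const succ = new Map();
  for (const [u, v] of edges) {
    if (!succ.has(u)) succ.set(u, new Set());
    succ.get(u).add(v);
  }
  const score = new Map();
  for (const s of sources) {
    for (const w of succ.get(s) ?? []) {
      for (const t of succ.get(w) ?? []) {
        if (sinks.includes(t)) score.set(w, (score.get(w) ?? 0) + 1);
      }
    }
  }
  return score;
}
```

On a tiny graph where `req -> clean -> out` and `req -> log`, only `clean` lies between the source `req` and the sink `out`, so only `clean` is scored as a sanitizer candidate.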
partial evaluation and semantic-based program manipulation | 2008
Monica S. Lam; Michael C. Martin; V. Benjamin Livshits; John Whaley
SQL injection and cross-site scripting are two of the most common security vulnerabilities that plague web applications today. These and many others result from having unchecked data input reach security-sensitive operations. This paper describes a language called PQL (Program Query Language) that allows users to specify information flow patterns succinctly and declaratively. We have developed a static context-sensitive, but flow-insensitive information flow tracking analysis that can be used to find all the vulnerabilities in a program. In the event that the analysis generates too many warnings, the result can be used to drive a model-checking system to perform a more precise analysis. Model checking is also used to automatically generate the input vectors that expose the vulnerability. Any remaining behavior these static analyses have not isolated may be checked dynamically. The results of the static analyses may be used to optimize these dynamic checks. Our experimental results indicate the language is expressive enough for describing a large number of vulnerabilities succinctly. We have analyzed over nine applications, detecting 30 serious security vulnerabilities. We were also able to automatically recover from attacks as they occurred using the dynamic checker.
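The unchecked-input-reaches-sensitive-operation pattern that PQL queries describe can be made concrete with a tiny dynamic taint check. This is not PQL (which is a declarative query language over Java programs); it is a hedged sketch, with hypothetical `fromUser`/`sanitize`/`runQuery` names, of the runtime property such a query expresses: tainted data must pass through a sanitizer before reaching a SQL sink.

```javascript
// Track taint on boxed strings; primitives cannot go in a WeakSet.
const tainted = new WeakSet();

function fromUser(s) {            // source: mark the input as tainted
  const boxed = new String(s);
  tainted.add(boxed);
  return boxed;
}
function sanitize(boxed) {        // sanitizer: returns an untainted copy
  return new String(String(boxed).replace(/['";]/g, ""));
}
function runQuery(boxed) {        // sink: reject tainted data outright
  if (tainted.has(boxed)) throw new Error("tainted data reached SQL sink");
  return "SELECT * FROM users WHERE name = '" + boxed + "'";
}
```

Routing user input straight to `runQuery` trips the check; routing it through `sanitize` first does not, which is exactly the source-sanitizer-sink flow a PQL specification names.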
computer and communications security | 2009
K. Vikram; Abhishek Prateek; V. Benjamin Livshits
Rich Internet applications are becoming increasingly distributed, as demonstrated by the popularity of AJAX or Web 2.0 applications such as Facebook, Google Maps, Hotmail and many others. A typical multi-tier AJAX application consists, at least, of a server-side component implemented in Java J2EE, PHP or ASP.NET and a client-side component running JavaScript. The resulting application is more responsive because computation has moved closer to the client, avoiding unnecessary network round trips for frequent user actions. However, once a portion of the code has moved to the client, a malicious user can subvert the client side of the computation, jeopardizing the integrity of the server-side state. In this paper we propose Ripley, a system that uses replicated execution to automatically preserve the integrity of a distributed computation. Ripley replicates a copy of the client-side computation on the trusted server tier. Every client-side event is transferred to the replica of the client for execution. Ripley observes results of the computation, both as computed on the client-side and on the server side using the replica of the client-side code. Any discrepancy is flagged as a potential violation of computational integrity. We built Ripley on top of Volta, a distributing compiler that translates .NET applications into JavaScript, effectively providing a measure of security by construction for Volta applications. We have evaluated the Ripley approach on 5 representative AJAX applications built in Volta and also Hotmail, a large widely-used AJAX application. Our results so far suggest that Ripley provides a promising strategy for building secure distributed web applications, which places minimal burden on the application developer at the cost of a low performance overhead.
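Ripley's replicated-execution check can be sketched under simplifying assumptions (a deterministic, side-effect-free event handler shared by both tiers; names hypothetical). The server applies each client event to its own replica of the client state and flags any divergence between the recomputed state and what the client reported.

```javascript
// Shared, deterministic client-side logic (also runs in the server replica).
function applyEvent(state, event) {
  if (event.type === "add") return { total: state.total + event.amount };
  return state;
}

// Server side: replay the event on the replica and compare results.
function serverCheck(replica, event, clientReported) {
  const expected = applyEvent(replica, event);
  const ok = JSON.stringify(expected) === JSON.stringify(clientReported);
  return { replica: expected, ok };   // !ok => potential integrity violation
}
```

An honest client's reported state matches the replica; a tampered result does not, and the discrepancy is flagged without the developer writing any validation code, which is the "security by construction" the paper targets.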
acm workshop on programming languages and analysis for security | 2007
V. Benjamin Livshits; Úlfar Erlingsson
In recent years, the security landscape has changed, with Web application vulnerabilities becoming more prominent than vulnerabilities stemming from the lack of type safety, such as buffer overruns. Many reports point to code injection attacks such as cross-site scripting and RSS injection as being the most common attacks against Web applications to date. With Web 2.0, existing security problems are further exacerbated by the advent of Ajax technology that allows one to create and compose HTML content from different sources within the browser at runtime, as exemplified by customizable mashup pages like My Yahoo! or Live.com. This paper proposes a simple-to-support yet powerful scheme for eliminating a wide range of script injection vulnerabilities in applications built on top of popular Ajax development frameworks such as the Dojo Toolkit, prototype.js, and AJAX.NET. Unlike other client-side runtime enforcement proposals, the approach we are advocating requires only minor browser modifications. This is because our proposal can be viewed as a natural finer-grained extension of the same-origin policy for JavaScript already supported by the majority of mainstream browsers, in which we treat individual user interface widgets as belonging to separate domains. Fortunately, in many cases no changes to the development process need to take place: for applications that are built on top of the frameworks described above, a slight framework modification will result in appropriate changes in the generated HTML, completely obviating the need for manual code annotation. In this paper we demonstrate how these changes can prevent cross-site scripting and RSS injection attacks using the Dojo Toolkit, a popular Ajax library, as an example.
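The framework-side half of this idea, neutralizing untrusted widget content before it is composed into the page, can be sketched as below. This is not the paper's browser-level mechanism (which extends the same-origin policy to individual widgets); it is a hedged illustration, with hypothetical function names, of the kind of change a framework makes to the HTML it generates so an injected `<script>` renders inert.

```javascript
// Escape the characters that let injected markup become active content.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;")
          .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

// The trusted template stays HTML; only the untrusted item is escaped.
function renderWidget(trustedTemplate, untrustedFeedItem) {
  return trustedTemplate.replace("{{item}}", escapeHtml(untrustedFeedItem));
}
```

A hostile RSS item such as `<script>steal()</script>` arrives in the rendered page as inert text rather than executable code.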
foundations of software engineering | 2008
V. Benjamin Livshits; Emre Kiciman
Modern Web 2.0 applications, such as GMail, Live Maps, Facebook and many others, use a combination of Dynamic HTML, JavaScript and other Web browser technologies commonly referred to as AJAX to push application execution to the client web browser. This improves the responsiveness of these network-bound applications, but the shift of application execution from a back-end server to the client also often dramatically increases the amount of code that must first be downloaded to the browser. This creates an unfortunate Catch-22: to create responsive distributed Web 2.0 applications developers move code to the client, but for an application to be responsive, the code must first be transferred there, which takes time. In this paper, we present Doloto, an optimization tool for Web 2.0 applications. Doloto analyzes application workloads and automatically rewrites the existing application code to introduce dynamic code loading. After being processed by Doloto, an application will initially transfer only the portion of code necessary for application initialization. The rest of the application's code is replaced by short stubs---their actual implementations are transferred lazily in the background or, at the latest, on-demand on first execution of a particular application feature. Moreover, code that is rarely executed is rarely downloaded to the user browser. Because Doloto significantly speeds up the application startup and since subsequent code download is interleaved with application execution, applications rewritten with Doloto appear much more responsive to the end-user. To demonstrate the effectiveness of Doloto in practice, we have performed experiments on five large widely-used Web 2.0 applications. Doloto reduces the size of application code download by hundreds of kilobytes or as much as 50% of the original download size.
The time to download and begin interacting with large applications is reduced by 20--40% depending on the application and wide-area network conditions. Doloto especially shines on wireless and mobile connections, which are becoming increasingly important in today's computing environments.
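The stub rewriting at the core of Doloto can be shown in miniature. This is a hedged sketch, not Doloto's generated code: the loader shape and names are hypothetical, and a synchronous callback stands in for the background or on-demand download. Each deferred function becomes a stub that fetches the real body on first call, installs it in place of itself, and re-dispatches, so later calls pay no overhead.

```javascript
// Replace moduleTable[name] with a self-replacing lazy-loading stub.
function makeStub(moduleTable, name, fetchImpl) {
  moduleTable[name] = function (...args) {
    const real = fetchImpl(name);   // in Doloto: lazy/on-demand download
    moduleTable[name] = real;       // swap in the real code; stub is gone
    return real.apply(this, args);
  };
}

// Hypothetical "server" holding the deferred implementations.
const impls = { square: (x) => x * x };
const app = {};
let downloads = 0;
makeStub(app, "square", (name) => { downloads++; return impls[name]; });
```

The first call to `app.square` triggers exactly one download; every subsequent call runs the installed implementation directly.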
ACM Transactions on The Web | 2010
Emre Kiciman; V. Benjamin Livshits
The rise of the software-as-a-service paradigm has led to the development of a new breed of sophisticated, interactive applications often called Web 2.0. While Web applications have become larger and more complex, Web application developers today have little visibility into the end-to-end behavior of their systems. This article presents AjaxScope, a dynamic instrumentation platform that enables cross-user monitoring and just-in-time control of Web application behavior on end-user desktops. AjaxScope is a proxy that performs on-the-fly parsing and instrumentation of JavaScript code as it is sent to users’ browsers. AjaxScope provides facilities for distributed and adaptive instrumentation in order to reduce the client-side overhead, while giving fine-grained visibility into the code-level behavior of Web applications. We present a variety of policies demonstrating the power of AjaxScope, ranging from simple error reporting and performance profiling to more complex memory leak detection and optimization analyses. We also apply our prototype to analyze the behavior of over 90 Web 2.0 applications and sites that use significant amounts of JavaScript.
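AjaxScope injects instrumentation by rewriting JavaScript as it passes through the proxy; the sketch below shows, stand-alone and with a hypothetical reporting sink, the kind of wrapper such rewriting produces: a timing and error-reporting shell around an application function.

```javascript
// Wrap fn so every call reports its duration, and errors are
// reported before being rethrown to the application.
function instrument(fn, report) {
  return function (...args) {
    const start = Date.now();
    try {
      return fn.apply(this, args);
    } catch (err) {
      report({ error: String(err) });
      throw err;
    } finally {
      report({ ms: Date.now() - start });
    }
  };
}
```

Because the wrapper is injected per-user at the proxy, the platform can enable it for a small sample of users (distributed instrumentation) or switch policies without redeploying the application, which is the adaptive control the article describes.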
european conference on computer systems | 2009
Alexander Rasmussen; Emre Kiciman; V. Benjamin Livshits; Madanlal Musuvathi
The backends of today's Internet services rely heavily on caching at various layers both to provide faster service to common requests and to reduce load on back-end components. Cache placement is especially challenging given the diversity of workloads handled by widely deployed Internet services. This paper presents TOOL, an analysis technique that automatically optimizes cache placement. Our experiments have shown that near-optimal cache placements vary significantly based on input distribution.
symposium on cloud computing | 2010
Emre Kiciman; V. Benjamin Livshits; Madanlal Musuvathi; Kevin C. Webb
Over the last 10-15 years, our industry has developed and deployed many large-scale Internet services, from e-commerce to social networking sites, all facing common challenges in latency, reliability, and scalability. Over time, a relatively small number of architectural patterns have emerged to address these challenges, such as tiering, caching, partitioning, and pre- or post-processing of compute-intensive tasks. Unfortunately, following these patterns requires developers to have a deep understanding of the trade-offs involved in these patterns as well as an end-to-end understanding of their own system and its expected workloads. The result is that non-expert developers have a hard time applying these patterns in their code, leading to low-performing, highly suboptimal applications. In this paper, we propose FLUXO, a system that separates an Internet service's logical functionality from the architectural decisions made to support performance, scalability, and reliability. FLUXO achieves this separation through the use of a restricted programming language designed 1) to limit a developer's ability to write programs that are incompatible with widely used Internet service architectural patterns; and 2) to simplify the analysis needed to identify how architectural patterns should be applied to programs. Because architectural patterns are often highly dependent on application performance, workloads and data distributions, our platform captures such data as a runtime profile of the application and makes it available for use when determining how to apply architectural patterns. This separation makes service development accessible to non-experts by allowing them to focus on application features and leaving complicated architectural optimizations to experts writing application-agnostic, profile-guided optimization tools. To evaluate FLUXO, we show how a variety of architectural patterns can be expressed as transformations applied to FLUXO programs.
Even simple heuristics for automatically applying these optimizations can show reductions in latency ranging from 20-90% without requiring special effort from the application developer. We also demonstrate how a simple shared-nothing tiering and replication pattern is able to scale our test suite, a web-based IM, email, and address book application.
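The "patterns as transformations" idea can be sketched in a few lines (stage names hypothetical; FLUXO's actual programs are restricted dataflow graphs, not arrays of closures): when a pipeline is represented as data, an "insert a cache here" decision becomes a rewrite of that data performed by an optimizer, not a change the application developer makes by hand.

```javascript
// A pipeline is just an ordered list of stages applied left to right.
function runPipeline(stages, input) {
  return stages.reduce((value, stage) => stage(value), input);
}

// Architectural transformation: memoize one stage, chosen by an optimizer.
function withCache(stages, index) {
  const memo = new Map();
  const original = stages[index];
  const copy = stages.slice();
  copy[index] = (v) => {
    if (!memo.has(v)) memo.set(v, original(v));
    return memo.get(v);
  };
  return copy;
}

const pipeline = [(q) => q.trim(), (q) => q.toLowerCase()];
```

The transformed pipeline computes the same results as the original; only its performance characteristics change, which is what lets profile-guided tools apply such rewrites behind the developer's back.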
usenix security symposium | 2009
Paruj Ratanaworabhan; V. Benjamin Livshits; Benjamin G. Zorn