Warren He
University of California, Berkeley
Publication
Featured research published by Warren He.
security and privacy in smartphones and mobile devices | 2012
David Kantola; Erika Chin; Warren He; David A. Wagner
The complexity of Android's message-passing system has led to numerous vulnerabilities in third-party applications. Many of these vulnerabilities are a result of developers confusing inter-application and intra-application communication mechanisms. Consequently, we propose modifications to the Android platform to detect and protect inter-application messages that should have been intra-application messages. Our approach automatically reduces attack surfaces in legacy applications. We describe our implementation of these changes and evaluate it based on the attack surface reduction and the extent to which our changes break compatibility with a large set of popular applications. We fix 100% of the intra-application vulnerabilities found in our previous work, which represent 31.4% of the total security flaws found in that work. Furthermore, we find that 99.4% and 93.0% of Android applications are compatible with our sending and receiving changes, respectively.
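The core detection idea in the abstract can be sketched as a simple heuristic: a message sent over an inter-application channel, but whose every resolvable receiver lives in the sending app's own package, should have been sent intra-application. The function name and data shapes below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the heuristic: flag messages sent over an
# inter-app channel whose only resolvable receivers are in the sender's
# own package -- such messages expose an unnecessary attack surface.

def should_be_intra_app(sender_pkg: str, resolved_receiver_pkgs: list) -> bool:
    """Return True if every resolvable receiver is in the sender's package."""
    return (len(resolved_receiver_pkgs) > 0
            and all(pkg == sender_pkg for pkg in resolved_receiver_pkgs))

# Example: an implicit message only the sending app itself can receive.
print(should_be_intra_app("com.example.app", ["com.example.app"]))    # True
print(should_be_intra_app("com.example.app", ["com.other.service"]))  # False
```

A platform-level defense in this spirit would then route flagged messages through the intra-application mechanism instead of broadcasting them.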
computer and communications security | 2014
Warren He; Devdatta Akhawe; Sumeet Jain; Elaine Shi; Dawn Song
A number of recent research and industry proposals discuss using encrypted data in web applications. We first present a systematization of the design space of web applications and highlight the advantages and limitations of current proposals. Next, we present ShadowCrypt, a previously unexplored design point that enables encrypted input/output without trusting any part of the web applications. ShadowCrypt allows users to transparently switch to encrypted input/output for text-based web applications. ShadowCrypt runs as a browser extension, replacing input elements in a page with secure, isolated shadow inputs and encrypted text with secure, isolated cleartext. ShadowCrypt's key innovation is the use of Shadow DOM, an upcoming primitive that allows low-overhead isolation of DOM trees. Evaluation results indicate that ShadowCrypt has low overhead and is of practical use today. Finally, based on our experience with ShadowCrypt, we present a study of 17 popular web applications, across different domains, and the functionality impact and security advantages of encrypting the data they handle.
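The data flow the abstract describes can be made concrete with a small sketch: the untrusted application only ever stores ciphertext, while a trusted layer (standing in for the browser extension) holds the key and handles encryption on input and decryption on display. The class names are assumptions, and the one-time-pad cipher is a toy stand-in for the authenticated encryption a real system would use.

```python
import secrets

# Toy stand-in cipher (one-time pad) just to make the data flow concrete;
# a real secure I/O system uses proper authenticated encryption.
def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

class WebApp:
    """The untrusted application: it only ever stores/serves ciphertext."""
    def __init__(self):
        self.stored = None
    def save(self, data: bytes):
        self.stored = data

class ShadowInput:
    """The trusted layer (extension): holds the key, encrypts on input
    and decrypts on display, so the app never sees cleartext."""
    def __init__(self, app: WebApp, key: bytes):
        self.app, self.key = app, key
    def user_types(self, text: str):
        self.app.save(encrypt(self.key, text.encode()))
    def render(self) -> str:
        return decrypt(self.key, self.app.stored).decode()

key = secrets.token_bytes(32)
app = WebApp()
ui = ShadowInput(app, key)
ui.user_types("meet at noon")
print(app.stored != b"meet at noon")  # True: the app holds only ciphertext
print(ui.render())                    # meet at noon
```

The isolation boundary that Shadow DOM provides in the real system is what prevents page scripts from reaching into the trusted layer's cleartext.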
european symposium on research in computer security | 2013
Devdatta Akhawe; Frank Li; Warren He; Prateek Saxena; Dawn Song
Rich client-side applications written in HTML5 proliferate on diverse platforms, access sensitive data, and need to maintain data-confinement invariants. Applications currently enforce these invariants using implicit, ad-hoc mechanisms. We propose a new primitive called a data-confined sandbox or DCS. A DCS enables complete mediation of communication channels with a small TCB. Our primitive extends currently standardized primitives and has negligible performance overhead and a modest compatibility cost. We retrofit our design on four real-world HTML5 applications and demonstrate that a small amount of effort enables strong data-confinement guarantees.
arXiv: Cryptography and Security | 2016
Mitar Milutinovic; Warren He; Howard Wu; Maxinder Kanwal
In this paper, we present designs for multiple blockchain consensus primitives and a novel blockchain system, all based on the use of trusted execution environments (TEEs), such as Intel SGX-enabled CPUs. First, we show how using TEEs for existing proof of work schemes can make mining equitably distributed by preventing the use of ASICs. Next, we extend the design with proof of time and proof of ownership consensus primitives to make mining energy- and time-efficient. Further improving on these designs, we present a blockchain using a proof of luck consensus protocol. Our proof of luck blockchain uses a TEE platform's random number generation to choose a consensus leader, which offers low-latency transaction validation, deterministic confirmation time, negligible energy consumption, and equitably distributed mining. Lastly, we discuss a potential protection against up to a constant number of compromised TEEs.
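The leader-election step of proof of luck can be sketched in a few lines: each participant's TEE draws a uniform "luck" value for the round, and the network adopts the block from the participant with the highest luck. Here `random.Random` stands in for the TEE's trusted RNG; a real TEE additionally attests that the value was honestly generated, which this sketch omits.

```python
import random

def draw_luck(tee_rng: random.Random) -> float:
    """The TEE draws a uniform luck value in [0, 1) for this round."""
    return tee_rng.random()

def elect_leader(participants: dict) -> str:
    """Everyone draws; the participant with the highest luck leads."""
    lucks = {name: draw_luck(rng) for name, rng in participants.items()}
    return max(lucks, key=lucks.get)

# Seeded RNGs stand in for per-node TEEs.
nodes = {f"node{i}": random.Random(i) for i in range(5)}
leader = elect_leader(nodes)
print(leader in nodes)  # True: exactly one leader per round, no mining race
```

Because winning requires only one cheap random draw instead of a hash race, energy use is negligible and confirmation time is deterministic, matching the properties claimed above.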
international joint conference on artificial intelligence | 2018
Chaowei Xiao; Bo Li; Jun-Yan Zhu; Warren He; Mingyan Liu; Dawn Song
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but producing them with high perceptual quality and efficiency requires further research. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
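The efficiency claim above rests on a simple structural point: once the generator is trained, crafting an adversarial example is a single forward pass with a bounded perturbation, rather than per-instance optimization. The sketch below illustrates that inference step with assumed names (`G`, `eps`) and a toy generator standing in for the trained network.

```python
# Illustrative sketch of the AdvGAN inference step: one forward pass of a
# trained generator G, a perturbation bound, and a valid-range clip.

def clip(v, lo, hi):
    return [max(lo, min(hi, x)) for x in v]

def adversarial_example(x, G, eps=0.3, box=(0.0, 1.0)):
    """x: input vector; G: trained generator mapping x -> perturbation."""
    perturbation = clip(G(x), -eps, eps)           # bound the perturbation
    x_adv = [xi + pi for xi, pi in zip(x, perturbation)]
    return clip(x_adv, *box)                       # stay in valid pixel range

# Toy "generator" standing in for the trained network:
G = lambda x: [0.5 * (0.5 - xi) for xi in x]
print(adversarial_example([0.0, 1.0], G))  # [0.25, 0.75]
```

An iterative attack would instead solve an optimization problem per input; here the cost per adversarial example is just the generator's forward pass.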
privacy enhancing technologies | 2018
Michael Freyberger; Warren He; Devdatta Akhawe; Michelle L. Mazurek; Prateek Mittal
An important line of privacy research investigates the design of systems for secure input and output (I/O) within Internet browsers. These systems allow users' information to be encrypted and decrypted by the browser, while web applications only have access to the users' information in encrypted form. The state-of-the-art approach for a secure I/O system within Internet browsers is a system called ShadowCrypt, created by UC Berkeley researchers [23]. This paper explores the limitations of ShadowCrypt in order to provide a foundation for the general principles that must be followed when designing a secure I/O system within Internet browsers. First, we developed a comprehensive UI attack that cannot be mitigated with popular UI defenses, and tested the efficacy of the attack through a user study administered on Amazon Mechanical Turk. Only 1 of the 59 participants who were under attack successfully noticed the UI attack, which validates the stealthiness of the attack. Second, we present multiple attack vectors against ShadowCrypt that do not rely upon UI deception. These attack vectors expose the privacy weaknesses of Shadow DOM—the key browser primitive leveraged by ShadowCrypt. Finally, we present a sketch of potential countermeasures that can enable the design of future secure I/O systems within Internet browsers.
european conference on computer vision | 2018
Arjun Nitin Bhagoji; Warren He; Bo Li; Dawn Song
Existing black-box attacks on deep neural networks (DNNs) have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% attack success rates for both targeted and untargeted attacks on DNNs. We carry out a thorough comparative evaluation of black-box attacks and show that Gradient Estimation attacks achieve attack success rates similar to state-of-the-art white-box attacks on the MNIST and CIFAR-10 datasets. We also apply the Gradient Estimation attacks successfully against real-world classifiers hosted by Clarifai. Further, we evaluate black-box attacks against state-of-the-art defenses based on adversarial training and show that the Gradient Estimation attacks are very effective even against these defenses.
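The query-based idea in the abstract can be illustrated directly: with only query access to the model's output, approximate the gradient coordinate-wise by finite differences, then take an FGSM-style signed step. Function names and the toy target below are assumptions for illustration; the paper's attacks also include the query-reduction strategies mentioned above, which this sketch omits.

```python
# Sketch of a finite-difference Gradient Estimation step: the attacker can
# only query f(x), so each gradient coordinate costs two queries.

def estimate_gradient(f, x, delta=1e-4):
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += delta
        xm = list(x); xm[i] -= delta
        grad.append((f(xp) - f(xm)) / (2 * delta))  # central difference
    return grad

def fgsm_step(f, x, eps=0.1):
    """Move each coordinate by eps in the direction that increases f."""
    g = estimate_gradient(f, x)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

# Toy target "loss": squared distance from the point (1, 1).
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2
print(fgsm_step(f, [0.0, 0.0]))  # [-0.1, -0.1]: steps away from (1, 1)
```

The per-coordinate query cost is what motivates the paper's strategies for decoupling the number of queries from the input dimensionality.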
usenix security symposium | 2014
Zhiwei Li; Warren He; Devdatta Akhawe; Dawn Song
arXiv: Learning | 2017
Warren He; James Wei; Xinyun Chen; Nicholas Carlini; Dawn Song
international conference on learning representations | 2018
Chaowei Xiao; Jun-Yan Zhu; Bo Li; Warren He; Mingyan Liu; Dawn Song