
Publication


Featured research published by Justin Hsu.


symposium on principles of programming languages | 2013

Linear dependent types for differential privacy

Marco Gaboardi; Andreas Haeberlen; Justin Hsu; Arjun Narayan; Benjamin C. Pierce

Differential privacy offers a way to answer queries about sensitive information while providing strong, provable privacy guarantees, ensuring that the presence or absence of a single individual in the database has a negligible statistical effect on the query's result. Proving that a given query has this property involves establishing a bound on the query's sensitivity---how much its result can change when a single record is added or removed. A variety of tools have been developed for certifying that a given query is differentially private. In one approach, Reed and Pierce [34] proposed a functional programming language, Fuzz, for writing differentially private queries. Fuzz uses linear types to track sensitivity and a probability monad to express randomized computation; it guarantees that any program with a certain type is differentially private. Fuzz can successfully verify many useful queries. However, it fails when the sensitivity analysis depends on values that are not known statically. We present DFuzz, an extension of Fuzz with a combination of linear indexed types and lightweight dependent types. This combination allows a richer sensitivity analysis that is able to certify a larger class of queries as differentially private, including ones whose sensitivity depends on runtime information. As in Fuzz, the differential privacy guarantee follows directly from the soundness theorem of the type system. We demonstrate the enhanced expressivity of DFuzz by certifying differential privacy for a broad class of iterative algorithms that could not be typed previously.
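
To make the sensitivity discipline concrete, here is the standard definition that Fuzz-style types track, in schematic form (paraphrased from the differential privacy literature, not quoted from the paper): a c-sensitive function may amplify the distance between its inputs by at most a factor of c, and adding Laplace noise calibrated to c makes its output ε-differentially private.

```latex
% A function f : DB -> R is c-sensitive when, for all databases D, D',
%   d_R(f(D), f(D')) <= c * d_DB(D, D');
% Fuzz records this bound in the linear type  f : !_c DB -o R.
% Calibrating Laplace noise to c then yields epsilon-differential privacy:
\Pr\big[f(D) + \mathrm{Lap}(c/\varepsilon) \in S\big]
  \;\le\; e^{\varepsilon}\,\Pr\big[f(D') + \mathrm{Lap}(c/\varepsilon) \in S\big]
\qquad \text{for all adjacent } D, D'.
```

DFuzz's contribution is to let the index c mention runtime values (for example, the number of iterations of a loop) rather than requiring it to be a static constant.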


ieee computer security foundations symposium | 2014

Differential Privacy: An Economic Method for Choosing Epsilon

Justin Hsu; Marco Gaboardi; Andreas Haeberlen; Sanjeev Khanna; Arjun Narayan; Benjamin C. Pierce; Aaron Roth

Differential privacy is becoming a gold standard notion of privacy; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of this parameter in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε for a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
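
The following toy calculation illustrates the shape of such a model (the specific formulas are illustrative placeholders chosen for this sketch, not the paper's model): a participant's marginal harm grows roughly like e^ε − 1, a Laplace-based study's error shrinks like 1/(εn), and ε is chosen as the smallest value meeting the accuracy target that the compensation budget can cover.

```python
import math

def participant_cost(eps: float, base_harm: float) -> float:
    """Marginal expected harm to a participant from joining the study:
    eps-DP bounds the multiplicative change in any bad event's probability
    by e^eps, so the increase is at most (e^eps - 1) times the baseline."""
    return (math.exp(eps) - 1.0) * base_harm

def study_error(eps: float, n: int) -> float:
    """Rough accuracy of a Laplace-noised average over n participants:
    noise of scale 1/eps spread over n records gives error about 1/(eps*n)."""
    return 1.0 / (eps * n)

def choose_epsilon(n, base_harm, budget_per_person, error_target):
    """Smallest eps that meets the accuracy target and whose compensation
    cost (offsetting each participant's marginal harm) fits the budget."""
    for eps in (k / 100 for k in range(1, 500)):
        if study_error(eps, n) <= error_target:
            # Cost rises with eps, so the first accurate eps is the cheapest.
            if participant_cost(eps, base_harm) <= budget_per_person:
                return eps
            return None  # accuracy is reachable only at an unaffordable eps
    return None

# With 10,000 participants, eps = 0.1 meets a 0.001 error target and costs
# each participant at most (e^0.1 - 1) * 100, about 10.5 units, to compensate.
print(choose_epsilon(n=10_000, base_harm=100.0,
                     budget_per_person=15.0, error_target=0.001))  # -> 0.1
```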


international colloquium on automata, languages and programming | 2012

Distributed private heavy hitters

Justin Hsu; Sanjeev Khanna; Aaron Roth

In this paper, we give efficient algorithms and lower bounds for solving the heavy hitters problem while preserving differential privacy in the fully distributed local model. In this model, there are n parties, each of which possesses a single element from a universe of size N. The heavy hitters problem is to find the identity of the most common element shared amongst the n parties. In the local model, there is no trusted database administrator, and so the algorithm must interact with each of the n parties separately, using a differentially private protocol. We give tight information-theoretic upper and lower bounds on the accuracy to which this problem can be solved in the local model (giving a separation between the local model and the more common centralized model of privacy), as well as computationally efficient algorithms even in the case where the data universe N may be exponentially large.
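
For intuition, the classic randomized-response baseline for this setting looks as follows (a sketch of standard local-model machinery, not the paper's algorithms, which achieve better accuracy and efficiency): each party perturbs its own element before reporting, and the untrusted aggregator debiases the observed counts.

```python
import math
import random
from collections import Counter

def randomize(item: int, N: int, eps: float) -> int:
    """eps-locally-private report (generalized randomized response):
    keep the true element with probability e^eps / (e^eps + N - 1),
    otherwise report one of the N - 1 other elements uniformly."""
    p_true = math.exp(eps) / (math.exp(eps) + N - 1)
    if random.random() < p_true:
        return item
    other = random.randrange(N - 1)
    return other if other < item else other + 1

def heavy_hitter(reports: list[int], N: int, eps: float) -> int:
    """Debias the observed counts and return the estimated heavy hitter."""
    n = len(reports)
    p_true = math.exp(eps) / (math.exp(eps) + N - 1)
    q = (1 - p_true) / (N - 1)          # probability of any specific lie
    counts = Counter(reports)
    # E[observed_x] = true_x * p_true + (n - true_x) * q, so invert:
    est = {x: (c - n * q) / (p_true - q) for x, c in counts.items()}
    return max(est, key=est.get)

random.seed(0)
data = [7] * 600 + [random.randrange(100) for _ in range(400)]
noisy = [randomize(x, N=100, eps=1.0) for x in data]
print(heavy_hitter(noisy, N=100, eps=1.0))   # 7, with high probability
```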


symposium on principles of programming languages | 2015

Higher-Order Approximate Relational Refinement Types for Mechanism Design and Differential Privacy

Gilles Barthe; Marco Gaboardi; Emilio Jesús Gallego Arias; Justin Hsu; Aaron Roth; Pierre-Yves Strub

Mechanism design is the study of algorithm design where the inputs to the algorithm are controlled by strategic agents, who must be incentivized to faithfully report them. Unlike typical program properties, it is not sufficient for an algorithm merely to satisfy an incentive property: such properties are only useful if the strategic agents also believe that they hold. Verification is an attractive way to convince agents that the incentive properties actually hold, but mechanism design poses several unique challenges: interesting properties can be sophisticated relational properties of probabilistic computations involving expected values, and mechanisms may rely on other probabilistic properties, like differential privacy, to achieve their goals. We introduce a relational refinement type system, called HOARe2, for verifying mechanism design and differential privacy. We show that HOARe2 is sound w.r.t. a denotational semantics, and correctly models (ε,δ)-differential privacy; moreover, we show that it subsumes DFuzz, an existing linear dependent type system for differential privacy. Finally, we develop an SMT-based implementation of HOARe2 and use it to verify challenging examples of mechanism design, including auctions and aggregative games, and newly proposed examples from differential privacy.
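
Schematically, and simplifying the paper's notation, a relational refinement type can state ε-differential privacy directly; the ◁ and ▷ projections refer to the two related executions:

```latex
% Schematic HOARe2-style type for an eps-DP mechanism M (notation
% simplified for this sketch): the projections pick out the two runs.
M \;::\; \{\, d :: \mathrm{DB} \mid \mathrm{adjacent}(d_{\triangleleft}, d_{\triangleright}) \,\}
   \;\rightarrow\;
   \mathcal{M}_{\varepsilon,0}\,\{\, y :: \mathcal{O} \mid y_{\triangleleft} = y_{\triangleright} \,\}
```

Read: on adjacent input databases, the two runs' output distributions are within (ε, 0) distance when their results are forced to coincide, which is exactly the definition of ε-differential privacy.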


symposium on the theory of computing | 2014

Private matchings and allocations

Justin Hsu; Zhiyi Huang; Aaron Roth; Tim Roughgarden; Zhiwei Steven Wu

We consider a private variant of the classical allocation problem: given k goods and n agents with individual, private valuation functions over bundles of goods, how can we partition the goods amongst the agents to maximize social welfare? An important special case is when each agent desires at most one good, and specifies her (private) value for each good: in this case, the problem is exactly the maximum-weight matching problem in a bipartite graph. Private matching and allocation problems have not been considered in the differential privacy literature, and for good reason: they are plainly impossible to solve under differential privacy. Informally, the allocation must match agents to their preferred goods in order to maximize social welfare, but this preference is exactly what agents wish to hide! Therefore, we consider the problem under the relaxed constraint of joint differential privacy: for any agent i, no coalition of agents excluding i should be able to learn about the valuation function of agent i. In this setting, the full allocation is no longer published---instead, each agent is told what good to get. We first show that with a small number of identical copies of each good, it is possible to efficiently and accurately solve the maximum weight matching problem while guaranteeing joint differential privacy. We then consider the more general allocation problem, when bidder valuations satisfy the gross substitutes condition. Finally, we prove that the allocation problem cannot be solved to non-trivial accuracy under joint differential privacy without requiring multiple copies of each type of good.
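
A loose sketch conveys the flavor of the positive result (this is not the paper's algorithm or its analysis): an ascending-price auction in which prices react only to Laplace-noised aggregate demand, so the price trajectory can be published while each agent learns only her own match.

```python
import random

def laplace(scale: float) -> float:
    """Laplace(scale) noise as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_matching(values, supply, eps, step=0.1, rounds=50):
    """values[i][g]: agent i's value for good g; supply[g]: copies of g.
    Prices move only in response to Laplace-noised aggregate demand, so
    the price trajectory is safe to publish; agent i sees only her own
    demand[i]. A real algorithm must also split eps across the rounds
    and bound each agent's influence on the counts (both elided here)."""
    k = len(supply)
    prices = [0.0] * k
    demand = []
    for _ in range(rounds):
        # Each agent privately demands a favorite good at current prices.
        demand = [max(range(k), key=lambda g: values[i][g] - prices[g])
                  for i in range(len(values))]
        # Raise prices of over-demanded goods, using noisy counts only.
        for g in range(k):
            if demand.count(g) + laplace(1 / eps) > supply[g]:
                prices[g] += step
    return demand      # the "billboard" idea: prices public, matches private
```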


ieee computer security foundations symposium | 2014

Proving Differential Privacy in Hoare Logic

Gilles Barthe; Marco Gaboardi; Emilio Jesús Gallego Arias; Justin Hsu; César Kunz; Pierre-Yves Strub

Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution on outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems, or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy.
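
A hand-simplified sketch of the transformation for a single Laplace sampling statement (the paper gives the formal construction): the random draw becomes a symbolic value shared between the two simulated runs, and a ghost variable accumulates the privacy budget spent coupling them.

```python
def simulate_pair(x1: float, x2: float, eps: float, y1: float):
    """Non-probabilistic simulation of the statement  y <- x + Lap(1/eps)
    run on two adjacent inputs x1 and x2. The draw of run 1 becomes the
    symbolic input y1; run 2's draw is coupled to it by the input shift."""
    assert abs(x2 - x1) <= 1.0        # adjacency assumption on the inputs
    shift = x2 - x1
    y2 = y1 + shift                   # couple the two runs' samples
    v_eps = eps * abs(shift)          # ghost variable: privacy budget spent
    # Postconditions for an ordinary (non-relational) Hoare-logic verifier:
    # the coupled outputs differ exactly by the shift, and the budget spent
    # stays within the total. Discharging them implies eps-DP of the original.
    assert y2 - y1 == shift and v_eps <= eps
    return y1, y2, v_eps

simulate_pair(x1=0.0, x2=1.0, eps=0.5, y1=3.14)
```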


international colloquium on automata, languages and programming | 2014

Privately Solving Linear Programs

Justin Hsu; Aaron Roth; Tim Roughgarden; Jonathan Ullman

In this paper, we initiate the systematic study of solving linear programs under differential privacy. The first step is simply to define the problem: to this end, we introduce several natural classes of private linear programs that capture different ways sensitive data can be incorporated into a linear program. For each class of linear programs we give an efficient, differentially private solver based on the multiplicative weights framework, or we give an impossibility result.
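
A sketch of how multiplicative weights can be made private in this style (illustrative only; the paper's solvers and their guarantees differ by problem class): keep the candidate x on the simplex, select an approximately most-violated constraint with the exponential mechanism, and update x multiplicatively against that constraint.

```python
import math
import random

def exp_mechanism(scores, eps):
    """Exponential mechanism: sample index i with probability proportional
    to exp(eps * scores[i] / 2), assuming each score changes by at most 1
    when one individual's data changes."""
    weights = [math.exp(eps * s / 2) for s in scores]
    return random.choices(range(len(scores)), weights=weights)[0]

def private_mw_feasibility(A, b, eps, rounds=100, lr=0.1):
    """Approximately find x on the simplex with Ax <= b. The per-round
    budget is eps / rounds (naive composition); the constraint selection
    is the only place the private data is touched in this sketch."""
    m, n = len(A), len(A[0])
    x = [1.0 / n] * n
    avg = [0.0] * n
    for _ in range(rounds):
        violation = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
                     for i in range(m)]
        i = exp_mechanism(violation, eps / rounds)
        # Multiplicative update: shift mass away from the coordinates that
        # contribute most to the selected (likely violated) constraint.
        x = [x[j] * math.exp(-lr * A[i][j]) for j in range(n)]
        z = sum(x)
        x = [v / z for v in x]
        avg = [a + v / rounds for a, v in zip(avg, x)]
    return avg

# Two constraints over three variables; the sketch returns an averaged x.
print(private_mw_feasibility(A=[[1, 0, 2], [0, 1, 1]], b=[0.8, 0.9], eps=1.0))
```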


logic in computer science | 2016

Proving Differential Privacy via Probabilistic Couplings

Gilles Barthe; Marco Gaboardi; Benjamin Grégoire; Justin Hsu; Pierre-Yves Strub

Over the last decade, differential privacy has achieved widespread adoption within the privacy community. Moreover, it has attracted significant attention from the verification community, resulting in several successful tools for formally proving differential privacy. Although their technical approaches vary greatly, all existing tools rely on reasoning principles derived from the composition theorem of differential privacy. While this suffices to verify most common private algorithms, there are several important algorithms whose privacy analysis does not rely solely on the composition theorem. Their proofs are significantly more complex, and are currently beyond the reach of verification tools. In this paper, we develop compositional methods for formally verifying differential privacy for algorithms whose analysis goes beyond the composition theorem. Our methods are based on deep connections between differential privacy and probabilistic couplings, an established mathematical tool for reasoning about stochastic processes. Even when the composition theorem is not helpful, we can often prove privacy by a coupling argument. We demonstrate our methods on two algorithms: the Exponential mechanism and the Above Threshold algorithm, the critical component of the famous Sparse Vector algorithm. We verify these examples in a relational program logic apRHL+, which can construct approximate couplings. This logic extends the existing apRHL logic with more general rules for the Laplace mechanism and the one-sided Laplace mechanism, and new structural rules enabling pointwise reasoning about privacy; all the rules are inspired by the connection with coupling. While our paper is presented from a formal verification perspective, we believe that its main insight is of independent interest for the differential privacy community.
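
The pointwise principle behind these proofs can be stated compactly (a standard fact restated here, not a quotation from the paper): if an approximate coupling certifies the per-output inequality on the left for every o, privacy over arbitrary output sets follows by summation.

```latex
% Pointwise privacy from per-output approximate couplings:
\big(\forall o.\;\Pr[M(D) = o] \le e^{\varepsilon}\,\Pr[M(D') = o]\big)
\;\Longrightarrow\;
\big(\forall S.\;\Pr[M(D) \in S] \le e^{\varepsilon}\,\Pr[M(D') \in S]\big)
\qquad \text{for all adjacent } D, D'.
```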


symposium on discrete algorithms | 2016

Jointly private convex programming

Justin Hsu; Zhiyi Huang; Aaron Roth; Zhiwei Steven Wu

In this paper we present an extremely general method for approximately solving a large family of convex programs where the solution can be divided between different agents, subject to joint differential privacy. This class includes multi-commodity flow problems, general allocation problems, and multi-dimensional knapsack problems, among other examples. The accuracy of our algorithm depends on the number of constraints that bind between individuals, but crucially, is nearly independent of the number of primal variables and hence the number of agents who make up the problem. As the number of agents in a problem grows, the error we introduce often becomes negligible. We also consider the setting where agents are strategic and have preferences over their part of the solution. For any convex program in this class that maximizes social welfare, there is a generic reduction that makes the corresponding optimization approximately dominant strategy truthful by charging agents prices for resources as a function of the approximately optimal dual variables, which are themselves computed under differential privacy. Our results substantially expand the class of problems that are known to be solvable under both privacy and incentive constraints.
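
The dual-decomposition idea can be sketched as follows (names, step sizes, and the noise accounting are illustrative assumptions, not the paper's algorithm): a coordinator runs noisy gradient ascent on resource prices, and each agent best-responds privately to the public price "billboard".

```python
import random

def laplace(scale: float) -> float:
    """Laplace(scale) noise as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def jointly_private_prices(agents, capacity, eps, rounds=100, lr=0.05):
    """agents: callables mapping a price vector to that agent's resource
    usage (her private best response). Prices are updated only from the
    Laplace-noised aggregate usage, so the published price sequence is
    differentially private (assuming each agent's per-resource usage is
    bounded by 1); each agent's own solution never leaves her machine."""
    m = len(capacity)
    prices = [0.0] * m
    for _ in range(rounds):
        usage = [0.0] * m
        for agent in agents:
            x = agent(prices)                     # private best response
            usage = [u + xi for u, xi in zip(usage, x)]
        for r in range(m):
            over = usage[r] + laplace(rounds / eps) - capacity[r]
            prices[r] = max(0.0, prices[r] + lr * over)   # dual ascent step
    return prices        # agents recompute their own x from the final prices
```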


symposium on principles of programming languages | 2017

Coupling proofs are probabilistic product programs

Gilles Barthe; Benjamin Grégoire; Justin Hsu; Pierre-Yves Strub

Couplings are a powerful mathematical tool for reasoning about pairs of probabilistic processes. Recent developments in formal verification identify a close connection between couplings and pRHL, a relational program logic motivated by applications to provable security, enabling formal construction of couplings from the probability theory literature. However, existing work using pRHL merely shows existence of a coupling and does not give a way to prove quantitative properties about the coupling, needed to reason about mixing and convergence of probabilistic processes. Furthermore, pRHL is inherently incomplete, and is not able to capture some advanced forms of couplings such as shift couplings. We address both problems as follows. First, we define an extension of pRHL, called x-pRHL, which explicitly constructs the coupling in a pRHL derivation in the form of a probabilistic product program that simulates two correlated runs of the original program. Existing verification tools for probabilistic programs can then be directly applied to the probabilistic product to prove quantitative properties of the coupling. Second, we equip x-pRHL with a new rule for while loops, where reasoning can freely mix synchronized and unsynchronized loop iterations. Our proof rule can capture examples of shift couplings, and the logic is relatively complete for deterministic programs. We show soundness of x-pRHL and use it to analyze two classes of examples. First, we verify rapid mixing using different tools from coupling: standard coupling, shift coupling, and path coupling, a compositional principle for combining local couplings into a global coupling. Second, we verify (approximate) equivalence between a source and an optimized program for several instances of loop optimizations from the literature.
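
For a feel of what a probabilistic product program is (a hand-written toy, not x-pRHL's syntactic construction), here is one that simulates two versions of a coin-summing loop, one plain and one unrolled by two, on a single shared sample stream; the per-step assertion is the coupling invariant an ordinary verifier can check.

```python
import random

def product_program(T: int):
    """Single program simulating two runs: run 1 is a plain coin-summing
    loop, run 2 the same loop unrolled by two. Each coin is drawn once
    and shared by both runs (the identity coupling), so equivalence of
    the two versions reduces to a non-relational assertion."""
    assert T % 2 == 0
    s1 = s2 = 0
    for _ in range(T // 2):
        c_a = random.randint(0, 1)   # one draw, consumed by BOTH runs
        c_b = random.randint(0, 1)
        s1 += c_a          # run 1, first of two iterations
        s1 += c_b          # run 1, second iteration
        s2 += c_a + c_b    # run 2, one unrolled iteration
        assert s1 == s2    # coupling invariant, checkable by standard tools
    return s1, s2

product_program(10)
```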

Collaboration


Dive into Justin Hsu's collaborations.

Top Co-Authors

Gilles Barthe (Association for Computing Machinery)
Aaron Roth (University of Pennsylvania)
Thomas Espitau (École normale supérieure de Cachan)
Zhiwei Steven Wu (University of Pennsylvania)
Shin-ya Katsumata (Research Institute for Mathematical Sciences)