
Publication


Featured research published by Ilya Mironov.


Electronic Commerce | 2001

Incentives for sharing in peer-to-peer networks

Philippe Golle; Kevin Leyton-Brown; Ilya Mironov

We consider the free-rider problem that arises in peer-to-peer file sharing networks such as Napster: the problem that individual users are provided with no incentive for adding value to the network. We examine the design implications of the assumption that users will selfishly act to maximize their own rewards, by constructing a formal game theoretic model of the system and analyzing equilibria of user strategies under several novel payment mechanisms. We support and extend our theoretical predictions with experimental results from a multi-agent reinforcement learning model.


Knowledge Discovery and Data Mining | 2009

Differentially private recommender systems: Building privacy into the Netflix Prize contenders

Frank McSherry; Ilya Mironov

We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy. Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty--i.e., noise--to computations, trading accuracy for privacy. We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise and new post-processing steps to mitigate the impact of this noise. We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline that Netflix provides.
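
As a toy illustration of the two-phase structure described above (not the paper's actual Netflix Prize adaptations), the sketch below performs a differentially private aggregation step: per-item average ratings with Laplace noise calibrated to the per-record sensitivity. The rating bounds, the even budget split, and the handling of cross-item composition are simplifying assumptions.

```python
import numpy as np

def dp_item_averages(ratings, epsilon, r_min=1.0, r_max=5.0):
    """Differentially private per-item mean ratings (illustrative sketch).

    ratings: dict mapping item_id -> list of ratings.
    Assumes each user contributes at most one rating per item, so
    removing one user changes each per-item sum by at most
    (r_max - r_min) and each per-item count by at most 1.
    Composition of the budget across items is ignored here.
    """
    rng = np.random.default_rng()
    averages = {}
    for item, rs in ratings.items():
        rs = np.clip(np.asarray(rs, dtype=float), r_min, r_max)
        # Spend half the budget on the noisy sum, half on the noisy count.
        noisy_sum = rs.sum() + rng.laplace(scale=(r_max - r_min) / (epsilon / 2))
        noisy_count = len(rs) + rng.laplace(scale=1.0 / (epsilon / 2))
        averages[item] = noisy_sum / max(noisy_count, 1.0)
    return averages

print(dp_item_averages({"movie_1": [4, 5, 3, 4], "movie_2": [2, 3]}, epsilon=1.0))
```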


Theory and Application of Cryptographic Techniques | 2006

Our data, ourselves: privacy via distributed noise generation

Cynthia Dwork; Krishnaram Kenthapadi; Frank McSherry; Ilya Mironov; Moni Naor

In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form ∑_i f(d_i), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.
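
One ingredient mentioned above can be sketched in isolation: sampling a coin of arbitrary bias at an expected cost of two unbiased random bits, independent of the bias. The sketch below is a plain, single-party version of this standard trick; the paper performs it inside a distributed protocol over secret-shared values, which is not reproduced here.

```python
import random

def biased_coin(p_bits):
    """Sample Bernoulli(p) from unbiased bits, where p = 0.p_1 p_2 ...
    in binary and p_bits(i) returns the i-th bit p_i.

    Scan the expansion until a fresh random bit differs from p's bit;
    the random bits read so far define a uniform B in [0,1), and the
    outcome is 1 exactly when B < p. The expected number of unbiased
    bits consumed is 2, regardless of p.
    """
    i = 1
    while True:
        b = random.getrandbits(1)
        if b != p_bits(i):
            return 1 if b < p_bits(i) else 0
        i += 1

# Example: p = 3/8 = 0.011 in binary.
p_bits = lambda i: [0, 1, 1][i - 1] if i <= 3 else 0
print(sum(biased_coin(p_bits) for _ in range(100000)) / 100000)  # ~0.375
```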


The Cryptographers' Track at the RSA Conference | 2001

Uncheatable Distributed Computations

Philippe Golle; Ilya Mironov

Computationally expensive tasks that can be parallelized are most efficiently completed by distributing the computation among a large number of processors. The growth of the Internet has made it possible to invite the participation of just about any computer in such distributed computations. This introduces the potential for cheating by untrusted participants. In a commercial setting where participants get paid for their contribution, there is an incentive for dishonest participants to claim credit for work they did not do. In this paper, we propose security schemes that defend against this threat with very little overhead. Our weaker scheme discourages cheating by ensuring that it does not pay off, while our stronger schemes let participants prove, with high probability, that they have done most of the work they were assigned.
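
The generic flavor of such defenses can be conveyed with probabilistic spot-checking: the supervisor recomputes a random sample of subtasks and rejects on any mismatch. This is a minimal sketch of that idea under assumed parameters, not the paper's specific schemes (which also use devices such as precomputed "ringer" values and payment incentives).

```python
import random

def spot_check(claimed, task_fn, inputs, num_checks=20):
    """Recompute a random sample of subtasks and compare against the
    participant's claimed results. A participant who fabricated a
    fraction f of the results escapes detection with probability
    about (1 - f) ** num_checks.
    """
    for i in random.sample(range(len(inputs)), min(num_checks, len(inputs))):
        if claimed[i] != task_fn(inputs[i]):
            return False  # mismatch: cheating detected
    return True

# Example: the "work" is squaring; a cheater fabricates ~10% of answers.
inputs = list(range(1, 1001))
honest = [x * x for x in inputs]
cheater = [x * x if random.random() > 0.1 else 0 for x in inputs]
print(spot_check(honest, lambda x: x * x, inputs))   # True
print(spot_check(cheater, lambda x: x * x, inputs))  # usually False (~0.88)
```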


Computer and Communications Security | 2016

Deep Learning with Differential Privacy

Martín Abadi; Andy Chu; Ian J. Goodfellow; H. Brendan McMahan; Ilya Mironov; Kunal Talwar; Li Zhang

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
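
The core recipe of the paper, per-example gradient clipping followed by calibrated Gaussian noise (DP-SGD), can be sketched in a few lines. The privacy accounting (the paper's moments accountant) and the actual network are omitted, and the hyperparameters below are placeholders.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    clip_norm, sum, add Gaussian noise scaled to the clipping bound,
    then average and take a gradient step."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_mean

rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(32)]  # stand-in per-example gradients
print(dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
```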


International Cryptology Conference | 2002

(Not So) Random Shuffles of RC4

Ilya Mironov

Most guidelines for implementation of the RC4 stream cipher recommend discarding the first 256 bytes of its output. This recommendation is based on the empirical fact that known attacks can either cryptanalyze RC4 starting at any point, or become harmless after these initial bytes are dumped. The motivation for this paper is to find a conservative estimate for the number of bytes that should be discarded in order to be safe. To this end we propose an idealized model of RC4 and analyze it applying the theory of random shuffles. Based on our analysis of the model we recommend dumping at least 512 bytes.
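
The recommendation is easy to operationalize: run the standard RC4 key schedule and output generator, but discard an initial prefix of the keystream (the "RC4-drop[N]" convention). The sketch below uses the 512 bytes suggested above; the key is a placeholder. Note that RC4 is shown here purely to illustrate the paper's subject and is considered broken for modern use.

```python
def rc4_keystream(key, drop=512):
    """RC4 keystream generator that discards the first `drop` output
    bytes, per the recommendation above."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = n = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        n += 1
        if n > drop:
            yield S[(S[i] + S[j]) % 256]

ks = rc4_keystream(b"example key")
print([next(ks) for _ in range(8)])  # first bytes after the dropped prefix
```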


International Cryptology Conference | 2009

Computational Differential Privacy

Ilya Mironov; Omkant Pandey; Omer Reingold; Salil P. Vadhan

The definition of differential privacy has recently emerged as a leading standard of privacy guarantees for algorithms on statistical databases. We offer several relaxations of the definition which require privacy guarantees to hold only against efficient--i.e., computationally-bounded--adversaries. We establish various relationships among these notions, and in doing so, we observe their close connection with the theory of pseudodense sets by Reingold et al. [1]. We extend the dense model theorem of Reingold et al. to demonstrate equivalence between two definitions (indistinguishability- and simulatability-based) of computational differential privacy. Our computational analogues of differential privacy seem to allow for more accurate constructions than the standard information-theoretic analogues. In particular, in the context of private approximation of the distance between two vectors, we present a differentially-private protocol for computing the approximation, and contrast it with a substantially more accurate protocol that is only computationally differentially private.
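
For reference, the indistinguishability-based relaxation can be stated compactly. The rendering below is a compressed paraphrase (notation assumed, not quoted from the paper): standard ε-differential privacy requires the first inequality for all output sets S, while the computational variant requires only the second, against efficient adversaries A with security parameter k.

```latex
% epsilon-differential privacy: for all neighboring databases D, D'
% and all output sets S,
\Pr[M(D) \in S] \le e^{\varepsilon}\,\Pr[M(D') \in S].
% Computational (indistinguishability-based) relaxation: for all
% efficient adversaries A,
\Pr[A(M(D)) = 1] \le e^{\varepsilon}\,\Pr[A(M(D')) = 1] + \mathrm{negl}(k).
```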


Foundations of Computer Science | 2010

The Limits of Two-Party Differential Privacy

Andrew McGregor; Ilya Mironov; Toniann Pitassi; Omer Reingold; Kunal Talwar; Salil P. Vadhan

We study differential privacy in a distributed setting where two parties would like to perform analysis of their joint data while preserving privacy for both datasets. Our results imply almost tight lower bounds on the accuracy of such data analyses, both for specific natural functions (such as Hamming distance) and in general. Our bounds expose a sharp contrast between the two-party setting and the simpler client-server setting (where privacy guarantees are one-sided). In addition, those bounds demonstrate a dramatic gap between the accuracy that can be obtained by differentially private data analysis versus the accuracy obtainable when privacy is relaxed to a computational variant of differential privacy. The first proof technique we develop demonstrates a connection between differential privacy and deterministic extraction from Santha-Vazirani sources. A second connection we expose indicates that the ability to approximate a function by a low-error differentially private protocol is strongly related to the ability to approximate it by a low communication protocol. (The connection goes in both directions.)


Lecture Notes in Computer Science | 2001

Incentives for Sharing in Peer-to-Peer Networks

Philippe Golle; Kevin Leyton-Brown; Ilya Mironov; Mark Lillibridge

We consider the free-rider problem in peer-to-peer file sharing networks such as Napster: that individual users are provided with no incentive for adding value to the network. We examine the design implications of the assumption that users will selfishly act to maximize their own rewards, by constructing a formal game theoretic model of the system and analyzing equilibria of user strategies under several novel payment mechanisms. We support and extend this work with results from experiments with a multi-agent reinforcement learning model.


The Cryptographers' Track at the RSA Conference | 2003

A secure signature scheme from bilinear maps

Dan Boneh; Ilya Mironov; Victor Shoup

Traditionally, the strongest notion of security for undeniable and confirmer signatures is invisibility under adaptive attacks. This security property was promoted by Camenisch and Michels, who provided schemes with this property. Gennaro, Krawczyk and Rabin (GKR) developed an RSA-based scheme which is much more efficient than the schemes of Camenisch and Michels, but it does not have invisibility. We give an RSA-based scheme which is as efficient as the GKR scheme and which has invisibility. We suggest that anonymity is the most relevant security property for undeniable and confirmer signatures. We give a precise definition of anonymity for undeniable and confirmer signatures in the multi-user setting and show that anonymity and invisibility are closely related. Finally, we show that anonymity can be achieved even when the parties use completely different cryptographic primitives.

Collaboration


Dive into Ilya Mironov's collaborations.

Top Co-Authors


Gil Segev

Hebrew University of Jerusalem
