
Publications


Featured research published by Arvind Narayanan.


IEEE Symposium on Security and Privacy | 2009

De-anonymizing Social Networks

Arvind Narayanan; Vitaly Shmatikov

Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require the creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.
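The core propagation idea behind topology-based re-identification can be sketched as follows: starting from a few seed mappings between an auxiliary graph and the anonymized target graph, repeatedly map the node whose already-mapped neighbors best match some target node's neighbors. This is an illustrative simplification, not the paper's full algorithm, which additionally uses edge weights, eccentricity thresholds, and reverse-match checks to tolerate noise.

```python
def propagate(aux, target, seeds):
    """aux/target: dicts mapping node -> set of neighbors.
    seeds: initial dict mapping aux nodes -> target nodes.
    Greedily extends the mapping by shared mapped neighbors."""
    mapping = dict(seeds)
    changed = True
    while changed:
        changed = False
        for a in aux:
            if a in mapping:
                continue
            # Score each still-unmapped target node by how many of a's
            # mapped neighbors land inside that target node's neighborhood.
            scores = {}
            for t in target:
                if t in mapping.values():
                    continue
                scores[t] = sum(1 for n in aux[a]
                                if n in mapping and mapping[n] in target[t])
            if scores:
                best = max(scores, key=scores.get)
                if scores[best] > 0:
                    mapping[a] = best
                    changed = True
    return mapping
```

With two isomorphic four-node graphs and two seeds, the remaining two nodes are recovered purely from the edge structure.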


ACM Conference on Computer and Communications Security | 2005

Fast dictionary attacks on passwords using time-space tradeoff

Arvind Narayanan; Vitaly Shmatikov

Human-memorable passwords are a mainstay of computer security. To decrease the vulnerability of passwords to brute-force dictionary attacks, many organizations enforce complicated password-creation rules and require that passwords include numerals and special characters. We demonstrate that as long as passwords remain human-memorable, they are vulnerable to smart-dictionary attacks even when the space of potential passwords is large. Our first insight is that the distribution of letters in easy-to-remember passwords is likely to be similar to the distribution of letters in the user's native language. Using standard Markov modeling techniques from natural language processing, this can be used to dramatically reduce the size of the password space to be searched. Our second contribution is an algorithm for efficient enumeration of the remaining password space. This allows the application of time-space tradeoff techniques, limiting memory accesses to a relatively small table of partial dictionary sizes and enabling a very fast dictionary attack. We evaluated our method on a database of real-world user password hashes. Our algorithm successfully recovered 67.6% of the passwords using a 2 × 10^9 search space. This is a much higher percentage than Oechslin's rainbow attack, which is the fastest currently known technique for searching large keyspaces. These results call into question the viability of human-memorable character-sequence passwords as an authentication mechanism.
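The first insight, that memorable passwords follow natural-language letter statistics, can be illustrated with a toy first-order Markov model: train transition probabilities on a word list, then score candidates so that low-probability strings can be pruned from the search space. This is a sketch of the general technique, not the discretized model the paper uses for efficient enumeration.

```python
from collections import defaultdict
import math

def train_markov(corpus):
    """First-order letter model: estimate P(next letter | current letter)
    from a list of words."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus:
        for a, b in zip(word, word[1:]):
            counts[a][b] += 1
    probs = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        probs[a] = {b: c / total for b, c in nxt.items()}
    return probs

def score(model, candidate, floor=1e-6):
    """Log-probability of a candidate string under the model; unseen
    transitions get a small floor probability. Candidates scoring below
    a threshold would be excluded from the dictionary."""
    logp = 0.0
    for a, b in zip(candidate, candidate[1:]):
        logp += math.log(model.get(a, {}).get(b, floor))
    return logp
```

Even this crude model ranks language-like strings far above random ones, which is what shrinks the effective keyspace.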


Communications of the ACM | 2010

Myths and fallacies of "Personally Identifiable Information"

Arvind Narayanan; Vitaly Shmatikov

Developing effective privacy protection technologies is a critical challenge for security and privacy research as the amount and variety of data collected about individuals increase exponentially.


IEEE Symposium on Security and Privacy | 2011

"You Might Also Like:" Privacy Risks of Collaborative Filtering

Joseph A. Calandrino; Ann Kilzer; Arvind Narayanan; Edward W. Felten; Vitaly Shmatikov

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon.
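The core observation, that a customer's new purchase perturbs the public related-items lists of items they already own, can be sketched as a diff-and-vote procedure: snapshot the lists at two points in time and flag any item that newly appears alongside several of the target's known items. This is a hypothetical simplification; the paper's attacks use more careful scoring and account for list ordering and system-wide popularity shifts.

```python
from collections import Counter

def infer_new_items(related_before, related_after, known_items):
    """Infer likely new transactions of a target user from temporal deltas
    in public related-items lists. related_before/related_after: dicts
    mapping item -> list of related items at two snapshots. known_items:
    items the target is known (from auxiliary information) to own."""
    votes = Counter()
    for item in known_items:
        before = set(related_before.get(item, []))
        after = set(related_after.get(item, []))
        # Every item that newly entered this list casts one vote.
        for new in after - before:
            votes[new] += 1
    # Flag items that moved into the lists of at least two known items:
    # a single appearance could be a coincidence of global popularity.
    return [i for i, v in votes.items() if v >= 2]
```

The attack is passive: both snapshots are ordinary public page fetches, which is why any Internet user can carry it out.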


International Joint Conference on Neural Networks | 2011

Link prediction by de-anonymization: How We Won the Kaggle Social Network Challenge

Arvind Narayanan; Elaine Shi; Benjamin I. P. Rubinstein

This paper describes the winning entry to the IJCNN 2011 Social Network Challenge run by Kaggle.com. The goal of the contest was to promote research on real-world link prediction, and the dataset was a graph obtained by crawling the popular Flickr social photo sharing website, with user identities scrubbed. By de-anonymizing much of the competition test set using our own Flickr crawl, we were able to effectively game the competition. Our attack represents a new application of de-anonymization to gaming machine learning contests, suggesting changes in how future competitions should be run. We introduce a new simulated annealing-based weighted graph matching algorithm for the seeding step of de-anonymization. We also show how to combine de-anonymization with link prediction—the latter is required to achieve good performance on the portion of the test set not de-anonymized—for example by training the predictor on the de-anonymized portion of the test set, and combining probabilistic predictions from de-anonymization and link prediction.
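The simulated-annealing approach to weighted graph matching can be illustrated with a toy version: search over bijections between two graphs' node sets, proposing random transpositions and accepting worse mappings with a temperature-dependent probability (the Metropolis rule). This is an illustration of the general technique on unweighted graphs, not the paper's seeding algorithm.

```python
import math
import random

def anneal_match(g1, g2, steps=20000, t0=2.0, seed=0):
    """Toy simulated-annealing search for a node mapping between two graphs
    of equal size (dicts: node -> set of neighbors), minimizing the number
    of g1 edges whose images are not edges of g2."""
    rng = random.Random(seed)
    nodes1, nodes2 = sorted(g1), sorted(g2)
    idx = {n: i for i, n in enumerate(nodes1)}
    perm = nodes2[:]                         # perm[i] is the image of nodes1[i]

    def cost(p):
        return sum(1 for u in g1 for v in g1[u]
                   if p[idx[u]] not in g2[p[idx[v]]])

    c = cost(perm)
    best, best_c = perm[:], c
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]  # propose a transposition
        c2 = cost(perm)
        if c2 <= c or rng.random() < math.exp((c - c2) / t):
            c = c2                           # accept (Metropolis rule)
            if c < best_c:
                best, best_c = perm[:], c
        else:
            perm[i], perm[j] = perm[j], perm[i]  # reject: undo the swap
    return dict(zip(nodes1, best)), best_c
```

The returned mapping is always a bijection; the reported cost is the number of mismatched edges under the best mapping encountered.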


ACM Conference on Computer and Communications Security | 2005

Obfuscated databases and group privacy

Arvind Narayanan; Vitaly Shmatikov

We investigate whether it is possible to encrypt a database and then give it away in such a form that users can still access it, but only in a restricted way. In contrast to conventional privacy mechanisms that aim to prevent any access to individual records, we aim to restrict the set of queries that can be feasibly evaluated on the encrypted database. We start with a simple form of database obfuscation which makes database records indistinguishable from lookup functions. The only feasible operation on an obfuscated record is to look up some attribute Y by supplying the value of another attribute X that appears in the same record (i.e., someone who does not know X cannot feasibly retrieve Y). We then (i) generalize our construction to conjunctions of equality tests on any attributes of the database, and (ii) achieve a new property we call group privacy. This property ensures that it is easy to retrieve individual records or small subsets of records from the encrypted database by identifying them precisely, but "mass harvesting" queries matching a large number of records are computationally infeasible. Our constructions are non-interactive. The database is transformed in such a way that all queries except those explicitly allowed by the privacy policy become computationally infeasible, i.e., our solutions do not rely on any access-control software or hardware.
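The lookup-function idea can be sketched with a toy construction: a salted hash of attribute X gates the lookup, and a hash-derived pad hides attribute Y, so a record behaves like a function that yields Y only on the correct input X. This is my illustration of the spirit of the approach, not the paper's provably secure construction, and it inherits none of its formal guarantees.

```python
import hashlib
import os

def obfuscate_record(x, y):
    """Toy obfuscated record: y (up to 32 bytes here) is recoverable only
    by supplying the correct attribute value x. The stored triple reveals
    nothing useful about x or y on its own beyond their existence."""
    salt = os.urandom(16)
    tag = hashlib.sha256(salt + x.encode()).digest()          # gates the lookup
    pad = hashlib.sha256(b"pad" + salt + x.encode()).digest() # hides y
    ct = bytes(a ^ b for a, b in zip(y.encode(), pad))
    return salt, tag, ct

def lookup(record, x):
    """The only feasible operation: supply x, get y back; wrong x gets None."""
    salt, tag, ct = record
    if hashlib.sha256(salt + x.encode()).digest() != tag:
        return None
    pad = hashlib.sha256(b"pad" + salt + x.encode()).digest()
    return bytes(a ^ b for a, b in zip(ct, pad)).decode()
```

Group privacy then corresponds to the observation that retrieving many records requires knowing many distinct X values, so precise queries are cheap while mass harvesting is not.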


arXiv: Cryptography and Security | 2006

How To Break Anonymity of the Netflix Prize Dataset

Arvind Narayanan; Vitaly Shmatikov


arXiv: Computers and Society | 2012

A Critical Look at Decentralized Personal Data Architectures

Arvind Narayanan; Vincent Toubiana; Solon Barocas; Helen Nissenbaum; Dan Boneh


IACR Cryptology ePrint Archive | 2006

On the Limits of Point Function Obfuscation.

Arvind Narayanan; Vitaly Shmatikov


Archive | 2009

Data privacy: the non-interactive setting

Vitaly Shmatikov; Arvind Narayanan

Collaboration


Dive into Arvind Narayanan's collaborations.

Top Co-Authors

Ann Kilzer

University of Texas at Austin