
Publication


Featured research published by Arunesh Sinha.


ieee computer security foundations symposium | 2011

Regret Minimizing Audits: A Learning-Theoretic Basis for Privacy Protection

Jeremiah Blocki; Nicolas Christin; Anupam Datta; Arunesh Sinha

Audit mechanisms are essential for privacy protection in permissive access control regimes, such as in hospitals where denying legitimate access requests can adversely affect patient care. Recognizing this need, we develop the first principled learning-theoretic foundation for audits. Our first contribution is a game-theoretic model that captures the interaction between the defender (e.g., hospital auditors) and the adversary (e.g., hospital employees). The model takes pragmatic considerations into account, in particular, the periodic nature of audits, a budget that constrains the number of actions that the defender can inspect, and a loss function that captures the economic impact of detected and missed violations on the organization. We assume that the adversary is worst-case, as is standard in other areas of computer security. We also formulate a desirable property of the audit mechanism in this model based on the concept of regret in learning theory. Our second contribution is an efficient audit mechanism that provably minimizes regret for the defender. This mechanism learns from experience to guide the defender's auditing efforts. The regret bound is significantly better than prior results in the learning literature. The stronger bound is important from a practical standpoint because it implies that the recommendations from the mechanism will converge faster to the best fixed auditing strategy for the defender.
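The regret-minimization idea behind the paper can be illustrated with a standard multiplicative-weights (Hedge) sketch over fixed audit strategies. This is not the paper's mechanism or its stronger bound, just the classic experts setup it builds on; the loss numbers are hypothetical:

```python
import math

def multiplicative_weights(losses, eta=0.5):
    """Run the Hedge algorithm over a loss matrix.

    losses[t][i] is the loss of expert i (a fixed audit strategy) in round t,
    assumed to lie in [0, 1]. Returns the algorithm's total expected loss.
    """
    n = len(losses[0])
    weights = [1.0] * n
    total_loss = 0.0
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]          # randomize over strategies
        total_loss += sum(p * l for p, l in zip(probs, round_losses))
        # Down-weight strategies that did badly this round.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    return total_loss

# Regret = algorithm's loss minus the loss of the best fixed strategy in hindsight.
losses = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]     # strategy 1 is best in hindsight
alg_loss = multiplicative_weights(losses)
best_fixed = min(sum(col) for col in zip(*losses))
regret = alg_loss - best_fixed
```

Low regret means the randomized recommendations converge toward the best fixed auditing strategy, which is the property the paper's mechanism guarantees with a better bound.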


principles of security and trust | 2012

Provable de-anonymization of large datasets with sparse dimensions

Anupam Datta; Divya Sharma; Arunesh Sinha

There is a significant body of empirical work on statistical de-anonymization attacks against databases containing micro-data about individuals, e.g., their preferences, movie ratings, or transaction data. Our goal is to analytically explain why such attacks work. Specifically, we analyze a variant of the Narayanan-Shmatikov algorithm that was used to effectively de-anonymize the Netflix database of movie ratings. We prove theorems characterizing mathematical properties of the database and the auxiliary information available to the adversary that enable two classes of privacy attacks. In the first attack, the adversary successfully identifies the individual about whom she possesses auxiliary information (an isolation attack). In the second attack, the adversary learns additional information about the individual, although she may not be able to uniquely identify him (an information amplification attack). We demonstrate the applicability of the analytical results by empirically verifying that the mathematical properties assumed of the database are actually true for a significant fraction of the records in the Netflix movie ratings database, which contains ratings from about 500,000 users.
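The isolation attack analyzed above can be sketched with a toy Narayanan-Shmatikov-style scoreboard: score every record against the adversary's auxiliary information and claim identification only when the best match stands out from the runner-up. The scoring rule, threshold, and movie-rating values here are all hypothetical simplifications:

```python
def score(aux, record):
    """Similarity between the adversary's auxiliary information and a record:
    the fraction of known (attribute, value) pairs the record matches."""
    matches = sum(1 for attr, val in aux.items() if record.get(attr) == val)
    return matches / len(aux)

def isolate(aux, database, threshold=0.2):
    """Return the index of the best-matching record if its score exceeds the
    second best by more than `threshold` (an eccentricity test), else None."""
    scores = sorted(((score(aux, r), i) for i, r in enumerate(database)),
                    reverse=True)
    (best, i), (second, _) = scores[0], scores[1]
    return i if best - second > threshold else None

# Hypothetical anonymized ratings database and partial auxiliary knowledge.
db = [
    {"m1": 5, "m2": 3, "m3": 1},
    {"m1": 5, "m2": 1, "m3": 4},
    {"m1": 2, "m2": 3, "m3": 4},
]
aux = {"m1": 5, "m3": 1}
```

The paper's contribution is proving which sparsity properties of the database make this kind of gap (and hence isolation) likely; the sketch only shows the attack's mechanics.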


Archive | 2016

Towards a Science of Security Games

Thanh Hong Nguyen; Debarun Kar; Matthew Brown; Arunesh Sinha; Albert Xin Jiang; Milind Tambe

Security is a critical concern around the world. In many domains, from counter-terrorism to sustainability, limited security resources prevent full security coverage at all times; instead, these limited resources must be scheduled, while simultaneously taking into account different target priorities, the responses of the adversaries to the security posture, and potential uncertainty over adversary types.


Journal of Cybersecurity | 2015

From physical security to cybersecurity

Arunesh Sinha; Thanh Hong Nguyen; Debarun Kar; Matthew Brown; Milind Tambe; Albert Xin Jiang

Security is a critical concern around the world. In many domains, from cybersecurity to sustainability, limited security resources prevent complete security coverage at all times. Instead, these limited resources must be scheduled (or allocated or deployed), while simultaneously taking into account the importance of different targets, the responses of the adversaries to the security posture, and the potential uncertainties in adversary payoffs and observations. Computational game theory can help generate such security schedules. Indeed, casting the problem as a Stackelberg game, we have developed new algorithms that are now deployed over multiple years in multiple applications for scheduling of security resources. These applications are leading to real-world use-inspired research in the emerging research area of “security games.” The research challenges posed by these applications include scaling up security games to real-world-sized problems, handling multiple types of uncertainty, and dealing with bounded rationality of human adversaries. In the cybersecurity domain, the interaction between the defender and adversary is quite complicated, with a high degree of incomplete information and uncertainty. While solutions have been proposed for parts of the problem space in cybersecurity, the need of the hour is a comprehensive understanding of the whole space, including the interaction with the adversary. We highlight the innovations in security games that could be used to tackle the game problem in cybersecurity.
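The Stackelberg scheduling problem described above can be illustrated on a tiny two-target security game: the defender commits to a coverage distribution, the attacker best-responds, and ties break in the defender's favor (the strong Stackelberg equilibrium convention). The deployed algorithms solve this at scale with mathematical programming; this sketch just brute-forces a coverage grid, and all target names and payoffs are made up:

```python
# Payoffs per target: (defender if covered, defender if uncovered,
#                      attacker if covered, attacker if uncovered)
TARGETS = {
    "port":    (1, -5, -2, 4),
    "airport": (2, -3, -1, 3),
}

def attacker_util(target, cov):
    _, _, ac, au = TARGETS[target]
    return cov[target] * ac + (1 - cov[target]) * au

def defender_util(target, cov):
    dc, du, _, _ = TARGETS[target]
    return cov[target] * dc + (1 - cov[target]) * du

def best_coverage(steps=1000):
    """Grid-search the defender's optimal split of one resource."""
    best = (float("-inf"), None)
    for k in range(steps + 1):
        c = k / steps                       # coverage on "port"; rest on "airport"
        cov = {"port": c, "airport": 1 - c}
        # Attacker best-responds; ties broken in the defender's favor (SSE).
        top = max(attacker_util(t, cov) for t in TARGETS)
        attacked = max((t for t in TARGETS if attacker_util(t, cov) == top),
                       key=lambda t: defender_util(t, cov))
        val = defender_util(attacked, cov)
        if val > best[0]:
            best = (val, cov)
    return best

value, cov = best_coverage()
```

The optimum balances the attacker's incentives across targets, which is why partial coverage everywhere usually beats fully protecting one target.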


decision and game theory for security | 2012

Audit Mechanisms for Provable Risk Management and Accountable Data Governance

Jeremiah Blocki; Nicolas Christin; Anupam Datta; Arunesh Sinha

Organizations that collect and use large volumes of personal information are expected under the principle of accountable data governance to take measures to protect data subjects from risks that arise from inappropriate uses of this information. In this paper, we focus on a specific class of mechanisms—audits to identify policy violators coupled with punishments—that organizations such as hospitals, financial institutions, and Web services companies may adopt to protect data subjects from privacy and security risks stemming from inappropriate information use by insiders. We model the interaction between the organization (defender) and an insider (adversary) during the audit process as a repeated game. We then present an audit strategy for the defender. The strategy requires the defender to commit to its action and when paired with the adversary’s best response to it, provably yields an asymmetric subgame perfect equilibrium. We then present two mechanisms for allocating the total audit budget for inspections across all games the organization plays with different insiders. The first mechanism allocates budget to maximize the utility of the organization. Observing that this mechanism protects the organization’s interests but may not protect data subjects, we introduce an accountable data governance property, which requires the organization to conduct thorough audits and impose punishments on violators. The second mechanism we present achieves this property. We provide evidence that a number of parameters in the game model can be estimated from prior empirical studies and suggest specific studies that can help estimate other parameters. Finally, we use our model to predict observed practices in industry (e.g., differences in punishment rates of doctors and nurses for the same violation) and the effectiveness of policy interventions (e.g., data breach notification laws and government audits) in encouraging organizations to adopt accountable data governance practices.
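The deterrence logic underlying inspection-plus-punishment mechanisms can be shown with a back-of-the-envelope calculation (not the paper's equilibrium analysis): a risk-neutral insider deviates only if the gain exceeds the expected punishment, which pins down the minimum inspection rate and hence the audit budget. All numbers are hypothetical:

```python
def min_inspection_rate(gain, fine, detect_prob=1.0):
    """Smallest inspection probability that deters a risk-neutral insider:
    violating pays only if gain > rate * detect_prob * fine.
    Returns None when no rate at or below 1 can deter."""
    required = gain / (fine * detect_prob)
    return required if required <= 1.0 else None

def expected_inspections(num_insiders, rate):
    """Expected audit workload per period at a uniform inspection rate."""
    return num_insiders * rate

rate = min_inspection_rate(gain=10, fine=50)       # 10/50 = 0.2
workload = expected_inspections(num_insiders=500, rate=rate)
```

This is why the budget allocation across insiders matters in the paper: lowering the inspection rate on any one sub-game below this threshold removes deterrence there entirely.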


symposium on web systems evolution | 2009

Migrating a Web-based application to a service-based system - an experience report

Pushparani Bhallamudi; Scott R. Tilley; Arunesh Sinha

Service-Oriented Architecture (SOA) is a way of designing, developing, deploying, and managing enterprise systems where business needs and technical solutions are closely aligned. Increased return on investment (ROI) is a prime motivating factor for many organizations to migrate their existing systems to ones based on services. This paper details the experience of migrating a traditional Web-based application to a SOA-based system. The paper focuses on the business and technical issues that motivated the migration, but it also describes the advantages of the new service-oriented system in terms of ROI.


Games | 2016

Keeping Pace with Criminals: An Extended Study of Designing Patrol Allocation against Adaptive Opportunistic Criminals

Chao Zhang; Shahrzad Gholami; Debarun Kar; Arunesh Sinha; Manish Jain; Ripple Goyal; Milind Tambe

Police patrols are used ubiquitously to deter crimes in urban areas. A distinctive feature of urban crimes is that criminals react opportunistically to patrol officers' assignments. Compared to strategic attackers (such as terrorists) with a well-laid out plan, opportunistic criminals are less strategic in planning attacks and more flexible in executing them. In this paper, our goal is to recommend optimal police patrolling strategy against such opportunistic criminals. We first build a game-theoretic model that captures the interaction between officers and opportunistic criminals. However, while different models of adversary behavior have been proposed, their exact form remains uncertain. Rather than simply hypothesizing a model as done in previous work, one key contribution of this paper is to learn the model from real-world criminal activity data. To that end, we represent the criminal behavior and the interaction with the patrol officers as parameters of a Dynamic Bayesian Network (DBN), enabling application of standard algorithms such as EM to learn the parameters. Our second contribution is a sequence of modifications to the DBN representation that allow for a compact representation of the model, resulting in better learning accuracy and increased speed of learning when the EM algorithm is used for the modified DBN. These modifications use marginalization approaches and exploit the structure of this problem. Finally, our third contribution is an iterative learning and planning mechanism that keeps updating the adversary model periodically. We demonstrate the efficiency of our learning algorithm by applying it to a real data set of criminal activity obtained from the police department of the University of Southern California (USC), situated in Los Angeles, USA. We project a significant reduction in crime rate using our planning strategy as opposed to the actual strategy deployed by the police department. We also demonstrate the improvement in crime prevention in simulations when we use our iterative planning and learning mechanism compared to just learning once and planning. This work was done in collaboration with the police department of USC.
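The DBN parameter-learning step can be illustrated in the fully observed special case, where estimating a conditional probability table reduces to conditional frequency counts; EM generalizes exactly this by replacing observed counts with expected counts when the crime states are hidden. The variables and data here are made up and far simpler than the paper's model:

```python
from collections import Counter

def learn_cpt(transitions):
    """Estimate P(crime_t | patrol_t, crime_{t-1}) from fully observed
    (patrol, prev_crime, crime) triples by conditional frequency counts."""
    joint, cond = Counter(), Counter()
    for patrol, prev, crime in transitions:
        joint[(patrol, prev, crime)] += 1
        cond[(patrol, prev)] += 1
    return {k: joint[k] / cond[(k[0], k[1])] for k in joint}

# Hypothetical observations: patrol presence suppresses crime.
data = [
    (1, 0, 0), (1, 0, 0), (1, 0, 1),   # patrol present: crime mostly absent
    (0, 0, 1), (0, 0, 1), (0, 0, 0),   # no patrol: crime more likely
]
cpt = learn_cpt(data)
```

The paper's compactness modifications matter because the real DBN conditions on many targets and officer positions at once, so naive tables like this one blow up combinatorially.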


ieee computer security foundations symposium | 2015

Program Actions as Actual Causes: A Building Block for Accountability

Anupam Datta; Deepak Garg; Dilsun Kirli Kaynar; Divya Sharma; Arunesh Sinha

Protocols for tasks such as authentication, electronic voting, and secure multiparty computation ensure desirable security properties if agents follow their prescribed programs. However, if some agents deviate from their prescribed programs and a security property is violated, it is important to hold agents accountable by determining which deviations actually caused the violation. Motivated by these applications, we initiate a formal study of program actions as actual causes. Specifically, we define in an interacting program model what it means for a set of program actions to be an actual cause of a violation. We present a sound technique for establishing program actions as actual causes. We demonstrate the value of this formalism in two ways. First, we prove that violations of a specific class of safety properties always have an actual cause. Thus, our definition applies to relevant security properties. Second, we provide a cause analysis of a representative protocol designed to address weaknesses in the current public key certification infrastructure.
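A crude flavor of the actual-cause question can be given by a counterfactual test over a toy action log: find minimal sets of actions whose removal averts the violation. This is much coarser than the paper's formal definition over interacting programs (which handles nondeterminism and program structure); the log and violation predicate are invented:

```python
from itertools import combinations

def actual_causes(actions, violates):
    """Return minimal tuples of actions whose removal averts the violation,
    a naive counterfactual test over a fully observed action log."""
    if not violates(actions):
        return []
    minimal = []
    for r in range(1, len(actions) + 1):
        for subset in combinations(actions, r):
            remaining = [a for a in actions if a not in subset]
            if not violates(remaining):
                # Keep only subsets not containing an already-found minimal set.
                if not any(set(m) <= set(subset) for m in minimal):
                    minimal.append(subset)
    return minimal

# Toy protocol log: a violation occurs iff a key is both leaked and used.
log = ["gen_key", "leak_key", "use_key"]
violates = lambda acts: "leak_key" in acts and "use_key" in acts
```

Note that "gen_key" is necessary for the run but is not flagged, which mirrors the paper's point that accountability should single out the deviant actions, not every precondition.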


decision and game theory for security | 2016

Data Exfiltration Detection and Prevention: Virtually Distributed POMDPs for Practically Safer Networks

Sara Marie Mc Carthy; Arunesh Sinha; Milind Tambe; Pratyusa K. Manadhata

We address the challenge of detecting and addressing advanced persistent threats (APTs) in a computer network, focusing in particular on the challenge of detecting data exfiltration over Domain Name System (DNS) queries, where existing detection sensors are imperfect and lead to noisy observations about the network's security state. Data exfiltration over DNS queries involves unauthorized transfer of sensitive data from an organization to a remote adversary through a DNS data tunnel to a malicious web domain. Given the noisy sensors, previous work has illustrated that standard approaches fail to satisfactorily rise to the challenge of detecting exfiltration attempts. Instead, we propose a decision-theoretic technique that sequentially plans to accumulate evidence under uncertainty while taking into account the cost of deploying such sensors. More specifically, we provide a fast scalable POMDP formulation to address the challenge, where the efficiency of the formulation is based on two key contributions: (i) we use a virtually distributed POMDP (VD-POMDP) formulation, motivated by previous work in distributed POMDPs with sparse interactions, where individual policies for different sub-POMDPs are planned separately but their sparse interactions are only resolved at execution time to determine the joint actions to perform; (ii) we allow for abstraction in planning for speedups, and then use a fast MILP to implement the abstraction while resolving any interactions. This allows us to determine optimal sensing strategies, leveraging information from many noisy detectors, subject to constraints imposed by network topology, forwarding rules, and performance costs on the frequency, scope, and efficiency of sensing we can perform.
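The "accumulate evidence under uncertainty" part of any POMDP approach rests on Bayesian belief updates from noisy sensors. The sketch below runs one such filter for a two-state toy version of the problem (channel compromised or not); the sensor rates and transition probabilities are hypothetical, and the real VD-POMDP plans actions on top of such beliefs rather than just tracking them:

```python
def belief_update(belief, obs, sense_model, trans):
    """One Bayes-filter step for a two-state security POMDP.

    belief      : current P(compromised)
    obs         : 1 if the DNS exfiltration detector fired, else 0
    sense_model : sense_model[state][obs] = P(obs | state)
    trans       : (P(compromise starts), P(compromise persists))
    """
    # Predict: propagate the compromise dynamics one step.
    pred = belief * trans[1] + (1 - belief) * trans[0]
    # Correct: reweight by the likelihood of the sensor reading.
    num = pred * sense_model[1][obs]
    den = num + (1 - pred) * sense_model[0][obs]
    return num / den

# Hypothetical sensor: fires on 90% of real tunnels, 20% false-positive rate.
sense = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.1, 1: 0.9}}
trans = (0.05, 0.99)

b = 0.05
for obs in (1, 1, 1):          # three alerts in a row
    b = belief_update(b, obs, sense, trans)
```

A single noisy alert barely moves the belief, but repeated alerts drive it toward certainty, which is why sequential evidence accumulation beats thresholding individual detections.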


international joint conference on artificial intelligence | 2017

Don't Bury your Head in Warnings: A Game-Theoretic Approach for Intelligent Allocation of Cyber-security Alerts

Aaron Schlenker; Haifeng Xu; Mina Guirguis; Christopher Kiekintveld; Arunesh Sinha; Milind Tambe; Solomon Y. Sonya; Darryl Balderas; Noah Dunstatter

In recent years, there have been a number of successful cyber attacks on enterprise networks by malicious actors. These attacks generate alerts which must be investigated by cyber analysts to determine whether they correspond to an attack. Unfortunately, there are orders of magnitude more alerts than cyber analysts, a trend expected to continue into the future, creating a need to find optimal assignments of the incoming alerts to analysts in the presence of a strategic adversary. We address this challenge with the four following contributions: (1) a cyber allocation game (CAG) model for the cyber network protection domain, (2) an NP-hardness proof for computing the optimal strategy for the defender, (3) techniques to find the optimal allocation of experts to alerts in CAG in the general case and key special cases, and (4) heuristics to achieve significant scale-up in CAGs with minimal loss in solution quality.
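The allocation problem can be illustrated with a brute-force toy version (which the paper's NP-hardness result says will not scale): assign analysts to alert classes so as to maximize the worst-case probability that the adversary's attack alert gets investigated. The alert classes, analyst capacity, and coverage model are all invented simplifications of the CAG:

```python
from itertools import product

def best_allocation(num_analysts, alert_counts, capacity=10):
    """Brute-force the analyst-to-alert-class assignment that maximizes the
    worst-case (adversary-chosen) probability an attack alert is investigated.
    Each analyst reviews `capacity` alerts per shift."""
    classes = list(alert_counts)
    best = (-1.0, None)
    # Enumerate how many analysts go to each class (sums to num_analysts).
    for alloc in product(range(num_analysts + 1), repeat=len(classes)):
        if sum(alloc) != num_analysts:
            continue
        covered = {c: min(1.0, a * capacity / alert_counts[c])
                   for c, a in zip(classes, alloc)}
        worst = min(covered.values())       # adversary hides in the weakest class
        if worst > best[0]:
            best = (worst, dict(zip(classes, alloc)))
    return best

value, alloc = best_allocation(3, {"low": 10, "high": 40})
```

Because the adversary targets the least-covered class, the optimum equalizes coverage rather than piling analysts onto the busiest queue; the paper's techniques and heuristics recover this kind of solution without enumeration.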

Collaboration


Dive into Arunesh Sinha's collaborations.

Top Co-Authors

Milind Tambe | University of Southern California
Anupam Datta | Carnegie Mellon University
Jeremiah Blocki | Carnegie Mellon University
Nicolas Christin | Carnegie Mellon University
Thanh Hong Nguyen | University of Southern California
Matthew Brown | University of Southern California
Debarun Kar | University of Southern California
Chao Zhang | University of Southern California
Fei Fang | Carnegie Mellon University
Shahrzad Gholami | University of Southern California