Jeffrey Pawlick
New York University
Publications
Featured research published by Jeffrey Pawlick.
Decision and Game Theory for Security | 2015
Jeffrey Pawlick; Sadegh Farhang; Quanyan Zhu
Access to the cloud has the potential to provide scalable and cost-effective enhancements of physical devices through the use of advanced computational processes run on apparently limitless cyber infrastructure. On the other hand, cyber-physical systems and cloud-controlled devices face numerous design challenges; among them is security. In particular, recent advances in adversary technology pose Advanced Persistent Threats (APTs), which may stealthily and completely compromise a cyber system. In this paper, we design a framework for the security of cloud-based systems that specifies when a device should trust commands from a cloud that may be compromised. This interaction can be considered a game between three players: a cloud defender/administrator, an attacker, and a device. We use traditional signaling games to model the interaction between the cloud and the device, and we use the recently proposed FlipIt game to model the struggle between the defender and attacker for control of the cloud. Because attacks upon the cloud can occur without the defender's knowledge, we assume that strategies in both games are chosen according to prior commitment. This framework requires a new equilibrium concept, which we call Gestalt Equilibrium: a fixed point that expresses the interdependence of the signaling and FlipIt games. We present the solution to this fixed-point problem under certain parameter cases, and illustrate an example application to cloud control of an unmanned vehicle. Our results contribute to the growing understanding of cloud-controlled systems.
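The FlipIt layer of this framework can be illustrated with a short simulation. The sketch below is illustrative only (the paper analyzes strategies chosen under prior commitment, not this exact setting): it estimates the fraction of time a stealthy attacker controls the cloud when both players move at independent exponential rates, which is the kind of compromise probability the signaling game takes as its prior.

```python
import random

def flipit_attacker_share(defender_rate, attacker_rate, horizon=50000.0, seed=0):
    # Monte Carlo estimate of the fraction of time the attacker controls the
    # resource when both players "flip" at independent exponential rates.
    # Moves are stealthy: neither player observes the other's flips, and the
    # resource belongs to whoever moved most recently.
    rng = random.Random(seed)
    t, owner_is_attacker, attacker_time = 0.0, False, 0.0
    next_d = rng.expovariate(defender_rate)  # next defender move (absolute time)
    next_a = rng.expovariate(attacker_rate)  # next attacker move (absolute time)
    while t < horizon:
        nxt = min(next_d, next_a, horizon)
        if owner_is_attacker:
            attacker_time += nxt - t
        t = nxt
        if t >= horizon:
            break
        if next_d <= next_a:
            owner_is_attacker = False
            next_d = t + rng.expovariate(defender_rate)
        else:
            owner_is_attacker = True
            next_a = t + rng.expovariate(attacker_rate)
    return attacker_time / horizon
```

For exponential play, the attacker's long-run share converges to attacker_rate / (attacker_rate + defender_rate); the device can use this share as its prior probability that a command from the cloud comes from a compromised source.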
IEEE Transactions on Information Forensics and Security | 2017
Jeffrey Pawlick; Quanyan Zhu
Advances in computation, sensing, and networking have led to interest in the Internet of Things (IoT) and cyber-physical systems (CPS). Developments concerning the IoT and CPS will improve critical infrastructure, vehicle networks, and personal health products. Unfortunately, these systems are vulnerable to attack. Advanced persistent threats (APTs) are a class of long-term attacks in which well-resourced adversaries infiltrate a network and use obfuscation to remain undetected. In a CPS under APTs, each device must decide whether to trust other components that may be compromised. In this paper, we propose a concept of trust (strategic trust) that uses game theory to capture the adversarial and strategic nature of CPS security. Specifically, we model an interaction between the administrator of a cloud service, an attacker, and a device that decides whether to trust signals from the vulnerable cloud. Our framework consists of a simultaneous signaling game and the FlipIt game. The equilibrium outcome in the signaling game determines the incentives in the FlipIt game. In turn, the equilibrium outcome in the FlipIt game determines the prior probabilities in the signaling game. The Gestalt Nash equilibrium (GNE) characterizes the steady state of the overall macro-game. The novel contributions of this paper include proofs of the existence, uniqueness, and stability of the GNE. We also apply GNEs to strategically design a trust mechanism for a cloud-assisted insulin pump. Without requiring the use of historical data, the GNE obtains a risk threshold beyond which the pump should not trust messages from the cloud. Our framework contributes to a modeling paradigm called games-of-games.
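The coupling described above, in which the signaling-game equilibrium sets the incentives in the FlipIt game and the FlipIt outcome sets the prior in the signaling game, can be sketched as a damped fixed-point iteration. Every functional form below (linear trust decay, linear attacker best response, exponential-play FlipIt share) is a hypothetical stand-in for illustration, not the paper's actual derivation.

```python
def gestalt_fixed_point(gain=1.0, cost=0.5, defend_rate=1.0,
                        risk_threshold=0.3, damping=0.5, iters=200):
    # Alternate between the two coupled games until the compromise prior
    # stabilizes; the stable prior plays the role of a GNE-style steady state.
    prior = 0.0
    for _ in range(iters):
        # Signaling-game stand-in: the device's trust in cloud messages
        # decays linearly as the prior probability of compromise rises.
        trust = max(0.0, 1.0 - prior / risk_threshold)
        # FlipIt stand-in: the attacker's best-response move rate grows with
        # how often a compromised cloud would actually be obeyed.
        rate = max(0.0, gain * trust - cost)
        # Exponential-play FlipIt: long-run fraction of time the attacker
        # holds the cloud, which becomes next round's prior.
        new_prior = rate / (rate + defend_rate)
        # Damped update so the iteration settles at the fixed point rather
        # than oscillating between the two games' best responses.
        prior = (1.0 - damping) * prior + damping * new_prior
    return prior
```

With these toy forms the iteration settles at an interior prior; if attacking is costly enough (cost >= gain), the attacker never moves and the prior stays at zero, mirroring the risk-threshold flavor of the insulin-pump design.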
International Workshop on Information Forensics and Security | 2016
Jeffrey Pawlick; Quanyan Zhu
Data is the new oil: this refrain is repeated extensively in the age of internet tracking, machine learning, and data analytics. As data collection becomes more personal and pervasive, however, public pressure is mounting for privacy protection. In this atmosphere, developers have created applications that add noise to user attributes visible to tracking algorithms. This creates a strategic interaction between trackers and users when incentives to maintain privacy and improve accuracy are misaligned. In this paper, we conceptualize this conflict through an (N+1)-player, augmented Stackelberg game. First, a machine learner declares a privacy protection level, and then users respond by choosing their own perturbation amounts. We use the general frameworks of differential privacy and empirical risk minimization to quantify the utility components due to privacy and accuracy, respectively. In equilibrium, each user perturbs her data independently, which leads to a high net loss in accuracy. To remedy this scenario, we show that the learner improves his utility by proactively perturbing the data himself. While other work in this area has studied privacy markets and mechanism design for truthful reporting of user information, we take a different viewpoint by considering both user and learner perturbation.
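The accuracy cost of independent user perturbation, versus the learner perturbing the data himself, can be sketched with the standard Laplace mechanism from differential privacy. The helper names below are illustrative and the utility model is simplified to a mean estimate; the point is that perturbing each record locally injects far more noise into the learned statistic than one centrally calibrated draw at the same epsilon.

```python
import math
import random
import statistics

def laplace(rng, scale):
    # Inverse-CDF sample from the Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def local_mean(values, epsilon, sensitivity, rng):
    # Local model: every user adds her own Laplace(sensitivity/epsilon) noise
    # before releasing her attribute; the learner averages the noisy reports.
    scale = sensitivity / epsilon
    return statistics.fmean(v + laplace(rng, scale) for v in values)

def central_mean(values, epsilon, sensitivity, rng):
    # Central model: the learner perturbs the aggregate once. The mean of n
    # records has sensitivity `sensitivity / n`, so the same epsilon needs
    # roughly n times less noise than the local model.
    n = len(values)
    return statistics.fmean(values) + laplace(rng, sensitivity / (n * epsilon))
```

With n users, the locally perturbed mean carries noise variance on the order of 2(sensitivity/epsilon)^2 / n, while the central estimate's noise variance shrinks by a further factor of n, which is one way to see why coordinated perturbation by the learner can dominate independent perturbation by users.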
arXiv: Cryptography and Security | 2015
Jeffrey Pawlick; Quanyan Zhu
Decision and Game Theory for Security | 2017
Jeffrey Pawlick; Quanyan Zhu
Archive | 2017
Jeffrey Pawlick; Quanyan Zhu
arXiv: Cryptography and Security | 2018
Jeffrey Pawlick; Edward Colbert; Quanyan Zhu
arXiv: Cryptography and Security | 2018
Jeffrey Pawlick; Juntao Chen; Quanyan Zhu
IEEE Global Conference on Signal and Information Processing | 2017
Jeffrey Pawlick; Quanyan Zhu
Communications and Networking Symposium | 2017
Jeffrey Pawlick; Quanyan Zhu