Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Z. Berkay Celik is active.

Publication


Featured research published by Z. Berkay Celik.


IEEE European Symposium on Security and Privacy | 2016

The Limitations of Deep Learning in Adversarial Settings

Nicolas Papernot; Patrick D. McDaniel; Somesh Jha; Matthew Fredrikson; Z. Berkay Celik; Ananthram Swami

Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified into specific targets by a DNN with a 97% adversarial success rate while modifying on average only 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
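
The attack described here steers a targeted misclassification by following the gradient of the target class score with respect to the input. Below is a minimal sketch of that idea; the `model`, the one-feature-at-a-time selection, and the `theta` step size are illustrative assumptions, not the authors' exact algorithm (which builds a full saliency map over feature pairs).

```python
# Minimal sketch of saliency-guided, targeted perturbation (PyTorch).
# Assumes a flattened 1-D feature vector in [0, 1] and a classifier `model`.
import torch

def craft_targeted_sample(model, x, target, max_features=40, theta=1.0):
    """Perturb the single most salient feature per step, up to a budget,
    until the model assigns the adversary-chosen target class."""
    x_adv = x.clone().detach()
    for _ in range(max_features):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0)).squeeze(0)
        if logits.argmax().item() == target:   # success: classified as target
            break
        # Gradient of the target logit w.r.t. every input feature.
        grad = torch.autograd.grad(logits[target], x_adv)[0]
        x_adv = x_adv.detach()
        idx = grad.abs().argmax()              # most influential feature
        x_adv[idx] = (x_adv[idx] + theta * grad[idx].sign()).clamp(0.0, 1.0)
    return x_adv
```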


Computer and Communications Security | 2017

Practical Black-Box Attacks against Machine Learning

Nicolas Papernot; Patrick D. McDaniel; Ian J. Goodfellow; Somesh Jha; Z. Berkay Celik; Ananthram Swami

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs while appearing unmodified to human observers. Potential attacks include having malicious content such as malware identified as legitimate or controlling vehicle behavior. Yet all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists of training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples and find that they are misclassified by the targeted DNN. To perform a real-world and properly blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes; they yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%, respectively. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
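
The black-box strategy above needs only label queries. A hedged sketch under simplifying assumptions follows: `query_oracle` (the remote model's labeling interface), the logistic regression substitute, and the augmentation step size `lam` are all illustrative, and the final step uses a fast-gradient-sign perturbation on the substitute rather than the paper's full recipe.

```python
# Sketch of substitute training with synthetic data augmentation, then a
# transfer attack crafted on the local substitute (scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_substitute(query_oracle, seed_inputs, rounds=3, lam=0.1):
    """Grow a synthetic training set around the oracle's decision boundary
    and refit the local substitute each round; only labels are observed."""
    X = np.asarray(seed_inputs, dtype=float)
    sub = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        y = np.array([query_oracle(x) for x in X])   # label queries only
        sub.fit(X, y)
        # Step each point along the substitute's class-score gradient.
        if sub.coef_.shape[0] > 1:                   # multiclass
            grads = sub.coef_[y]
        else:                                        # binary
            grads = np.outer(2 * y - 1, sub.coef_[0])
        X = np.vstack([X, X + lam * np.sign(grads)])
    return sub

def fgsm_on_substitute(sub, x, eps=0.3):
    """Fast-gradient-sign example against a binary substitute; by
    transferability it often fools the remote target as well."""
    return x + eps * np.sign(sub.coef_[0])           # pushes toward class 1
```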


IEEE Symposium on Security and Privacy | 2016

Machine Learning in Adversarial Settings

Patrick D. McDaniel; Nicolas Papernot; Z. Berkay Celik

Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomena. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples--malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.


Military Communications Conference | 2015

Malware traffic detection using tamper resistant features

Z. Berkay Celik; Robert J. Walls; Patrick D. McDaniel; Ananthram Swami

This paper presents a framework for evaluating the transport-layer feature space of malware heartbeat traffic. We utilize these features in a prototype detection system to distinguish malware traffic from traffic generated by legitimate applications. In contrast to previous work, we eliminate features at risk of producing overly optimistic detection results, detect previously unobserved anomalous behavior, and rely only on tamper-resistant features, making it difficult for sophisticated malware to avoid detection. Further, we characterize the evolution of malware evasion techniques over time by examining the behavior of 16 malware families. In particular, we highlight the difficulty of detecting malware that uses traffic-shaping techniques to mimic legitimate traffic.
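
As a rough illustration of the pipeline the abstract outlines, the sketch below derives simple size and timing summaries from a flow and trains an off-the-shelf classifier. The exact feature set and model are assumptions for illustration; the paper's point is restricting attention to features malware cannot cheaply tamper with.

```python
# Sketch of a flow-level malware-traffic detector (scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flow_features(pkt_sizes, inter_arrivals):
    """Summarize one flow with size/timing statistics that are costly for
    malware to shape without disturbing its own heartbeat behavior."""
    s = np.asarray(pkt_sizes, dtype=float)
    t = np.asarray(inter_arrivals, dtype=float)
    return np.array([s.mean(), s.std(), s.max(),
                     t.mean(), t.std(), np.median(t)])

def train_detector(flows, labels):
    """flows: list of (pkt_sizes, inter_arrivals) pairs; labels: 1 = malware."""
    X = np.stack([flow_features(s, t) for s, t in flows])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```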


Computer and Communications Security | 2018

Detection under Privileged Information

Z. Berkay Celik; Patrick D. McDaniel; Rauf Izmailov; Nicolas Papernot; Ryan Sheatsley; Raquel Alvarez; Ananthram Swami

For well over a quarter century, detection systems have been driven by models learned from input features collected from real or simulated environments. An artifact (e.g., network event, potential malware sample, suspicious email) is deemed malicious or non-malicious based on its similarity to the learned model at runtime. However, the training of the models has been historically limited to only those features available at runtime. In this paper, we consider an alternate learning approach that trains models using privileged information--features available at training time but not at runtime--to improve the accuracy and resilience of detection systems. In particular, we adapt and extend recent advances in knowledge transfer, model influence, and distillation to enable the use of forensic or other data unavailable at runtime in a range of security domains. An empirical evaluation shows that privileged information increases precision and recall over a system with no privileged information: we observe up to 7.7% relative decrease in detection error for fast-flux bot detection, 8.6% for malware traffic detection, 7.3% for malware classification, and 16.9% for face recognition. We explore the limitations and applications of different privileged information techniques in detection systems. Such techniques provide a new means for detection systems to learn from data that would otherwise not be available at runtime.
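
One way to use training-time-only features, in the spirit of the distillation techniques the paper adapts, is to train a teacher on runtime plus privileged features and fit a runtime-only student to the teacher's softened predictions. The sketch below is a simplified binary version; the models, the temperature, and the confidence-weighted labels are illustrative assumptions rather than the paper's exact method.

```python
# Sketch of distillation-style learning under privileged information.
import numpy as np
from sklearn.linear_model import LogisticRegression

def distill_privileged(X_run, X_priv, y, temperature=2.0):
    """X_run: runtime features; X_priv: privileged (training-time-only)."""
    X_all = np.hstack([X_run, X_priv])
    teacher = LogisticRegression(max_iter=1000).fit(X_all, y)
    # Temperature-smoothed teacher probabilities serve as soft targets.
    soft = 1.0 / (1.0 + np.exp(-teacher.decision_function(X_all) / temperature))
    # Student sees runtime features only; confidence-weighted hard labels
    # approximate soft-label distillation here for simplicity.
    student = LogisticRegression(max_iter=1000).fit(
        X_run, (soft > 0.5).astype(int), sample_weight=np.abs(2 * soft - 1))
    return student
```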


The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology | 2018

Malware modeling and experimentation through parameterized behavior

Z. Berkay Celik; Patrick D. McDaniel; Thomas Bowen

Experimentation is critical to understanding how malware operates and to evaluating potential defenses. However, constructing the controlled environments needed for this experimentation is both time-consuming and error-prone. In this study, we highlight several common mistakes made by researchers and conclude that existing evaluations of malware detection techniques often lack both flexibility and transparency. For instance, we show that small variations in the malware's behavioral parameters can have a significant impact on the evaluation results. These variations, if unexplored, may lead to overly optimistic conclusions and detection systems that are ineffective in practice. To overcome these issues, we propose a framework to model malware behavior and guide systematic parameter selection. We evaluate our framework using a synthetic botnet executed within the CyberVAN testbed. Our study is intended to foster critical evaluation of proposed detection techniques and stymie unintentionally erroneous experimentation.
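
The framework's core idea, exposing malware behavior as explicit parameters and evaluating detectors across the whole parameter space rather than one hand-picked configuration, can be sketched as a simple grid sweep. The heartbeat parameters, their ranges, and the `detector` callable below are illustrative assumptions.

```python
# Sketch of parameterized-behavior experimentation via a grid sweep.
import itertools
import random

def synthetic_heartbeats(period_s, jitter_s, payload_bytes, n=100, seed=0):
    """Generate (inter_arrival, size) pairs for one bot parameterization."""
    rng = random.Random(seed)
    return [(period_s + rng.uniform(-jitter_s, jitter_s), payload_bytes)
            for _ in range(n)]

def sweep(detector, periods, jitters, payloads):
    """Report the detector's result for every behavioral configuration,
    making sensitivity to parameter choices explicit."""
    results = {}
    for p, j, b in itertools.product(periods, jitters, payloads):
        results[(p, j, b)] = detector(synthetic_heartbeats(p, j, b))
    return results
```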


Military Communications Conference | 2016

Mapping sample scenarios to operational models

Z. Berkay Celik; Nan Hu; Yun Li; Nicolas Papernot; Patrick D. McDaniel; Robert J. Walls; Jeff Rowe; Karl N. Levitt; Novella Bartolini; Thomas F. La Porta; Ritu Chadha

Achieving mission objectives in complex and increasingly adversarial networks is difficult even under the best of circumstances. Currently, there are few tools for reasoning about how to react to rapid changes in a given network's environmental state; that is, we do not know how to cope with adversarial actions in hostile environments. In this paper, we consider a preliminary operational model that combines the states, detection outputs, and agility maneuvers associated with a cyber-operation in hostile networks. The goal is to posit an operational model that aids in the successful completion of mission objectives at minimal maneuver cost. We present a host remediation case study that explores the efficacy of the proposed model in aiding operation completion.
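
If the operational model is read as a graph of operation states connected by agility maneuvers with costs, completing the mission at minimal maneuver cost becomes a shortest-path search. The sketch below makes that reading concrete; the state graph, cost structure, and Dijkstra formulation are assumptions for illustration, not the paper's formal model.

```python
# Sketch: cheapest maneuver sequence from the current state to mission
# completion, via uniform-cost (Dijkstra) search.
import heapq

def cheapest_maneuvers(transitions, start, goal):
    """transitions: {state: [(cost, next_state, maneuver), ...]}.
    Returns (total_cost, maneuver_list) or None if the goal is unreachable."""
    frontier = [(0, start, [])]
    visited = set()
    while frontier:
        cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return cost, plan
        if state in visited:
            continue
        visited.add(state)
        for step_cost, nxt, maneuver in transitions.get(state, []):
            if nxt not in visited:
                heapq.heappush(frontier,
                               (cost + step_cost, nxt, plan + [maneuver]))
    return None
```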


arXiv: Cryptography and Security | 2016

Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Nicolas Papernot; Patrick D. McDaniel; Ian J. Goodfellow; Somesh Jha; Z. Berkay Celik; Ananthram Swami


USENIX Security Symposium | 2011

Salting public traces with attack traffic to test flow classifiers

Z. Berkay Celik; Jayaram Raghuram; George Kesidis; David J. Miller


arXiv: Cryptography and Security | 2016

Extending Detection with Forensic Information

Z. Berkay Celik; Patrick D. McDaniel; Rauf Izmailov; Nicolas Papernot; Ananthram Swami

Collaboration


An overview of Z. Berkay Celik's collaborations.

Top Co-Authors

Patrick D. McDaniel, Pennsylvania State University
Nicolas Papernot, Pennsylvania State University
A. Selcuk Uluagac, Florida International University
Gang Tan, Pennsylvania State University
Hidayet Aksu, Florida International University
Abbas Acar, Florida International University
Robert J. Walls, Pennsylvania State University
Ryan Sheatsley, Pennsylvania State University