Publications


Featured research published by Blaine Nelson.


Machine Learning | 2010

The security of machine learning

Marco Barreno; Blaine Nelson; Anthony D. Joseph; J. D. Tygar

Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defense.


European Conference on Machine Learning | 2013

Evasion attacks against machine learning at test time

Battista Biggio; Igino Corona; Davide Maiorca; Blaine Nelson; Nedim Šrndić; Pavel Laskov; Giorgio Giacinto; Fabio Roli

In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier's performance under evasion attacks, and allows for a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
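
A minimal sketch of this style of gradient-based evasion is shown below, assuming scikit-learn and NumPy are available; an RBF-kernel SVM stands in for the deployed classifier, the gradient of its decision function is derived analytically from the support vectors, and the density/mimicry term used in the paper is omitted. The helper names (`decision_gradient`, `evade`) are illustrative, not from the paper's code.

```python
# Minimal sketch of gradient-based evasion against an RBF-SVM (assumed
# stand-in for the deployed classifier; density/mimicry term omitted).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, random_state=0)
clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

def decision_gradient(clf, x, gamma=0.5):
    """Analytic gradient of the RBF-SVM decision function w.r.t. the input x."""
    diffs = x - clf.support_vectors_                     # shape (n_SV, d)
    k = np.exp(-gamma * np.sum(diffs ** 2, axis=1))      # RBF kernel values
    return (clf.dual_coef_[0] * k) @ (-2.0 * gamma * diffs)

def evade(clf, x0, step=0.05, max_iter=500, gamma=0.5):
    """Follow the negative gradient until the sample is classified as negative.
    Step size and iteration budget may need tuning for other data."""
    x = x0.copy()
    for _ in range(max_iter):
        if clf.decision_function(x.reshape(1, -1))[0] < 0:
            break                                        # boundary crossed
        x -= step * decision_gradient(clf, x, gamma)
    return x

x0 = X[y == 1][0]                                        # a positive ("malicious") sample
x_adv = evade(clf, x0)
print("score before:", clf.decision_function(x0.reshape(1, -1))[0],
      "score after:", clf.decision_function(x_adv.reshape(1, -1))[0])
```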


Springer US | 2009

Misleading Learners: Co-opting Your Spam Filter

Blaine Nelson; Marco Barreno; Fuching Jack Chi; Anthony D. Joseph; Benjamin I. P. Rubinstein; Udam Saini; Charles A. Sutton; J. D. Tygar; Kai Xia

Using statistical machine learning for making security decisions introduces new vulnerabilities in large-scale systems. We show how an adversary can exploit statistical machine learning, as used in the SpamBayes spam filter, to render it useless, even if the adversary's access is limited to only 1% of the spam training messages. We demonstrate three new attacks that successfully make the filter unusable, prevent victims from receiving specific email messages, and cause spam emails to arrive in the victim's inbox.
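
A toy illustration of this kind of training-set poisoning is sketched below (assumptions: a simple token-score filter stands in for SpamBayes, and the hypothetical `train_token_scores` helper is far cruder than the filter's actual statistics). The point is only to show how padding spam training messages with benign vocabulary drags those tokens' spam scores upward.

```python
# Toy token-score filter (NOT SpamBayes) and a dictionary-style poisoning
# pass: spam training messages padded with benign vocabulary make those
# benign tokens look spammy once the filter retrains.
from collections import Counter

def train_token_scores(spam_msgs, ham_msgs, smoothing=1.0):
    """Per-token 'spamminess' in [0, 1] from message counts (hypothetical helper)."""
    spam_counts, ham_counts = Counter(), Counter()
    for m in spam_msgs:
        spam_counts.update(set(m.split()))
    for m in ham_msgs:
        ham_counts.update(set(m.split()))
    n_spam, n_ham = len(spam_msgs), len(ham_msgs)
    scores = {}
    for tok in set(spam_counts) | set(ham_counts):
        p_spam = (spam_counts[tok] + smoothing) / (n_spam + 2 * smoothing)
        p_ham = (ham_counts[tok] + smoothing) / (n_ham + 2 * smoothing)
        scores[tok] = p_spam / (p_spam + p_ham)
    return scores

ham = ["meeting agenda attached", "lunch tomorrow with the team"]
spam = ["cheap pills buy now", "win money fast"]
benign_dictionary = "meeting agenda lunch team tomorrow"

# Attacker controls part of the spam training stream: pad it with benign words.
poisoned_spam = spam + [m + " " + benign_dictionary for m in spam] * 3

clean = train_token_scores(spam, ham)
poisoned = train_token_scores(poisoned_spam, ham)
for tok in ["meeting", "agenda", "lunch"]:
    print(tok, round(clean[tok], 2), "->", round(poisoned[tok], 2))
```

Each benign token's spam score rises after retraining on the poisoned stream, pushing legitimate mail that uses those words toward the spam side of the decision threshold.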


Computer and Communications Security | 2008

Open problems in the security of learning

Marco Barreno; Peter L. Bartlett; Fuching Jack Chi; Anthony D. Joseph; Blaine Nelson; Benjamin I. P. Rubinstein; Udam Saini; J. D. Tygar

Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions towards the end of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important to understand the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.


Neurocomputing | 2015

Support vector machines under adversarial label contamination

Huang Xiao; Battista Biggio; Blaine Nelson; Han Xiao; Claudia Eckert; Fabio Roli

Machine learning algorithms are increasingly being applied in security-related tasks such as spam and malware detection, although their security properties against deliberate attacks have not yet been widely understood. Intelligent and adaptive attackers may indeed exploit specific vulnerabilities exposed by machine learning techniques to violate system security. Being robust to adversarial data manipulation is thus an important, additional requirement for machine learning algorithms to successfully operate in adversarial settings. In this work, we evaluate the security of Support Vector Machines (SVMs) to well-crafted, adversarial label noise attacks. In particular, we consider an attacker that aims to maximize the SVM's classification error by flipping a number of labels in the training data. We formalize a corresponding optimal attack strategy, and solve it by means of heuristic approaches to keep the computational complexity tractable. We report an extensive experimental analysis on the effectiveness of the considered attacks against linear and non-linear SVMs, both on synthetic and real-world datasets. We finally argue that our approach can also provide useful insights for developing more secure SVM learning algorithms, and also novel techniques in a number of related research areas, such as semi-supervised and active learning.
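
The label-flipping setting can be illustrated with a simple greedy heuristic, sketched below under the assumption that scikit-learn is available; this is a stand-in for intuition, not the optimal attack strategy formalized in the paper.

```python
# Greedy label-flipping sketch (illustrative heuristic, not the paper's
# formalized attack): within a budget, flip the training label whose flip
# most reduces held-out accuracy after retraining a linear SVM.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

def greedy_label_flips(X_tr, y_tr, X_te, y_te, budget=10):
    y_adv = y_tr.copy()
    flipped = set()
    for _ in range(budget):
        best_i, best_acc = None, float("inf")
        for i in range(len(y_adv)):
            if i in flipped:
                continue
            y_try = y_adv.copy()
            y_try[i] = 1 - y_try[i]                     # candidate flip
            acc = LinearSVC(dual=False).fit(X_tr, y_try).score(X_te, y_te)
            if acc < best_acc:                          # keep the most damaging flip
                best_i, best_acc = i, acc
        flipped.add(best_i)
        y_adv[best_i] = 1 - y_adv[best_i]
    return y_adv

clean_acc = LinearSVC(dual=False).fit(X_tr, y_tr).score(X_te, y_te)
y_adv = greedy_label_flips(X_tr, y_tr, X_te, y_te)
adv_acc = LinearSVC(dual=False).fit(X_tr, y_adv).score(X_te, y_te)
print("clean accuracy:", clean_acc, "after 10 flips:", adv_acc)
```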


arXiv: Learning | 2014

Security Evaluation of Support Vector Machines in Adversarial Environments

Battista Biggio; Igino Corona; Blaine Nelson; Benjamin I. P. Rubinstein; Davide Maiorca; Giorgio Fumera; Giorgio Giacinto; Fabio Roli

Support vector machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion) or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository.


International Conference on Machine Learning | 2007

Revisiting probabilistic models for clustering with pair-wise constraints

Blaine Nelson; Ira Cohen

We revisit recently proposed algorithms for probabilistic clustering with pair-wise constraints between data points. We evaluate and compare existing techniques in terms of robustness to misspecified constraints. We show that the technique that strictly enforces the given constraints, namely the chunklet model, produces poor results even under a small number of misspecified constraints. We further show that methods that penalize constraint violation are more robust to misspecified constraints but have undesirable local behaviors. Based on this evaluation, we propose a new learning technique, extending the chunklet model to allow soft constraints represented by an intuitive measure of confidence in the constraint.
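
As a rough illustration of soft pairwise constraints weighted by confidence, the sketch below uses a penalized k-means assignment rather than the probabilistic chunklet model from the paper; the `soft_constrained_kmeans` helper and its constraint format are hypothetical.

```python
# Penalized k-means sketch of soft pairwise constraints (hypothetical helper,
# not the paper's chunklet/EM model): violating a must-link or cannot-link
# constraint adds a cost scaled by a confidence weight, so low-confidence
# (possibly misspecified) constraints can be overridden by the data.
import numpy as np

def soft_constrained_kmeans(X, k, must_link, cannot_link, w=1.0,
                            n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):
            cost = np.sum((centers - x) ** 2, axis=1)
            for (a, b, conf) in must_link:          # (point, point, confidence)
                j = b if a == i else a if b == i else None
                if j is not None:                   # penalize clusters other than j's
                    cost += w * conf * (np.arange(k) != labels[j])
            for (a, b, conf) in cannot_link:
                j = b if a == i else a if b == i else None
                if j is not None:                   # penalize sharing j's cluster
                    cost += w * conf * (np.arange(k) == labels[j])
            labels[i] = int(np.argmin(cost))
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

# Two well-separated blobs; a deliberately misspecified cannot-link constraint
# with modest confidence is outvoted by the data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(5.0, 1.0, (30, 2))])
labels, _ = soft_constrained_kmeans(X, k=2, must_link=[(0, 1, 0.9)],
                                    cannot_link=[(0, 2, 0.3)], w=2.0)
```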


Privacy and Security Issues in Data Mining and Machine Learning | 2010

Classifier evasion: models and open problems

Blaine Nelson; Benjamin I. P. Rubinstein; Ling Huang; Anthony D. Joseph; J. D. Tygar

As a growing number of software developers apply machine learning to make key decisions in their systems, adversaries are adapting and launching ever more sophisticated attacks against these systems. The near-optimal evasion problem considers an adversary that searches for a low-cost negative instance by submitting a minimal number of queries to a classifier, in order to effectively evade the classifier. In this position paper, we posit several open problems and alternative variants to the near-optimal evasion problem. Solutions to these problems would significantly advance the state-of-the-art in secure machine learning.
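
The query model behind near-optimal evasion can be illustrated with a simple membership-query line search, sketched below with a scikit-learn model as a stand-in classifier; the paper analyzes query complexity bounds rather than this particular routine.

```python
# Membership-query line search (illustrative of the query model only): binary
# search along the segment between the adversary's positive instance x_a and
# a known negative instance x_minus for a negatively classified point close
# to x_a, using only classifier queries.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=5, random_state=2)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def is_positive(x):                      # the only access the adversary has
    return clf.predict(x.reshape(1, -1))[0] == 1

def line_search_evasion(x_a, x_minus, query, steps=30):
    lo, hi = 0.0, 1.0                    # 0 -> x_a (positive), 1 -> x_minus (negative)
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        x_mid = (1 - mid) * x_a + mid * x_minus
        if query(x_mid):
            lo = mid                     # still positive: move toward x_minus
        else:
            hi = mid                     # negative: try closer to x_a
    return (1 - hi) * x_a + hi * x_minus, steps

x_a = X[clf.predict(X) == 1][0]          # the instance the adversary wants to disguise
x_minus = X[clf.predict(X) == 0][0]      # any instance known to be negative
x_adv, n_queries = line_search_evasion(x_a, x_minus, is_positive)
print("queries:", n_queries, "evades?", not is_positive(x_adv),
      "L2 cost:", np.linalg.norm(x_adv - x_a))
```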


Security and Artificial Intelligence | 2011

Understanding the risk factors of learning in adversarial environments

Blaine Nelson; Battista Biggio; Pavel Laskov

Learning for security applications is an emerging field where adaptive approaches are needed but are complicated by changing adversarial behavior. Traditional approaches to learning assume benign errors in data and thus may be vulnerable to adversarial errors. In this paper, we incorporate the notion of adversarial corruption directly into the learning framework and derive a new criterion for classifier robustness to adversarial contamination.
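
As a worked example of folding adversarial corruption into the loss (illustrative only, not the criterion derived in the paper), a linear classifier facing perturbations bounded by eps in L2 norm admits a closed-form worst-case hinge loss:

```python
# Worked example (illustrative, not the paper's criterion): for a linear
# classifier and L2-bounded corruption ||delta|| <= eps, the worst-case
# hinge loss has the closed form
#   max_{||delta|| <= eps} hinge(y * (w.(x + delta) + b))
#     = hinge(y * (w.x + b) - eps * ||w||),
# i.e. adversarial corruption acts as an extra margin requirement.
import numpy as np

def worst_case_hinge(w, b, X, y, eps):
    margins = y * (X @ w + b) - eps * np.linalg.norm(w)
    return np.maximum(0.0, 1.0 - margins).mean()

w, b = np.array([2.0, -1.0]), 0.5
X = np.array([[1.0, 0.0], [0.2, 0.4], [-1.0, 1.0]])
y = np.array([1, 1, -1])
print("benign risk:     ", worst_case_hinge(w, b, X, y, eps=0.0))
print("adversarial risk:", worst_case_hinge(w, b, X, y, eps=0.3))
```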


Recent Advances in Intrusion Detection | 2008

Evading Anomaly Detection through Variance Injection Attacks on PCA

Benjamin I. P. Rubinstein; Blaine Nelson; Ling Huang; Anthony D. Joseph; Shing-hon Lau; Nina Taft; J. D. Tygar

Whenever machine learning is applied to security problems, it is important to measure vulnerabilities to adversaries who poison the training data. We demonstrate the impact of variance injection schemes on PCA-based network-wide volume anomaly detectors, when a single compromised PoP injects chaff into the network. These schemes can increase the chance of evading detection sixfold for DoS attacks.
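
A toy NumPy sketch of this effect is shown below (illustrative assumptions: synthetic data rather than network traffic matrices, and chaff along a single direction). Variance injected along the future attack direction during training pulls that direction into the principal subspace, shrinking the attack's residual at detection time.

```python
# Toy sketch of variance-injection poisoning against a PCA residual detector
# (synthetic data; the paper uses network-wide traffic from a compromised PoP).
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 500
normal = rng.normal(size=(n, d)) @ np.diag(np.linspace(3.0, 0.5, d))
attack_dir = np.zeros(d)
attack_dir[-1] = 1.0                                    # a normally low-variance direction

def residual(train, x, k=3):
    """Norm of x's component outside the top-k principal subspace of train."""
    centered = train - train.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top = vt[:k]                                        # principal directions
    xc = x - train.mean(axis=0)
    return np.linalg.norm(xc - top.T @ (top @ xc))

attack = 10.0 * attack_dir                              # DoS-like spike at test time
chaff = rng.normal(scale=5.0, size=(n, 1)) * attack_dir # poison added during training
poisoned = normal + chaff

print("attack residual, clean training:   ", residual(normal, attack))
print("attack residual, poisoned training:", residual(poisoned, attack))
```

With clean training data the spike sticks out of the principal subspace and produces a large residual; after chaff injection the subspace rotates toward the attack direction and the residual collapses, so the spike slips past a residual-threshold detector.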

Collaboration


Dive into Blaine Nelson's collaborations.

Top Co-Authors

J. D. Tygar, University of California
Ling Huang, University of California
Pavel Laskov, University of Tübingen
Marco Barreno, University of California
Shing-hon Lau, University of California
Fabio Roli, University of Cagliari