Battista Biggio
University of Cagliari
Publications
Featured research published by Battista Biggio.
European Conference on Machine Learning | 2013
Battista Biggio; Igino Corona; Davide Maiorca; Blaine Nelson; Nedim Šrndić; Pavel Laskov; Giorgio Giacinto; Fabio Roli
In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier's performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
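A minimal sketch of the kind of gradient-based evasion this abstract describes, assuming an RBF-kernel SVM trained on synthetic data rather than the PDF malware features used in the paper; the "class 1 = malicious" convention, step size, iteration budget, and box constraint are illustrative.

```python
# Gradient-based evasion sketch: descend the SVM decision function g(x) until a
# malicious sample is classified as benign. Data and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

GAMMA = 0.5
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
clf = SVC(kernel="rbf", gamma=GAMMA, C=1.0).fit(X, y)

def decision_gradient(x):
    """Analytic gradient of the RBF-SVM decision function g(x) at x."""
    sv = clf.support_vectors_
    k = np.exp(-GAMMA * np.sum((sv - x) ** 2, axis=1))       # kernel values
    # d/dx exp(-gamma * ||x - sv||^2) = -2 * gamma * (x - sv) * k
    return (clf.dual_coef_[0] * k) @ (-2.0 * GAMMA * (x - sv))

def evade(x0, step=0.05, max_iter=500):
    """Gradient descent on g(x) until the sample crosses the decision boundary."""
    x = x0.copy()
    for _ in range(max_iter):
        if clf.decision_function(x.reshape(1, -1))[0] < 0:   # now "benign"
            break
        x -= step * decision_gradient(x)
        x = np.clip(x, X.min(axis=0), X.max(axis=0))         # stay in feasible box
    return x

malicious = X[y == 1]
x0 = malicious[np.argmax(clf.decision_function(malicious))]  # confidently detected sample
x_adv = evade(x0)
print("score before:", clf.decision_function(x0.reshape(1, -1))[0])
print("score after: ", clf.decision_function(x_adv.reshape(1, -1))[0])
```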
IEEE Transactions on Knowledge and Data Engineering | 2014
Battista Biggio; Giorgio Fumera; Fabio Roli
Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities whose exploitation may severely affect their performance and consequently limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating, at design phase, the security of pattern classifiers, namely, the performance degradation they may incur under potential attacks during operation. We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier's behavior in adversarial environments, and lead to better design choices.
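The security-evaluation idea can be illustrated with a small simulation, assuming a synthetic task and a simple attack model (shifting malicious test samples toward the benign class mean) in place of the attack models formalized in the paper; the resulting curve of accuracy versus attack strength is the kind of output such an evaluation produces.

```python
# Security-evaluation sketch: measure how performance degrades as a simulated
# attack grows stronger. Classifier, data and attack model are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

benign_mean = X_tr[y_tr == 0].mean(axis=0)

def attack(X_test, y_test, strength):
    """Move malicious (y=1) test samples a fraction `strength` toward the benign mean."""
    X_adv = X_test.copy()
    mask = y_test == 1
    X_adv[mask] = (1 - strength) * X_adv[mask] + strength * benign_mean
    return X_adv

# Security-evaluation curve: accuracy as a function of attack strength.
for s in np.linspace(0.0, 1.0, 6):
    acc = clf.score(attack(X_te, y_te, s), y_te)
    print(f"attack strength {s:.1f}: accuracy {acc:.3f}")
```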
International Journal of Machine Learning and Cybernetics | 2010
Battista Biggio; Giorgio Fumera; Fabio Roli
Pattern recognition systems are increasingly being used in adversarial environments like network intrusion detection, spam filtering, and biometric authentication and verification, in which an adversary may adaptively manipulate data to make a classifier ineffective. Current theory and design methods of pattern recognition systems do not take into account the adversarial nature of such applications. Their extension to adversarial settings is thus mandatory, to safeguard the security and reliability of pattern recognition systems in adversarial environments. In this paper we focus on a strategy recently proposed in the literature to improve the robustness of linear classifiers to adversarial data manipulation, and experimentally investigate whether it can be implemented using two well-known techniques for the construction of multiple classifier systems, namely, bagging and the random subspace method. Our results provide some hints on the potential usefulness of classifier ensembles in adversarial classification tasks, which is different from the motivations suggested so far in the literature.
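The two ensemble constructions mentioned here can be sketched with scikit-learn, assuming a logistic-regression base learner, synthetic data, and crude additive noise on malicious test samples as a stand-in for adversarial manipulation; none of this reproduces the paper's experimental setup.

```python
# Bagging vs. random subspace method on a linear base classifier, compared on
# clean and crudely perturbed test data. All settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(3)
X, y = make_classification(n_samples=1000, n_features=30, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

def linear_base():
    return LogisticRegression(max_iter=1000)

models = {
    "single linear classifier": linear_base(),
    # bagging: bootstrap replicates of the training set, all features
    "bagging": BaggingClassifier(linear_base(), n_estimators=50, random_state=3),
    # random subspace method: all training samples, random feature subsets
    "random subspace": BaggingClassifier(linear_base(), n_estimators=50,
                                         bootstrap=False, max_features=0.5,
                                         random_state=3),
}

# Crude stand-in for adversarial manipulation: perturb malicious test samples.
X_adv = X_te.copy()
X_adv[y_te == 1] += rng.normal(scale=1.0, size=X_adv[y_te == 1].shape)

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:25s} clean: {model.score(X_te, y_te):.3f}  "
          f"perturbed: {model.score(X_adv, y_te):.3f}")
```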
IET Biometrics | 2012
Battista Biggio; Zahid Akhtar; Giorgio Fumera; Gian Luca Marcialis; Fabio Roli
Multimodal biometric systems are commonly believed to be more robust to spoofing attacks than unimodal systems, as they combine information coming from different biometric traits. Recent work has shown that multimodal systems can be misled by an impostor even by spoofing only one biometric trait. This result was obtained under a 'worst-case' scenario, by assuming that the distribution of fake scores is identical to that of genuine scores (i.e. the attacker is assumed to be able to perfectly replicate a genuine biometric trait). This assumption also allows one to evaluate the robustness of score fusion rules against spoofing attacks, and to design robust fusion rules, without the need of actually fabricating spoofing attacks. However, whether and to what extent the 'worst-case' scenario is representative of real spoofing attacks is still an open issue. In this study, we address this issue by an experimental investigation carried out on several data sets including real spoofing attacks, related to a multimodal verification system based on face and fingerprint biometrics. On the one hand, our results confirm that multimodal systems are vulnerable to attacks against a single biometric trait. On the other hand, they show that the 'worst-case' scenario can be too pessimistic; this can lead to too conservative choices if the 'worst-case' assumption is used for designing a robust multimodal system. Therefore, developing methods for evaluating the robustness of multimodal systems against spoofing attacks, and for designing robust ones, remains a very relevant open issue.
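A small simulation may help clarify the 'worst-case' assumption: under it, the fake score of the spoofed modality is drawn from the genuine score distribution, so the false acceptance rate of a fused system can be estimated without fabricating spoofs. The Gaussian score distributions, sum-rule fusion, and threshold below are illustrative assumptions, not the paper's data sets.

```python
# Worst-case spoofing sketch: estimate the false acceptance rate of a sum-rule
# fusion of two simulated modalities when fake scores follow the genuine
# score distribution. All distributions and the threshold are illustrative.
import numpy as np

rng = np.random.RandomState(4)
n = 10000

# Simulated match scores (higher = more likely genuine) for two modalities.
gen_face, imp_face = rng.normal(3.0, 1.0, n), rng.normal(0.0, 1.0, n)
gen_fing, imp_fing = rng.normal(3.5, 1.0, n), rng.normal(0.0, 1.0, n)

def far(face_scores, finger_scores, threshold):
    """False acceptance rate of a simple sum-rule fusion."""
    return np.mean(face_scores + finger_scores > threshold)

# Acceptance threshold set on zero-effort impostors (illustrative: FAR ~ 1%).
threshold = np.percentile(imp_face + imp_fing, 99)

print("zero-effort impostors  :", far(imp_face, imp_fing, threshold))
# Worst-case spoof of the fingerprint only: fake scores ~ genuine distribution.
print("worst-case finger spoof:", far(imp_face, gen_fing, threshold))
# Worst-case spoof of both traits.
print("worst-case both spoofed:", far(gen_face, gen_fing, threshold))
```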
IEEE Transactions on Systems, Man, and Cybernetics | 2016
Fei Zhang; Patrick P. K. Chan; Battista Biggio; Daniel S. Yeung; Fabio Roli
Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion, and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed. While previous work has mainly focused on devising adversary-aware classification algorithms to counter evasion attempts, only a few authors have considered the impact of using reduced feature sets on classifier security against the same attacks. An interesting, preliminary result is that classifier security to evasion may even be worsened by the application of feature selection. In this paper, we provide a more detailed investigation of this aspect, shedding some light on the security properties of feature selection against evasion attacks. Inspired by previous work on adversary-aware classifiers, we propose a novel adversary-aware feature selection model that can improve classifier security against evasion attacks, by incorporating specific assumptions on the adversary's data manipulation strategy. We focus on an efficient, wrapper-based implementation of our approach, and experimentally validate its soundness on different application examples, including spam and malware detection.
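A wrapper-based, adversary-aware selection loop might look roughly as follows, where each candidate feature subset is scored by a weighted trade-off between accuracy and a crude security proxy (average distance of malicious samples from the decision boundary); the proxy, data, and trade-off weight are illustrative stand-ins for the evasion model used in the paper.

```python
# Wrapper-style, adversary-aware feature selection sketch: greedy forward
# selection scoring accuracy plus a simple security proxy. All illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

def subset_score(features, trade_off=0.2):
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, features], y_tr)
    acc = clf.score(X_te[:, features], y_te)
    # Security proxy: how far malicious test samples sit from the boundary
    # (normalised by the weight norm); larger means more manipulation needed.
    margin = clf.decision_function(X_te[y_te == 1][:, features])
    effort = np.mean(np.abs(margin)) / np.linalg.norm(clf.coef_)
    return acc + trade_off * effort

# Greedy forward (wrapper) selection of 8 features.
selected, remaining = [], list(range(X.shape[1]))
for _ in range(8):
    best = max(remaining, key=lambda f: subset_score(selected + [f]))
    selected.append(best)
    remaining.remove(best)

print("selected features:", sorted(selected))
```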
Proceedings of the 2014 ACM Workshop on Artificial Intelligence and Security | 2014
Battista Biggio; Konrad Rieck; Davide Ariu; Christian Wressnegger; Igino Corona; Giorgio Giacinto; Fabio Roli
Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems. However, the suitability of clustering algorithms for security-sensitive settings has recently been questioned by work showing that they can be significantly compromised if an attacker can exercise some control over the input data. In this paper, we revisit this problem by focusing on behavioral malware clustering approaches, and investigate whether and to what extent an attacker may be able to subvert these approaches through a careful injection of samples with poisoning behavior. To this end, we present a case study on Malheur, an open-source tool for behavioral malware clustering. Our experiments not only demonstrate that this tool is vulnerable to poisoning attacks, but also that it can be significantly compromised even if the attacker can only inject a very small percentage of attacks into the input data. As a remedy, we discuss possible countermeasures and highlight the need for more secure clustering algorithms.
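The poisoning experiment can be mimicked on toy data as follows, using single-linkage clustering with a fixed dendrogram cut-off as a stand-in for Malheur's behavioural clustering (Malheur itself is not used here); the injected 'bridging' samples and the adjusted Rand index against the clean clustering illustrate how a small poisoning percentage is injected and how its damage is measured.

```python
# Poisoning-experiment sketch: inject a small percentage of bridging samples
# and measure the damage to the clustering of the clean samples with the
# adjusted Rand index. Data, cut-off and percentages are illustrative.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=300, centers=[[0, 0], [6, 0], [3, 6]],
                  cluster_std=0.4, random_state=6)

def cluster(points):
    # Cut the single-linkage dendrogram at a fixed distance, as is done when
    # the number of malware families is not known in advance.
    model = AgglomerativeClustering(n_clusters=None, distance_threshold=1.5,
                                    linkage="single")
    return model.fit_predict(points)

clean = cluster(X)
centroids = np.array([X[clean == c].mean(axis=0) for c in np.unique(clean)])

def bridge(n_points):
    """Poisoning samples placed along the segment joining two cluster centroids."""
    t = np.linspace(0.1, 0.9, n_points)[:, None]
    return (1 - t) * centroids[0] + t * centroids[1]

for frac in (0.01, 0.02, 0.05):
    n_poison = max(1, int(frac * len(X)))
    poisoned = cluster(np.vstack([X, bridge(n_poison)]))[:len(X)]
    ari = adjusted_rand_score(clean, poisoned)
    print(f"{frac:.0%} poisoning samples: ARI vs. clean clustering = {ari:.3f}")
```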
Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security | 2013
Battista Biggio; Ignazio Pillai; Samuel Rota Bulò; Davide Ariu; Marcello Pelillo; Fabio Roli
Clustering algorithms have been increasingly adopted in security applications to spot dangerous or illicit activities. However, they have not been originally devised to deal with deliberate attack attempts that may aim to subvert the clustering process itself. Whether clustering can be safely adopted in such settings thus remains questionable. In this work we propose a general framework that allows one to identify potential attacks against clustering algorithms, and to evaluate their impact, by making specific assumptions on the adversary's goal, knowledge of the attacked system, and capability of manipulating the input data. We show that an attacker may significantly poison the whole clustering process by adding a relatively small percentage of attack samples to the input data, and that some attack samples may be obfuscated to be hidden within existing clusters. We present a case study on single-linkage hierarchical clustering, and report experiments on clustering of malware samples and handwritten digits.
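Under the framework's three ingredients, the poisoning attack can be sketched as a greedy optimization: the goal is to maximize the deviation of the poisoned clustering from the clean one, knowledge is assumed perfect (data and algorithm), and the capability is adding a few samples. Single-linkage hierarchical clustering, the grid of candidate attack points, and the synthetic data below are illustrative assumptions.

```python
# Greedy poisoning sketch under a perfect-knowledge assumption: pick attack
# points that maximise the deviation of the poisoned clustering from the clean
# one. Data, threshold and candidate grid are illustrative.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=200, centers=[[0, 0], [5, 0], [2.5, 4.33]],
                  cluster_std=0.4, random_state=7)

def cluster(points):
    return AgglomerativeClustering(n_clusters=None, distance_threshold=2.0,
                                   linkage="single").fit_predict(points)

clean = cluster(X)

def objective(attack_points):
    """Adversary's goal: distance between the clean and poisoned clusterings."""
    labels = cluster(np.vstack([X] + attack_points))[:len(X)]
    return 1.0 - adjusted_rand_score(clean, labels)

# Candidate attack samples on a coarse grid over the feature space.
xs = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
ys = np.linspace(X[:, 1].min(), X[:, 1].max(), 20)
candidates = [np.array([[x, y]]) for x in xs for y in ys]

attack = []
for _ in range(3):                                   # capability: 3 attack samples
    best = max(candidates, key=lambda c: objective(attack + [c]))
    attack.append(best)
    print(f"added {best[0]}, objective = {objective(attack):.3f}")
```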
International Journal of Pattern Recognition and Artificial Intelligence | 2014
Battista Biggio; Giorgio Fumera; Fabio Roli
We analyze the problem of designing pattern recognition systems in adversarial settings from an engineering viewpoint, motivated by their increasing exploitation in security-sensitive applications like spam and malware detection, although their vulnerability to potential attacks has not yet been deeply understood. We first review previous work and report examples of how a complex system may be evaded either by leveraging trivial vulnerabilities of its untrained components, e.g. parsing errors in the pre-processing steps, or by exploiting more subtle vulnerabilities of learning algorithms. We then discuss the need to exploit both reactive and proactive security paradigms complementarily to improve security by design. Our ultimate goal is to provide some useful guidelines for improving the security of pattern recognition in adversarial settings, and to suggest related open issues to foster research in this area.
Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition (SSPR & SPR '08) | 2008
Battista Biggio; Giorgio Fumera; Fabio Roli
In many security applications a pattern recognition system faces an adversarial classification problem, in which an intelligent, adaptive adversary modifies patterns to evade the classifier. Several strategies have been recently proposed to make a classifier harder to evade, but they are based only on qualitative and intuitive arguments. In this work, we consider a strategy consisting in hiding information about the classifier from the adversary through the introduction of some randomness in the decision function. We focus on an implementation of this strategy in a multiple classifier system, which is a classification architecture widely used in security applications. We provide formal support for this strategy, based on an analytical framework for adversarial classification problems recently proposed by other authors, and give an experimental evaluation on a spam filtering task to illustrate our findings.
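One way to realize the randomization strategy is sketched below, assuming an ensemble of linear classifiers trained on bootstrap replicates with a random subset of them combined at each prediction, so the exact decision function in use is not fixed; the data, base learner, and subset size are illustrative, not the paper's spam-filtering setup.

```python
# Randomised multiple classifier system sketch: at prediction time, a random
# subset of the trained ensemble is combined by majority vote. Illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(8)
X, y = make_classification(n_samples=1000, n_features=20, random_state=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=8)

# Train an ensemble of linear classifiers on bootstrap replicates.
ensemble = []
for _ in range(15):
    idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)
    ensemble.append(LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx]))

def randomized_predict(X_in, n_used=5):
    """Combine a randomly chosen subset of the ensemble (majority vote)."""
    chosen = rng.choice(len(ensemble), size=n_used, replace=False)
    votes = np.mean([ensemble[i].predict(X_in) for i in chosen], axis=0)
    return (votes > 0.5).astype(int)

acc = np.mean(randomized_predict(X_te) == y_te)
print(f"accuracy of the randomised ensemble: {acc:.3f}")
```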
International Conference on Multiple Classifier Systems | 2007
Battista Biggio; Giorgio Fumera; Fabio Roli
A new theoretical framework for the analysis of linear combiners is presented in this paper. This framework extends the scope of previous analytical models, and provides some new theoretical results which improve the understanding of how linear combiners operate. In particular, we show that the analytical model developed in seminal works by Tumer and Ghosh is included in this framework.
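For context, the central quantity in Tumer and Ghosh's model, which this framework is stated to include, is the added error of the simple average of N classifiers; a commonly cited form of their result, with illustrative notation and assuming identically distributed estimation errors with average pairwise correlation δ, is:

```latex
% Added error of the averaging combiner in Tumer and Ghosh's model (sketch);
% E_add is the added error of a single classifier, N the ensemble size, and
% \delta the average pairwise correlation of the estimation errors.
E_{\mathrm{add}}^{\mathrm{ave}} \;=\; E_{\mathrm{add}}\,\frac{1 + \delta\,(N-1)}{N},
\qquad\text{reducing to}\qquad
E_{\mathrm{add}}^{\mathrm{ave}} = \frac{E_{\mathrm{add}}}{N}
\quad\text{for uncorrelated errors } (\delta = 0).
```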