Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luis Muñoz-González is active.

Publications


Featured research published by Luis Muñoz-González.


arXiv: Learning | 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

Luis Muñoz-González; Battista Biggio; Ambra Demontis; Andrea Paudice; Vasin Wongrassamee; Emil Lupu; Fabio Roli

A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process. To date, these attacks have been devised only against a limited class of binary learning algorithms, due to the inherent complexity of the gradient-based procedure used to optimize the poisoning points (a.k.a. adversarial training examples). In this work, we first extend the definition of poisoning attacks to multiclass problems. We then propose a novel poisoning algorithm based on the idea of back-gradient optimization, i.e., to compute the gradient of interest through automatic differentiation, while also reversing the learning procedure to drastically reduce the attack complexity. Compared to current poisoning strategies, our approach is able to target a wider class of learning algorithms, trained with gradient-based procedures, including neural networks and deep learning architectures. We empirically evaluate its effectiveness on several application examples, including spam filtering, malware detection, and handwritten digit recognition. We finally show that, similarly to adversarial test examples, adversarial training examples can also be transferred across different learning algorithms.
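The poisoning threat model described in the abstract can be illustrated with a minimal sketch: an attacker who controls a region of the training set flips its labels, degrading the model that an unwitting victim trains. This is a crude label-flipping attack on a toy logistic regression, not the paper's back-gradient optimization; all data and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, epochs=300):
    """Plain batch-gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0).astype(int) == y)

# Two well-separated Gaussian classes, with a bias column appended.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
X = np.hstack([X, np.ones((2 * n, 1))])
y = np.array([0] * n + [1] * n)

# Attacker flips the labels of training points in one region of input
# space, teaching the model a spurious dependence on the second feature.
y_pois = np.where(X[:, 1] > 2.0, 1 - y, y)

w_clean = train_logreg(X, y)
w_pois = train_logreg(X, y_pois)
print("clean model, accuracy on clean labels   :", accuracy(w_clean, X, y))
print("poisoned model, accuracy on clean labels:", accuracy(w_pois, X, y))
```

The poisoned model fits the attacker's labels and so loses accuracy on the genuine ones; the back-gradient method in the paper instead optimizes the poisoning points themselves through the learning procedure.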


IEEE Transactions on Dependable and Secure Computing | 2017

Don't fool Me!: Detection, Characterisation and Diagnosis of Spoofed and Masked Events in Wireless Sensor Networks

Vittorio P. Illiano; Luis Muñoz-González; Emil Lupu

Wireless Sensor Networks carry a high risk of being compromised, as their deployments are often unattended, physically accessible and the wireless medium is difficult to secure. Malicious data injections take place when the sensed measurements are maliciously altered to trigger wrong and potentially dangerous responses. When many sensors are compromised, they can collude with each other to alter the measurements making such changes difficult to detect. Distinguishing between genuine and malicious measurements is even more difficult when significant variations may be introduced because of events, especially if more events occur simultaneously. We propose a novel methodology based on wavelet transform to detect malicious data injections, to characterise the responsible sensors, and to distinguish malicious interference from faulty behaviours. The results, both with simulated and real measurements, show that our approach is able to counteract sophisticated attacks, achieving a significant improvement over state-of-the-art approaches.


IEEE Transactions on Dependable and Secure Computing | 2017

Exact Inference Techniques for the Analysis of Bayesian Attack Graphs

Luis Muñoz-González; Daniele Sgandurra; Martín Barrère; Emil Lupu

Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise network resources. The uncertainty about the attacker's behaviour makes Bayesian networks suitable to model attack graphs to perform static and dynamic analysis. Previous approaches have focused on the formalization of attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper, we propose to use efficient algorithms to make exact inference in Bayesian attack graphs, enabling static and dynamic network risk assessment. To support the validity of our approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches.
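The static and dynamic analyses described above can be sketched on a tiny Bayesian attack graph, small enough for exact inference by brute-force enumeration (the efficient algorithms in the paper avoid exactly this exponential enumeration). The three-node topology and all conditional probabilities here are hypothetical, not taken from the paper.

```python
from itertools import product

# Hypothetical three-node Bayesian attack graph:
#   A: internet-facing vulnerability exploited (prior probability)
#   B: lateral movement, possible only once A is compromised
#   C: database server, reachable from either A or B
p_a = 0.3
p_b_given = {0: 0.0, 1: 0.6}                    # P(B=1 | A)
p_c_given = {(0, 0): 0.0, (0, 1): 0.5,          # P(C=1 | A, B)
             (1, 0): 0.4, (1, 1): 0.8}

def joint(a, b, c):
    """Joint probability of one full state of the attack graph."""
    pa = p_a if a else 1 - p_a
    pb = p_b_given[a] if b else 1 - p_b_given[a]
    pc = p_c_given[(a, b)] if c else 1 - p_c_given[(a, b)]
    return pa * pb * pc

# Static analysis: unconditional probability that C is compromised.
p_c = sum(joint(a, b, 1) for a, b in product((0, 1), repeat=2))

# Dynamic analysis: posterior after observing evidence of compromise at B.
p_c_and_b = sum(joint(a, 1, 1) for a in (0, 1))
p_b = sum(joint(a, 1, c) for a, c in product((0, 1), repeat=2))
p_c_given_b = p_c_and_b / p_b

print(f"P(C compromised)        = {p_c:.4f}")
print(f"P(C compromised | B=1)  = {p_c_given_b:.4f}")
```

Observing compromise at B sharply raises the risk estimate for C, which is the essence of dynamic analysis; enumeration costs 2^n joint states, which is why scalable exact-inference techniques matter.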


arXiv: Cryptography and Security | 2017

Efficient Attack Graph Analysis through Approximate Inference

Luis Muñoz-González; Daniele Sgandurra; Andrea Paudice; Emil Lupu

Attack graphs provide compact representations of the attack paths an attacker can follow to compromise network resources from the analysis of network vulnerabilities and topology. These representations are a powerful tool for security risk assessment. Bayesian inference on attack graphs enables the estimation of the risk of compromise to the system’s components given their vulnerabilities and interconnections and accounts for multi-step attacks spreading through the system. While static analysis considers the risk posture at rest, dynamic analysis also accounts for evidence of compromise, for example, from Security Information and Event Management software or forensic investigation. However, in this context, exact Bayesian inference techniques do not scale well. In this article, we show how Loopy Belief Propagation—an approximate inference technique—can be applied to attack graphs and that it scales linearly in the number of nodes for both static and dynamic analysis, making such analyses viable for larger networks. We experiment with different topologies and network clustering on synthetic Bayesian attack graphs with thousands of nodes to show that the algorithm’s accuracy is acceptable and that it converges to a stable solution. We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages and gains of approximate inference techniques when scaling to larger attack graphs.
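Loopy Belief Propagation itself can be sketched in a few lines of sum-product message passing. This runs on a three-node cycle of binary variables with pairwise potentials, an illustrative Markov random field rather than the attack-graph factorisation used in the article; the potentials are invented.

```python
import numpy as np

# A 3-node cycle (the smallest loopy graph) of binary variables.
edges = [(0, 1), (1, 2), (2, 0)]
unary = np.array([[0.7, 0.3], [0.5, 0.5], [0.4, 0.6]])  # hypothetical node potentials
pair = np.array([[2.0, 1.0], [1.0, 2.0]])               # attractive edge potential

neighbors = {i: [] for i in range(3)}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

# messages[(i, j)]: message from node i to node j, over the states of j.
messages = {(i, j): np.ones(2) for i, j in edges}
messages.update({(j, i): np.ones(2) for i, j in edges})

# Synchronous sum-product updates; on loopy graphs this is approximate.
for _ in range(50):
    new = {}
    for (i, j) in messages:
        prod = unary[i].copy()
        for k in neighbors[i]:
            if k != j:
                prod = prod * messages[(k, i)]
        msg = prod @ pair                 # sum over the states of node i
        new[(i, j)] = msg / msg.sum()
    messages = new

# Beliefs: local potential times all incoming messages, normalised.
beliefs = []
for i in range(3):
    b = unary[i].copy()
    for k in neighbors[i]:
        b = b * messages[(k, i)]
    beliefs.append(b / b.sum())

for i, b in enumerate(beliefs):
    print(f"belief at node {i}: {b}")
```

Each iteration touches every directed edge once, which is where the linear scaling in the number of nodes comes from; the article's contribution is showing that the resulting approximation is accurate and convergent on large Bayesian attack graphs.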


Archive | 2019

The Security of Machine Learning Systems

Luis Muñoz-González; Emil Lupu

Machine learning lies at the core of many modern applications, extracting valuable information from data acquired from numerous sources. It has produced a disruptive change in society, providing new functionality and improving users' quality of life, e.g., through personalization, the optimized use of resources, and the automation of many processes. However, machine learning systems can themselves be the targets of attackers, who might gain a significant advantage by exploiting the vulnerabilities of learning algorithms. Such attacks have already been reported in the wild in different application domains. This chapter describes the mechanisms that allow attackers to compromise machine learning systems by injecting malicious data or exploiting the algorithms' weaknesses and blind spots. Furthermore, mechanisms that can help mitigate the effect of such attacks are also explained, along with the challenges of designing more secure machine learning systems.


ACM Transactions on Sensor Networks | 2018

Determining Resilience Gains From Anomaly Detection for Event Integrity in Wireless Sensor Networks

Vittorio P. Illiano; Andrea Paudice; Luis Muñoz-González; Emil Lupu

Measurements collected in a wireless sensor network (WSN) can be maliciously compromised through several attacks, but anomaly detection algorithms may provide resilience by detecting inconsistencies in the data. Anomaly detection can identify severe threats to WSN applications, provided that there is a sufficient amount of genuine information. This article presents a novel method to calculate an assurance measure for the network by estimating the maximum number of malicious measurements that can be tolerated. In previous work, the resilience of anomaly detection to malicious measurements has been tested only against arbitrary attacks, which are not necessarily sophisticated. The novel method presented here is based on an optimization algorithm, which maximizes the attack's chance of staying undetected while causing damage to the application, thus seeking the worst-case scenario for the anomaly detection algorithm. The algorithm is tested on a wildfire monitoring WSN to estimate the benefits of anomaly detection on the system's resilience. The algorithm also returns the measurements that the attacker needs to synthesize, which are studied to highlight the weak spots of anomaly detection. Finally, this article presents a novel methodology that takes as input the degree of resilience required and automatically designs the deployment that satisfies such a requirement.
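The worst-case analysis above can be sketched with a deliberately simple detector: if readings are flagged only when they deviate from the round's median by more than a threshold, a greedy attacker parks every compromised reading just inside that threshold and biases the aggregate without raising alarms. This toy median detector and all numbers are invented, not the anomaly-detection scheme or optimization algorithm of the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# n sensors report a temperature; the detector flags any reading that
# deviates from the current round's median by more than tau.
n, tau = 30, 1.0
readings = rng.normal(25.0, 0.15, n)

def flags(x):
    return np.abs(x - np.median(x)) > tau

# Greedy worst-case attacker controlling m sensors: push every
# compromised reading close to the detection boundary, but not past it.
m = 8
attacked = readings.copy()
attacked[:m] = np.median(readings) + 0.8 * tau

bias = attacked.mean() - readings.mean()
print("alarms raised on attacked round:", flags(attacked).sum())
print("undetected bias on the mean    :", bias)
```

The maximum undetected bias grows with the number of compromised sensors, which is why the article frames resilience as the largest number of malicious measurements the application can tolerate and derives deployments that meet a required resilience level.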


arXiv: Cryptography and Security | 2016

Automated Dynamic Analysis of Ransomware: Benefits, Limitations and Use for Detection

Daniele Sgandurra; Luis Muñoz-González; Rabih Mohsen; Emil Lupu


arXiv: Machine Learning | 2018

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

Andrea Paudice; Luis Muñoz-González; András György; Emil Lupu


arXiv: Cryptography and Security | 2015

Exact Inference Techniques for the Dynamic Analysis of Attack Graphs

Luis Muñoz-González; Daniele Sgandurra; Martín Barrère; Emil Lupu


arXiv: Machine Learning | 2018

Label Sanitization against Label Flipping Poisoning Attacks

Andrea Paudice; Luis Muñoz-González; Emil Lupu

Collaboration


Dive into Luis Muñoz-González's collaborations.

Top Co-Authors


Emil Lupu

Imperial College London


Rabih Mohsen

Imperial College London


Fabio Roli

University of Cagliari
