
Publications


Featured research published by Pedro García-Teodoro.


Computers & Security | 2009

Anomaly-based network intrusion detection: Techniques, systems and challenges

Pedro García-Teodoro; Jesús E. Díaz-Verdejo; Gabriel Maciá-Fernández; Enrique Vázquez

The Internet and computer networks are exposed to an increasing number of security threats. With new types of attacks appearing continually, developing flexible and adaptive security-oriented approaches is a severe challenge. In this context, anomaly-based network intrusion detection techniques are a valuable technology to protect target systems and networks against malicious activities. However, despite the variety of such methods described in the literature in recent years, security tools incorporating anomaly detection functionalities are just starting to appear, and several important problems remain to be solved. This paper begins with a review of the most well-known anomaly-based intrusion detection techniques. Then, available platforms, systems under development and research projects in the area are presented. Finally, we outline the main challenges to be dealt with for the wide-scale deployment of anomaly-based intrusion detectors, with special emphasis on assessment issues.
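As a pointer to the statistical family of techniques reviewed here, the following is a minimal sketch of an anomaly detector that learns a per-feature baseline from attack-free traffic and flags large deviations; the feature names, training values and threshold are illustrative assumptions, not the paper's own method.

    # Minimal sketch of a statistical anomaly detector of the kind surveyed:
    # learn a per-feature baseline from attack-free traffic, then flag
    # observations that deviate too far from it. Features and threshold are
    # illustrative assumptions only.
    import numpy as np

    class ZScoreDetector:
        def __init__(self, threshold=3.0):
            self.threshold = threshold   # deviations beyond this many sigmas are anomalous
            self.mean = None
            self.std = None

        def fit(self, normal_traffic):
            """normal_traffic: (n_samples, n_features) observations of attack-free activity."""
            data = np.asarray(normal_traffic, dtype=float)
            self.mean = data.mean(axis=0)
            self.std = data.std(axis=0) + 1e-9   # avoid division by zero

        def is_anomalous(self, observation):
            """True if any feature deviates more than `threshold` sigmas from the baseline."""
            z = np.abs((np.asarray(observation, dtype=float) - self.mean) / self.std)
            return bool(np.any(z > self.threshold))

    # Hypothetical per-connection features: [packets/s, mean packet size, distinct destination ports]
    detector = ZScoreDetector(threshold=3.0)
    detector.fit([[120, 540, 3], [98, 610, 2], [110, 580, 4], [105, 560, 3]])
    print(detector.is_anomalous([5000, 60, 90]))   # True: scan/flood-like behaviour
    print(detector.is_anomalous([102, 575, 3]))    # False: consistent with the baseline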


Computer Communications | 2004

Anomaly detection methods in wired networks: a survey and taxonomy

Juan M. Estevez-Tapiador; Pedro García-Teodoro; Jesús E. Díaz-Verdejo

Despite the advances made over the last 20 years, anomaly detection in network behavior is still an immature technology, as the shortage of commercial tools corroborates. Nevertheless, the benefits that could be obtained from a better understanding of the problem, as well as from the improvement of these mechanisms, especially in network security, justify the demand for more research effort in this direction. This article presents a survey of current anomaly detection methods for network intrusion detection in classical wired environments. After introducing the problem and explaining its interest, a taxonomy of current solutions is presented. The proposed scheme allows us to systematically classify current detection methods and to study the different facets of the problem. The most relevant paradigms are then discussed and illustrated through several case studies of selected systems developed in the field, explaining the problems addressed by each of them as well as their weakest points. Finally, this work concludes with an analysis of the problems that remain open; based on this discussion, some lines of research are identified.


Computer Networks | 2004

Measuring normality in HTTP traffic for anomaly-based intrusion detection

Juan M. Estevez-Tapiador; Pedro García-Teodoro; Jesús E. Díaz-Verdejo

In this paper, the problem of measuring normality in HTTP traffic for the purpose of anomaly-based network intrusion detection is addressed. The work is presented in two steps: first, a statistical analysis of both normal and hostile traffic is presented. The experimental results of this study reveal that certain features extracted from HTTP requests can be used to distinguish anomalous (and, therefore, suspicious) traffic from that corresponding to correct, normal connections. The second part of the paper presents a new anomaly-based approach to detect attacks carried out over HTTP traffic. The technique introduced is statistical and makes use of Markov chains to model HTTP network traffic. The incoming HTTP traffic is parameterised for evaluation on a packet payload basis: the payload of each HTTP request is segmented into a certain number of contiguous blocks, which are subsequently quantized according to a previously trained scalar codebook. Finally, the temporal sequence of the symbols obtained is evaluated by means of a Markov model derived during a training phase. The detection results provided by our approach show important improvements, both in detection rate and in false alarm rate, in comparison with those obtained using other current techniques.
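To make the pipeline above concrete, here is a small sketch of the three stages it describes (block segmentation of the payload, quantization against a scalar codebook, and Markov-chain scoring of the resulting symbol sequence); the block size, codebook values, training requests and threshold are assumptions for illustration only, not the parameters used in the paper.

    # Sketch: segment an HTTP payload into contiguous blocks, quantize each
    # block with a trained scalar codebook, and score the symbol sequence with
    # a first-order Markov model learned from attack-free traffic.
    import math

    CODEBOOK = [32.0, 64.0, 96.0, 128.0]   # hypothetical trained scalar centroids
    BLOCK_SIZE = 8                          # bytes per contiguous block (assumed)

    def quantize(payload: bytes) -> list:
        """Map each payload block to the index of the nearest codebook centroid."""
        symbols = []
        for i in range(0, len(payload), BLOCK_SIZE):
            block = payload[i:i + BLOCK_SIZE]
            value = sum(block) / len(block)   # scalar feature: mean byte value of the block
            symbols.append(min(range(len(CODEBOOK)), key=lambda k: abs(CODEBOOK[k] - value)))
        return symbols

    def train_markov(sequences, n_symbols):
        """Estimate transition probabilities from attack-free symbol sequences (add-one smoothing)."""
        counts = [[1.0] * n_symbols for _ in range(n_symbols)]
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[a][b] += 1.0
        return [[c / sum(row) for c in row] for row in counts]

    def log_likelihood(symbols, transitions):
        """Average per-transition log-probability; low values suggest an anomalous payload."""
        if len(symbols) < 2:
            return 0.0
        total = sum(math.log(transitions[a][b]) for a, b in zip(symbols, symbols[1:]))
        return total / (len(symbols) - 1)

    # Train on attack-free requests, then flag requests scoring below a threshold.
    normal = [quantize(b"GET /index.html HTTP/1.1"), quantize(b"GET /images/logo.png HTTP/1.1")]
    model = train_markov(normal, n_symbols=len(CODEBOOK))
    score = log_likelihood(quantize(b"GET /cgi-bin/../../etc/passwd HTTP/1.0"), model)
    print("anomalous" if score < -1.2 else "normal")   # -1.2 is an arbitrary example threshold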


First IEEE International Workshop on Information Assurance (IWIAS 2003) | 2003

Stochastic protocol modeling for anomaly based network intrusion detection

Juan M. Estevez-Tapiador; Pedro García-Teodoro; Jesús E. Díaz-Verdejo

A new method for detecting anomalies in the usage of protocols in computer networks is presented. The proposed methodology is applied to TCP and is carried out in two steps. First, a quantization of the TCP header space is performed, so that a unique symbol is associated with each TCP segment. TCP-based network traffic is thus captured, quantized and represented by a sequence of symbols. The second step in our approach is the modeling of these sequences by means of a Markov chain. The analysis of the model obtained for diverse TCP sources reveals that it adequately captures the essence of the protocol dynamics. Once the model is built, it is possible to use it as a representation of the normal usage of the protocol, so that deviations from the behavior provided by the model can be considered a sign of protocol misuse.
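As an illustration of the symbolization step, the sketch below collapses each TCP segment to a symbol derived from its flag bits and counts symbol-to-symbol transitions over a connection; which header fields enter the quantization is an assumption made here for brevity, and the paper's actual quantization of the header space may differ.

    # Sketch: map each TCP segment to a single symbol (here, from its flag
    # bits only) and count symbol transitions, the raw material for the
    # Markov chain that models normal protocol usage.
    from collections import defaultdict

    FLAG_BITS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04, "PSH": 0x08, "ACK": 0x10}

    def segment_symbol(flags: int) -> str:
        """Symbol for a TCP segment, based on which flag bits are set."""
        names = [name for name, bit in FLAG_BITS.items() if flags & bit]
        return "+".join(sorted(names)) or "NONE"

    def transition_counts(symbol_stream):
        """Symbol-to-symbol transition counts observed in captured traffic."""
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(symbol_stream, symbol_stream[1:]):
            counts[a][b] += 1
        return counts

    # A normal connection: handshake, data transfer, teardown.
    stream = [segment_symbol(f) for f in (0x02, 0x12, 0x10, 0x18, 0x10, 0x11, 0x10)]
    print(stream)                                  # ['SYN', 'ACK+SYN', 'ACK', 'ACK+PSH', 'ACK', 'ACK+FIN', 'ACK']
    print(dict(transition_counts(stream)["SYN"]))  # {'ACK+SYN': 1}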


ACM Computing Surveys | 2013

Survey and taxonomy of botnet research through life-cycle

Rafael A. Rodríguez-Gómez; Gabriel Maciá-Fernández; Pedro García-Teodoro

Of all current threats to cybersecurity, botnets are at the top of the list. In consequence, interest in this problem is increasing rapidly among the research community and the number of publications on the question has grown exponentially in recent years. This article proposes a taxonomy of botnet research and presents a survey of the field to provide a comprehensive overview of all these contributions. Furthermore, we hope to provide researchers with a clear perspective of the gaps that remain to be filled in our defenses against botnets. The taxonomy is based upon the botnet life-cycle, defined as the sequence of stages a botnet needs to pass through in order to reach its goal. This approach allows us to consider the problem of botnets from a global perspective, which constitutes a key difference from other taxonomies that have been proposed. Under this novel taxonomy, we conclude that all attempts to defeat botnets should be focused on one or more stages of this life-cycle. In fact, the sustained hindering of any of the stages makes it possible to thwart a botnet's progress and thus render it useless. We test the potential capabilities of our taxonomy by means of a survey of current botnet research, and find it genuinely useful in understanding the focus of the different contributions in this field.


IEEE Transactions on Information Forensics and Security | 2009

Mathematical Model for Low-Rate DoS Attacks Against Application Servers

Gabriel Maciá-Fernández; Jesús E. Díaz-Verdejo; Pedro García-Teodoro

In recent years, variants of denial of service (DoS) attacks that use low-rate traffic have been proposed, including the Shrew attack, reduction of quality attacks, and low-rate DoS attacks against application servers (LoRDAS). All of these are flooding attacks that take advantage of a vulnerability in the victim in order to reduce the rate of the attack traffic. Although their implications and impact have been comprehensively studied, mainly by means of simulation, there is a need for mathematical models by which the behaviour of these sometimes complex processes can be described. In this paper, we propose a mathematical model for the LoRDAS attack. This model allows us to evaluate its performance by relating it to the configuration parameters of the attack and the dynamics of the network and the victim. The model is validated by comparing the performance values it yields against those obtained from a simulated environment. In addition, some applicability issues for the model are contributed, together with guidelines for interpreting the model's behaviour. Finally, experience with the model enables us to make some recommendations for the challenging task of building defense techniques against this attack.


International Symposium on Computers and Communications | 2005

Detection of Web-based attacks through Markovian protocol parsing

Juan M. Estevez-Tapiador; Pedro García-Teodoro; Jesús E. Díaz-Verdejo

This paper presents a novel approach based on the monitoring of incoming HTTP requests to detect attacks against Web servers. The detection is accomplished through a Markovian model whose states, and the transitions between them, are determined from the specification of the HTTP protocol, while the probabilities of the symbols associated with the Markovian source are obtained during a training stage from a set of attack-free requests for the target server. The experiments carried out show a high detection capability with low false positive rates and reasonable computational requirements.
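The sketch below illustrates the two ingredients the abstract combines: parsing states that come from the structure of an HTTP request (method, version, header names) and symbol probabilities estimated from attack-free requests for the target server. The toy training set and the states chosen are assumptions for illustration, not the model actually built in the paper.

    # Sketch: per-state symbol frequencies learned from attack-free requests;
    # a symbol that is (nearly) never seen in a state would raise an alert.
    from collections import Counter

    def train(requests):
        """Count methods, versions and header names observed in attack-free requests."""
        model = {"method": Counter(), "version": Counter(), "header": Counter()}
        for req in requests:
            request_line, *headers = req.split("\r\n")
            method, _, version = request_line.split(" ")
            model["method"][method] += 1
            model["version"][version] += 1
            for h in headers:
                if h:
                    model["header"][h.split(":")[0]] += 1
        return model

    def symbol_probability(model, state, symbol):
        counts = model[state]
        return counts[symbol] / max(sum(counts.values()), 1)

    normal = ["GET /index.html HTTP/1.1\r\nHost: example.org\r\nAccept: */*\r\n",
              "GET /img/a.png HTTP/1.1\r\nHost: example.org\r\nAccept: */*\r\n"]
    model = train(normal)
    print(symbol_probability(model, "method", "GET"))     # 1.0: usual for this server
    print(symbol_probability(model, "method", "TRACE"))   # 0.0: unseen method, would be flagged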


Computer Networks | 2007

Evaluation of a low-rate DoS attack against iterative servers

Gabriel Maciá-Fernández; Jesús E. Díaz-Verdejo; Pedro García-Teodoro

This paper presents a low-rate DoS attack that could be launched against iterative servers. Such an attack takes advantage of a vulnerability that consists in the possibility of forecasting the instant at which an iterative server will generate a response to a client request. This knowledge could allow a potential intruder to overflow application buffers with relatively low-rate traffic to the server, thus evading the usual DoS IDS detection techniques. Besides the fundamentals of the attack, the authors also introduce a mathematical model for evaluating the efficiency of this kind of attack. The evaluation is contrasted with both simulated and real implementations. Some variants of the attack are also studied. The overall results derived from this work show how the proposed low-rate DoS attack could have a significant negative impact on the performance of iterative servers.
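The toy simulation below conveys the timing idea under simplifying assumptions (a fixed, known service time and a single service position): the attacker fires one short burst just before each predicted completion instant, so legitimate requests often find the server busy even though the attack rate stays low. Service time, offset and arrival times are invented for illustration.

    # Toy sketch of the low-rate timing attack against an iterative server:
    # one attack packet per service period, sent just before the predicted
    # instant at which the service position becomes free.
    SERVICE_TIME = 1.0      # assumed fixed time the server spends on each request
    BURST_OFFSET = 0.05     # attacker fires this long before the predicted free instant
    SIMULATION_END = 10.0

    def simulate(legit_arrivals):
        """Count legitimate requests that find the service position already taken."""
        next_free = 0.0     # instant at which the server finishes its current request
        lost = 0
        events = sorted([(t, "legit") for t in legit_arrivals] +
                        [(k * SERVICE_TIME - BURST_OFFSET, "attack")
                         for k in range(1, int(SIMULATION_END / SERVICE_TIME) + 1)])
        for t, kind in events:
            if t >= next_free:          # position free: whoever arrives now is served
                next_free = t + SERVICE_TIME
            elif kind == "legit":       # position captured, usually by the attacker
                lost += 1
        return lost

    # Roughly one attack packet per second already makes several clients lose service:
    print(simulate(legit_arrivals=[0.3, 1.4, 2.6, 3.7, 5.2, 7.1, 9.0]))   # 3 of 7 lost here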


Computers & Security | 2008

Evaluation of a low-rate DoS attack against application servers

Gabriel Maciá-Fernández; Jesús E. Díaz-Verdejo; Pedro García-Teodoro

In the network security field there is a need to identify new movements and trends that attackers might adopt, in order to anticipate their attempts with defense and mitigation techniques. The present study explores new approaches that attackers could use to launch denial of service attacks against application servers. We show that it is possible to launch such attacks using low-rate traffic directed at the server, and we apply the proposed techniques to defeat a persistent HTTP server. The low-rate feature is highly beneficial to the attacker for two main reasons: first, the resources needed to carry out the attack are considerably reduced, easing its execution; second, the attack is more easily hidden from security mechanisms that rely on the detection of high-rate traffic. In this paper, a mechanism that allows the attacker to control the attack load in order to bypass an IDS is contributed. We present the fundamentals of the attack, describing its strategy and design issues. The performance is also evaluated in both simulated and real environments. Finally, a study of possible improvement techniques that attackers could use is contributed.


Critical Information Infrastructures Security | 2007

LoRDAS: a low-rate DoS attack against application servers

Gabriel Maciá-Fernández; Jesús E. Díaz-Verdejo; Pedro García-Teodoro; Francisco de Toro-Negro

In a communication network, there always exist certain servers that should be considered critical infrastructure to be protected, especially given the nature of the services they provide. In this paper, a low-rate denial of service attack against application servers is presented. The attack takes advantage of known timing mechanisms in the server's behaviour to carefully time ON/OFF attack waveforms that cause denial of service while the traffic rate sent to the server is kept controlled, thus allowing it to bypass defense mechanisms that rely on the detection of high-rate traffic. First, we determine the conditions that a server must present to be considered a potential victim of this attack. As an example, the case of a persistent HTTP server is presented, and the procedure for launching the attack against it is described. Moreover, the efficiency achieved by the attack is evaluated in both simulated and real environments, and its behaviour is studied as the configuration parameters are varied. The aim of this work is to expose the feasibility of such attacks in order to motivate the development of defense mechanisms.
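As a quick check of why the ON/OFF strategy stays under the radar of rate-based detection, the snippet below computes the average packet rate of such a waveform and compares it with a hypothetical IDS threshold; all numbers are illustrative assumptions.

    # Back-of-the-envelope rate budget of an ON/OFF attack waveform.
    def average_rate(burst_packets: int, period_s: float) -> float:
        """Mean packet rate (packets per second) of a periodic burst of `burst_packets`."""
        return burst_packets / period_s

    IDS_THRESHOLD_PPS = 100.0                  # hypothetical rate-based detection threshold
    rate = average_rate(burst_packets=5, period_s=1.0)
    print(rate, rate < IDS_THRESHOLD_PPS)      # 5.0 True: looks benign to a rate-based detector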
