Martin Grill
Czech Technical University in Prague
Publications
Featured research published by Martin Grill.
Computers & Security | 2014
Sebastian Garcia; Martin Grill; Jan Stiborek; Alejandro Zunino
The results of botnet detection methods are usually presented without any comparison. Although it is generally accepted that comparisons with third-party methods help to improve the area, few papers manage to make one. Among the factors that prevent comparison are the difficulty of sharing datasets, the lack of a good dataset, the absence of proper descriptions of the methods, and the lack of a comparison methodology. This paper compares the output of three different botnet detection methods by executing them over a new, real, labeled and large botnet dataset. This dataset includes botnet, normal and background traffic. The results of our two methods (BClus and CAMNEP) and of BotHunter were compared using a methodology and a novel error metric designed for botnet detection methods. We conclude that comparing methods indeed helps to better estimate how good the methods are, to improve the algorithms, to build better datasets and to build a comparison methodology.
IEEE Intelligent Systems | 2009
Martin Rehak; Michal Pechoucek; Martin Grill; Jan Stiborek; Karel Bartos; Pavel Čeleda
Individual anomaly-detection methods for monitoring computer network traffic have relatively high error rates. An agent-based trust-modeling system fuses anomaly data and progressively improves classification to achieve acceptable error rates.
Recent Advances in Intrusion Detection | 2009
Martin Rehak; Eugen Staab; Volker Fusenig; Michal Pěchouček; Martin Grill; Jan Stiborek; Karel Bartos; Thomas Engel
Our work proposes a generic architecture for runtime monitoring and optimization of IDS based on challenge insertion. The challenges, known instances of malicious or legitimate behavior, are inserted into the network traffic represented by NetFlow records and processed alongside the current traffic; the system's response to the challenges is used to determine its effectiveness and to fine-tune its parameters. The insertion of challenges is based on threat models expressed as attack trees with attached risk/loss values. The use of a threat model allows the system to measure the expected undetected loss and to improve its performance with respect to the relevant threats, as we have verified in experiments performed on live network traffic.
Cooperative Information Agents | 2008
Martin Rehak; Michal Pěchouček; Martin Grill; Karel Bartos
We present a method that improves the results of network intrusion detection by integrating several anomaly detection algorithms through trust and reputation models. Our algorithm is based on existing network behavior analysis approaches that are embodied in several detection agents. We divide the processing into three distinct phases: anomaly detection, trust model update and collective trusting decision. Each of these phases contributes to the reduction of the classification error rate, through the aggregation of anomaly values provided by individual algorithms, the individual update of each agent's trust model based on distinct traffic-representation features (derived from its anomaly detection model), and the re-aggregation of the trustfulness data provided by individual agents. The result is a trustfulness score for each network flow, which can be used to guide manual inspection, thus significantly reducing the amount of traffic to analyze. To evaluate the effectiveness of the method, we present a set of experiments performed on real network data.
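The final re-aggregation phase can be illustrated with a minimal sketch: average the anomaly scores the individual detection agents assign to each flow and report the complement as a trustfulness score. This is only the last of the three phases described in the abstract; the per-agent trust-model updates are omitted, and the plain mean is an illustrative stand-in for the paper's aggregation.

```python
def trustfulness(per_detector_scores):
    """per_detector_scores: list of dicts {flow_id: anomaly score in [0, 1]},
    one dict per detection agent. Returns {flow_id: trustfulness}, where
    trustfulness = 1 - mean anomaly, so low-trust flows are inspected first."""
    flows = set().union(*per_detector_scores) if per_detector_scores else set()
    out = {}
    for flow in flows:
        scores = [d[flow] for d in per_detector_scores if flow in d]
        out[flow] = 1.0 - sum(scores) / len(scores)
    return out
```

Flows scored as anomalous by most agents end up with low trustfulness, which matches the paper's use of the score for prioritizing manual inspection.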
IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2007
Martin Rehak; Michal Pechoucek; Karel Bartos; Martin Grill; Pavel Čeleda
We apply advanced agent trust-modeling techniques to identify malicious traffic in computer networks. Our work integrates four state-of-the-art anomaly detection techniques and combines them by means of an extended trust model. Deploying the trust model ensures interoperability between the methods, allows cross-correlation of results during various stages of the detection, and ensures efficient evaluation of current traffic in the context of historical observations. The goal of the system, which is designed for online monitoring of high-speed networks, is to provide an efficient tool for targeted runtime surveillance of malicious traffic by network operators. We aim to achieve this objective by filtering out the non-malicious (trusted) part of the traffic and submitting only potentially malicious flows for subsequent semi-automatic inspection.
Integrated Network Management | 2015
Martin Grill; Ivan Nikolaev; Veronica Valeros; Martin Rehak
Botnet detection systems struggle with performance and privacy issues when analyzing data from large-scale networks. Deep packet inspection, reverse engineering, clustering and other time-consuming approaches are unfeasible for large-scale networks, so many researchers focus on fast and simple botnet detection methods that use as little information as possible to avoid privacy violations. We present a novel technique for detecting malware that uses Domain Generation Algorithms (DGAs), able to evaluate data from large-scale networks without reverse engineering a binary or performing Non-Existent Domain (NXDomain) inspection. We propose a statistical approach: model the ratio of DNS requests to visited IPs for every host in the local network and label deviations from this model as DGA-performing malware. We expect such malware to try to resolve many domains within a small time interval without a corresponding number of newly visited IPs. For this we need only the NetFlow/IPFIX statistics collected from the network of interest, which can be generated by almost any modern router. We show that this approach identifies DGA-based malware with zero to very few false positives, and because of its simplicity we can inspect data from very large networks with minimal computational cost.
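The DNS-to-visited-IPs ratio heuristic described above can be sketched in a few lines, assuming NetFlow-like tuples of (source host, destination IP, destination port) where port 53 marks DNS requests. The `ratio_threshold` and `min_dns` parameters are illustrative assumptions, not the paper's fitted model.

```python
from collections import defaultdict

def flag_dga_hosts(flows, ratio_threshold=5.0, min_dns=20):
    """flows: iterable of (src_host, dst_ip, dst_port) tuples.
    Flags hosts whose DNS-request count far exceeds the number of
    distinct non-DNS IPs they contact -- the pattern expected of
    DGA malware probing many generated domains."""
    dns_counts = defaultdict(int)
    visited_ips = defaultdict(set)
    for src, dst_ip, dst_port in flows:
        if dst_port == 53:            # DNS request
            dns_counts[src] += 1
        else:                         # ordinary visit to an IP
            visited_ips[src].add(dst_ip)
    flagged = []
    for host, n_dns in dns_counts.items():
        n_ips = len(visited_ips[host]) or 1
        if n_dns >= min_dns and n_dns / n_ips >= ratio_threshold:
            flagged.append(host)
    return flagged
```

A benign host resolves roughly as many domains as it then visits; a DGA-infected host issues bursts of resolutions with few follow-up connections, so its ratio spikes.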
Journal of Computer and System Sciences | 2017
Martin Grill; Tomáš Pevný; Martin Rehak
Network intrusion detection systems based on the anomaly detection paradigm have a high false-alarm rate, making them difficult to use. To address this weakness, we propose to smooth the outputs of anomaly detectors by online Local Adaptive Multivariate Smoothing (LAMS). LAMS can remove a large portion of the false positives introduced by anomaly detection by replacing the anomaly detector's output on a network event with an aggregate of its outputs on all similar network events observed previously. The arguments are supported by an extensive experimental evaluation involving several anomaly detectors in two domains: NetFlow and proxy logs. Finally, we show how the proposed solution can be efficiently implemented to process large streams of non-stationary data.
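The core smoothing step can be illustrated with a toy kernel-weighted average over past detector outputs: the score of the current event is pulled toward the scores of similar events seen before. The Gaussian kernel and the `bandwidth` parameter here are assumptions for illustration, not the exact LAMS estimator.

```python
import math

def lams_smooth(event, score, history, bandwidth=1.0):
    """Replace `score` (the raw detector output on `event`) with a
    kernel-weighted average over scores of similar past events.
    `event` is a feature tuple; `history` is a list of
    (feature_tuple, past_score) pairs."""
    num, den = score, 1.0           # include the current event itself
    for feats, past_score in history:
        d2 = sum((a - b) ** 2 for a, b in zip(event, feats))
        weight = math.exp(-d2 / (2 * bandwidth ** 2))
        num += weight * past_score
        den += weight
    return num / den
```

If an event resembles many previously seen benign events with low scores, an isolated high raw score is averaged down, which is how the smoothing suppresses one-off false positives.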
International Workshop on Information Forensics and Security | 2014
Martin Grill; Martin Rehak
Botnet detection systems that use the Network Behavioral Analysis (NBA) principle struggle with performance and privacy issues on large-scale networks. Because of that, many researchers focus on fast and simple bot detection methods that use as little information as possible to avoid privacy violations; deep packet inspection, reverse engineering, clustering and other time-consuming approaches are typically unfeasible on large-scale networks. In this paper we present a novel technique that uses the User-Agent field of the HTTP header, which can be easily obtained from web proxy logs, to identify malware whose User-Agents are discrepant with the ones actually used by the infected user. We use statistical information about each user's User-Agent usage together with the usage of particular User-Agents across the whole analyzed network and the typically visited domains. Using these statistics we can identify anomalies, which we show to be caused by malware-infected hosts in the network. Because our approach is simple and computationally inexpensive, we can inspect data from extremely large networks with minimal computational cost.
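One ingredient of the statistics above, the network-wide rarity of a User-Agent string, can be sketched as follows. The `(user, user_agent)` log shape and the `max_global_share` threshold are assumptions; the paper combines this with per-user and per-domain statistics that are omitted here.

```python
from collections import Counter, defaultdict

def flag_discrepant_agents(proxy_logs, max_global_share=0.05):
    """proxy_logs: iterable of (user, user_agent) pairs from web proxy logs.
    Flags (user, user_agent) pairs whose User-Agent accounts for at most
    `max_global_share` of all requests in the network -- a string almost
    nobody else uses is suspicious for a browser but typical of malware."""
    per_user = defaultdict(Counter)
    global_counts = Counter()
    total = 0
    for user, ua in proxy_logs:
        per_user[user][ua] += 1
        global_counts[ua] += 1
        total += 1
    flagged = []
    for user, counter in per_user.items():
        for ua in counter:
            if global_counts[ua] / total <= max_global_share:
                flagged.append((user, ua))
    return flagged
```

A legitimate browser's User-Agent is shared by a large fraction of the network, so it never falls under the threshold; a hard-coded malware User-Agent typically does.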
Computer Networks | 2016
Martin Grill; Tomáš Pevný
This paper presents a novel technique for finding a convex combination of outputs of anomaly detectors that maximizes accuracy in the α-quantile of most anomalous samples. Such an approach better reflects the needs of the security domain, in which subsequent analysis of alarms is costly and can be performed only on a small number of alarms. An extensive experimental evaluation and comparison to prior art on real network data, using the anomaly detector sets of two existing intrusion detection systems, shows that the proposed method not only outperforms the prior art but is also more robust to noise in training-data labels, another important feature for deployment in practice.
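The objective can be made concrete with a brute-force sketch for two detectors: grid-search the convex weight and keep the one maximizing precision among the top fraction `q` of most anomalous samples. The grid search is an illustrative stand-in for the paper's actual optimization, and `q` plays the role of the quantile parameter.

```python
def precision_at_quantile(scores, labels, q=0.05):
    """Fraction of true anomalies (labels == 1) among the q most-anomalous samples."""
    k = max(1, int(len(scores) * q))
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return sum(labels[i] for i in top) / k

def best_convex_combination(detector_outputs, labels, q=0.05, steps=10):
    """Grid-search the convex weight w for two detectors a and b, scoring each
    sample as w*a + (1-w)*b and maximizing precision in the top q-quantile."""
    a, b = detector_outputs
    best_w, best_p = 0.0, -1.0
    for s in range(steps + 1):
        w = s / steps
        combined = [w * x + (1 - w) * y for x, y in zip(a, b)]
        p = precision_at_quantile(combined, labels, q)
        if p > best_p:
            best_w, best_p = w, p
    return best_w, best_p
```

Optimizing precision only in the top quantile, rather than over the whole ranking, matches the setting where analysts can inspect just a handful of the highest-scoring alarms.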
International Workshop on Information Forensics and Security | 2012
Tomas Pevny; Martin Rehak; Martin Grill
This paper focuses on the identification of anomalous hosts within a computer network, with the motivation of detecting attacks and other unwanted or suspicious traffic. The proposed detection method does not use packet content, which enables it to be used on encrypted networks. Moreover, the method has very low computational complexity, allowing the fast detection and response that is important for limiting potential damage. The proposed method uses entropies of IP addresses and ports to build two complementary models of hosts' traffic based on principal component analysis. These two models are coupled with two orthogonal anomaly definitions, which gives four different detectors. The methods are evaluated and compared to prior art on a one-week-long capture of traffic on a university network. The experiments reveal that no single detector can detect all types of anomalies, which is expected and stresses the importance of an ensemble approach to intrusion detection.
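The entropy features that feed the PCA models can be sketched as follows: per host, compute the Shannon entropy of the distribution of contacted IPs and of destination ports. The flow-record shape is an assumption, and the subsequent PCA modeling and anomaly definitions are omitted.

```python
import math
from collections import Counter, defaultdict

def traffic_entropies(flows):
    """flows: iterable of (src_host, dst_ip, dst_port) tuples.
    Returns {host: (ip_entropy, port_entropy)} -- Shannon entropies (in bits)
    of each host's contacted-IP and destination-port distributions, the kind
    of feature vector the PCA traffic models are built from."""
    ips = defaultdict(Counter)
    ports = defaultdict(Counter)
    for src, dst_ip, dst_port in flows:
        ips[src][dst_ip] += 1
        ports[src][dst_port] += 1

    def entropy(counter):
        total = sum(counter.values())
        return -sum((n / total) * math.log2(n / total) for n in counter.values())

    return {host: (entropy(ips[host]), entropy(ports[host])) for host in ips}
```

The features separate behaviors without inspecting payloads: a port scanner shows low IP entropy but high port entropy, while normal browsing spreads entropy across both dimensions, which is why such entropies work on encrypted traffic.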