
Publication


Featured research published by Petros Efstathopoulos.


Recent Advances in Intrusion Detection | 2014

Some Vulnerabilities Are Different Than Others

Kartik Nayak; Daniel Marino; Petros Efstathopoulos; Tudor Dumitras

The security of deployed and actively used systems is a moving target, influenced by factors not captured in the existing security metrics. For example, the count and severity of vulnerabilities in source code, as well as the corresponding attack surface, are commonly used as measures of a software product’s security. But these measures do not provide a full picture. For instance, some vulnerabilities are never exploited in the wild, partly due to security technologies that make exploiting them difficult. As for attack surface, its effectiveness has not been validated empirically in the deployment environment. We introduce several security metrics derived from field data that help to complete the picture. They include the count of vulnerabilities exploited and the size of the attack surface actually exercised in real-world attacks. By evaluating these metrics on nearly 300 million reports of intrusion-protection telemetry, collected on more than six million hosts, we conduct an empirical study of security in the deployment environment. We find that none of the products in our study have more than 35% of their disclosed vulnerabilities exploited in the wild. Furthermore, the exploitation ratio and the exercised attack surface tend to decrease with newer product releases. We also find that hosts that quickly upgrade to newer product versions tend to have reduced exercised attack-surfaces. The metrics proposed enable a more complete assessment of the security posture of enterprise infrastructure. Additionally, they open up new research directions for improving security by focusing on the vulnerabilities and attacks that have the highest impact in practice.
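
The abstract introduces field-derived metrics such as the exploitation ratio and the exercised attack surface but does not spell out how they are computed; the short sketch below, using made-up field names and data, shows one plausible way to derive them from a vulnerability disclosure list and intrusion-protection telemetry. It is an illustration of the idea, not the paper's pipeline.

    # Hypothetical sketch: per-product "exploitation ratio" and "exercised
    # attack surface" from (a) disclosed vulnerabilities and (b) field
    # telemetry of attacks observed in the wild. All data is invented.
    from collections import defaultdict

    disclosed = {  # CVEs disclosed per (product, version)
        ("ProductA", "1.0"): {"CVE-1", "CVE-2", "CVE-3", "CVE-4"},
        ("ProductA", "2.0"): {"CVE-5", "CVE-6", "CVE-7"},
    }
    telemetry = [  # (product, version, cve, attacked_component)
        ("ProductA", "1.0", "CVE-1", "parser"),
        ("ProductA", "1.0", "CVE-2", "network_listener"),
        ("ProductA", "2.0", "CVE-5", "parser"),
    ]

    exploited = defaultdict(set)   # CVEs actually seen exploited per product
    exercised = defaultdict(set)   # attack-surface components actually hit
    for product, version, cve, component in telemetry:
        exploited[(product, version)].add(cve)
        exercised[(product, version)].add(component)

    for key, cves in disclosed.items():
        ratio = len(exploited[key] & cves) / len(cves)
        print(key, f"exploitation ratio = {ratio:.0%},",
              f"exercised surface = {sorted(exercised[key])}")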


Passive and Active Network Measurement | 2015

TrackAdvisor: Taking Back Browsing Privacy from Third-Party Trackers

Tai-Ching Li; Huy Hang; Michalis Faloutsos; Petros Efstathopoulos

Most web users assume that only the websites they visit directly become aware of the visit, but this belief is incorrect. Many websites display content hosted externally by third-party websites, which can track users and become aware of their web-surfing behavior. This phenomenon is called third-party tracking, and although such activities violate no law, they raise privacy concerns because the tracking is carried out without users' knowledge or explicit approval. Our work provides a systematic study of the third-party tracking phenomenon. First, we develop TrackAdvisor, arguably the first method that utilizes machine learning to identify the HTTP requests carrying sensitive information to third-party trackers with very high accuracy (100% recall and 99.4% precision). Microsoft's Tracking Protection Lists, a widely used third-party tracking blacklist, achieve only a recall of 72.2%. Second, we quantify the pervasiveness of the third-party tracking phenomenon: 46% of the home pages of the websites in the Alexa Global Top 10,000 have at least one third-party tracker, and Google, using third-party tracking, monitors 25% of these popular websites. Our overarching goal is to measure accurately how widespread third-party tracking is and, in doing so, to raise public awareness of its potential privacy risks.
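
The abstract does not list TrackAdvisor's features or model; the sketch below only illustrates the general approach of labeling third-party HTTP requests with a supervised classifier, using invented per-request features and scikit-learn as stand-ins for whatever the system actually uses.

    # Illustrative only: classify outgoing HTTP requests as tracking or
    # benign. The features (cookie presence/size, third-party domain,
    # referer mismatch) and the tiny training set are assumptions.
    from sklearn.ensemble import RandomForestClassifier

    # [sends_cookie, cookie_length, is_third_party, referer_differs_from_host]
    X_train = [
        [1, 120, 1, 1],   # long cookie sent to an unrelated third party
        [0,   0, 1, 1],   # no cookie: likely benign embedded content
        [1,  20, 0, 0],   # first-party request
        [1, 200, 1, 1],
    ]
    y_train = [1, 0, 0, 1]   # 1 = tracking request, 0 = benign

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_train, y_train)

    # Score a new request described by the same four features.
    print(clf.predict([[1, 150, 1, 1]]))   # -> [1], flagged as a tracker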


International Conference for Internet Technology and Secured Transactions | 2009

Practical study of a defense against low-rate TCP-targeted DoS attack

Petros Efstathopoulos

It has been proven in theory and through simulations [3, 9] that a low-rate TCP-targeted Denial-of-Service (DoS) attack is possible by exploiting the retransmission timeout (RTO) mechanism of TCP. In contrast to most DoS attacks, this exploit requires periodic traffic of low average volume in order to throttle TCP throughput. Consequently, the attack is hard to detect and prevent, since most DoS detection systems are triggered by high-rate traffic. For the attack to be successful, the attacker must inject a short burst of traffic, capable of filling up the bottleneck buffers, right before the expiration of the sender's RTO. This forces the sender's TCP connections to time out with very low throughput. The effectiveness of the attack depends on the attacker's synchronization with the victim's RTO. Certain commercial systems follow the guidelines of RFC-2988 [4] (suggesting a minimum RTO of 1 second), making this synchronization far from impossible, while popular operating systems using lower minRTO values (e.g., Linux) are still vulnerable to an attacker using a low-latency network. RTO randomization was proposed by [9] as a defense against this attack, since it prevents the attacker from synchronizing attack traffic with RTO expiration intervals. In this paper, we study the results of the attack on a real system (Linux) and evaluate the effectiveness of RTO randomization in defending against low-rate TCP-targeted DoS attacks, showing that the method can prevent a TCP flow from being throttled by attack traffic.
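
As a rough illustration of the timing argument (not a reproduction of the paper's experiments), the toy simulation below shows why an attacker who assumes the RFC-2988 minimum RTO of one second stays synchronized with the victim's retransmissions, and how randomizing the RTO breaks that synchronization. All constants are made up.

    # Toy model: the attacker sends a short buffer-filling burst every
    # ATTACK_PERIOD seconds; a retransmission is "throttled" if it falls
    # inside a burst window. With a fixed minRTO the attacker stays in
    # sync; a randomized RTO (the defense from [9]) breaks the sync.
    import random

    BURST_LEN = 0.15      # seconds the bottleneck buffer stays full per burst
    ATTACK_PERIOD = 1.0   # attacker assumes the RFC-2988 minRTO of 1 second

    def throttled_fraction(randomize_rto, rounds=10000):
        throttled, t = 0, 0.0
        for _ in range(rounds):
            rto = 1.0 + (random.uniform(0.0, 0.5) if randomize_rto else 0.0)
            t += rto                              # time of the next retransmission
            if (t % ATTACK_PERIOD) < BURST_LEN:   # lands inside a burst
                throttled += 1
        return throttled / rounds

    print("fixed minRTO:     ", throttled_fraction(False))   # ~1.0
    print("randomized minRTO:", throttled_fraction(True))    # ~BURST_LEN

With randomization, the retransmission times drift relative to the attack bursts, so only a small fraction of them coincide with a full buffer, which is the intuition behind the defense evaluated in the paper.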


IEEE International Conference on Cloud Computing Technology and Science | 2015

Harbormaster: Policy Enforcement for Containers

Mingwei Zhang; Daniel Marino; Petros Efstathopoulos

Lightweight virtualization, as implemented by application container solutions such as Docker, has the potential to revolutionize the way multi-tier applications are developed and deployed, especially in the cloud. The success of application containers can be partly attributed to their ability to share resources with the underlying platform that hosts them. As such, the isolation provided by such containers is not as strict as with traditional VMs. The very characteristics that have contributed to the success of application containers can also be seen as factors that limit their widespread commercial adoption, since enterprise IT administrators cannot implement the various -- and often fine-grained -- security policies they are required to abide by. This problem is of limited consequence when a host is running a single user's application containers, but sharing compute resources among multiple users is an important benefit of containers and cloud-based deployment. In this paper we present a preliminary discussion of the challenges associated with enterprise security policy management for application containers deployed in multi-user environments. Furthermore, we present Harbormaster, a system that addresses some of these challenges by enforcing policy checks on Docker container management operations and allowing administrators to implement the principle of least privilege.
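
Harbormaster's policy language and the exact checks it performs are not described in the abstract; the sketch below only illustrates the general idea of validating a container-management request against per-user rules before it is forwarded to the Docker daemon, with an invented policy format and invented rule names.

    # Hypothetical policy check for a 'docker run'-style request. The
    # policy format and rules are made up for illustration; they are not
    # Harbormaster's actual policies.
    POLICY = {
        "alice": {"allow_privileged": False,
                  "allowed_host_mounts": ["/data/alice"]},
        "admin": {"allow_privileged": True,
                  "allowed_host_mounts": ["/"]},
    }

    def check_run_request(user, request):
        """Return (allowed, reason) for a container-creation request."""
        rules = POLICY.get(user)
        if rules is None:
            return False, "no policy for user"
        if request.get("privileged") and not rules["allow_privileged"]:
            return False, "privileged containers not allowed"
        for mount in request.get("host_mounts", []):
            if not any(mount.startswith(p) for p in rules["allowed_host_mounts"]):
                return False, "host mount not allowed: " + mount
        return True, "ok"

    print(check_run_request("alice",
          {"privileged": False, "host_mounts": ["/data/alice/in"]}))  # allowed
    print(check_run_request("alice",
          {"privileged": True, "host_mounts": []}))                   # denied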


European Dependable Computing Conference | 2012

The Provenance of WINE

Tudor Dumitras; Petros Efstathopoulos

The results of cyber security experiments are often impossible to reproduce, owing to the lack of adequate descriptions of the data collection and experimental processes. Such provenance information is difficult to record consistently when collecting data from distributed sensors and when sharing raw data among research groups with variable standards for documenting the steps that produce the final experimental result. In the WINE benchmark, which provides field data for cyber security experiments, we aim to make the experimental process self-documenting. The data collected includes provenance information -- such as when, where and how an attack was first observed or detected -- and allows researchers to gauge information quality. Experiments are conducted on a common test bed, which provides tools for recording each procedural step. The ability to understand the provenance of research results enables rigorous cyber security experiments, conducted at scale.
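
The WINE data schema is not given in the abstract; as a minimal sketch, a self-documenting provenance record might carry the when/where/how fields mentioned above plus a log of the procedural steps applied to the data. Every field name below is an assumption.

    # Minimal, assumed shape of a self-documenting provenance record.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceRecord:
        first_observed: datetime        # when the attack was first seen
        sensor_id: str                  # where: which sensor reported it
        detection_method: str           # how: signature, heuristic, ...
        processing_steps: list = field(default_factory=list)

        def add_step(self, description):
            """Record each procedural step so the experiment documents itself."""
            self.processing_steps.append(
                (datetime.now(timezone.utc).isoformat(), description))

    rec = ProvenanceRecord(datetime(2012, 3, 1, tzinfo=timezone.utc),
                           "sensor-042", "IPS signature")
    rec.add_step("deduplicated raw reports")
    rec.add_step("joined with binary-reputation dataset")
    print(rec)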


IEEE International Conference on Cloud Computing Technology and Science | 2012

File routing middleware for cloud deduplication

Petros Efstathopoulos

Deduplication technology is maturing and becoming a standard feature of most storage architectures. Many approaches have been proposed to address the deduplication scalability challenges of privately owned storage infrastructure, but as storage moves to the cloud, deduplication mechanisms are expected to scale to thousands of storage nodes. Currently available solutions were not designed to handle such a large scale, while research and practical experience suggest that aiming for global deduplication among thousands of nodes will, almost certainly, lead to high complexity, reduced performance and reduced reliability. Instead, we propose the idea of performing local deduplication operations within each cloud node, and introduce file similarity metrics to determine which node is the best deduplication host for a particular incoming file. This approach reduces the problem of scalable cloud deduplication to a file routing problem, which we can address using a software layer capable of making the necessary routing decisions. Using the proposed file routing middleware layer, the system can achieve three important properties: scale to thousands of nodes, support almost any type of underlying storage node, and make the most of file-level deduplication.
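
The paper's similarity metric is not described in the abstract; the sketch below illustrates the routing idea with an assumed metric, namely the overlap between the chunk fingerprints of an incoming file and those already held by each node (a real system would keep only a compact sample of these fingerprints).

    # Route each incoming file to the node whose stored data it resembles
    # most, so that deduplication can stay local to that node. The
    # similarity measure here (shared chunk hashes) is an assumption.
    import hashlib

    CHUNK_SIZE = 4096

    def chunk_fingerprints(data):
        return {hashlib.sha1(data[i:i + CHUNK_SIZE]).digest()
                for i in range(0, len(data), CHUNK_SIZE)}

    def route_file(data, node_fingerprints):
        """Pick the node with the largest fingerprint overlap."""
        fps = chunk_fingerprints(data)
        best = max(node_fingerprints,
                   key=lambda n: len(fps & node_fingerprints[n]))
        node_fingerprints[best] |= fps     # that node will now store the file
        return best

    nodes = {"node-1": set(), "node-2": set()}
    a = bytes(range(256)) * 800            # two similar files
    b = bytes(range(256)) * 800 + b"appended edit"
    print(route_file(a, nodes))            # stored on some node
    print(route_file(b, nodes))            # routed to the same node as 'a'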


Annual Computer Security Applications Conference | 2017

Lean On Me: Mining Internet Service Dependencies From Large-Scale DNS Data

Matteo Dell'Amico; Leyla Bilge; Ashwin Kumar Kayyoor; Petros Efstathopoulos; Pierre-Antoine Vervier

Most websites, services, and applications have come to rely on Internet services (e.g., DNS, CDN, email, WWW, etc.) offered by third parties. Although employing such services generally improves reliability and cost-effectiveness, it also creates dependencies on service providers, which may expose websites to additional risks, such as DDoS attacks or cascading failures. As cloud services become more popular, an increasing percentage of the overall Internet ecosystem relies on a decreasing number of highly popular services. In our general effort to assess the security risk for a given entity, and motivated by the effects of recent service disruptions, we perform a large-scale analysis of passive and active DNS datasets including more than 2.5 trillion queries in order to discover the dependencies between websites and Internet services. In this paper, we present the findings of our DNS dataset analysis, and attempt to expose important insights about the ecosystem of dependencies. To further understand the nature of dependencies, we perform graph-theoretic analysis on the dependency graph and propose support power, a novel power measure that can quantify the amount of dependence websites and other services have on a particular service. Our DNS analysis findings reveal that the current service ecosystem is dominated by a handful of popular service providers---with Amazon being the leader, by far---whose popularity is steadily increasing. These findings are further supported by our graph analysis results, which also reveal a set of less popular services that many (regional) websites depend on.
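
The abstract does not define the support power measure, so the sketch below uses a naive stand-in: build a dependency graph from (hypothetical) DNS-derived edges and count how many websites depend on each provider, directly or transitively. It conveys the graph-analysis idea only, not the paper's actual metric.

    # Dependency edges point from a site/service to the services it uses.
    # Both the edges and the "count transitive dependents" measure are
    # illustrative stand-ins.
    from collections import defaultdict

    depends_on = {
        "shop.example": ["dns.provider-a.net", "cdn.provider-b.com"],
        "news.example": ["cdn.provider-b.com"],
        "blog.example": ["mail.provider-c.org"],
        "cdn.provider-b.com": ["dns.provider-a.net"],  # services depend on services too
    }

    def transitive_deps(site, graph, seen=None):
        seen = set() if seen is None else seen
        for dep in graph.get(site, []):
            if dep not in seen:
                seen.add(dep)
                transitive_deps(dep, graph, seen)
        return seen

    dependents = defaultdict(int)
    for site in ["shop.example", "news.example", "blog.example"]:
        for provider in transitive_deps(site, depends_on):
            dependents[provider] += 1

    # Providers ranked by how many of the websites rely on them.
    print(sorted(dependents.items(), key=lambda kv: -kv[1]))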


USENIX Annual Technical Conference | 2011

Building a high-performance deduplication system

Fanglu Guo; Petros Efstathopoulos


Archive | 2010

System and method for high performance deduplication indexing

Petros Efstathopoulos; Fanglu Guo


Archive | 2011

Request Batching and Asynchronous Request Execution For Deduplication Servers

Petros Efstathopoulos

Collaboration


Dive into Petros Efstathopoulos's collaborations.

Top Co-Authors

Tai-Ching Li

University of California