Publication


Featured research published by Edoardo Serra.


ACM Transactions on Information and System Security | 2015

Pareto-Optimal Adversarial Defense of Enterprise Systems

Edoardo Serra; Sushil Jajodia; Andrea Pugliese; Antonino Rullo; V. S. Subrahmanian

The National Vulnerability Database (NVD) maintained by the US National Institute of Standards and Technology provides valuable information about vulnerabilities in popular software, as well as any patches available to address these vulnerabilities. Most enterprise security managers today simply patch the most dangerous vulnerabilities; an adversary can thus easily compromise an enterprise by using less important vulnerabilities to penetrate it. In this article, we capture the vulnerabilities in an enterprise as a Vulnerability Dependency Graph (VDG) and show that attack graphs can be expressed in it. We first ask the question: what set of vulnerabilities should an attacker exploit in order to maximize his expected impact? We show that this problem can be solved as an integer linear program. The defender would obviously like to minimize the impact of the worst-case attack mounted by the attacker, but the defender also has an obligation to ensure high productivity within his enterprise. We propose an algorithm that finds a Pareto-optimal solution for the defender, allowing him to simultaneously maximize productivity and minimize the cost of patching products on the enterprise network. We have implemented this framework and show that the runtimes of our computations are all within acceptable time bounds, even for large VDGs containing 30K edges, and that the balance between productivity and impact of attacks is also acceptable.
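
As an illustration of the attacker-side subproblem above, the sketch below brute-forces the maximum-impact attack on a toy vulnerability dependency graph. The article solves this as an integer linear program; here the vulnerability names, impact values, dependency edges, and exploit budget are all hypothetical, chosen only for illustration.

```python
# A minimal sketch of the attacker-side subproblem on a toy vulnerability
# dependency graph (VDG). The paper formulates this as an integer linear
# program; here we simply brute-force small instances for illustration.
# All vulnerability names, impacts, and dependencies below are hypothetical.
from itertools import combinations

impact = {"v1": 5.0, "v2": 3.0, "v3": 8.0, "v4": 2.0}   # expected impact if exploited
prereq = {"v3": {"v1", "v2"}, "v4": {"v1"}}             # v3 needs v1 and v2 first, etc.
budget = 3                                              # max number of exploited vulnerabilities

def feasible(subset):
    """An attack set is feasible if every chosen vulnerability has its
    prerequisites in the set as well (closure under the dependency edges)."""
    return all(prereq.get(v, set()) <= subset for v in subset)

def best_attack():
    best, best_val = frozenset(), 0.0
    vulns = list(impact)
    for k in range(1, budget + 1):
        for combo in combinations(vulns, k):
            s = frozenset(combo)
            if feasible(s):
                val = sum(impact[v] for v in s)
                if val > best_val:
                    best, best_val = s, val
    return best, best_val

if __name__ == "__main__":
    attack, value = best_attack()
    print(sorted(attack), value)   # ['v1', 'v2', 'v3'] 16.0
```

The defender-side algorithm in the article then searches over patch sets, trading the worst-case attacker impact computed this way against the productivity lost by patching.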


Foundations of Information and Knowledge Systems | 2012

Count constraints and the inverse OLAP problem: definition, complexity and a step toward aggregate data exchange

Domenico Saccà; Edoardo Serra; Antonella Guzzo

A typical problem in database theory is to verify whether there exists a relation (or database) instance satisfying a number of given dependency constraints. This problem has recently received renewed interest in the context of data exchange, but the issue of handling constraints on aggregate data has not been investigated much so far, notwithstanding the relevance of aggregate operations in exchange systems. This paper introduces count constraints, which require the results of given count operations on a relation to be within a certain range. Count constraints are defined by a suitable extension of first-order predicate calculus based on set terms, and they are then used in a new decision problem, Inverse OLAP: given a star schema, does there exist a relation instance satisfying a set of given count constraints? The new problem turns out to be NEXP-complete under various conditions: program complexity, data complexity, and combined complexity. Count constraints can also be used in a data exchange setting, where data from the source database are transferred to the target database using aggregate operations.
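
As an illustration of what a count constraint asks of a relation instance, the sketch below checks a candidate instance against a few constraints. The toy fact table, predicates, and bounds are hypothetical; the Inverse OLAP problem itself asks whether any satisfying instance exists at all, which this check does not attempt.

```python
# A minimal sketch of checking count constraints against a candidate relation
# instance. Each constraint asks that the number of tuples matching a given
# condition lie within [lo, hi]. The toy star-schema fact table and the
# constraints themselves are hypothetical, for illustration only.
rows = [  # fact table: (store, product, quantity)
    ("rome", "pasta", 2), ("rome", "wine", 1),
    ("milan", "pasta", 4), ("milan", "wine", 3),
]

constraints = [
    # (description, predicate over a row, lower bound, upper bound)
    ("rows for rome",        lambda r: r[0] == "rome",  1, 2),
    ("rows selling pasta",   lambda r: r[1] == "pasta", 2, 5),
    ("rows with quantity>2", lambda r: r[2] > 2,        0, 3),
]

def satisfies(rows, constraints):
    for name, pred, lo, hi in constraints:
        c = sum(1 for r in rows if pred(r))
        if not (lo <= c <= hi):
            return False, name, c
    return True, None, None

if __name__ == "__main__":
    print(satisfies(rows, constraints))   # (True, None, None)
```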


Theory and Practice of Logic Programming | 2013

A declarative extension of Horn clauses, and its significance for Datalog and its applications

Mirjana Mazuran; Edoardo Serra; Carlo Zaniolo

FS-rules provide a powerful monotonic extension of Horn clauses that supports monotonic aggregates in recursion by reasoning on the multiplicity of occurrences satisfying existential goals. The least fixpoint semantics, and the equivalent least model semantics, hold for logic programs with FS-rules; moreover, generalized notions of stratification and stable models are easily derived when negated goals are allowed. Finally, the generalization of techniques such as the seminaive fixpoint and magic sets makes possible the efficient implementation of Datalog FS, i.e., Datalog with rules with Frequency Support (FS-rules) and stratified negation. A large number of applications that could not be supported efficiently, or could not be expressed at all, in stratified Datalog can now be easily expressed and efficiently supported in Datalog FS, and a powerful Datalog FS system is now being developed at UCLA.
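
To make the idea of a monotonic count aggregate in recursion concrete, the sketch below evaluates one FS-style rule with a naive least-fixpoint loop. The rule, the seed facts, and the friendship relation are hypothetical examples, not taken from the paper, and the loop is naive rather than seminaive.

```python
# A minimal sketch of the kind of monotonic count aggregate in recursion that
# FS-rules support, evaluated with a naive least-fixpoint loop. The example
# rule (hypothetical, not from the paper) is:
#
#   member(X) <- seed(X).
#   member(X) <- X has at least K friends Y with member(Y).
#
# Counting over an existential goal is monotone here, so the least fixpoint is
# well defined, which is the point of the FS extension to Horn clauses.
K = 2
seed = {"ann", "bob"}
friends = {                       # friends[X] = the people X is friends with
    "carl": {"ann", "bob"},
    "dana": {"bob", "carl"},
    "eve":  {"dana"},
}

def least_fixpoint():
    member = set(seed)
    changed = True
    while changed:                # naive (non-seminaive) fixpoint iteration
        changed = False
        for x, fs in friends.items():
            if x not in member and len(fs & member) >= K:
                member.add(x)
                changed = True
    return member

if __name__ == "__main__":
    print(sorted(least_fixpoint()))   # ['ann', 'bob', 'carl', 'dana']
```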


ACM Transactions on Knowledge Discovery From Data | 2013

Solving inverse frequent itemset mining with infrequency constraints via large-scale linear programs

Antonella Guzzo; Luigi Moccia; Domenico Saccà; Edoardo Serra

Inverse frequent set mining (IFM) is the problem of computing a transaction database D satisfying given support constraints for some itemsets, which are typically the frequent ones. This article proposes a new formulation of IFM, called IFMI (IFM with infrequency constraints), where the itemsets that are not listed as frequent are constrained to be infrequent; that is, they must have a support less than or equal to a single specified threshold. An instance of IFMI can be seen as an instance of the original IFM by making explicit the infrequency constraints for the minimal infrequent itemsets, corresponding to the so-called negative generator border defined in the literature. The complexity increase from PSPACE (the complexity of IFM) to NEXP (the complexity of IFMI) is caused by the cardinality of the negative generator border, which can be exponential in the original input size. Therefore, the article introduces a specific problem parameter κ that gives an upper bound on this cardinality, using a hypergraph interpretation in which minimal infrequent itemsets correspond to minimal transversals. By fixing a constant k, the article formulates a k-bounded version of the problem, called k-IFMI, that collects all instances for which the value of the parameter κ is less than or equal to k; its complexity is in PSPACE, as for IFM. The bounded problem is encoded as an integer linear program with a large number of variables (exponential w.r.t. the number of constraints), which is then approximated by relaxing the integrality constraints; the decision problem of solving the resulting linear program is proven to be in NP. To solve the linear program, a column generation technique is used, a variation of the simplex method designed to solve large-scale linear programs, in particular those with a huge number of variables. At each step, the method requires the solution of an auxiliary integer linear program, which is proven to be NP-hard in this case and for which a greedy heuristic is presented. The resulting column generation algorithm scales very well, as evidenced by extensive experimentation, thereby paving the way for its application in real-life scenarios.
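
The sketch below only illustrates what a valid IFMI solution must satisfy, checked by brute force on a toy instance; the article's actual contribution is solving the search problem via a large-scale linear program and column generation. The items, required supports, infrequency threshold, and candidate database are hypothetical.

```python
# A minimal sketch of the constraints an IFM-with-infrequency-constraints
# (IFMI) solution must satisfy, verified by brute force on a toy instance.
# Items, transactions, and thresholds are hypothetical, for illustration only.
from itertools import combinations

items = ["a", "b", "c"]
frequent = {                      # itemset -> required minimum support
    frozenset("a"): 2,
    frozenset("b"): 2,
    frozenset("ab"): 2,
}
infreq_threshold = 1              # every other itemset: support <= 1

db = [frozenset("ab"), frozenset("ab"), frozenset("c")]   # candidate transaction database

def support(itemset, db):
    return sum(1 for t in db if itemset <= t)

def check(db):
    # frequency constraints
    for s, min_sup in frequent.items():
        if support(s, db) < min_sup:
            return False
    # infrequency constraints: all non-listed itemsets must be infrequent
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            s = frozenset(combo)
            if s not in frequent and support(s, db) > infreq_threshold:
                return False
    return True

if __name__ == "__main__":
    print(check(db))   # True
```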


IEEE Transactions on Computational Social Systems | 2015

APE: A Data-Driven, Behavioral Model-Based Anti-Poaching Engine

Noseong Park; Edoardo Serra; Tom Snitch; V. S. Subrahmanian

We consider the problem of protecting a set of animals, such as rhinos and elephants, in a game park using D drones and R ranger patrols (on the ground), with R ≥ D. Using two years of data about animal movements in a game park, we propose the probabilistic spatio-temporal graph (pSTG) model of animal movement behaviors and show how it can be learned from the movement data. Using 17 months of data about poacher behavior, we also learn the probability that a region in the game park will be targeted by poachers. We formalize the anti-poaching problem as that of finding a coordinated route for the drones and ranger patrols that maximizes the expected number of animals protected, given these two models as input, and show that it is NP-complete. Because of this, we fine-tune classical local search and genetic algorithms to the anti-poaching setting by taking specific advantage of the nature of the anti-poaching problem and its objective function. We develop a measure of the quality of an algorithm for routing the drones and ranger patrols, called the “improvement ratio.” We develop a dynamic-programming-based APE_Coord_Route algorithm and show that it performs very well in practice, achieving an improvement ratio of over 90%.
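
A minimal sketch of the movement side of a pSTG-style model is given below: given per-time-step transition probabilities between park regions, it simulates an animal trajectory. The regions, probabilities, and horizon are hypothetical stand-ins for the distributions the paper learns from two years of movement data.

```python
# A minimal sketch of simulating animal movement from learned per-time-step
# transition probabilities, in the spirit of a pSTG model. All regions,
# probabilities, and the horizon are hypothetical; the paper learns these
# distributions from real movement data and optimizes routes against them.
import random

# transitions[time_step][region] = list of (next_region, probability)
transitions = {
    0: {"waterhole": [("grassland", 0.7), ("waterhole", 0.3)],
        "grassland": [("waterhole", 0.4), ("forest", 0.6)]},
    1: {"grassland": [("forest", 0.5), ("grassland", 0.5)],
        "waterhole": [("grassland", 1.0)],
        "forest":    [("forest", 0.8), ("grassland", 0.2)]},
}

def simulate(start, steps, rng=random.Random(0)):
    path = [start]
    for t in range(steps):
        options = transitions[t][path[-1]]
        nxt = rng.choices([r for r, _ in options],
                          weights=[p for _, p in options])[0]
        path.append(nxt)
    return path

if __name__ == "__main__":
    print(simulate("waterhole", steps=2))   # e.g. ['waterhole', 'grassland', 'forest']
```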


International Conference on Data Mining | 2009

An Effective Approach to Inverse Frequent Set Mining

Antonella Guzzo; Domenico Saccà; Edoardo Serra

The inverse frequent set mining problem is the problem of computing a database in which a given collection of itemsets turns out to be frequent. Earlier studies focused on investigating the computational and approximability properties of this problem. In this paper, we approach it from the pragmatic perspective of defining heuristic solution approaches that are effective and scalable in real scenarios. In particular, a general formulation of the problem is considered where minimum and maximum support constraints can be defined on each itemset, and where no bound is given beforehand on the size of the resulting output database. Within this setting, an algorithm is proposed that always satisfies the maximum support constraints but treats the minimum support constraints as soft ones that are enforced as far as possible. Thorough experimentation shows that minimum support constraints are rarely violated in practice, and that this negligible loss in accuracy (which is unavoidable due to the theoretical intractability of the problem) is compensated for by very good scaling performance.
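
The sketch below illustrates the hard/soft constraint policy described above: transactions are added greedily to meet minimum supports, but never in a way that violates a maximum support constraint. It is not the paper's algorithm; the itemsets and bounds are hypothetical.

```python
# A minimal sketch of the soft/hard constraint idea: greedily add transactions
# to meet minimum supports, while never violating any maximum support. This
# only illustrates the policy in the abstract, not the paper's algorithm;
# itemsets and bounds are hypothetical.
constraints = {                      # itemset -> (min support, max support)
    frozenset("ab"): (2, 3),
    frozenset("a"):  (2, 4),
    frozenset("c"):  (1, 1),
}

def support(itemset, db):
    return sum(1 for t in db if itemset <= t)

def violates_max(db):
    return any(support(s, db) > hi for s, (_, hi) in constraints.items())

def greedy_build():
    db = []
    progress = True
    while progress:
        progress = False
        for s, (lo, _) in constraints.items():
            if support(s, db) < lo:                # minimum still unmet
                candidate = db + [s]               # add the itemset itself as a transaction
                if not violates_max(candidate):    # hard maximum constraints
                    db = candidate
                    progress = True
    unmet = [s for s, (lo, _) in constraints.items() if support(s, db) < lo]
    return db, unmet

if __name__ == "__main__":
    db, unmet = greedy_build()
    print(db, unmet)   # unmet is empty here; with tighter max bounds it may not be
```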


Web Search and Data Mining | 2016

Ensemble Models for Data-driven Prediction of Malware Infections

Chanhyun Kang; Noseong Park; B. Aditya Prakash; Edoardo Serra; V. S. Subrahmanian

Given a history of detected malware attacks, can we predict the number of malware infections in a country? Can we do this for different malware and countries? This is an important question with numerous implications for cyber security, from designing better anti-virus software and implementing targeted patches to more accurately measuring the economic impact of breaches. The problem is compounded by the fact that, as externals, we can only detect a fraction of actual malware infections. In this paper we address this problem using data from Symantec covering more than 1.4 million hosts and 50 malware, spread across 2 years and multiple countries. We first carefully design domain-based features from both the malware and machine-host perspectives. Second, inspired by epidemiological and information diffusion models, we design a novel temporal non-linear model for malware spread and detection. Finally, we present ESM, an ensemble-based approach that combines both methods to construct a more accurate algorithm. Using extensive experiments spanning multiple malware and countries, we show that ESM can effectively predict malware infection ratios over time (both the actual number and the trend) up to 4 times better than several baselines on various metrics. Furthermore, ESM's performance is stable and robust even when the number of detected infections is low.
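
As a toy illustration of the ensemble idea, the sketch below combines a feature-driven predictor and an epidemic-style temporal predictor with a single weight chosen on held-out data. Both base predictors and the validation series are hypothetical stand-ins; ESM itself uses carefully engineered domain features and a non-linear spread model.

```python
# A minimal sketch of the ensemble idea: blend a feature-driven predictor and
# an epidemic-style temporal predictor with a weight chosen on held-out data.
# Both base predictors and the validation series are hypothetical stand-ins.
def feature_model(t):        # stand-in for the feature-based predictor
    return 100 + 5 * t

def epidemic_model(t):       # stand-in for the epidemic-style predictor
    return 90 * (1.06 ** t)

validation = [(0, 95), (1, 102), (2, 110), (3, 118)]   # (time, observed infections)

def squared_error(weight):
    return sum((weight * feature_model(t) + (1 - weight) * epidemic_model(t) - y) ** 2
               for t, y in validation)

def fit_weight(grid=51):
    # grid search over convex combination weights in [0, 1]
    return min((i / (grid - 1) for i in range(grid)), key=squared_error)

if __name__ == "__main__":
    w = fit_weight()
    prediction_t4 = w * feature_model(4) + (1 - w) * epidemic_model(4)
    print(round(w, 2), round(prediction_t4, 1))
```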


The Internet of Things | 2017

A Game of Things: Strategic Allocation of Security Resources for IoT

Antonino Rullo; Daniele Midi; Edoardo Serra; Elisa Bertino

In many Internet of Things (IoT) application domains security is a critical requirement, because malicious parties can undermine the effectiveness of IoT-based systems by compromising single components and/or communication channels. Thus, a security infrastructure is needed to ensure the proper functioning of such systems even under attack. However, it is also critical that security be at a reasonable resource and energy cost, as many IoT devices may not have sufficient resources to host expensive security tools. In this paper, we focus on the problem of efficiently and effectively securing IoT networks by carefully allocating security tools. We model our problem according to game theory, and provide a Pareto-optimal solution, in which the cost of the security infrastructure, its energy consumption, and the probability of a successful attack, are minimized. Our experimental evaluation shows that our technique improves the system robustness in terms of packet delivery rate for different network topologies.
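
A minimal sketch of the multi-objective selection step is shown below: given candidate security-resource allocations scored on cost, energy consumption, and probability of a successful attack, it keeps only the Pareto-optimal (non-dominated) ones. The candidate allocations and their scores are hypothetical; the paper derives them from a game-theoretic model of the IoT network.

```python
# A minimal sketch of filtering candidate security-resource allocations down to
# the Pareto-optimal ones, where each candidate is scored on (monetary cost,
# energy consumption, probability of a successful attack), all to be minimized.
# The candidates and their scores are hypothetical.
candidates = {
    "ids_on_gateways":      (3.0, 2.0, 0.30),
    "ids_everywhere":       (9.0, 7.0, 0.05),
    "crypto_plus_gateway":  (5.0, 4.0, 0.10),
    "no_extra_security":    (0.0, 0.0, 0.60),
    "redundant_heavy":      (9.5, 7.5, 0.06),   # dominated by ids_everywhere
}

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return {name: score for name, score in candidates.items()
            if not any(dominates(other, score)
                       for o, other in candidates.items() if o != name)}

if __name__ == "__main__":
    for name, score in pareto_front(candidates).items():
        print(name, score)   # redundant_heavy is filtered out
```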


International Conference on Distributed Computing Systems | 2016

Strategic Security Resource Allocation for Internet of Things

Antonino Rullo; Daniele Midi; Edoardo Serra; Elisa Bertino

In many Internet of Things (IoT) application domains security is a critical requirement, because malicious parties can undermine the effectiveness of IoT-based systems by compromising single components and/or communication channels. Thus, a security infrastructure is needed to ensure the proper functioning of such systems even under attack. In this paper, we focus on the problem of efficiently and effectively securing IoT networks by carefully allocating security tools.


IEEE Transactions on Computational Social Systems | 2014

A Survey of Quantitative Models of Terror Group Behavior and an Analysis of Strategic Disclosure of Behavioral Models

Edoardo Serra; V. S. Subrahmanian

There are many applications (e.g., counter-terrorism) where we can automatically learn a quantitative model of terror group behavior from real-world data. In this paper, we first provide a survey of quantitative models of terrorist groups. To date, the best-known quantitative models of terror group behavior are based on various types of quantitative logic programs. After our survey, we address an important question posed to us by Nobel laureate Tom Schelling: once a set of quantitative logic behavior rules about an adversary has been learned, should these rules be disclosed or not? We develop a game-theoretic framework to answer this question, with a defender who has to decide which rules to release publicly and which ones to keep hidden. We first study the attacker's optimal attack strategy given a set of disclosed rules, and then we study the problem of which rules to disclose so that the attacker's optimal strategy has minimal effectiveness. We study the complexity of both problems, present algorithms to solve both, and then present a (1 - 1/e)-approximation algorithm that (under some restrictions) uses a submodularity property to compute the optimal defender strategy. Finally, we provide experimental results showing that our framework works well in practice; these results are also shown to be statistically significant.
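
The sketch below shows the standard greedy scheme for maximizing a monotone submodular set function under a cardinality budget, which is the mechanism behind the (1 - 1/e) guarantee mentioned above. The rules and the coverage-style objective are hypothetical placeholders, not the paper's actual defender objective.

```python
# A minimal sketch of greedy maximization of a monotone submodular function
# under a cardinality budget. The "rules" and the coverage-style objective are
# hypothetical placeholders, not the paper's actual defender objective.
rules = {                      # rule -> set of attack behaviors it relates to
    "r1": {"ambush", "ied"},
    "r2": {"ied", "kidnap"},
    "r3": {"kidnap"},
    "r4": {"ambush", "arson", "ied"},
}

def coverage(selected):
    """A simple coverage function; coverage is monotone and submodular."""
    return len(set().union(*(rules[r] for r in selected)) if selected else set())

def greedy(budget):
    selected = []
    for _ in range(budget):
        remaining = [r for r in rules if r not in selected]
        if not remaining:
            break
        # pick the rule with the largest marginal gain
        best = max(remaining, key=lambda r: coverage(selected + [r]) - coverage(selected))
        selected.append(best)
    return selected

if __name__ == "__main__":
    print(greedy(budget=2))   # e.g. ['r4', 'r2']
```

With a monotone submodular objective, this greedy selection is guaranteed to achieve at least a (1 - 1/e) fraction of the optimal value under the cardinality constraint.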

Collaboration


Dive into Edoardo Serra's collaborations.

Top Co-Authors

Carlo Zaniolo

University of California

Oxana Korzh

Boise State University
