Publications


Featured research published by Cristian Molinaro.


IEEE Transactions on Knowledge and Data Engineering | 2014

Discovering the Top-k Unexplained Sequences in Time-Stamped Observation Data

Massimiliano Albanese; Cristian Molinaro; Fabio Persia; Antonio Picariello; V. S. Subrahmanian

There are numerous applications where we wish to discover unexpected activities in a sequence of time-stamped observation data; for instance, we may want to detect inexplicable events in transactions at a website or in video of an airport tarmac. In this paper, we start with a known set A of activities (both innocuous and dangerous) that we wish to monitor. However, in addition, we wish to identify “unexplained” subsequences in an observation sequence that are poorly explained (e.g., because they may contain occurrences of activities that have never been seen or anticipated before, i.e., they are not in A). We formally define the probability that a sequence of observations is unexplained (totally or partially) w.r.t. A. We develop efficient algorithms to identify the top-k totally and partially unexplained sequences w.r.t. A. These algorithms leverage theorems that enable us to speed up the search for totally/partially unexplained sequences. We describe experiments using real-world video and cyber-security data sets showing that our approach works well in practice in terms of both running time and accuracy.
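
To make the notion concrete, here is a minimal Python sketch, not the paper's algorithms: known activities are simplified to fixed symbol patterns, a window is "explained" at the positions covered by an occurrence of a known activity, and the k windows with the largest uncovered fraction are returned by brute force. The probabilistic semantics and the pruning theorems of the paper are exactly what this toy version omits; all names and the coverage-based score are illustrative assumptions.

```python
# Toy sketch (not the paper's algorithms): score contiguous windows of an
# observation sequence by how poorly a known activity set A "explains" them,
# then return the k least-explained windows by brute force.
from itertools import combinations

def coverage(window, activities):
    """Mark positions of `window` covered by some occurrence of a known activity."""
    covered = [False] * len(window)
    for act in activities:                      # each activity: a tuple of symbols
        n = len(act)
        for i in range(len(window) - n + 1):
            if tuple(window[i:i + n]) == act:
                for j in range(i, i + n):
                    covered[j] = True
    return covered

def top_k_unexplained(obs, activities, k=3, min_len=3):
    """Rank windows by the fraction of positions no known activity covers."""
    scored = []
    for start, end in combinations(range(len(obs) + 1), 2):
        if end - start < min_len:
            continue
        window = obs[start:end]
        cov = coverage(window, activities)
        unexplained = 1.0 - sum(cov) / len(cov)
        scored.append((unexplained, (start, end)))
    scored.sort(reverse=True)
    return scored[:k]

if __name__ == "__main__":
    observations = list("abxyzabqab")
    known = {("a", "b")}                        # the only modeled activity
    print(top_k_unexplained(observations, known, k=2))
```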


Synthesis Lectures on Data Management | 2012

Incomplete Data and Data Dependencies in Relational Databases

Sergio Greco; Cristian Molinaro; Francesca Spezzano

The chase has long been used as a central tool to analyze dependencies and their effect on queries. It has been applied to different relevant problems in database theory such as query optimization, query containment and equivalence, dependency implication, and database schema design. Recent years have seen a renewed interest in the chase as an important tool in several database applications, such as data exchange and integration, query answering in incomplete data, and many others. It is well known that the chase algorithm might be non-terminating and thus, in order for it to find practical applicability, it is crucial to identify cases where its termination is guaranteed. Another important aspect to consider when dealing with the chase is that it can introduce null values into the database, thereby leading to incomplete data. Thus, in several scenarios where the chase is used the problem of dealing with data dependencies and incomplete data arises. This book discusses fundamental issues concerning data dependencies and incomplete data with a particular focus on the chase and its applications in different database areas. We report recent results about the crucial issue of identifying conditions that guarantee the chase termination. Different database applications where the chase is a central tool are discussed with particular attention devoted to query answering in the presence of data dependencies and database schema design. Table of Contents: Introduction / Relational Databases / Incomplete Databases / The Chase Algorithm / Chase Termination / Data Dependencies and Normal Forms / Universal Repairs / Chase and Database Applications
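
As a point of reference, the following is a small Python sketch of a chase step for tuple-generating dependencies under simplifying assumptions (no equality-generating dependencies, and a hard cap on the number of rounds in place of a termination criterion): whenever a dependency's body maps into the database but its head is not yet satisfied, head atoms are added with fresh labeled nulls for the existential variables. The representation of atoms and dependencies is invented for the example.

```python
# Minimal chase sketch for tuple-generating dependencies (TGDs), only to
# illustrate the idea; it ignores termination issues and equality-generating
# dependencies. Atoms are (relation, (term, ...)); terms starting with an
# uppercase letter are variables, everything else is a constant.
import itertools

_fresh = itertools.count()

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(atoms, db, subst=None):
    """Yield substitutions mapping the variables of `atoms` into facts of `db`."""
    subst = dict(subst or {})
    if not atoms:
        yield subst
        return
    (rel, args), rest = atoms[0], atoms[1:]
    for frel, fargs in db:
        if frel != rel or len(fargs) != len(args):
            continue
        s, ok = dict(subst), True
        for a, v in zip(args, fargs):
            if is_var(a):
                if s.setdefault(a, v) != v:
                    ok = False
                    break
            elif a != v:
                ok = False
                break
        if ok:
            yield from match(rest, db, s)

def chase(db, tgds, max_rounds=100):
    """Apply TGDs until no new facts are produced, introducing labeled nulls."""
    db = set(db)
    for _ in range(max_rounds):
        changed = False
        for body, head in tgds:
            for s in list(match(body, db)):
                # Skip if the head is already satisfied by some extension of s.
                if any(True for _ in match(head, db, s)):
                    continue
                ext = dict(s)
                for _, args in head:
                    for a in args:
                        if is_var(a) and a not in ext:
                            ext[a] = f"null_{next(_fresh)}"   # fresh labeled null
                db |= {(rel, tuple(ext.get(a, a) for a in args))
                       for rel, args in head}
                changed = True
        if not changed:
            break
    return db

if __name__ == "__main__":
    # Every employee works in some department: emp(X) -> dept(X, D)
    tgd = ([("emp", ("X",))], [("dept", ("X", "D"))])
    print(chase({("emp", ("alice",))}, [tgd]))
```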


International Database Engineering and Applications Symposium | 2006

Integrating and Querying P2P Deductive Databases

Luciano Caroprese; Cristian Molinaro; Ester Zumpano

The paper proposes a logic framework for modeling the interaction among deductive databases and computing consistent answers to logic queries in a P2P environment. As usual, data are exchanged among peers by using logical rules, called mapping rules. The novelty of our approach is that only data not violating integrity constraints are exchanged. The (declarative) semantics of a P2P system is defined in terms of weak models. Under this semantics, only facts not making the local databases inconsistent can be imported, and the preferred weak models are the consistent scenarios in which peers import maximal sets of facts not violating integrity constraints. A characterization of the preferred weak model semantics, allowing a P2P system to be modeled as a prioritized logic program, is provided. The proposed framework is then extended in order to also take into account P2P systems in which each peer may be locally inconsistent, i.e., its data do not satisfy some of its constraints. Finally, the complexity of P2P logic queries is investigated.
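
The core idea that only facts not violating integrity constraints are imported can be illustrated with a small Python sketch; this is not the weak-model semantics itself (the greedy import below builds one maximal consistent set, depending on the order in which candidate facts are considered, rather than reasoning over all preferred weak models). Relation names and the constraint are made up for the example.

```python
# Illustrative sketch: a peer imports facts offered by its neighbors only if
# adding them keeps its integrity constraints satisfied. The result is one
# maximal consistent import, built greedily.
def consistent(facts, constraints):
    """All integrity constraints (boolean predicates over the fact set) hold."""
    return all(c(facts) for c in constraints)

def import_facts(local, offered, constraints):
    """Greedily add offered facts that do not make the local database inconsistent."""
    db = set(local)
    for fact in offered:
        if consistent(db | {fact}, constraints):
            db.add(fact)
    return db

if __name__ == "__main__":
    # Constraint: an employee cannot appear in two different departments.
    def one_dept(facts):
        seen = {}
        for rel, (emp, dept) in facts:
            if rel == "works" and seen.setdefault(emp, dept) != dept:
                return False
        return True

    local = {("works", ("alice", "sales"))}
    offered = [("works", ("alice", "hr")), ("works", ("bob", "hr"))]
    print(import_facts(local, offered, [one_dept]))
    # ("works", ("alice", "hr")) is rejected; ("works", ("bob", "hr")) is imported.
```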


International Joint Conference on Artificial Intelligence | 2011

Finding unexplained activities in video

Massimiliano Albanese; Cristian Molinaro; Fabio Persia; Antonio Picariello; V. S. Subrahmanian

Consider a video surveillance application that monitors some location. The application knows a set of activity models (that are either normal or abnormal or both), but in addition, the application wants to find video segments that are unexplained by any of the known activity models; these unexplained video segments may correspond to activities for which no previous activity model existed. In this paper, we formally define what it means for a given video segment to be unexplained (totally or partially) w.r.t. a given set of activity models and a probability threshold. We develop two algorithms, FindTUA and FindPUA, to identify Totally and Partially Unexplained Activities respectively, and show that both algorithms use important pruning methods. We report on experiments with a prototype implementation showing that the algorithms both run efficiently and are accurate.


Theory and Practice of Logic Programming | 2010

NP Datalog: A logic language for expressing search and optimization problems

Sergio Greco; Cristian Molinaro; Irina Trubitsyna; Ester Zumpano

This paper presents a logic language for expressing search and optimization problems. Specifically, first a language obtained by extending (positive) DATALOG with intuitive and efficient constructs (namely, stratified negation, constraints, and exclusive disjunction) is introduced. Next, a further restricted language only using a restricted form of disjunction to define (nondeterministically) subsets (or partitions) of relations is investigated. This language, called NP Datalog, captures the power of DATALOG¬ in expressing search and optimization problems. A system prototype implementing NP Datalog is presented. The system translates NP Datalog queries into Optimization Programming Language (OPL) programs which are executed by the ILOG OPL Development Studio. Our proposal combines easy formulation of problems, expressed by means of a declarative logic language, with the efficiency of the ILOG System. Several experiments show the effectiveness of this approach.


Annals of Mathematics and Artificial Intelligence | 2007

A three-valued semantics for querying and repairing inconsistent databases

Filippo Furfaro; Sergio Greco; Cristian Molinaro

The problem of managing and querying inconsistent databases has been deeply investigated in the last few years. As the problem of consistent query answering is hard in the general case, most of the techniques proposed so far have an exponential complexity. Polynomial techniques have been proposed only for restricted forms of constraints (such as functional dependencies) and queries. In this paper, a technique for computing “approximate” consistent answers in polynomial time is proposed, which works in the presence of a wide class of constraints (namely, full constraints) and Datalog queries. The proposed approach is based on a repairing strategy where update operations assigning an undefined truth value to the “reliability” of tuples are allowed, along with updates inserting or deleting tuples. The result of a repair can be viewed as a three-valued database which satisfies the specified constraints. In this regard, a new semantics (namely, partial semantics) is introduced for constraint satisfaction in the context of three-valued databases, which aims at capturing the intuitive meaning of constraints under three-valued logic. It is shown that, in order to compute “approximate” consistent query answers, it suffices to evaluate queries by taking into account a unique repair (called deterministic repair), which in some sense “summarizes” all the possible repairs. The answers so obtained are “approximate” in the sense that they are safe (true and false atoms in the answers are, respectively, true and false under the classical two-valued semantics), but not complete.
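
A hedged sketch of the repairing idea, under simplifying assumptions: for a single functional dependency, the reliability of every tuple involved in a violation is set to undefined rather than choosing which tuple to delete, yielding a three-valued database. This only illustrates the flavor of the approach and is not the paper's deterministic-repair construction; the relation and attribute names are invented.

```python
# Instead of deciding which of two FD-conflicting tuples to remove, mark the
# reliability of all tuples in the conflict as "undefined"; the remaining
# tuples stay true. Answers computed from such a three-valued database are
# safe with respect to every classical repair.
UNDEFINED = "undefined"

def repair_fd(tuples, key_idx, value_idx):
    """Return {tuple: truth value} for the FD key -> value."""
    truth = {t: True for t in tuples}
    by_key = {}
    for t in tuples:
        by_key.setdefault(t[key_idx], []).append(t)
    for group in by_key.values():
        if len({t[value_idx] for t in group}) > 1:   # FD violated in this group
            for t in group:
                truth[t] = UNDEFINED
    return truth

if __name__ == "__main__":
    # FD: ssn -> city, over tuples (ssn, name, city)
    emp = [("123", "alice", "rome"), ("123", "alice", "milan"), ("456", "bob", "rome")]
    print(repair_fd(emp, key_idx=0, value_idx=2))
    # ("456", "bob", "rome") stays true; the two "123" tuples become undefined.
```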


ACM Transactions on Computational Logic | 2013

Using Generalized Annotated Programs to Solve Social Network Diffusion Optimization Problems

Paulo Shakarian; Matthias Broecheler; V. S. Subrahmanian; Cristian Molinaro

There has been extensive work in many different fields on how phenomena of interest (e.g., diseases, innovation, product adoption) “diffuse” through a social network. As social networks increasingly become a fabric of society, there is a need to make “optimal” decisions with respect to an observed model of diffusion. For example, in epidemiology, officials want to find a set of k individuals in a social network which, if treated, would minimize spread of a disease. In marketing, campaign managers try to identify a set of k customers that, if given a free sample, would generate maximal “buzz” about the product. In this article, we first show that the well-known Generalized Annotated Program (GAP) paradigm can be used to express many existing diffusion models. We then define a class of problems called Social Network Diffusion Optimization Problems (SNDOPs). SNDOPs have four parts: (i) a diffusion model expressed as a GAP, (ii) an objective function we want to optimize with respect to a given diffusion model, (iii) an integer k > 0 describing resources (e.g., medication) that can be placed at nodes, and (iv) a logical condition VC that governs which nodes can have a resource (e.g., only children above the age of 5 can be treated with a given medication). We study the computational complexity of SNDOPs and show both NP-completeness results as well as results on complexity of approximation. We then develop an exact and a heuristic algorithm to solve a large class of SNDOP problems and show that our GREEDY-SNDOPs algorithm achieves the best possible approximation ratio that a polynomial algorithm can achieve (unless P = NP). We conclude with a prototype experimental implementation to solve SNDOPs that looks at a real-world Wikipedia dataset consisting of over 103,000 edges.
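
For illustration, the sketch below shows the generic greedy pattern behind such algorithms: repeatedly add the eligible node with the largest marginal gain of the objective until k nodes are chosen. The diffusion model (one-hop spread) and the objective (number of reached nodes) are toy stand-ins, not the GAP-based models of the article, and the graph and vertex condition are made up for the example.

```python
# Generic greedy sketch: pick up to k nodes, each time adding the eligible
# node whose inclusion most increases the objective (here, spread size).
def spread(graph, seeds):
    """Toy diffusion: a seed 'reaches' itself and its direct neighbors."""
    reached = set(seeds)
    for s in seeds:
        reached |= set(graph.get(s, ()))
    return reached

def greedy_sndop(graph, k, eligible):
    """Greedily choose up to k eligible nodes by marginal gain of |spread|."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, 0
        for v in graph:
            if v in chosen or not eligible(v):
                continue
            gain = len(spread(graph, chosen | {v})) - len(spread(graph, chosen))
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            break
        chosen.add(best)
    return chosen

if __name__ == "__main__":
    g = {"a": ["b", "c"], "b": ["c"], "c": [], "d": ["e"], "e": []}
    # Vertex condition VC: every node except "a" is eligible for a resource.
    print(greedy_sndop(g, k=2, eligible=lambda v: v != "a"))
```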


Theory and Practice of Logic Programming | 2013

Logic programming with function symbols: Checking termination of bottom-up evaluation through program adornments

Sergio Greco; Cristian Molinaro; Irina Trubitsyna

Recent years have witnessed an increasing interest in enhancing answer set solvers by allowing function symbols. Since the introduction of function symbols makes common inference tasks undecidable, research has focused on identifying classes of programs allowing only a restricted use of function symbols while ensuring decidability of common inference tasks. Finitely-ground programs, introduced in Calimeri et al. (2008), are guaranteed to admit a finite number of stable models, each of them of finite size. Stable models of such programs can be computed and thus common inference tasks become decidable. Unfortunately, checking whether a program is finitely-ground is semi-decidable. This has led to several decidable criteria, called termination criteria, providing sufficient conditions for a program to be finitely-ground. This paper presents a new technique that, used in conjunction with current termination criteria, allows us to detect more programs as finitely-ground. Specifically, the proposed technique takes a logic program P and transforms it into an adorned program, with the aim of applying termination criteria to the adorned program rather than to P. The transformation is sound in that if the adorned program satisfies a certain termination criterion, then the original program is finitely-ground. Importantly, applying termination criteria to adorned programs rather than the original ones strictly enlarges the class of programs recognized as finitely-ground.


Data and Knowledge Engineering | 2010

Polynomial time queries over inconsistent databases with functional dependencies and foreign keys

Cristian Molinaro; Sergio Greco

This paper addresses the problem of efficiently computing consistent answers to queries over relational databases which may be inconsistent with respect to functional dependencies and foreign key constraints. Since consistent query answers over inconsistent databases are obtained from repaired databases, we first present a repair strategy. More specifically, in this paper we consider particular sets of functional dependencies, called canonical, and a repair strategy whereby only tuple updates and insertions are allowed in order to restore consistency: if foreign key constraints are violated, new tuples (possibly containing null values) are inserted into the database, whereas if functional dependency violations occur, tuple updates (possibly introducing unknown values, i.e. special symbols which can take values from a limited set of constants of the source database) are performed. Therefore, we propose a semantics of constraint satisfaction for incomplete databases containing null and unknown values since the repair process can lead to such databases. The proposed approach allows us to obtain a unique (incomplete) repaired database which may be computed in polynomial time. Drawing on the results on the complexity of querying incomplete databases containing OR-objects, we identify classes of constraints for which the consistent answers to particular classes of conjunctive queries can be computed in polynomial time.
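
The repair strategy can be illustrated with a minimal Python sketch under strong simplifying assumptions (one functional dependency, one foreign key, invented relation shapes): values that disagree on the functional dependency are replaced by an "unknown" marker, and dangling foreign-key references are satisfied by inserting tuples padded with nulls. This is only a sketch of the idea, not the paper's repair semantics.

```python
# Restore consistency by updates and insertions only: FD violations are
# repaired by replacing disagreeing values with an "unknown" marker, FK
# violations by inserting referenced tuples padded with nulls.
UNKNOWN, NULL = "@unknown", None

def repair(emp, dept):
    """emp: (name, dept_name, city) with FD name -> city; FK emp.dept_name -> dept.name."""
    # FD repair: if a name occurs with different cities, set its city to UNKNOWN.
    cities = {}
    for name, _, city in emp:
        cities.setdefault(name, set()).add(city)
    emp = [(n, d, c if len(cities[n]) == 1 else UNKNOWN) for n, d, c in emp]
    # FK repair: insert a dept tuple (with a null budget) for each dangling reference.
    dept_names = {name for name, _ in dept}
    dept = list(dept) + [(d, NULL) for d in {d for _, d, _ in emp} - dept_names]
    return emp, dept

if __name__ == "__main__":
    emp = [("alice", "sales", "rome"), ("alice", "sales", "milan"), ("bob", "hr", "rome")]
    dept = [("sales", 100)]
    print(repair(emp, dept))   # alice's city becomes @unknown; ("hr", None) is inserted.
```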


Scalable Uncertainty Management | 2013

Aggregate Count Queries in Probabilistic Spatio-temporal Databases

John Grant; Cristian Molinaro; Francesco Parisi

The SPOT database concept was defined several years ago to provide a declarative framework for probabilistic spatio-temporal databases where even the probabilities are uncertain. Earlier work on SPOT focused on the efficient processing of selection queries and updates. In this paper, we deal with aggregate count queries. First, we propose three alternative semantics for the meaning of such a query. Then, we provide polynomial time algorithms for answering count queries under the various semantics and discuss complexity issues.

Collaboration


Dive into Cristian Molinaro's collaborations.

Top Co-Authors

Gerardo I. Simari, Universidad Nacional del Sur

Antonio Picariello, University of Naples Federico II