Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Gianluigi Greco is active.

Publication


Featured research published by Gianluigi Greco.


international conference on management of data | 2005

The INFOMIX system for advanced integration of incomplete and inconsistent data

Nicola Leone; Gianluigi Greco; Giovambattista Ianni; Vincenzino Lio; Giorgio Terracina; Thomas Eiter; Wolfgang Faber; Michael Fink; Georg Gottlob; Riccardo Rosati; Domenico Lembo; Maurizio Lenzerini; Marco Ruzzi; Edyta Kalka; Bartosz Nowicki; Witold Staniszkis

The task of an information integration system is to combine data residing at different sources, providing the user with a unified view of them, called global schema. Users formulate queries over the global schema, and the system suitably queries the sources, providing an answer to the user, who is not obliged to have any information about the sources. Recent developments in IT, such as the expansion of the Internet and the World Wide Web, have made available to users a huge number of information sources, generally autonomous, heterogeneous and widely distributed: as a consequence, information integration has emerged as a crucial issue in many application domains, e.g., distributed databases, cooperative information systems, data warehousing, or on-demand computing. Recent estimates view information integration to be a $10 billion market by 2006 [14].


theoretical aspects of rationality and knowledge | 2003

Pure Nash equilibria: hard and easy games

Georg Gottlob; Gianluigi Greco; Francesco Scarcello

In this paper we investigate complexity issues related to pure Nash equilibria of strategic games. We show that, even in very restrictive settings, determining whether a game has a pure Nash equilibrium is NP-hard, while deciding whether a game has a strong Nash equilibrium is Σ₂ᵖ-complete. We then study practically relevant restrictions that lower the complexity. In particular, we are interested in quantitative and qualitative restrictions of the way each player's moves depend on the moves of other players. We say that a game has small neighborhood if the utility function for each player depends only on (the actions of) a logarithmically small number of other players. The dependency structure of a game 𝒢 can be expressed by a graph G(𝒢) or by a hypergraph H(𝒢). Among other results, we show that if 𝒢 has small neighborhood and if H(𝒢) has bounded hypertree width (or if G(𝒢) has bounded treewidth), then finding pure Nash and Pareto equilibria is feasible in polynomial time. If the game is graphical, then these problems are LOGCFL-complete and thus in the class NC2 of highly parallelizable problems.


business process management | 2007

Process mining based on clustering: a quest for precision

Ana Karla Alves de Medeiros; Antonella Guzzo; Gianluigi Greco; Wil M. P. van der Aalst; A.J.M.M. Weijters; Boudewijn F. van Dongen; Domenico Saccà

Process mining techniques attempt to extract non-trivial and useful information from event logs recorded by information systems. For example, there are many process mining techniques to automatically discover a process model based on some event log. Most of these algorithms perform well on structured processes with little disturbance. However, in reality it is difficult to determine the scope of a process, and typically there are all kinds of disturbances. As a result, process mining techniques produce spaghetti-like models that are difficult to read and that attempt to merge unrelated cases. To address these problems, we use an approach where the event log is clustered iteratively such that each of the resulting clusters corresponds to a coherent set of cases that can be adequately represented by a process model. The approach allows for different clustering and process discovery algorithms. In this paper, we provide a particular clustering algorithm that avoids over-generalization and a process discovery algorithm that is much more robust than the algorithms described in the literature [1]. The whole approach has been implemented in ProM.


international conference on logic programming | 2003

Efficient evaluation of logic programs for querying data integration systems

Thomas Eiter; Michael Fink; Gianluigi Greco; Domenico Lembo

Many data integration systems provide transparent access to heterogeneous data sources through a unified view of all data in terms of a global schema, which may be equipped with integrity constraints on the data. Since these constraints might be violated by the data retrieved from the sources, methods for handling such a situation are needed. To this end, recent approaches model query answering in data integration systems in terms of nonmonotonic logic programs. However, while the theoretical aspects have been deeply analyzed, there are no real implementations of this approach yet. A problem is that the reasoning tasks modeling query answering are computationally expensive in general, and that a direct evaluation on deductive database systems is infeasible for large data sets. In this paper, we investigate techniques which make user query answering by logic programs effective. We develop pruning and localization methods for the data which need to be processed in a deductive system, and a technique for the recombination of the results on a relational database engine. Experiments indicate the viability of our methods and encourage further research into this approach.


business process management | 2005

Mining hierarchies of models: from abstract views to concrete specifications

Gianluigi Greco; Antonella Guzzo; Luigi Pontieri

Process mining techniques have been receiving great attention in the literature for their ability to automatically support process (re)design. The output of these techniques is a concrete workflow schema that models all the possible execution scenarios registered in the logs, and that can be profitably used to support forthcoming enactments. In this paper, we approach process mining from a slightly different perspective. Indeed, we propose an approach to process mining that combines novel discovery strategies with abstraction methods, with the aim of producing hierarchical views of the process that satisfactorily capture its behavior at different levels of detail. Therefore, at the highest level of detail, the mined model can support the design of concrete workflows, while at lower levels of detail the views can be used in advanced business process platforms to support monitoring and analysis. Our approach consists of several algorithms, which have been integrated into a system architecture that is also described in the paper.


international conference on logic programming | 2004

Enhancing the Magic-Set Method for Disjunctive Datalog Programs

Chiara Cumbo; Wolfgang Faber; Gianluigi Greco; Nicola Leone

We present a new technique for the optimization of (partially) bound queries over disjunctive Datalog programs. The technique exploits the propagation of query bindings, and extends the Magic-Set optimization technique (originally defined for non-disjunctive programs) to the disjunctive case, substantially improving on previously defined approaches.


pacific-asia conference on knowledge discovery and data mining | 2004

Mining Expressive Process Models by Clustering Workflow Traces

Gianluigi Greco; Antonella Guzzo; Luigi Pontieri; Domenico Saccà

We propose a general framework for the process mining problem which relaxes the assumption of workflow schemas with local constraints only, making it applicable to more expressive specification languages, independently of the particular syntax adopted. In fact, we provide an effective technique for process mining based on the rather unexplored concept of clustering workflow executions, in which clusters of executions sharing the same structure and the same unexpected behavior (w.r.t. the local properties) are seen as a witness of the existence of global constraints.


Journal of Computer and System Sciences | 2007

Magic Sets and their application to data integration

Wolfgang Faber; Gianluigi Greco; Nicola Leone

Recently, effective methods model query answering in data integration systems and inconsistent databases in terms of cautious reasoning over Datalog∨ programs under the stable model semantics. Since this task is computationally expensive (co-NP-complete), there is a clear need for suitable query optimization techniques in order to make such methods feasible for data-intensive applications. We propose a generalization of the well-known Magic Sets technique to Datalog∨ programs with (possibly unstratified) negation under the stable model semantics. Our technique produces a new program whose evaluation is more efficient (due to a smaller instantiation) in general, while preserving full query equivalence for both brave and cautious reasoning, provided that the original program is consistent. Soundness under cautious reasoning is always guaranteed, even if the original program is inconsistent. In order to formally prove the correctness of our Magic Sets transformation, we introduce a novel notion of modularity for Datalog∨ under the stable model semantics, which is more suitable for query answering than previous module definitions. We prove that a query on such a module can be evaluated independently from the rest of the program, while preserving soundness under cautious reasoning. Importantly, for consistent programs, both soundness and completeness are guaranteed for brave and cautious reasoning. Our Magic Sets optimization constitutes an effective method for enhancing the performance of data integration systems in which query answering is carried out by means of cautious reasoning over Datalog∨ programs. In fact, results of experiments in the EU project INFOMIX show that Magic Sets are fundamental for the scalability of the system.


ACM Transactions on Database Systems | 2008

Repair localization for query answering from inconsistent databases

Thomas Eiter; Michael Fink; Gianluigi Greco; Domenico Lembo

Query answering from inconsistent databases amounts to finding “meaningful” answers to queries posed over database instances that do not satisfy integrity constraints specified over their schema. A declarative approach to this problem relies on the notion of repair, that is, a database that satisfies the integrity constraints and is obtained from the original inconsistent database by “minimally” adding and/or deleting tuples. Consistent answers to a user query are those answers that are in the evaluation of the query over each repair. Motivated by the fact that computing consistent answers from inconsistent databases is in general intractable, the present paper investigates techniques that localize the difficult part of the computation on a small fragment of the database at hand, called the “affected” part. Based on a number of localization results, an approach to query answering from inconsistent data is presented, in which the query is evaluated over each of the repairs of the affected part only, augmented with the part that is not affected. Single query results are then suitably recombined. For some relevant settings, techniques are also discussed to factorize repairs into components that can be processed independently of one another, thereby guaranteeing an exponential gain w.r.t. the basic approach, which is not based on localization. The effectiveness of the results is demonstrated for consistent query answering over expressive schemas, based on logic programming specifications as proposed in the literature.


data and knowledge engineering | 2008

Mining taxonomies of process models

Gianluigi Greco; Antonella Guzzo; Luigi Pontieri
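As a concrete illustration of the repair-based semantics of consistent query answering used in several of the papers above, here is a minimal, self-contained sketch. The relation, data, and helper names are hypothetical, chosen only for this example; real systems avoid materializing all repairs, which is precisely the problem the localization and Magic Sets techniques above address.

```python
from itertools import product

# Toy inconsistent instance of a relation person(name, city),
# where "name" is declared to be a key. The key is violated for 'ann'.
db = [("ann", "rome"), ("ann", "milan"), ("bob", "turin")]

def repairs(tuples):
    """Enumerate the key repairs: minimally delete tuples so that
    exactly one tuple survives for each key value."""
    groups = {}
    for t in tuples:
        groups.setdefault(t[0], []).append(t)
    for choice in product(*groups.values()):
        yield set(choice)

def consistent_answers(query, tuples):
    """Cautious semantics: an answer is consistent iff it is returned
    by the query over *every* repair."""
    answer_sets = [query(r) for r in repairs(tuples)]
    result = set(answer_sets[0])
    for s in answer_sets[1:]:
        result &= s
    return result

# 'turin' is the only consistent city: 'rome' and 'milan' each
# disappear in one of the two repairs of the conflicting 'ann' tuples.
cities = consistent_answers(lambda r: {city for _, city in r}, db)

# Every name is a consistent answer: each key group keeps one tuple.
names = consistent_answers(lambda r: {name for name, _ in r}, db)
```

The exponential blow-up is visible even here: with n independent key conflicts there are 2^n repairs, which is why repair localization restricts this enumeration to the "affected" part of the database.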

Collaboration


Dive into Gianluigi Greco's collaborations.

Top Co-Authors

Luigi Pontieri

ICAR-CNR, National Research Council of Italy

View shared research outputs