Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Domenico Saccà is active.

Publications


Featured research published by Domenico Saccà.


Symposium on Principles of Database Systems | 1990

Stable models and non-determinism in logic programs with negation

Domenico Saccà; Carlo Zaniolo

Previous researchers have proposed generalizations of Horn clause logic to support negation and non-determinism as two separate extensions. In this paper, we show that the stable model semantics for logic programs provides a unified basis for the treatment of both concepts. First, we introduce the concepts of partial models, stable models, strongly founded models and deterministic models and other interesting classes of partial models and study their relationships. We show that the maximal deterministic model of a program is a subset of the intersection of all its stable models and that the well-founded model of a program is a subset of its maximal deterministic model. Then, we show that the use of stable models subsumes the use of the non-deterministic choice construct in LDL and provides an alternative definition of the semantics of this construct. Finally, we provide a constructive definition for stable models with the introduction of a procedure, called backtracking fixpoint, that non-deterministically constructs a total stable model, if such a model exists.
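
As a rough illustration of the stable model semantics the paper builds on, the sketch below checks candidate interpretations of a small ground program against the Gelfond-Lifschitz reduct; the rule encoding and the brute-force enumeration are illustrative assumptions and do not reproduce the backtracking fixpoint procedure introduced in the paper.

```python
from itertools import chain, combinations

# A ground rule is (head, positive_body, negative_body), with atoms as strings.

def reduct(rules, interp):
    """Gelfond-Lifschitz reduct: drop rules whose negated atoms intersect the
    interpretation, then strip the remaining negative literals."""
    return [(h, pos) for (h, pos, neg) in rules if not (set(neg) & interp)]

def least_model(positive_rules):
    """Least model of a negation-free ground program via a naive fixpoint."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(rules, interp):
    """An interpretation is stable iff it equals the least model of its reduct."""
    return least_model(reduct(rules, interp)) == interp

def stable_models(rules):
    """Brute-force enumeration over all subsets of atoms (exponential; sketch only)."""
    atoms = {h for h, _, _ in rules} | {a for _, p, n in rules for a in chain(p, n)}
    for k in range(len(atoms) + 1):
        for subset in combinations(sorted(atoms), k):
            if is_stable(rules, set(subset)):
                yield set(subset)

# Example of non-determinism: p :- not q.   q :- not p.
program = [("p", (), ("q",)), ("q", (), ("p",))]
print(list(stable_models(program)))   # [{'p'}, {'q'}]: two alternative stable models
```

The two stable models of the example mirror the paper's point that stable model semantics naturally captures a non-deterministic choice between alternatives.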


International Conference on Database Theory | 1986

The generalized counting method for recursive logic queries

Domenico Saccà; Carlo Zaniolo

This paper treats the problem of efficiently implementing recursive Horn clause queries, including those with function symbols. In particular, the situation is studied where the initial bindings of the arguments in the recursive query goal can be used in the top-down (as in backward chaining) execution phase to improve the efficiency and, often, to guarantee the termination of the forward chaining execution phase that implements the fixpoint computation for the recursive query. A general method is given for solving these queries; the method performs an analysis of the binding passing behavior of the query, and then reschedules the overall execution as two fixpoint computations derived as results of this analysis. The first such computation emulates the propagation of bindings in the top-down phase; the second generates the desired answer by proving the goals left unsolved during the previous step. Finally, sufficient conditions for safety are derived, to ensure that the fixpoint computations are completed in a finite number of steps.
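
The sketch below is a minimal Python rendering of the general two-fixpoint idea: a first fixpoint propagates the query binding top-down, and a second, restricted bottom-up fixpoint produces the answers. The parent/ancestor example and relation names are assumptions for illustration; the paper's generalized counting method additionally handles function symbols and more elaborate binding-passing patterns, which this sketch does not.

```python
# Hypothetical EDB: parent(X, Y) means X is a parent of Y.
parent = {("ann", "bob"), ("bob", "carl"), ("carl", "dora"), ("eve", "frank")}
query_const = "ann"   # bound first argument of the query ancestor(ann, Y)

# Fixpoint 1: emulate top-down binding propagation (a set of relevant constants).
relevant = {query_const}
changed = True
while changed:
    changed = False
    for x, z in parent:
        if x in relevant and z not in relevant:
            relevant.add(z)
            changed = True

# Fixpoint 2: bottom-up evaluation of ancestor, restricted to relevant constants.
ancestor = {(x, y) for (x, y) in parent if x in relevant}
changed = True
while changed:
    changed = False
    for x, z in parent:
        if x not in relevant:
            continue
        for (z2, y) in list(ancestor):
            if z2 == z and (x, y) not in ancestor:
                ancestor.add((x, y))
                changed = True

print({y for (x, y) in ancestor if x == query_const})
# {'bob', 'carl', 'dora'}; the unrelated ('eve', 'frank') branch is never explored
```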


SIAM Journal on Computing | 1986

Minimal representation of directed hypergraphs

Giorgio Ausiello; Alessandro D'Atri; Domenico Saccà

In this paper the problem of minimal representations for particular classes of directed hypergraphs is analyzed. Various concepts of minimal representations of directed hypergraphs (called minimal equivalent hypergraphs) are introduced as extensions to the concepts of transitive reduction and minimum equivalent graph of directed graphs. In particular, we consider equivalent hypergraphs which are minimal with respect to all parameters which may be adopted to characterize a given hypergraph (number of hyperarcs, number of adjacency lists required for the representation, length of the overall description, etc.). The relationships among the various concepts of minimality are discussed and their computational properties are analyzed. In order to derive such results, a graph representation of hypergraphs is introduced.
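
As context for the parameters named in the abstract, the sketch below represents a small directed hypergraph in Python and computes three of the size measures a minimal representation could be judged by; the exact definitions used here (hyperarc count, number of distinct source sets, total number of node occurrences) are illustrative assumptions rather than the paper's formal ones.

```python
from collections import defaultdict

# A directed hyperarc goes from a set of source nodes to a single target node,
# the usual model for functional dependencies X -> a.
hyperarcs = [
    (frozenset({"a", "b"}), "c"),
    (frozenset({"a", "b"}), "d"),
    (frozenset({"c"}), "e"),
]

# Adjacency-list view: group targets under each distinct source set.
adjacency = defaultdict(set)
for sources, target in hyperarcs:
    adjacency[sources].add(target)

num_hyperarcs = len(hyperarcs)                               # hyperarcs in the description
num_adjacency_lists = len(adjacency)                         # distinct source sets
description_length = sum(len(s) + 1 for s, _ in hyperarcs)   # node occurrences overall

print(num_hyperarcs, num_adjacency_lists, description_length)   # 3 2 8
```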


International Conference on Management of Data | 1987

Magic counting methods

Domenico Saccà; Carlo Zaniolo

The problem considered is that of implementing recursive queries, expressed in a logic-based language, by efficient fixpoint computations. In particular, the situation is studied where the initial bindings in the recursive predicate can be used to restrict the search space and ensure safety of execution. Two key techniques previously proposed to solve this problem are (i) the highly efficient counting method, and (ii) the magic set method, which is safe in a wider range of situations than (i). In this paper, we present a family of methods, called the magic counting methods, that combines the advantages of (i) and (ii). This is made possible by the similarity of the strategies used by the counting method and the magic set method for propagating the bindings. This paper introduces these new methods, examines their computational complexity, and illustrates the trade-offs between the family members and their superiority over the old methods.


Journal of the ACM | 1983

Graph Algorithms for Functional Dependency Manipulation

Giorgio Ausiello; Alessandro D'Atri; Domenico Saccà

A graph-theoretic approach for the representation of functional dependencies in relational databases is introduced and applied in the construction of algorithms for manipulating dependencies. This approach allows a homogeneous treatment of several problems (closure, minimization, etc.), which leads to simpler proofs and, in some cases, more efficient algorithms than in the current literature.
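
For context, the sketch below is the standard textbook attribute-closure fixpoint, one of the manipulation problems (closure, minimization, etc.) the paper addresses; it does not use the FD-graph representation that the paper introduces to obtain its simpler proofs and faster algorithms.

```python
def attribute_closure(attrs, fds):
    """Closure of an attribute set under functional dependencies.
    fds is an iterable of (lhs, rhs) pairs of attribute sets, e.g. ({'A'}, {'B'})."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= closure and not set(rhs) <= closure:
                closure |= set(rhs)
                changed = True
    return closure

# Hypothetical dependencies: A -> B, BC -> D, D -> A.
fds = [({"A"}, {"B"}), ({"B", "C"}, {"D"}), ({"D"}, {"A"})]
print(attribute_closure({"A", "C"}, fds))   # {'A', 'B', 'C', 'D'} (set order may vary)
```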


International Database Engineering and Applications Symposium | 2013

Big data: a research agenda

Alfredo Cuzzocrea; Domenico Saccà; Jeffrey D. Ullman

Recently, a great deal of interest in Big Data has arisen, mainly driven by a wide range of research problems strongly related to real-life applications and systems, such as representing, modeling, processing, querying and mining massive, distributed, large-scale repositories (mostly of an unstructured nature). Inspired by this trend, in this paper we discuss three important aspects of Big Data research, namely OLAP over Big Data, Big Data Posting, and Privacy of Big Data. We also depict future research directions, hence implicitly defining a research agenda aimed at the future challenges in this research field.


Business Process Management | 2007

Process mining based on clustering: a quest for precision

Ana Karla Alves de Medeiros; Antonella Guzzo; Gianluigi Greco; Wil M. P. van der Aalst; A.J.M.M. Weijters; Boudewijn F. van Dongen; Domenico Saccà

Process mining techniques attempt to extract non-trivial and useful information from event logs recorded by information systems. For example, there are many process mining techniques to automatically discover a process model based on some event log. Most of these algorithms perform well on structured processes with few disturbances. However, in reality it is difficult to determine the scope of a process, and typically there are all kinds of disturbances. As a result, process mining techniques produce spaghetti-like models that are difficult to read and that attempt to merge unrelated cases. To address these problems, we use an approach where the event log is clustered iteratively such that each of the resulting clusters corresponds to a coherent set of cases that can be adequately represented by a process model. The approach allows for different clustering and process discovery algorithms. In this paper, we provide a particular clustering algorithm that avoids over-generalization and a process discovery algorithm that is much more robust than the algorithms described in the literature [1]. The whole approach has been implemented in ProM.
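
The sketch below illustrates the general shape of the approach in miniature: traces are first grouped into clusters of similar cases, and a directly-follows footprint (the raw material of discovery algorithms such as the alpha miner) is then derived per cluster. The toy log, the Jaccard-based greedy clustering and the 0.5 threshold are assumptions for illustration only; they are not the clustering or discovery algorithms of the paper or of ProM.

```python
def jaccard(a, b):
    """Similarity between two activity sets."""
    return len(a & b) / len(a | b)

# A hypothetical event log: each trace is the activity sequence of one case.
log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["order", "pay", "ship"],
    ["order", "pay", "cancel"],
]

# Step 1: greedy clustering on activity sets (representative = first member's activities).
clusters = []   # list of (representative_activity_set, member_traces)
for trace in log:
    acts = set(trace)
    for rep, members in clusters:
        if jaccard(acts, rep) >= 0.5:
            members.append(trace)
            break
    else:
        clusters.append((acts, [trace]))

# Step 2: one directly-follows footprint per cluster instead of one spaghetti model.
for i, (_, members) in enumerate(clusters):
    follows = {(t[k], t[k + 1]) for t in members for k in range(len(t) - 1)}
    print(f"cluster {i}: {len(members)} traces, directly-follows = {sorted(follows)}")
```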


International Database Engineering and Applications Symposium | 1998

Semi-automatic, semantic discovery of properties from database schemes

Luigi Palopoli; Domenico Saccà; Domenico Ursino

An important tool for the integration of large federated database systems is a global dictionary describing all the involved schemes within a unified framework. The first step in the construction of such a dictionary is the discovery of the properties holding among objects in different schemes. This paper presents novel algorithms to discover possible synonyms, homonyms and inclusions. In addition, the paper also deals with another crucial step in the construction of a dictionary: schema integration. The approach proposed for this step exploits the inter-schema properties discovered in the previous step to achieve schema integration. This approach is also concerned with producing suitable abstractions that structure the description of the global dictionary into a hierarchy of concepts, so as to yield a more flexible, uniform view of the attached databases. The above two steps are interleaved with other steps (mainly devoted to interfacing with database administrators and to validating the discovered properties) and have been experimented with for the construction of a global dictionary for a large number of public administration database systems.
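
As a toy illustration of the first discovery step, the sketch below flags candidate synonym pairs between two schemes purely by lexical similarity of object names (Python's difflib); the schema names and the 0.7 threshold are assumptions, and the paper's actual approach derives synonyms, homonyms and inclusions semantically and validates them with the database administrators, which simple string matching cannot replace.

```python
from difflib import SequenceMatcher
from itertools import product

# Hypothetical object names taken from two database schemes to be integrated.
schema_a = ["employee", "department", "salary"]
schema_b = ["employe", "dept", "wage", "department_code"]

def name_similarity(x, y):
    """Lexical similarity in [0, 1] between two object names."""
    return SequenceMatcher(None, x, y).ratio()

# Candidate synonyms: cross-schema pairs whose names are lexically close.
candidates = [(a, b, round(name_similarity(a, b), 2))
              for a, b in product(schema_a, schema_b)
              if name_similarity(a, b) >= 0.7]

for a, b, score in candidates:
    print(f"possible synonym: {a} ~ {b} (similarity {score})")
```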


International Conference on Deductive and Object-Oriented Databases | 1991

Non-determinism in deductive databases

Fosca Giannotti; Dino Pedreschi; Domenico Saccà; Carlo Zaniolo

This paper examines the problem of adding non-deterministic constructs to a declarative database language based on Horn Clause Logic. We revise a previously proposed approach, the choice construct introduced by Krishnamurthy and Naqvi, from the viewpoints of amenability to efficient implementation and expressive power. Thus, we define a construct called dynamic choice, which is consistent with the fixpoint-based semantics, cures the deficiencies of the former approach, and leads to efficient implementations in the framework of deductive databases. The new construct also extends the expressive power of Datalog programs considerably, as it allows expressing negation under the Closed World Assumption, as well as a class of relevant deterministic problems.
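
Operationally, a choice construct commits, during the bottom-up derivation, to at most one value of the chosen argument per binding of the others, i.e. the chosen subset must satisfy a functional dependency. The sketch below emulates a rule of the form assigned(S, A) :- eligible(S, A), choice((S), (A)) on a hypothetical student/advisor relation; the example data, the random shuffling used to stand in for non-determinism, and the helper names are assumptions for illustration, not the dynamic choice semantics as formally defined in the paper.

```python
import random

# Hypothetical EDB: eligible(Student, Advisor) pairs.
eligible = [
    ("sara", "prof_rossi"), ("sara", "prof_bianchi"),
    ("marco", "prof_rossi"),
    ("lina", "prof_verdi"), ("lina", "prof_rossi"),
]

def assign_with_choice(pairs, seed=None):
    """Derive eligible facts bottom-up, but commit to one advisor per student,
    enforcing the functional dependency Student -> Advisor on the chosen facts."""
    rng = random.Random(seed)
    assigned = {}
    for student, advisor in rng.sample(pairs, len(pairs)):   # non-deterministic order
        if student not in assigned:
            assigned[student] = advisor
    return assigned

print(assign_with_choice(eligible, seed=0))
print(assign_with_choice(eligible, seed=1))   # possibly a different, equally valid choice
```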


Data Warehousing and OLAP | 2010

Balancing accuracy and privacy of OLAP aggregations on data cubes

Alfredo Cuzzocrea; Domenico Saccà

In this paper we propose an innovative framework based on flexible sampling-based data cube compression techniques for computing privacy preserving OLAP aggregations on data cubes, while allowing approximate answers to be efficiently evaluated over such aggregations. In our proposal, this scenario is accomplished by means of the so-called accuracy/privacy contract, which determines how OLAP aggregations must be accessed while balancing the accuracy of approximate answers against the privacy of sensitive ranges of multidimensional data.
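
As a toy illustration of the sampling idea underneath, the sketch below answers a SUM aggregation from a Bernoulli sample of the cube's cells, scaled by the sampling rate: retaining fewer cells exposes less of the underlying data but makes the approximate answer coarser. The synthetic cube, the uniform sampling and the chosen rates are assumptions for illustration; the paper's accuracy/privacy contract and compression framework are not reproduced here.

```python
import random

random.seed(42)

# Hypothetical one-dimensional "data cube": one measure value per cell.
cube = [random.randint(0, 100) for _ in range(10_000)]

def approximate_sum(cells, sampling_rate):
    """Estimate SUM over the cells from a Bernoulli sample, scaled by the rate."""
    sample = [v for v in cells if random.random() < sampling_rate]
    return sum(sample) / sampling_rate if sample else 0.0

exact = sum(cube)
for rate in (0.5, 0.1, 0.01):   # smaller rate: fewer cells exposed, coarser answer
    estimate = approximate_sum(cube, rate)
    error = abs(estimate - exact) / exact
    print(f"rate={rate}: estimate={estimate:,.0f} exact={exact:,} relative error={error:.2%}")
```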

Collaboration


Dive into Domenico Saccà's collaborations.

Top Co-Authors


Carlo Zaniolo

University of California, Los Angeles


Domenico Ursino

Mediterranea University of Reggio Calabria


Elio Masciari

ICAR-CNR, National Research Council of Italy
