Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sergio Greco is active.

Publication


Featured research published by Sergio Greco.


Symposium on Principles of Database Systems | 1991

Minimum and maximum predicates in logic programming

Sumit Ganguly; Sergio Greco; Carlo Zaniolo

A novel approach is proposed for expressing and computing efficiently a large class of problems, including finding the shortest path in a graph, that were previously considered impervious to an efficient treatment in the declarative framework of logic-based languages. Our approach is based on the use of min and max predicates having a first-order semantics defined using rules with negation in their bodies. We show that when certain monotonicity conditions hold then (1) there exists a total well-founded model for these programs containing negation, (2) this model can be computed efficiently using a procedure called greedy fixpoint, and (3) the original program can be rewritten into a more efficient one by pushing min and max predicates into recursion. The greedy fixpoint evaluation of the program expressing the shortest path problem coincides with Dijkstra's algorithm.
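The abstract's claim that the greedy fixpoint evaluation of the shortest-path program coincides with Dijkstra's algorithm can be illustrated with a small sketch. This is not the paper's Datalog formulation; it is a plain Python rendering of the greedy idea (repeatedly commit the smallest tentative min-predicate value), which is exactly Dijkstra's algorithm:

```python
import heapq

def greedy_fixpoint_shortest_path(edges, source):
    """Compute least-cost paths by a greedy fixpoint: repeatedly
    commit the smallest tentative cost as final, which mirrors
    Dijkstra's algorithm."""
    # Adjacency list built from (u, v, w) edge facts.
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
    dist = {source: 0}
    frontier = [(0, source)]            # tentative min-cost atoms
    settled = set()
    while frontier:
        d, u = heapq.heappop(frontier)  # greedy step: pick the minimum
        if u in settled:
            continue
        settled.add(u)                  # its min value is now final
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(frontier, (nd, v))
    return dist
```

For example, on the edges `[("a","b",1), ("b","c",2), ("a","c",5)]` with source `"a"`, the fixpoint settles `b` at cost 1 before `c`, so `c` ends at cost 3 rather than 5.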


Pattern Recognition | 2009

A time series representation model for accurate and fast similarity detection

Francesco Gullo; Giovanni Ponti; Andrea Tagarelli; Sergio Greco

Similarity search and detection is a central problem in time series data processing and management. Most approaches to this problem have been developed around the notion of dynamic time warping, whereas several dimensionality reduction techniques have been proposed to improve the efficiency of similarity searches. Due to the continuous increasing of sources of time series data and the cruciality of real-world applications that use such data, we believe there is a challenging demand for supporting similarity detection in time series in a both accurate and fast way. Our proposal is to define a concise yet feature-rich representation of time series, on which the dynamic time warping can be applied for effective and efficient similarity detection of time series. We present the Derivative time series Segment Approximation (DSA) representation model, which originally features derivative estimation, segmentation and segment approximation to provide both high sensitivity in capturing the main trends of time series and data compression. We extensively compare DSA with state-of-the-art similarity methods and dimensionality reduction techniques in clustering and classification frameworks. Experimental evidence from effectiveness and efficiency tests on various datasets shows that DSA is well-suited to support both accurate and fast similarity detection.


IEEE Transactions on Knowledge and Data Engineering | 1992

COMPLEX: an object-oriented logic programming system

Sergio Greco; Nicola Leone; Pasquale Rullo

The design and a prototypical implementation of COMPLEX, which is a logic-based system extended with concepts from the object-oriented paradigm and is intended as a tool for the development of knowledge-based applications, are described. The system supports a logic language, called Complex-Datalog (C-Datalog), enhanced by semantic constructs to provide facility for data abstraction. Its implementation is based on a bottom-up computational model that guarantees a fully declarative style of programming. However, the user is also given the possibility of running a query using a top-down model of computation. Efficiency of execution is the result of the integration of different novel technologies for the compilation and the execution of queries.


ACM Transactions on Information Systems | 2010

Semantic clustering of XML documents

Andrea Tagarelli; Sergio Greco

Dealing with structure and content semantics underlying semistructured documents is challenging for any task of document management and knowledge discovery conceived for such data. In this work we address the novel problem of clustering semantically related XML documents according to their structure and content features. XML features are generated by enriching syntactic with semantic information based on a lexical knowledge base. The backbone of the proposed framework for the semantic clustering of XML documents is a data representation model that exploits the notion of tree tuple to identify semantically cohesive substructures in XML documents and represent them as transactional data. This framework is equipped with two clustering algorithms based on different paradigms, namely centroid-based partitional clustering and frequent-itemset-based hierarchical clustering. An extensive experimental evaluation was conducted on real data sets from various domains, showing the significance of our approach as a solution for the semantic clustering of XML documents.
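The idea of turning substructures of an XML document into transactional data can be illustrated with a small sketch. This is a hypothetical simplification of the paper's tree-tuple model: it flattens each root-to-leaf path into `(path, token)` items, without the lexical-knowledge-base enrichment the framework actually uses:

```python
import xml.etree.ElementTree as ET

def xml_transactions(xml_text):
    """Hypothetical sketch: represent an XML document as transactional
    data by emitting one (tag-path, token) item per content word.
    The paper's tree-tuple model and semantic enrichment are richer."""
    root = ET.fromstring(xml_text)
    items = []

    def walk(node, path):
        p = path + "/" + node.tag
        text = (node.text or "").strip()
        if text:
            for tok in text.lower().split():
                items.append((p, tok))
        for child in node:
            walk(child, p)

    walk(root, "")
    return items
```

A clustering algorithm (centroid-based or frequent-itemset-based, as in the paper) would then operate on these transactions rather than on raw trees.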


IEEE Transactions on Knowledge and Data Engineering | 2003

Binding propagation techniques for the optimization of bound disjunctive queries

Sergio Greco

This paper presents a technique for the optimization of bound queries on disjunctive deductive databases. The optimization is based on the rewriting of the source program into an equivalent program which can be evaluated more efficiently. The proposed optimization reduces the amount of data needed to answer the query and, consequently, 1) reduces the complexity of computing a single model and, more importantly, 2) greatly reduces the number of models to be considered. Although, in this paper, we consider the application of the magic-set method, other rewriting techniques defined for special classes of queries can also be applied. To show the relevance of our technique, we have implemented a prototype of an optimizer. Several experiments have confirmed the value of the technique.
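The effect of binding propagation can be seen in a minimal sketch. This is not the paper's magic-set rewriting itself; it only shows, on plain graph reachability, what the rewriting buys: the query's bound argument is pushed into the recursion so only facts relevant to that binding are derived, instead of materialising the whole relation:

```python
def bound_reachable(edges, start):
    """Sketch of the payoff of binding propagation: with the query
    bound to `start`, only nodes reachable from it are ever derived,
    rather than the full transitive closure over all nodes."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    reached, todo = set(), [start]
    while todo:
        u = todo.pop()
        for v in adj.get(u, ()):
            if v not in reached:
                reached.add(v)   # derive only bound-relevant facts
                todo.append(v)
    return reached
```

Note that the facts about the disconnected component (`x`, `y` below) are never touched, which in the disjunctive setting also prunes the models that must be considered.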


Pervasive Computing and Communications | 2005

A distributed system for answering range queries on sensor network data

Alfredo Cuzzocrea; Filippo Furfaro; Sergio Greco; Elio Masciari; Giuseppe M. Mazzeo; Domenico Saccà

A distributed system for approximate query answering on sensor network data is proposed, where a suitable compression technique is exploited to represent data and support query answering. Each node of the system stores either detailed or summarized sensor readings. Query answers are computed by identifying the set of nodes that contain (either compressed or not) data involved in the query, and eventually partitioning the query in a set of sub-queries to be evaluated at different nodes. Queries are partitioned according to a cost model aiming at making the evaluation efficient and guaranteeing the desired degree of accuracy of query answers.
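The partitioned evaluation described above can be sketched as follows. The node layout and the compression scheme here are assumptions for illustration: each node holds either detailed `(timestamp, value)` readings or a summary `(t_min, t_max, total)`, and summarized nodes answer a range-sum sub-query approximately under a uniformity assumption:

```python
def answer_range_sum(nodes, lo, hi):
    """Sketch of partitioned approximate answering: the range-sum
    query [lo, hi] is split into one sub-query per node; detailed
    nodes answer exactly, summarized nodes estimate their share by
    linear interpolation of the overlap (hypothetical scheme)."""
    total = 0.0
    for node in nodes:
        if node["kind"] == "detailed":
            total += sum(v for t, v in node["readings"] if lo <= t <= hi)
        else:  # summarized node: estimate assuming uniform readings
            t0, t1, s = node["t_min"], node["t_max"], node["total"]
            overlap = max(0, min(hi, t1) - max(lo, t0) + 1)
            span = t1 - t0 + 1
            total += s * overlap / span
    return total
```

A cost model, as in the paper, would decide which nodes to query in detailed form and which in compressed form to trade accuracy against evaluation cost.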


Annals of Mathematics and Artificial Intelligence | 1997

Programming with non-determinism in deductive databases

Fosca Giannotti; Sergio Greco; Domenico Saccà; Carlo Zaniolo

While non-determinism has long been established as a key concept in logic programming, its importance in the context of deductive databases was recognized only recently. This paper provides an overview of recent results on this topic with the aim of providing an introduction to the theory and practice of non-determinism in deductive databases. In particular we (i) recall the main results linking non-deterministic constructs in database languages to the theory of data complexity and the expressibility hierarchy of query languages; (ii) provide a reasoned introduction to effective programming with non-deterministic constructs; (iii) compare the usage of non-deterministic constructs in languages such as LDL++ to that of traditional logic programming languages; (iv) discuss the link between the semantics of logic programs with non-deterministic constructs and the stable-model semantics of logic programs with negation.
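The flavour of a non-deterministic construct such as LDL++'s `choice` can be conveyed with a small sketch. This is an illustrative simulation, not the language semantics: `choice((X),(Y))` keeps a subset of a relation in which X functionally determines Y, and any such subset is a legal (stable-model) outcome; here the choice is resolved deterministically by first occurrence:

```python
def choice_assign(pairs):
    """Sketch of choice((X),(Y)): from candidate (x, y) pairs, keep a
    subset in which x functionally determines y. Any maximal such
    subset is a valid non-deterministic outcome; this sketch simply
    commits the first y seen for each x."""
    chosen = {}
    for x, y in pairs:
        if x not in chosen:
            chosen[x] = y   # commit one choice per x, ignore the rest
    return chosen
```

For instance, assigning each student exactly one advisor from a candidates relation is a classic use: the program is satisfied by any total assignment, and `choice` picks one.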


IEEE Transactions on Knowledge and Data Engineering | 2009

Active Integrity Constraints for Database Consistency Maintenance

Luciano Caroprese; Sergio Greco; Ester Zumpano

This paper introduces active integrity constraints (AICs), an extension of integrity constraints for consistent database maintenance. An active integrity constraint is a special constraint whose body contains a conjunction of literals which must be false and whose head contains a disjunction of update actions representing actions (insertions and deletions of tuples) to be performed if the constraint is not satisfied (that is, its body is true). The AICs work in a domino-like manner, as the satisfaction of one AIC may trigger the violation and therefore the activation of another one. The paper also introduces founded repairs, which are minimal sets of update actions that make the database consistent, and are specified and "supported" by active integrity constraints. The paper presents: 1) a formal declarative semantics allowing the computation of founded repairs and 2) a characterization of this semantics obtained by rewriting active integrity constraints into disjunctive logic rules, so that founded repairs can be derived from the answer sets of the derived logic program. Finally, the paper studies the computational complexity of computing founded repairs.
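The domino-like behaviour of AICs can be sketched with a toy evaluator. The constraint encoding here is an assumption for illustration (a violation test plus sets of facts to add or remove), not the paper's syntax, and it does not compute minimal founded repairs, only the triggering dynamics:

```python
def repair(db, aics, max_rounds=10):
    """Toy AIC-style repair loop: each constraint is (violated, add,
    remove). Firing one constraint's update actions may violate and
    thus activate another, domino-like, until a fixpoint is reached.
    Hypothetical encoding; minimality and foundedness are not checked."""
    db = set(db)
    for _ in range(max_rounds):
        fired = False
        for violated, add, remove in aics:
            if violated(db):
                db |= set(add)      # head insertions
                db -= set(remove)   # head deletions
                fired = True
        if not fired:
            break
    return db
```

For example, a constraint "every employee must belong to some department" can be encoded as a violation test on `("emp", x)` without `("dept", x)`, whose head inserts the missing department fact.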


International XML Database Symposium | 2003

Repairs and Consistent Answers for XML Data with Functional Dependencies

Sergio Flesca; Filippo Furfaro; Sergio Greco; Ester Zumpano

In this paper we consider the problem of XML data which may be inconsistent with respect to a set of functional dependencies. We propose a technique for computing repairs (minimal sets of update operations making data consistent) and consistent answers. More specifically, our repairs are based on i) the replacing of values associated with attributes and elements, and ii) the introduction of a function stating if the node information is reliable.


International Conference on Database Theory | 1995

DATALOG Queries with Stratified Negation and Choice: from P to DP

Sergio Greco; Domenico Saccà; Carlo Zaniolo

This paper introduces a unified solution to the problem of extending stratified DATALOG to express DB-complexity classes ranging from P to DP. The solution is based on (i) stratified negation as the core of a simple, declarative semantics for negation, (ii) the use of a “choice” construct to capture non-determinism of stable models (iii) the ability to bind a query execution to the complexity class that includes the problem at hand, and (iv) a general algorithm that ensures efficient execution for the different complexity classes. We thus obtain a class of DATALOG programs that preserves computational tractability, while achieving completeness for a wide range of complexity classes.

Collaboration


Dive into Sergio Greco's collaboration.

Top Co-Authors

Carlo Zaniolo

University of California
