Luigi Pontieri
ICAR-CNR (Institute for High Performance Computing and Networking), National Research Council of Italy
Publications
Featured research published by Luigi Pontieri.
Business Process Management | 2012
Wil M. P. van der Aalst; Arya Adriansyah; Ana Karla Alves de Medeiros; Franco Arcieri; Thomas Baier; Tobias Blickle; R. P. Jagadeesh Chandra Bose; Peter van den Brand; Ronald Brandtjen; Joos C. A. M. Buijs; Andrea Burattin; Josep Carmona; Malu Castellanos; Jan Claes; Jonathan E. Cook; Nicola Costantini; Francisco Curbera; Ernesto Damiani; Massimiliano de Leoni; Pavlos Delias; Boudewijn F. van Dongen; Marlon Dumas; Schahram Dustdar; Dirk Fahland; Diogo R. Ferreira; Walid Gaaloul; Frank van Geffen; Sukriti Goel; Christian W. Günther; Antonella Guzzo
Process mining techniques are able to extract knowledge from event logs commonly available in today’s information systems. These techniques provide new means to discover, monitor, and improve processes in a variety of application domains. There are two main drivers for the growing interest in process mining. On the one hand, more and more events are being recorded, thus providing detailed information about the history of processes. On the other hand, there is a need to improve and support business processes in competitive and rapidly changing environments. This manifesto was created by the IEEE Task Force on Process Mining and aims to promote the topic of process mining. Moreover, by defining a set of guiding principles and listing important challenges, this manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users. The goal is to increase the maturity of process mining as a new tool to improve the (re)design, control, and support of operational business processes.
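To make the core idea concrete, here is a minimal sketch (not taken from the manifesto itself) of the simplest form of process discovery: extracting a directly-follows relation from an event log. The toy log and activity names are hypothetical.

```python
from collections import Counter

# A hypothetical event log: a list of traces, each a sequence of activities.
event_log = [
    ["register", "check", "approve", "pay"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "pay"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    counts = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return counts

for (a, b), n in sorted(directly_follows(event_log).items()):
    print(f"{a} -> {b}: {n}")
```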
IEEE Transactions on Knowledge and Data Engineering | 2005
Sergio Flesca; Giuseppe Manco; Elio Masciari; Luigi Pontieri; Andrea Pugliese
Because of the widespread diffusion of semistructured data in XML format, much research effort is currently devoted to supporting the storage and retrieval of large collections of such documents. XML documents can be compared with respect to their structural similarity, in order to group them into clusters so that different storage, retrieval, and processing techniques can be effectively exploited. In this scenario, an efficient and effective similarity function is the key to a successful data management process. We present an approach for detecting structural similarity between XML documents which significantly differs from standard methods based on graph-matching algorithms, and allows a significant reduction in computation costs. Our proposal roughly consists of linearizing the structure of each XML document by representing it as a numerical sequence and then comparing such sequences through the analysis of their frequencies. First, some basic strategies for encoding a document are proposed, each focusing on different structural facets. Then, the theory of the discrete Fourier transform is exploited to effectively and efficiently compare the encoded documents (i.e., signals) in the frequency domain. Experimental results reveal the effectiveness of the approach, also in comparison with standard methods.
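The pipeline below is a rough, hypothetical rendering of the idea: map each document to a numeric sequence via a simplified structural encoding, then compare the magnitude spectra of the sequences' Fourier transforms. The encoding used here (a tag code scaled by nesting depth) is an illustrative stand-in for the paper's more refined strategies.

```python
import numpy as np
import xml.etree.ElementTree as ET

def encode(xml_text, tag_codes):
    """Linearize an XML document into a numeric sequence (simplified encoding)."""
    seq = []
    def visit(node, depth):
        # Assign each tag a code on first sight; weight it by nesting depth.
        seq.append(tag_codes.setdefault(node.tag, len(tag_codes) + 1) * depth)
        for child in node:
            visit(child, depth + 1)
    visit(ET.fromstring(xml_text), 1)
    return np.array(seq, dtype=float)

def dft_distance(s1, s2):
    """Compare two encoded documents in the frequency domain."""
    n = max(len(s1), len(s2))
    # Zero-pad to a common length; magnitude spectra are insensitive to
    # shifts of repeated substructures along the sequence.
    m1 = np.abs(np.fft.fft(s1, n))
    m2 = np.abs(np.fft.fft(s2, n))
    return np.linalg.norm(m1 - m2)

codes = {}
a = encode("<order><item/><item/><payment/></order>", codes)
b = encode("<order><item/><payment/></order>", codes)
print(dft_distance(a, b))
```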
OTM Confederated International Conferences "On the Move to Meaningful Internet Systems" | 2012
Francesco Folino; Massimo Guarascio; Luigi Pontieri
Discovering predictive models for run-time support is an emerging topic in process mining research, which can effectively help optimize business process enactments. However, making accurate estimates is not easy, especially when considering fine-grained performance measures (e.g., processing times) on a complex and flexible business process, where performance patterns change over time depending on both case properties and context factors (e.g., seasonality, workload). We address this situation with an ad-hoc predictive clustering approach, where different context-related execution scenarios are discovered and modeled accurately via distinct state-aware performance predictors. The result is a readable predictive model that can make performance forecasts for any new running process case, using the predictor of the cluster the case is estimated to belong to. The approach was implemented in a system prototype and validated in a real-life context. Test results confirmed the scalability of the approach and its efficacy in predicting processing times and associated SLA violations.
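As a rough illustration of the predictive-clustering idea, the sketch below partitions cases by context features and fits a separate predictor per cluster. The features, data, and plain linear regressors are assumptions made for illustration; the paper's actual predictors are state-aware models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_ctx = rng.normal(size=(200, 2))           # hypothetical context features,
                                            # e.g. workload, seasonality index
y_time = np.where(X_ctx[:, 0] > 0,          # two made-up performance regimes
                  5 + 2 * X_ctx[:, 1],
                  20 - 3 * X_ctx[:, 1]) + rng.normal(scale=0.5, size=200)

# Discover execution scenarios, then train one predictor per cluster.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_ctx)
models = {c: LinearRegression().fit(X_ctx[clusters.labels_ == c],
                                    y_time[clusters.labels_ == c])
          for c in range(2)}

def predict(case_features):
    """Forecast processing time with the predictor of the case's cluster."""
    x = case_features.reshape(1, -1)
    return models[clusters.predict(x)[0]].predict(x)[0]

print(predict(np.array([1.0, 0.5])))
```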
Business Process Management | 2005
Gianluigi Greco; Antonella Guzzo; Luigi Pontieri
Process mining techniques have been receiving great attention in the literature for their ability to automatically support process (re)design. The output of these techniques is a concrete workflow schema that models all the possible execution scenarios registered in the logs, and that can be profitably used to support further enactments. In this paper, we approach process mining from a slightly different perspective. We propose an approach that combines novel discovery strategies with abstraction methods, with the aim of producing hierarchical views of the process that satisfactorily capture its behavior at different levels of detail. At the highest level of detail, the mined model can support the design of concrete workflows; at lower levels of detail, the views can be used in advanced business process platforms to support monitoring and analysis. Our approach consists of several algorithms, which have been integrated into a system architecture that is also described in the paper.
Pacific-Asia Conference on Knowledge Discovery and Data Mining | 2004
Gianluigi Greco; Antonella Guzzo; Luigi Pontieri; Domenico Saccà
We propose a general framework for the process mining problem which goes beyond the assumption of workflow schemas with local constraints only, making it applicable to more expressive specification languages, independently of the particular syntax adopted. In fact, we provide an effective technique for process mining based on the rather unexplored concept of clustering workflow executions, in which clusters of executions sharing the same structure and the same unexpected behavior (w.r.t. the local properties) are seen as witnesses of the existence of global constraints.
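The toy sketch below illustrates the clustering intuition: traces encoded as vectors of directly-follows pairs fall into clusters sharing the same structure, and the separation between clusters hints at global constraints (e.g., two activities never occurring together) that local edge constraints alone cannot express. The encoding is an assumption for illustration, not the paper's exact one.

```python
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical traces: two structural variants of the same process.
traces = [["a", "b", "d"], ["a", "b", "d"], ["a", "c", "d"], ["a", "c", "d"]]

# Represent each trace by the counts of its directly-follows pairs.
feats = [Counter(zip(t, t[1:])) for t in traces]
X = DictVectorizer(sparse=False).fit_transform(
    [{f"{a}>{b}": n for (a, b), n in f.items()} for f in feats])

labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: one cluster per execution structure
```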
Data & Knowledge Engineering | 2008
Gianluigi Greco; Antonella Guzzo; Luigi Pontieri
Process mining techniques have been receiving great attention in the literature for their ability to automatically support process (re)design. Typically, these techniques discover a concrete workflow schema modelling all possible execution patterns registered in a given log, which can subsequently be exploited to support further enactments. In this paper, an approach to process mining is introduced that extends classical discovery mechanisms by means of an abstraction method aimed at producing a taxonomy of workflow models. The taxonomy is built to capture the process behavior at different levels of detail. Indeed, the most-detailed mined models, i.e., the leaves of the taxonomy, are meant to support the design of concrete workflows, as with existing techniques in the literature. The other models, i.e., non-leaf nodes of the taxonomy, instead represent abstract views over the process behavior that can be used to support advanced monitoring and analysis tasks. All the techniques discussed in the paper have been implemented, tested, and made available as a plugin for a popular process mining framework (ProM). A series of tests, performed on different synthetic and real datasets, evidenced the capability of the approach to characterize the behavior encoded in input logs in a precise and complete way, achieving compelling conformance results even in the presence of complex behavior and noisy data. Moreover, encouraging results have been obtained in a real-life application scenario, where it is shown how the taxonomical view of the process can effectively support an explorative ex-post analysis, based on the different kinds of process executions discovered from the logs.
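As a loose analogy for the taxonomy construction, the sketch below hierarchically clusters hypothetical activity-frequency profiles of execution variants: leaves play the role of detailed models, internal merge nodes that of progressively more abstract views. The profiles are made up and the linkage method is an assumption, not the paper's algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Hypothetical activity-frequency profiles of four execution variants.
variants = np.array([
    [3, 1, 0, 2],   # variant A
    [3, 1, 0, 1],   # variant A'
    [0, 2, 4, 1],   # variant B
    [0, 2, 5, 1],   # variant B'
], dtype=float)

# Each row of Z merges two nodes: the taxonomy's internal (abstract) nodes.
Z = linkage(variants, method="average")
print(Z)
```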
Database and Expert Systems Applications | 2004
Gianluigi Greco; Antonella Guzzo; Luigi Pontieri; Domenico Saccà
Designing, analyzing and managing complex processes have recently become crucial issues in most application contexts, such as e-commerce, business process (re-)engineering, and Web/grid computing. In this paper, we propose a framework that supports the designer in the definition and analysis of complex processes by means of several facilities for reusing, customizing and generalizing existing process components. To this aim, we tightly integrate process models with a domain ontology and an activity ontology, thus providing a semantic view of the application context and of the processes themselves. Moreover, the framework is equipped with a set of techniques providing advanced functionalities that can be very useful when building and analyzing process models, such as consistency checking, interactive ontology navigation, and automatic (re)discovery of process models. A software architecture fully supporting our framework is also presented and discussed.
Data & Knowledge Engineering | 2000
Luigi Palopoli; Luigi Pontieri; Giorgio Terracina; Domenico Ursino
This paper presents two techniques to integrate and abstract database schemes. The techniques assume the existence of a collection of interscheme properties describing semantic relationships holding among input database scheme objects. The former technique uses interscheme properties to produce an integrated scheme encoding a global, unified view of the whole semantics represented within the input schemes. The latter takes an (integrated) scheme as input and yields an abstracted scheme encoding the same semantics as the input scheme, but represented at a higher, application-dependent abstraction level. In addition, the paper illustrates a possible application of these algorithms to the construction of a data repository. Finally, the paper presents the application of the proposed techniques to some database schemes of Italian Central Governmental Offices.
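A minimal, hypothetical illustration of property-driven integration: two toy schemas (entity to attributes) plus a synonym property stating that two entities denote the same concept. The merge below simply unifies synonymous entities and unions their attributes; the paper's technique is considerably richer and also covers abstraction.

```python
# Two hypothetical input schemas.
schema1 = {"Customer": {"id", "name"}, "Order": {"id", "date"}}
schema2 = {"Client": {"code", "name", "address"}}

# Interscheme property: "Client" and "Customer" denote the same concept.
synonyms = {"Client": "Customer"}

def integrate(s1, s2, syn):
    """Merge two schemas, unifying entities related by a synonym property."""
    merged = {k: set(v) for k, v in s1.items()}
    for entity, attrs in s2.items():
        target = syn.get(entity, entity)
        merged.setdefault(target, set()).update(attrs)
    return merged

print(integrate(schema1, schema2, synonyms))
```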
International Conference on Data Engineering | 2002
Francesco Buccafurri; Domenico Rosaci; Luigi Pontieri; Domenico Saccà
Histograms are used to summarize the contents of relations into a number of buckets for the estimation of query result sizes. Several techniques (e.g., MaxDiff and V-Optimal) have been proposed in the past for determining bucket boundaries that provide better estimations. This paper proposes to use 32 bits of information (a 4-level tree index) per bucket for storing approximate cumulative frequencies at 7 internal intervals of the bucket. Both theoretical analysis and experimental results show that the 4-level tree index provides the best frequency estimation inside a bucket. The index is then added to two well-known techniques for constructing histograms, MaxDiff and V-Optimal, obtaining significant improvements in the frequency estimation over inter-bucket ranges w.r.t. the original methods.
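The sketch below is a simplified reading of the 4-level tree idea: a bucket's frequency array is recursively halved, and at each of the 7 internal split points the fraction of the parent's total falling in the left half is stored with a few bits. The bit width and layout here are illustrative assumptions, not the paper's exact 32-bit encoding.

```python
def quantize(frac, bits):
    """Round a fraction in [0, 1] to the value actually storable in `bits` bits."""
    levels = (1 << bits) - 1
    return round(frac * levels) / levels

def four_level_tree(freqs, bits=4):
    """Approximate sums at the 7 internal split points of one bucket."""
    total = sum(freqs)
    approx = {}
    def split(lo, hi, amount, level):
        if level == 0 or hi - lo < 2:
            return
        mid = (lo + hi) // 2
        left = sum(freqs[lo:mid])
        stored = quantize(left / amount if amount else 0.0, bits)
        approx[mid] = stored * amount      # approximate sum of the left half
        split(lo, mid, stored * amount, level - 1)
        split(mid, hi, amount - stored * amount, level - 1)
    split(0, len(freqs), total, 3)         # 3 split levels -> 7 boundaries
    return total, approx

freqs = [5, 1, 0, 7, 2, 2, 9, 4]  # toy frequencies inside one bucket
print(four_level_tree(freqs))
```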
International Symposium on Methodologies for Intelligent Systems | 2008
Lucantonio Ghionna; Gianluigi Greco; Antonella Guzzo; Luigi Pontieri
Classical outlier detection approaches may hardly fit process mining applications, since in these settings anomalies emerge not only as deviations from the sequence of events most often registered in the log, but also as deviations from the behavior prescribed by some (possibly unknown) process model. These issues are addressed in the paper via an approach for singling out anomalous evolutions within a set of process traces, which takes into account both statistical properties of the log and the constraints associated with the process model. The approach combines the discovery of frequent execution patterns with a cluster-based anomaly detection procedure; notably, this procedure is suited to dealing with categorical data and is hence interesting in its own right, given that outlier detection has mainly been studied on numerical domains in the literature. All the algorithms presented in the paper have been implemented and integrated into a system prototype that has been thoroughly tested to assess its scalability and effectiveness.
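A toy rendering of the two-stage idea: first mine frequent execution patterns (here, just directly-follows pairs above a support threshold), then describe each trace by the frequent patterns it contains and apply density-based clustering, treating unclustered traces as candidate outliers. DBSCAN is a stand-in here; the paper's procedure for categorical data is its own contribution.

```python
from collections import Counter
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical log: two common variants plus one anomalous trace.
traces = [["a", "b", "c"]] * 5 + [["a", "c", "b"]] * 5 + [["c", "a", "x"]]

# Stage 1: mine frequent directly-follows patterns (support >= 3).
pairs = Counter(p for t in traces for p in zip(t, t[1:]))
frequent = [p for p, n in pairs.items() if n >= 3]

# Stage 2: encode traces by pattern occurrence, cluster, flag the unclustered.
X = np.array([[1 if p in set(zip(t, t[1:])) else 0 for p in frequent]
              for t in traces])
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)
print([i for i, l in enumerate(labels) if l == -1])  # indices of outliers
```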