Publication


Featured research published by María Teresa Gómez-López.


Information Systems | 2015

Compliance validation and diagnosis of business data constraints in business processes at runtime

María Teresa Gómez-López; Rafael M. Gasca; José Miguel Pérez-Álvarez

Business processes involve data that can be modified and updated by various activities at any time. The data involved in a business process can be associated with flow elements or with stored data. These data must satisfy the business compliance rules associated with the process, where business compliance rules are policies or statements that govern the behaviour of a company. To improve and automate the validation and diagnosis of compliance rules based on the description of data semantics (called Business Data Constraints), we propose a framework in which dataflow variables and stored data are analyzed. The validation and diagnosis process is automated using Constraint Programming, permitting the detection and identification of possibly unsatisfiable Business Data Constraints even if the data involved in these constraints are not all instantiated. This implies that potential errors can be determined in advance. Furthermore, a language to describe Business Data Constraints is proposed, improving the user-oriented aspects of the business process description. This language allows a business expert to write Business Data Constraints that are automatically validated at runtime, without the support of an information technology expert.

Highlights:
- This paper proposes an enlargement of the business process model for data values.
- A language is defined for the Business Data Constraints associated with each activity.
- The validation and diagnosis are performed at runtime according to the data values.
- It permits an early identification of non-compliance using Constraint Programming.
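The early-detection idea described above can be illustrated with a small, self-contained sketch. This is not the authors' framework: the constraint, the variable names and the finite domains below are hypothetical, and satisfiability is checked by brute-force enumeration rather than by a Constraint Programming solver.

```python
# Minimal sketch: checking a Business Data Constraint over partially
# instantiated dataflow variables by searching for a satisfying completion
# over small, purely illustrative finite domains.
from itertools import product

def satisfiable(constraint, domains, partial):
    """Return True if some completion of `partial` satisfies `constraint`."""
    free = [v for v in domains if v not in partial]
    for combo in product(*(domains[v] for v in free)):
        assignment = dict(partial, **dict(zip(free, combo)))
        if constraint(assignment):
            return True
    return False

# Hypothetical BDC: the approved credit must cover the requested amount minus
# the down payment (all variable names and domains are invented).
bdc = lambda a: a["credit"] >= a["amount"] - a["down_payment"]
domains = {"amount": range(0, 101, 10),
           "down_payment": range(0, 51, 10),
           "credit": range(0, 51, 10)}

# Even with only part of the dataflow instantiated, the check can already
# tell whether the constraint is still satisfiable.
print(satisfiable(bdc, domains, {"amount": 40}))                      # True
print(satisfiable(bdc, domains, {"amount": 100, "down_payment": 0}))  # False -> early warning
```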


Data & Knowledge Engineering | 2013

Diagnosing correctness of semantic workflow models

Diana Borrego; Rik Eshuis; María Teresa Gómez-López; Rafael M. Gasca

To model operational business processes in an accurate way, workflow models need to reference both the control flow and dataflow perspectives. Checking the correctness of such workflow models and giving precise feedback in case of errors is challenging due to the interplay between these different perspectives. In this paper, we propose a fully automated approach for diagnosing correctness of semantic workflow models in which the semantics of activities are specified with pre and postconditions. The control flow and dataflow perspectives of a semantic workflow are modeled in an integrated way using Artificial Intelligence techniques (Integer Programming and Constraint Programming). The approach has been implemented in the DiagFlow tool, which reads and diagnoses annotated XPDL models, using a state-of-the-art constraint solver as back end. Using this novel approach, complex semantic workflow models can be verified and diagnosed in an efficient way.
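As a rough illustration of diagnosing a workflow whose activities carry pre- and postconditions, here is a minimal sketch. It is not the DiagFlow tool: the workflow is a simple sequence, the activity names and conditions are invented, and no Integer or Constraint Programming solver is involved.

```python
# Minimal sketch: checking a sequential semantic workflow whose activities
# carry pre- and postconditions over the case data, and reporting the first
# activity whose precondition is violated.

def diagnose(activities, state):
    """activities: list of (name, precondition, effect); returns offending name or None."""
    for name, pre, effect in activities:
        if not pre(state):
            return name          # precise feedback: this activity cannot execute
        state = effect(state)    # apply the postcondition as a state update
    return None

workflow = [
    ("register", lambda s: True,               lambda s: {**s, "amount": 120}),
    ("discount", lambda s: s["amount"] > 100,  lambda s: {**s, "amount": s["amount"] * 0.9}),
    ("approve",  lambda s: s["amount"] <= 100, lambda s: {**s, "approved": True}),
]

print(diagnose(workflow, {"amount": 0}))  # -> 'approve': its precondition fails after the discount
```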


Data & Knowledge Engineering | 2009

Developing a labelled object-relational constraint database architecture for the projection operator

María Teresa Gómez-López; R. Ceballos; Rafael M. Gasca; Carmelo Del Valle

Current relational databases have been developed to improve the handling of stored data; however, there are some types of information that have to be analysed for which no suitable tools are available. These new types of data can be represented and treated as constraints, allowing a set of data to be represented through equations, inequations and Boolean combinations of both. To this end, constraint databases were defined and some prototypes were developed. Since there are aspects that can be improved, we propose a new architecture called labelled object-relational constraint database (LORCDB). This provides more expressiveness, since the database is adapted to support more types of data, instead of the data having to be adapted to the database. In this paper, the projection operator of SQL is extended so that it works with linear and polynomial constraints and with the variables of those constraints. In order to optimize query evaluation efficiency, some strategies and algorithms have been used to obtain an efficient query plan. Most work on constraint databases uses spatiotemporal data as case studies; this paper, however, adopts model-based diagnosis, since it is a promising research area and permits more complicated queries than spatiotemporal examples. Our architecture permits queries over constraints to be defined over different sets of variables by using symbolic substitution and elimination of variables.
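As a hedged illustration of projecting linear constraints onto a subset of variables, the sketch below uses textbook Fourier-Motzkin elimination on a toy example. It is not the LORCDB implementation: it handles only linear inequalities and an invented two-variable case.

```python
# Minimal sketch: projecting a set of linear constraints  sum(coef*var) <= rhs
# onto fewer variables by eliminating one variable with Fourier-Motzkin
# elimination, as a "projection" over constraint data.

def eliminate(constraints, z):
    """constraints: list of (coefs: dict, rhs). Returns constraints without variable z."""
    keep, upper, lower = [], [], []
    for coefs, rhs in constraints:
        c = coefs.get(z, 0)
        (keep if c == 0 else upper if c > 0 else lower).append((coefs, rhs))
    for (p, pr) in upper:
        for (n, nr) in lower:
            scale_p, scale_n = -n[z], p[z]          # both positive, so z cancels
            vars_ = (set(p) | set(n)) - {z}
            coefs = {v: scale_p * p.get(v, 0) + scale_n * n.get(v, 0) for v in vars_}
            keep.append((coefs, scale_p * pr + scale_n * nr))
    return keep

# x + y <= 10  and  -x <= -2 (i.e. x >= 2): projecting out x gives y <= 8.
print(eliminate([({"x": 1, "y": 1}, 10), ({"x": -1}, -2)], "x"))
# -> [({'y': 1}, 8)]
```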


Information & Software Technology | 2015

Automating correctness verification of artifact-centric business process models

Diana Borrego; Rafael M. Gasca; María Teresa Gómez-López

Highlights:
- Artifact-centric business process models are verified fully automatically.
- Two correctness notions are verified: reachability and weak termination.
- The models integrate pre- and postconditions defining the behavior of the services.
- Numerical data are verified, even for models formed by several artifacts.
- Novel verification algorithms check correctness and offer a precise diagnosis.

Context: The artifact-centric methodology has emerged as a new paradigm to support business process management over the last few years. In this approach, business processes are described from the point of view of the artifacts that are manipulated during the process.

Objective: One of the research challenges in this area is the verification of the correctness of this kind of business process model, where the model is formed of various artifacts that interact with each other.

Method: In this paper, we propose a fully automated approach for verifying the correctness of artifact-centric business process models, taking into account that the state (lifecycle) and the values of each artifact (numerical data described by pre- and postconditions) influence the values and the state of the others. The lifecycles of the artifacts and the numerical data they manage are modeled using the Constraint Programming paradigm, an Artificial Intelligence technique.

Results: Two correctness notions for artifact-centric business process models are distinguished (reachability and weak termination), and novel verification algorithms are developed to check them. The algorithms are complete: neither false positives nor false negatives are generated. Moreover, the algorithms offer a precise diagnosis of the detected errors, indicating the execution causing the error and where the lifecycle gets stuck.

Conclusion: To the best of our knowledge, this paper presents the first verification approach for artifact-centric business process models that integrates pre- and postconditions, which define the behavior of the services, with numerical data verification when the model is formed of more than one artifact. The approach can detect errors not detectable with other approaches.
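To make the correctness notions concrete, the following sketch explores one hypothetical artifact lifecycle with a numeric guard by brute force. It is only an illustration under assumed names (states, guards, domain), not the paper's algorithms, which handle several interacting artifacts via Constraint Programming.

```python
# Minimal sketch: exploring a single artifact lifecycle whose transitions have
# numeric guards, to check reachability of a final state over a small domain.

def explore(transitions, start, domain):
    """Return the set of reachable (state, value) configurations."""
    seen, frontier = {start}, [start]
    while frontier:
        state, value = frontier.pop()
        for (src, guard, dst, update) in transitions:
            if src == state and guard(value):
                nxt = (dst, update(value))
                if nxt[1] in domain and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Hypothetical order artifact: created -> paid -> shipped, guarded by `amount`.
transitions = [
    ("created", lambda a: a > 0,    "paid",    lambda a: a),
    ("paid",    lambda a: a <= 100, "shipped", lambda a: a),
]
reachable = explore(transitions, ("created", 150), range(0, 201))
print(any(s == "shipped" for s, _ in reachable))  # False: an instance with amount 150 gets stuck in "paid"
# Weak termination would additionally require that *every* reachable
# configuration can still reach a final state.
```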


Current Topics in Artificial Intelligence | 2007

NMUS: Structural Analysis for Improving the Derivation of All MUSes in Overconstrained Numeric CSPs

Rafael M. Gasca; Carmelo Del Valle; María Teresa Gómez-López; R. Ceballos

Models are used in science and engineering for experimentation, analysis, model-based diagnosis, design and planning/scheduling applications. Many of these models are overconstrained Numeric Constraint Satisfaction Problems (NCSPs), where the numeric constraints may involve linear or polynomial relations. In practical scenarios, it is very useful to know which parts of an overconstrained NCSP instance cause the unsolvability. Although there are algorithms to find all optimal solutions for this problem, they are computationally expensive and hence may not be applicable to large, real-world problems. Our objective is to improve the performance of these algorithms for numeric domains using structural analysis. We provide experimental results showing that the use of the different strategies proposed leads to substantially improved performance and facilitates solving larger and more realistic problems.
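A naive baseline for deriving all MUSes helps to see why structural improvements matter: the sketch below enumerates subsets of an invented numeric CSP and checks satisfiability by brute force over tiny domains. The NMUS strategies of the paper aim precisely at avoiding this kind of exhaustive search.

```python
# Minimal sketch (naive baseline): finding all Minimal Unsatisfiable Subsets
# (MUSes) of a small overconstrained numeric CSP by brute-force satisfiability
# checks over finite illustrative domains.
from itertools import combinations, product

def satisfiable(constraints, domains):
    names = list(domains)
    return any(all(c(dict(zip(names, vals))) for c in constraints)
               for vals in product(*domains.values()))

def all_muses(constraints, domains):
    muses = []
    for size in range(1, len(constraints) + 1):
        for subset in combinations(constraints, size):
            if not satisfiable(list(subset), domains) and \
               not any(set(m) <= set(subset) for m in muses):
                muses.append(subset)
    return muses

domains = {"x": range(0, 11), "y": range(0, 11)}
constraints = [lambda a: a["x"] + a["y"] <= 3,    # c1
               lambda a: a["x"] >= 5,             # c2  (clashes with c1)
               lambda a: a["y"] >= 4,             # c3  (clashes with c1)
               lambda a: a["x"] <= a["y"]]        # c4
print(len(all_muses(constraints, domains)))       # -> 2 MUSes: {c1,c2} and {c1,c3}
```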


International Journal of Cooperative Information Systems | 2014

Decision-Making Support for the Correctness of Input Data at Runtime in Business Processes

María Teresa Gómez-López; Rafael M. Gasca; José Miguel Pérez-Álvarez

In a business process, the information that flows between the activities can be introduced by the users who interact with the process. This introduced information could be incorrect due to a lack of knowledge or a mistake. For this reason, and to keep the business process execution consistent, we propose a Decision Support System (DSS) that informs the user about the possible and correct values that the input data can take. The DSS takes into account the business process model and the policy of the company. The policy concerning the input data and the dataflow that the company manages can be represented by constraints (called Business Data Constraints (BDCs)). In order to ascertain all the possible values of the input data that permit the execution of the process in accordance with the defined goals, the DSS analyzes the business process model and the BDCs using the constraint programming paradigm.
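The kind of suggestion the DSS produces can be sketched as follows: given the data already known in an instance and a set of BDCs, enumerate which values of the next user input keep the constraints satisfiable. The process, variable names and domains are hypothetical, and enumeration stands in for a constraint solver.

```python
# Minimal sketch: suggesting which values of a user input keep the Business
# Data Constraints satisfiable, given the data already known in the instance
# and small illustrative finite domains.
from itertools import product

def admissible_values(bdcs, domains, known, input_var):
    free = [v for v in domains if v != input_var and v not in known]
    suggestions = []
    for candidate in domains[input_var]:
        base = dict(known, **{input_var: candidate})
        for combo in product(*(domains[v] for v in free)):
            if all(c(dict(base, **dict(zip(free, combo)))) for c in bdcs):
                suggestions.append(candidate)
                break
    return suggestions

# Hypothetical travel-request process: the user is about to type `budget`.
bdcs = [lambda d: d["flight"] + d["hotel"] <= d["budget"],
        lambda d: d["budget"] <= 1000]
domains = {"budget": range(0, 1501, 100), "hotel": range(200, 401, 100),
           "flight": range(100, 501, 100)}
print(admissible_values(bdcs, domains, {"flight": 400}, "budget"))
# -> [600, 700, 800, 900, 1000]
```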


Expert Systems with Applications | 2014

Using Constraint Programming in Selection Operators for Constraint Databases

María Teresa Gómez-López; Rafael M. Gasca

Constraint Databases represent complex data by means of formulas described by constraints (equations, inequations or Boolean combinations of both). Commercial database management systems allow the storage and efficient retrieval of classic data, but for complex data a made-to-measure solution combined with expert systems for each type of problem is necessary. Therefore, in the same way as commercial relational databases permit storing and querying classic data, we propose an extension of the Selection Operator for stored complex data, and an extension of the SQL language for the case where both classic and constraint data need to be managed. This extension shields the user from unnecessary details of how the information is stored and how the queries are evaluated, thereby enlarging the expressive capacity of any commercial database management system. In order to minimize selection time, a set of strategies has been proposed that combines the advantages of relational algebra and constraint data representation.
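As a hedged sketch of a Selection Operator over constraint data, the snippet below keeps the tuples whose stored constraint is consistent with the query constraint, alongside an ordinary column filter. The table contents, the finite evaluation domain and the query are invented; the actual proposal extends SQL itself and uses dedicated evaluation strategies.

```python
# Minimal sketch: a selection over a "constraint relation" where each tuple
# stores a classic value plus a constraint, keeping tuples whose constraint is
# consistent with the query constraint.

def select(rows, classic_pred, query_constraint, domain):
    """Keep rows matching the classic predicate whose stored constraint
    can be satisfied together with the query constraint."""
    return [r for r in rows
            if classic_pred(r)
            and any(r["constraint"](x) and query_constraint(x) for x in domain)]

# Hypothetical table of sensors; `constraint` describes the feasible range of
# a measured variable x rather than a single stored value.
rows = [
    {"id": 1, "zone": "A", "constraint": lambda x: 0 <= x <= 5},
    {"id": 2, "zone": "A", "constraint": lambda x: 10 <= x <= 20},
    {"id": 3, "zone": "B", "constraint": lambda x: 3 <= x <= 8},
]
# Roughly: SELECT * FROM sensors WHERE zone = 'A' AND x > 7  (x evaluated over 0..30)
print([r["id"] for r in select(rows, lambda r: r["zone"] == "A",
                               lambda x: x > 7, range(0, 31))])   # -> [2]
```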


Enterprise Distributed Object Computing | 2016

Process Instance Query Language to Include Process Performance Indicators in DMN

José Miguel Pérez-Álvarez; María Teresa Gómez-López; Luisa Parody; Rafael M. Gasca

Companies are increasingly incorporating commercial Business Process Management Systems (BPMSs) as mechanisms to automate their daily procedures. These BPMSs manage the information related to the instances that flow through the model (business data) and retrieve the information concerning process performance (Process Performance Indicators). Process Performance Indicators (PPIs) tend to be used to detect possible deviations from the expected behaviour, and they help in post-mortem analysis and redesign aimed at improving the goals of the processes. However, PPIs are not only important for their ability to measure and detect a deviation; they should also be included at decision points to make business processes more adaptable to the process reality at runtime. In this paper, we propose a complete solution that allows the incorporation of PPIs into decision tasks, following the Decision Model and Notation (DMN) standard, with the aim of enriching the decisions that can be taken during process execution. Our proposal firstly includes an extension of the decision rule grammar of the DMN standard, incorporating the definition and use of a Process Instance Query Language (PIQL) that offers information about the instances related to the PPIs involved. To achieve this objective, a framework has also been developed to support the enrichment of process instance query expressions (PIQEs). This framework combines a set of mature technologies to evaluate decisions about PPIs at runtime. As an illustration, a real example is used whose decisions are improved thanks to the incorporation of PPIs at runtime.
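The role of a PPI inside a decision task can be sketched as follows: a decision rule reads both case data and a PPI that is computed by querying the running instances. The instance store, the PPI and the rules are hypothetical and do not follow the PIQL grammar defined in the paper.

```python
# Minimal sketch: a DMN-style decision rule whose input includes a Process
# Performance Indicator computed by querying the running process instances,
# evaluated at a decision point at runtime.
from statistics import mean

# Hypothetical instance store a BPMS might expose.
instances = [
    {"id": "i1", "state": "running", "duration_h": 30},
    {"id": "i2", "state": "running", "duration_h": 52},
    {"id": "i3", "state": "closed",  "duration_h": 12},
]

def ppi_avg_running_duration():
    """PPI: average duration (hours) of the currently running instances."""
    return mean(i["duration_h"] for i in instances if i["state"] == "running")

# DMN-like rules: (condition over case data and the PPI) -> decision outcome.
rules = [
    (lambda amount, ppi: amount > 5000 or ppi > 40, "manual_review"),
    (lambda amount, ppi: True,                      "auto_approve"),
]

def decide(amount):
    ppi = ppi_avg_running_duration()
    return next(outcome for cond, outcome in rules if cond(amount, ppi))

print(decide(1200))   # -> 'manual_review', because the PPI (41.0) exceeds 40
```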


Enterprise Distributed Object Computing | 2013

Explaining the Incorrect Temporal Events during Business Process Monitoring by Means of Compliance Rules and Model-Based Diagnosis

María Teresa Gómez-López; Rafael M. Gasca; Stefanie Rinderle-Ma

Sometimes the business process model is not completely known, but a set of compliance rules can be used to describe the ordering and temporal relations between activities, incompatibilities, and existence dependencies in the process. The analysis of these compliance rules and of the temporal events raised during the execution of an instance can be used to detect and diagnose process behaviour that does not match the expected behaviour. We propose to combine model-based diagnosis and constraint programming for compliance violation analysis. This combination facilitates the diagnosis of discrepancies between the compliance rules and the events that the process generates, and also enables us to propose correct event time intervals that satisfy the compliance rules.
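A minimal sketch of checking one temporal compliance rule against observed events, and of proposing a corrected time interval when the rule is violated, is shown below. The rule, event names and timestamps are invented, and the sketch does not perform the model-based diagnosis or constraint reasoning of the proposed approach.

```python
# Minimal sketch: checking a temporal compliance rule "B must occur between
# min_gap and max_gap time units after A" against observed events, and
# proposing the correct interval when the rule is violated.

def check_rule(events, a, b, min_gap, max_gap):
    """events: dict activity -> timestamp. Returns (ok, expected_interval)."""
    if a not in events or b not in events:
        return False, None
    lo, hi = events[a] + min_gap, events[a] + max_gap
    ok = lo <= events[b] <= hi
    return ok, (lo, hi)

observed = {"submit_claim": 0, "assess_claim": 9}           # hypothetical trace
ok, interval = check_rule(observed, "submit_claim", "assess_claim",
                          min_gap=1, max_gap=5)
print(ok, interval)   # -> False (1, 5): assess_claim at t=9 is outside [1, 5]
```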


Information & Software Technology | 2016

Hybrid business process modeling for the optimization of outcome data

Luisa Parody; María Teresa Gómez-López; Rafael M. Gasca

Highlights:
- We propose the formalization of a hybrid model oriented towards the optimization of outcome data, combining a data-oriented declarative specification with a control-flow-oriented imperative specification.
- We propose the automatic creation, from this hybrid model, of an imperative model that is executable in a standard Business Process Management System.
- We define an approach, based on the definition of a hybrid business process, that uses the constraint programming paradigm. To that end, we define: the hybrid business process for the specification of the relationships between the declarative data components and the imperative control-flow components of a business process; and the automatic creation of an entirely imperative model at design time thanks to the Constraint Programming paradigm. The resulting imperative model is executable in any commercial Business Process Management System and obtains, at execution time, the optimized outcome data of the process.

Context: Declarative business processes are commonly used to describe permitted and prohibited actions in a business process. However, most current proposals of declarative languages fail in three aspects: (1) they tend to be oriented only towards the execution order of the activities; (2) optimization is oriented only towards minimizing the execution time or the resources used in the business process; and (3) declarative models cannot be executed in commercial Business Process Management Systems.

Objective: This contribution aims to address these three aspects by means of: (1) the formalization of a hybrid model oriented towards the optimization of outcome data, combining a data-oriented declarative specification with a control-flow-oriented imperative specification; and (2) the automatic creation, from this hybrid model, of an imperative model that is executable in a standard Business Process Management System.

Method: An approach is presented, based on the definition of a hybrid business process, which uses the constraint programming paradigm. This approach enables the optimized outcome data to be obtained at runtime for the various instances.

Results: A language capable of defining a hybrid model is provided and applied to a case study. Likewise, the automatic creation of an executable constraint satisfaction problem is addressed, whose resolution allows us to attain the optimized outcome data. A brief computational study is also shown.

Conclusion: A hybrid business process is defined for the specification of the relationships between the declarative data components and the imperative control-flow components of a business process. In addition, the way in which this hybrid model automatically creates an entirely imperative model at design time is also defined. The resulting imperative model, executable in any commercial Business Process Management System, can obtain, at execution time, the optimized outcome data of the process.
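The declarative, data-oriented part of such a hybrid model can be pictured as a small constraint optimization over the outcome data, whose solution is then handed to the imperative model. The sketch below uses invented domains, constraints and an objective, and exhaustive enumeration instead of the Constraint Programming machinery described in the paper.

```python
# Minimal sketch: solving the declarative data part of a hybrid process as a
# small constraint optimization, so the outcome data fed to the imperative
# model is optimal.
from itertools import product

def optimize_outcome(domains, constraints, objective):
    best = None
    for vals in product(*domains.values()):
        data = dict(zip(domains, vals))
        if all(c(data) for c in constraints):
            if best is None or objective(data) < objective(best):
                best = data
    return best

# Hypothetical trip-booking data: pick flight and hotel prices that respect a
# budget constraint while minimizing the total cost (the outcome datum).
domains = {"flight": range(100, 501, 50), "hotel": range(80, 301, 20)}
constraints = [lambda d: d["flight"] + d["hotel"] <= 450,
               lambda d: d["hotel"] >= 100]
print(optimize_outcome(domains, constraints,
                       objective=lambda d: d["flight"] + d["hotel"]))
# -> {'flight': 100, 'hotel': 100}: the cheapest combination satisfying the constraints
```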
