Publications
Featured research published by Hamid Reza Motahari-Nezhad.
IEEE Internet Computing | 2013
Mohammad Allahbakhsh; Boualem Benatallah; Aleksandar Ignjatovic; Hamid Reza Motahari-Nezhad; Elisa Bertino; Schahram Dustdar
As a new distributed computing model, crowdsourcing lets people leverage the crowd's intelligence and wisdom toward solving problems. This article proposes a framework for characterizing the various dimensions of quality control, a critical issue in crowdsourcing systems. The authors briefly review existing quality-control approaches, identify open issues, and look to future research directions. In the Web extra, the authors discuss both design-time and runtime approaches in more detail.
Very Large Data Bases | 2011
Hamid Reza Motahari-Nezhad; Regis Saint-Paul; Fabio Casati; Boualem Benatallah
Understanding, analyzing, and ultimately improving business processes is a goal of enterprises today. These tasks are challenging because business processes in modern enterprises are implemented over several applications and Web services, and the information about process execution is scattered across several data sources. Understanding modern business processes entails identifying the correlations between events in data sources in the context of business processes (event correlation is the process of finding relationships between events that belong to the same process execution instance). In this paper, we investigate the problem of event correlation for business processes that are realized through the interactions of a set of Web services. We identify various ways in which process-related events can be correlated and investigate the problem of discovering event correlation (semi-)automatically from service interaction logs. We introduce the concept of a process view to represent the process resulting from a given way of correlating events, and that of a process space, referring to the set of possible process views over process events. Event correlation is a challenging problem: there are many ways in which process events could be correlated, and in many cases the choice is subjective. Exploring all possible correlations is computationally expensive, and only some of the correlated event sets result in interesting process views. We propose efficient algorithms and heuristics to identify correlated event sets that potentially lead to interesting process views. To account for its subjectivity, we have designed the event correlation discovery process to be interactive, enabling users to guide it toward process views of their interest, and we organize the discovered process views into a process map that allows users to effectively navigate the process space and identify the views of interest. We report on experiments performed on both synthetic and real-world datasets that show the viability and efficiency of the approach.
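As a rough illustration of the core idea, the sketch below groups events from a toy service-interaction log by a candidate correlation attribute and scores the resulting grouping with a toy pruning heuristic; the event fields, candidate attributes, and the heuristic are assumptions for illustration, not the algorithms proposed in the paper.

```python
"""Minimal sketch of attribute-based event correlation (hypothetical data and
field names; not the paper's actual algorithm). Events that share a value for
a candidate correlation attribute are grouped into one process-view instance."""

from collections import defaultdict
from statistics import mean

# Hypothetical service-interaction log: each event carries message attributes.
events = [
    {"event_id": 1, "operation": "createOrder",  "order_id": "O1", "customer": "C7"},
    {"event_id": 2, "operation": "reserveStock", "order_id": "O1", "customer": "C7"},
    {"event_id": 3, "operation": "createOrder",  "order_id": "O2", "customer": "C7"},
    {"event_id": 4, "operation": "shipOrder",    "order_id": "O1", "customer": "C7"},
    {"event_id": 5, "operation": "reserveStock", "order_id": "O2", "customer": "C7"},
]

def correlate(events, attribute):
    """Group events whose `attribute` values are equal (a key-based correlation condition)."""
    instances = defaultdict(list)
    for e in events:
        instances[e[attribute]].append(e["event_id"])
    return dict(instances)

def interestingness(instances, total):
    """Toy heuristic: prefer correlations that yield neither one giant instance
    nor all-singleton instances (a stand-in for the paper's pruning criteria)."""
    sizes = [len(v) for v in instances.values()]
    return 0.0 if max(sizes) == total or max(sizes) == 1 else mean(sizes) / total

for attr in ("order_id", "customer"):
    view = correlate(events, attr)
    print(attr, view, round(interestingness(view, len(events)), 2))
```

Correlating on `order_id` yields two plausible instances, while correlating on `customer` collapses everything into a single instance and is scored as uninteresting, which is the kind of pruning decision the heuristics automate.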
IEEE Transactions on Services Computing | 2009
Woralak Kongdenfha; Hamid Reza Motahari-Nezhad; Boualem Benatallah; Fabio Casati; Regis Saint-Paul
Standardization in Web services simplifies integration. However, it does not remove the need for adapters, because service interfaces and protocols may still be heterogeneous. In this paper, we characterize the problem of Web service adaptation, focusing on adapters for business interfaces and protocols. Our study shows that many of the differences between business interfaces and protocols are recurring. We introduce mismatch patterns to capture these recurring differences and to provide solutions for resolving them. We leverage mismatch patterns for service adaptation in two ways: by developing stand-alone adapters and via service modification. We then examine the notion of adaptation aspects, which, following the aspect-oriented programming paradigm and the service modification approach, allow for rapid development of adapters. We present a study showing that adaptation aspects are a preferable approach in many cases. The proposed approach is implemented in a proof-of-concept prototype tool and evaluated using both qualitative and quantitative methods.
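The following minimal sketch illustrates the stand-alone-adapter side of the idea for one recurring kind of mismatch (a single client-side operation that maps onto several provider-side operations); the services, operation names, and data are hypothetical and are not taken from the paper's mismatch-pattern catalog.

```python
"""Illustrative sketch of a stand-alone adapter resolving a recurring interface
mismatch (hypothetical services and operations, not from the paper): the client
sends a single `place_order` message, while the provider expects separate
`create_order` and `add_item` calls (a "one message vs. many" mismatch)."""

class LegacyOrderService:
    """Provider-side interface the client cannot call directly."""
    def create_order(self, customer_id):
        self.customer_id, self.items = customer_id, []
        return "order-1"                      # hypothetical order handle

    def add_item(self, order_id, sku, qty):
        self.items.append((sku, qty))

class OrderAdapter:
    """Adapter exposing the interface the client expects and mapping it
    onto the provider's finer-grained protocol."""
    def __init__(self, provider):
        self.provider = provider

    def place_order(self, customer_id, items):
        order_id = self.provider.create_order(customer_id)
        for sku, qty in items:                # split one message into many calls
            self.provider.add_item(order_id, sku, qty)
        return order_id

adapter = OrderAdapter(LegacyOrderService())
print(adapter.place_order("C7", [("sku-1", 2), ("sku-9", 1)]))
```

In the aspect-based alternative described in the paper, comparable mapping logic would instead be woven into the service itself rather than deployed as a separate component.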
Business Process Management | 2011
Seyed-Mehdi-Reza Beheshti; Boualem Benatallah; Hamid Reza Motahari-Nezhad; Sherif Sakr
The execution of a business process (BP) in today's enterprises may involve a workflow and multiple IT systems and services. Often, no complete, up-to-date documentation of the model or of the correlation information of process events exists. Understanding the execution of a BP in terms of its scope and details is challenging, especially because it is subjective: it depends on the perspective of the person looking at the BP execution. We present a framework, simple abstractions, and a language for the exploratory querying and understanding of BP execution from various user perspectives. We propose a query language for analyzing event logs of process-related systems based on two concepts, folders and paths, which enable an analyst to group related events in the logs or find paths among events. Folders and paths can be stored for use in follow-on analysis. We have implemented the proposed techniques and the language, FPSPARQL, by extending the SPARQL graph query language. We present evaluation results on the performance and the quality of the results using a number of process event logs.
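The sketch below mimics the folder and path abstractions in plain Python over a toy event log, since the actual FPSPARQL syntax is not reproduced here; the event fields, the folder predicate, and the "follows" relation are assumptions made only for illustration.

```python
"""Plain-Python illustration of the folder/path idea over a toy event log
(field names and relations are assumptions, not FPSPARQL syntax). A folder
groups related events; a path is a chain of events linked by a chosen
relationship, e.g. same case and increasing timestamp."""

events = [
    {"id": "e1", "case": "c1", "activity": "submit",  "ts": 1},
    {"id": "e2", "case": "c1", "activity": "approve", "ts": 2},
    {"id": "e3", "case": "c2", "activity": "submit",  "ts": 3},
    {"id": "e4", "case": "c1", "activity": "archive", "ts": 5},
]

def folder(events, predicate):
    """A folder node: the (storable) set of events satisfying a condition."""
    return [e for e in events if predicate(e)]

def paths(events, related):
    """Path nodes: chains of events pairwise linked by `related`."""
    chains = []
    for e in sorted(events, key=lambda x: x["ts"]):
        for chain in chains:
            if related(chain[-1], e):
                chain.append(e)
                break
        else:
            chains.append([e])
    return chains

def same_case(a, b):
    """Relationship used to chain events: same case, later timestamp."""
    return a["case"] == b["case"] and a["ts"] < b["ts"]

approvals = folder(events, lambda e: e["activity"] == "approve")
print([e["id"] for e in approvals])
print([[e["id"] for e in p] for p in paths(events, same_case)])
```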
IEEE Transactions on Knowledge and Data Engineering | 2008
Hamid Reza Motahari-Nezhad; Regis Saint-Paul; Boualem Benatallah; Fabio Casati
Understanding the business (interaction) protocol supported by a service is very important for both clients and service providers: it allows developers to know how to write clients that interact with the service, and it allows development tools and runtime middleware to deliver functionality that simplifies the service development lifecycle. It also greatly facilitates the monitoring, visualization, and aggregation of interaction data. This paper presents an approach for discovering protocol definitions from real-world service interaction logs. It first describes the challenges of protocol discovery in such a context. Then, it presents a novel discovery algorithm that is widely applicable, robust to the different kinds of imperfections often present in real-world service logs, and able, thanks in part to heuristics, to derive protocol models of small size. Finally, since finding the most precise and smallest model from imperfect service logs is not algorithmically feasible, the paper presents an approach to refine the discovered protocol via user interaction, to compensate for possible imprecision introduced in the discovered model. The approach has been implemented, and experimental results show its viability on both synthetic and real-world datasets.
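As a much-simplified sketch of the discovery step, the code below derives a transition relation from toy conversation logs by treating the last observed message as the current state; the conversations and message names are invented, and the actual algorithm additionally handles noisy logs, controls model size, and supports user-driven refinement.

```python
"""Simplified sketch of deriving a protocol (finite state machine) from
conversation logs (toy data; not the paper's algorithm). Each state is
identified by the last observed message, a 1-history abstraction."""

from collections import defaultdict

# Hypothetical per-conversation message sequences extracted from a service log.
conversations = [
    ["login", "search", "addToCart", "checkout"],
    ["login", "search", "search", "logout"],
    ["login", "addToCart", "checkout"],
]

def discover_protocol(conversations):
    transitions = defaultdict(set)               # state -> {(message, next_state)}
    for conv in conversations:
        state = "START"
        for msg in conv:
            transitions[state].add((msg, msg))   # next state named after the message
            state = msg
        transitions[state].add(("end", "FINAL"))
    return transitions

for state, outgoing in discover_protocol(conversations).items():
    for message, nxt in sorted(outgoing):
        print(f"{state} --{message}--> {nxt}")
```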
IEEE Conference on Business Informatics | 2013
Hamid Reza Motahari-Nezhad; Keith D. Swenson
Case management refers to the coordination of work that is not routine and predictable and that requires human judgment. Case management has applications in many domains, such as healthcare, legal services, police investigation, and social work. The common aspect of these domains is that the work procedure cannot be prescribed as machine programs; instead, the work is highly variable and must be figured out by knowledge workers each time. They might start with high-level guidelines and frameworks, but the sensitive dependence on the details of the case means that the work patterns emerge from the case as more information becomes available. Knowledge workers must make decisions on the course of action as the case proceeds. Traditionally, case management has been supported by custom-built applications for each domain. There are approaches that attempt to standardize work practices without appreciating the full range of required responses. There is also a push in industry from vendors in areas such as enterprise content management, customer relationship management, and business process management to position their products as case management applications. In this article, we review trends in industry and selected work in academia in the case management space to identify the challenges that industry and the research community face in supporting knowledge workers in an adaptive and flexible manner, where systems need to support the work while keeping the knowledge workers in control.
International Conference on Data Engineering | 2007
Hamid Reza Motahari-Nezhad; Regis Saint-Paul; Boualem Benatallah; Fabio Casati
This paper deals with the problem of discovering protocol models by analyzing real-world interaction logs. There are several scenarios where protocol discovery is useful and needed: (i) in practice, the protocol definition may not be available. This can happen for many reasons, e.g., the service has been developed using a bottom-up approach, by simply SOAP-ifying an existing application; (ii) even when the protocol model is available, protocol discovery is important, as we may want to verify whether the designed protocol model is what is actually supported by the implementation and, if not, what the differences are. An instance of this problem is discovering whether the service is compliant with the protocol specification required by some domain-specific standardization body or industry consortium.
IEEE Computer | 2009
Halvard Skogsrud; Hamid Reza Motahari-Nezhad; Boualem Benatallah; Fabio Casati
As Web services become more widely adopted, developers must cope with the complexity of evolving trust negotiation policies spanning numerous autonomous services. The Trust-Serv framework uses a state-machine-based modeling approach that supports life-cycle policy management and automated enforcement.
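A conceptual sketch of the state-machine view of trust negotiation follows; the states, credentials, and transition table are invented for illustration and do not reflect Trust-Serv's actual policy language or enforcement machinery.

```python
"""Conceptual sketch of a trust negotiation policy modeled as a state machine
(states, credentials, and transitions are hypothetical, not Trust-Serv's
policy language). Disclosing a credential triggers a transition; access is
granted once an accepting state is reached."""

# Policy: transition table mapping (state, disclosed credential) -> next state.
POLICY = {
    ("anonymous", "email_verified"): "identified",
    ("identified", "credit_card"):   "trusted",
    ("identified", "partner_cert"):  "trusted",
}
ACCEPTING = {"trusted"}

def negotiate(disclosures, policy=POLICY, start="anonymous"):
    """Replay a sequence of credential disclosures against the policy machine."""
    state = start
    for credential in disclosures:
        state = policy.get((state, credential), state)  # ignore irrelevant credentials
    return state in ACCEPTING

print(negotiate(["email_verified", "credit_card"]))  # True: reaches "trusted"
print(negotiate(["credit_card"]))                    # False: disclosed out of order
```

Modeling the policy as an explicit transition table is also what makes life-cycle management tractable: evolving a policy amounts to a controlled change of states and transitions rather than edits to ad hoc enforcement code.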
International Conference on Service-Oriented Computing | 2009
Andre R. R. Souza; Bruno Silva; Fernando Antônio Aires Lins; Julio Cesar Damasceno; Nelson Souto Rosa; Paulo Romero Martins Maciel; Robson W. A. Medeiros; Bryan Stephenson; Hamid Reza Motahari-Nezhad; Jun Li; Caio Northfleet
Despite an increasing need to consider security requirements in service composition, incorporating them remains a challenge for many reasons: no clear identification of security requirements for composition, the absence of notations to express them, the difficulty of integrating them into business processes, the complexity of mapping them onto security mechanisms, and the inherent complexity of specifying and enforcing such requirements. We identify security requirements for service composition and define notations to express them at different levels of abstraction. We present a novel approach consisting of a methodology, called Sec-MoSC, to incorporate security requirements into service composition, map security requirements onto enforceable mechanisms, and support execution. We have implemented this approach in a prototype tool by extending the BPMN notation and building on an existing BPMN editor, a BPEL engine, and Apache Rampart. We showcase an illustrative application of the Sec-MoSC toolset.
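The toy sketch below illustrates the general idea of resolving abstract security annotations on composition tasks into enforceable mechanism configurations; the annotation names, tasks, and mechanism choices are hypothetical and are not the Sec-MoSC notation or its actual mappings.

```python
"""Illustrative sketch of mapping abstract security requirements attached to
composition tasks onto enforceable mechanisms (annotation names, tasks, and
mechanism choices are hypothetical, not the Sec-MoSC notation)."""

# Abstract requirement -> concrete mechanism configuration (stand-in values).
MECHANISMS = {
    "Confidentiality": {"mechanism": "message-level encryption", "algorithm": "AES-256"},
    "Integrity":       {"mechanism": "XML digital signature",    "algorithm": "RSA-SHA256"},
    "Authentication":  {"mechanism": "security token",           "profile": "UsernameToken"},
}

# Security annotations attached to tasks of a hypothetical composition.
annotated_tasks = {
    "SubmitPayment": ["Confidentiality", "Integrity"],
    "CheckCredit":   ["Authentication"],
}

def enforcement_plan(tasks, catalog):
    """Resolve each task's abstract requirements into mechanism configurations."""
    return {task: [catalog[req] for req in reqs] for task, reqs in tasks.items()}

for task, configs in enforcement_plan(annotated_tasks, MECHANISMS).items():
    print(task, configs)
```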
Distributed and Parallel Databases | 2016
Seyed-Mehdi-Reza Beheshti; Boualem Benatallah; Hamid Reza Motahari-Nezhad
In today’s knowledge-, service-, and cloud-based economy, businesses accumulate massive amounts of data from a variety of sources. To understand a business, one may need to perform considerable analytics over large hybrid collections of heterogeneous and partially unstructured data captured about process execution. These data, usually modeled as graphs, increasingly show all the typical properties of big data: wide physical distribution, diversity of formats, non-standard data models, and independently managed and heterogeneous semantics. We use the term big process graph to refer to such large hybrid collections of heterogeneous and partially unstructured process-related execution data. Online analytical processing (OLAP) of big process graphs is challenging, as extending existing OLAP techniques to the analysis of graphs is not straightforward. Moreover, process data analysis methods should be capable of processing and querying large amounts of data effectively and efficiently, and therefore have to scale well with the infrastructure. While traditional analytics solutions (relational databases, data warehouses, and OLAP) do a great job of collecting data and providing answers to known questions, key business insights remain hidden in the interactions among objects: for example, it is hard to discover concept hierarchies for entities based on both the data objects and their interactions in process graphs. In this paper, we introduce a framework and a set of methods to support scalable graph-based OLAP analytics over process execution data. The goal is to facilitate analytics over big process graphs by summarizing the process graph and providing multiple views at different granularities. To achieve this goal, we present a model for process OLAP (P-OLAP) and define OLAP-specific abstractions in the process context, such as process cubes, dimensions, and cells. We present a MapReduce-based graph processing engine to support big data analytics over process graphs. We have implemented the P-OLAP framework and integrated it into our existing process data analytics platform, ProcessAtlas, which provides a scalable architecture for querying, exploring, and analyzing large process data. We report on experiments performed on both synthetic and real-world datasets that show the viability and efficiency of the approach.
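As a rough illustration of the graph summarization behind such OLAP-style roll-ups, the sketch below aggregates a tiny process graph along one assumed dimension (a department attribute) into a coarser weighted graph; the attributes, the dimension, and the aggregation are illustrative assumptions, not the P-OLAP model itself.

```python
"""Toy sketch of a process-OLAP-style roll-up (node attributes and the
dimension are invented for illustration): nodes of a process graph are grouped
by a dimension value and edges are aggregated into weighted edges between
groups, giving a coarser view of the same graph."""

from collections import Counter

# A tiny process graph: node -> attributes, plus directed edges between nodes.
nodes = {
    "receive": {"dept": "sales"},   "validate": {"dept": "sales"},
    "approve": {"dept": "finance"}, "pay":      {"dept": "finance"},
}
edges = [("receive", "validate"), ("validate", "approve"), ("approve", "pay")]

def roll_up(nodes, edges, dimension):
    """Aggregate to one node per dimension value; edge weights count the
    original edges crossing (or staying inside) each group."""
    return Counter(
        (nodes[src][dimension], nodes[dst][dimension]) for src, dst in edges
    )

for (src_group, dst_group), weight in roll_up(nodes, edges, "dept").items():
    print(f"{src_group} -> {dst_group}: {weight} edge(s)")
```

In a MapReduce setting, the same grouping-and-counting step would be distributed, with the map phase emitting group pairs per edge and the reduce phase summing the weights.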