Publications


Featured research published by Arik Senderovich.


Conference on Advanced Information Systems Engineering | 2014

Queue Mining – Predicting Delays in Service Processes

Arik Senderovich; Matthias Weidlich; Avigdor Gal; Avishai Mandelbaum

Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Recently, work on process mining showed how management of these processes, and engineering of supporting systems, can be guided by models extracted from the event logs that are recorded during process operation. In this work, we establish a queueing perspective in operational process mining. We propose to consider queues as first-class citizens and use queueing theory as a basis for queue mining techniques. To demonstrate the value of queue mining, we revisit the specific operational problem of online delay prediction: using event data, we show that queue mining yields accurate online predictions of case delay.
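The abstract leaves the concrete predictors to the paper itself; as a minimal sketch of the queueing perspective (a textbook queue-length-based delay estimator, not necessarily the paper's technique, with all parameter names assumed), online delay prediction from the state of the queue might look like:

```python
def predict_delay(queue_length, num_servers, mean_service_time):
    """Queue-length delay predictor: an arriving case must wait for
    roughly queue_length + 1 service completions, which the servers
    produce at a combined rate of num_servers / mean_service_time."""
    if num_servers <= 0:
        raise ValueError("need at least one server")
    return (queue_length + 1) * mean_service_time / num_servers


# Example: 10 cases queued, 5 servers, mean service time of 2.0 minutes.
print(predict_delay(10, 5, 2.0))
```

The point of such a predictor is that it reads the delay off the current queue state rather than off case attributes alone, which is the shift in perspective the paper argues for.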


Information Systems | 2015

Queue mining for delay prediction in multi-class service processes

Arik Senderovich; Matthias Weidlich; Avigdor Gal; Avishai Mandelbaum

Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Information recorded by systems during the operation of these processes provides an angle for operational process analysis, commonly referred to as process mining. In this work, we establish a queueing perspective in process mining to address the online delay prediction problem, which refers to the time that the execution of an activity for a running instance of a service process is delayed due to queueing effects. We present predictors that treat queues as first-class citizens and either enhance existing regression-based techniques for process mining or are directly grounded in queueing theory. In particular, our predictors target multi-class service processes, in which requests are classified by a type that influences their processing. Further, we introduce queue mining techniques that derive the predictors from event logs recorded by an information system during process execution. Our evaluation based on large real-world datasets, from the telecommunications and financial sectors, shows that our techniques yield accurate online predictions of case delay and drastically improve over predictors neglecting the queueing perspective.


Conference on Advanced Information Systems Engineering | 2016

The ROAD from Sensor Data to Process Instances via Interaction Mining

Arik Senderovich; Andreas Rogge-Solti; Avigdor Gal; Jan Mendling; Avishai Mandelbaum

Process mining is a rapidly developing field that aims at automated modeling of business processes based on data coming from event logs. In recent years, advances in tracking technologies, e.g., Real-Time Locating Systems (RTLS), put forward the ability to log business process events as location sensor data. To apply process mining techniques to such sensor data, one needs to overcome an abstraction gap, because location data recordings do not relate to the process directly. In this work, we solve the problem of mapping sensor data to event logs based on process knowledge. Specifically, we propose interactions as an intermediate knowledge layer between the sensor data and the event log. We solve the mapping problem via optimal matching between interactions and process instances. An empirical evaluation of our approach shows its feasibility and provides insights into the relation between ambiguities and deviations from process knowledge, and accuracy of the resulting event log.
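As a toy illustration of the abstraction gap described above (the location names and the location-to-activity mapping here are invented, and the paper's interaction-mining approach is considerably more involved), turning raw location readings into event-log entries could start like this:

```python
def locations_to_events(readings, location_activity):
    """Bridge the abstraction gap: turn raw location readings
    (timestamp, case_id, location) into event-log entries
    (timestamp, case_id, activity), dropping locations that
    carry no process meaning."""
    events = []
    for ts, case, loc in sorted(readings):
        if loc in location_activity:
            events.append((ts, case, location_activity[loc]))
    return events


# Hypothetical RTLS readings and a hand-made mapping.
readings = [(2, 'c1', 'room_b'), (1, 'c1', 'room_a'), (3, 'c1', 'hall')]
mapping = {'room_a': 'Triage', 'room_b': 'Exam'}
print(locations_to_events(readings, mapping))
```

A direct dictionary lookup like this only works when locations map unambiguously to activities; the ambiguities that break this assumption are exactly what the interaction layer and the optimal matching in the paper address.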


Business Process Management | 2014

Mining Resource Scheduling Protocols

Arik Senderovich; Matthias Weidlich; Avigdor Gal; Avishai Mandelbaum

In service processes, as found in the telecommunications, financial, or healthcare sector, customers compete for the scarce capacity of service providers. For such processes, performance analysis is important and it often targets the time that customers are delayed prior to service. However, this wait time cannot be fully explained by the load imposed on service providers. Indeed, it also depends on resource scheduling protocols, which determine the order of activities that a service provider decides to follow when serving customers. This work focuses on automatically learning resource decisions from events. We hypothesize that queueing information serves as an essential element in mining such protocols and hence, we utilize the queueing perspective of customers in the mining process. We propose two types of mining techniques: advanced classification methods from data mining that include queueing information in their explanatory features and heuristics that originate in queueing theory. Empirical evaluation shows that incorporating the queueing perspective into mining of scheduling protocols improves predictive power.
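A minimal heuristic in this spirit (a FIFO-conformance score over enqueue and service-start timestamps; a sketch for intuition, not one of the paper's mining techniques) checks how often the observed service order matches first-in-first-out:

```python
def fifo_conformance(events):
    """Fraction of consecutive service starts consistent with FIFO.
    events: list of (enqueue_time, service_start_time) per customer.
    Under a FIFO protocol, customers start service in enqueue order."""
    by_start = sorted(events, key=lambda e: e[1])
    enq_times = [e[0] for e in by_start]
    ok = sum(1 for a, b in zip(enq_times, enq_times[1:]) if a <= b)
    return ok / max(1, len(enq_times) - 1)


print(fifo_conformance([(0, 1), (1, 2), (2, 3)]))  # perfectly FIFO
print(fifo_conformance([(0, 2), (1, 1)]))          # order violated
```

A low score suggests the resource follows some non-FIFO protocol (e.g., priority classes), which is where the classification-based techniques described in the abstract take over.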


Business Process Management | 2017

Intra and Inter-case Features in Predictive Process Monitoring: A Tale of Two Dimensions

Arik Senderovich; Chiara Di Francescomarino; Chiara Ghidini; Kerwin Jorbina; Fabrizio Maria Maggi

Predictive process monitoring is concerned with predicting measures of interest for a running case (e.g., a business outcome or the remaining time) based on historical event logs. Most current predictive process monitoring approaches consider only intra-case information, i.e., information that comes from the case whose measures of interest one wishes to predict. However, in many systems, the outcome of a running case depends on the interplay of all cases that are being executed concurrently. For example, in many situations, running cases compete over scarce resources. In this paper, following standard predictive process monitoring approaches, we employ supervised machine learning for prediction. In particular, we present a method for feature encoding of process cases that relies on a bi-dimensional state space representation: the first dimension corresponds to intra-case dependencies, while the second dimension reflects inter-case dependencies to represent shared information among running cases. The inter-case encoding derives features based on the notion of case types that can be used to partition the event log into clusters of cases that share common characteristics. To demonstrate the usefulness and applicability of the method, we evaluated it on two real-life datasets: one from an Israeli emergency department process and an open dataset of a manufacturing process.
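As a hedged sketch of the inter-case dimension (the tuple layout is assumed, and the paper's encoding is richer), one natural inter-case feature is the number of concurrently running cases of each type at prediction time, i.e., the load the running case competes with:

```python
def inter_case_features(cases, t):
    """Count, per case type, how many cases are running at time t
    (started but not yet finished). cases: list of (start, end, type).
    The resulting counts can be appended to an intra-case feature
    vector before training a supervised model."""
    counts = {}
    for start, end, ctype in cases:
        if start <= t < end:
            counts[ctype] = counts.get(ctype, 0) + 1
    return counts


cases = [(0, 10, 'a'), (2, 5, 'b'), (6, 9, 'a')]
print(inter_case_features(cases, 3))
print(inter_case_features(cases, 7))
```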


Information Systems | 2016

Conformance checking and performance improvement in scheduled processes

Arik Senderovich; Matthias Weidlich; Liron Yedidsion; Avigdor Gal; Avishai Mandelbaum; Sarah Kadish; Craig A. Bunnell

Service processes, for example in transportation, telecommunications or the health sector, are the backbone of today's economies. Conceptual models of service processes enable operational analysis that supports, e.g., resource provisioning or delay prediction. In the presence of event logs containing recorded traces of process execution, such operational models can be mined automatically. In this work, we target the analysis of resource-driven, scheduled processes based on event logs. We focus on processes for which there exists a pre-defined assignment of activity instances to resources that execute activities. Specifically, we approach the questions of conformance checking (how to assess the conformance of the schedule and the actual process execution) and performance improvement (how to improve the operational process performance). The first question is addressed based on a queueing network for both the schedule and the actual process execution. Based on these models, we detect operational deviations and then apply statistical inference and similarity measures to validate the scheduling assumptions, thereby identifying root causes for these deviations. These results are the starting point for our technique to improve the operational performance. It suggests adaptations of the scheduling policy of the service process to decrease the tardiness (non-punctuality) and lower the flow time. We demonstrate the value of our approach based on a real-world dataset comprising clinical pathways of an outpatient clinic that have been recorded by a real-time location system (RTLS). Our results indicate that the presented technique enables localization of operational bottlenecks along with their root causes, while our improvement technique yields a decrease in median tardiness and flow time by more than 20%.
Highlights:
- We provide a broad extension to the notions of conceptual, operational and continuous conformance checking techniques.
- We present novel theoretical results in terms of scheduling algorithms for Fork/Join networks that re-sequence arriving cases. We prove that these scheduling algorithms are guaranteed to never perform worse than the baseline approach.
- We test the new techniques for process improvement on real-world data, and show that the proposed algorithms yield a 20%-40% improvement in median flow time and tardiness.
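The tardiness and flow-time metrics that the reported improvements refer to can be computed directly from scheduled and actual timestamps; a minimal sketch (the tuple layout is assumed):

```python
import statistics


def performance_metrics(visits):
    """Median tardiness (actual end minus scheduled end, floored at 0)
    and median flow time (actual end minus actual start).
    visits: list of (scheduled_end, actual_start, actual_end)."""
    tardiness = [max(0, actual_end - sched_end)
                 for sched_end, _, actual_end in visits]
    flow = [actual_end - actual_start
            for _, actual_start, actual_end in visits]
    return statistics.median(tardiness), statistics.median(flow)


# Three hypothetical visits, all scheduled to end at time 10.
visits = [(10, 0, 12), (10, 2, 9), (10, 1, 15)]
print(performance_metrics(visits))
```

Flooring tardiness at zero matches its usual definition in scheduling: finishing early does not offset finishing late.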


Business Process Management | 2015

Discovering Queues from Event Logs with Varying Levels of Information

Arik Senderovich; S.J.J. Leemans; Shahar Harel; Avigdor Gal; Avishai Mandelbaum; Wil M. P. van der Aalst

Detecting and measuring resource queues is central to business process optimization. Queue mining techniques allow for the identification of bottlenecks and other process inefficiencies, based on event data. This work focuses on the discovery of resource queues. In particular, we investigate the impact of available information in an event log on the ability to accurately discover queue lengths, i.e. the number of cases waiting for an activity. Full queueing information, i.e. timestamps of enqueueing and exiting the queue, makes queue discovery trivial. However, often we see only the completions of activities. Therefore, we focus our analysis on logs with partial information, such as missing enqueueing times or missing both enqueueing and service start times. The proposed discovery algorithms handle concurrency and make use of statistical methods for discovering queues under this uncertainty. We evaluate the techniques using real-life event logs. A thorough analysis of the empirical results provides insights into the influence of information levels in the log on the accuracy of the measurements.
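As the abstract notes, full queueing information makes queue discovery trivial; a sketch of that base case (tuple layout assumed; the paper's statistical techniques for partial information are far less direct) simply counts the cases enqueued at a given time:

```python
def queue_length_at(t, enqueue_exit_pairs):
    """With full queueing information (enqueue and exit timestamps
    per case), the queue length at time t is the number of cases
    that have enqueued but not yet exited the queue."""
    return sum(1 for enq, ext in enqueue_exit_pairs if enq <= t < ext)


# Three cases with (enqueue_time, exit_time).
pairs = [(0, 5), (1, 3), (2, 8)]
print(queue_length_at(2, pairs))
print(queue_length_at(4, pairs))
```

When enqueue times (or both enqueue and service-start times) are missing, this count can no longer be taken directly from the log, which is the uncertainty the paper's discovery algorithms are built to handle.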


Business Process Management | 2015

Data-Driven Performance Analysis of Scheduled Processes

Arik Senderovich; Andreas Rogge-Solti; Avigdor Gal; Jan Mendling; Avishai Mandelbaum; Sarah Kadish; Craig A. Bunnell

The performance of scheduled business processes is of central importance for services and manufacturing systems. However, current techniques for performance analysis do not take both queueing semantics and the process perspective into account. In this work, we address this gap by developing a novel method for utilizing rich process logs to analyze performance of scheduled processes. The proposed method combines simulation, queueing analytics, and statistical methods. At the heart of our approach is the discovery of an individual-case model from data, based on an extension of the Colored Petri Nets formalism. The resulting model can be simulated to answer performance queries, yet it is computationally inefficient. To reduce the computational cost, the discovered model is projected into Queueing Networks, a formalism that enables efficient performance analytics. The projection is facilitated by a sequence of folding operations that alter the structure and dynamics of the Petri Net model. We evaluate the approach with a real-world dataset from Dana-Farber Cancer Institute, a large outpatient cancer hospital in the United States.


Business Process Management | 2016

In Log and Model We Trust? A Generalized Conformance Checking Framework

Andreas Rogge-Solti; Arik Senderovich; Matthias Weidlich; Jan Mendling; Avigdor Gal

While models and event logs are readily available in modern organizations, their quality can seldom be trusted. Raw event logs are often noisy, incomplete, and contain erroneous recordings. The quality of process models, both conceptual and data-driven, heavily depends on the inputs and parameters that shape these models, such as the domain expertise of the modelers and the quality of execution data. These quality issues are specifically a challenge for conformance checking, the process mining task that aims at coping with low model or log quality by comparing the model against the corresponding log, or vice versa. The prevalent assumption in the literature is that at least one of the two can be fully trusted. In this work, we propose a generalized conformance checking framework that caters for the common case in which one fully trusts neither the log nor the model. In our experiments we show that our proposed framework balances the trust in model and log as a generalization of state-of-the-art conformance checking techniques.


Business Process Management | 2016

P³-Folder: Optimal Model Simplification for Improving Accuracy in Process Performance Prediction

Arik Senderovich; Alexander Shleyfman; Matthias Weidlich; Avigdor Gal; Avishai Mandelbaum

Operational process models such as generalised stochastic Petri nets (GSPNs) are useful when answering performance queries on business processes (e.g. ‘how long will it take for a case to finish?’). Recently, methods for process mining have been developed to discover and enrich operational models based on a log of recorded executions of processes, which enables evidence-based process analysis. To avoid a bias due to infrequent execution paths, discovery algorithms strive for a balance between over-fitting and under-fitting regarding the originating log. However, state-of-the-art discovery algorithms address this balance solely for the control-flow dimension, neglecting possible over-fitting in terms of performance annotations. In this work, we thus offer a technique for performance-driven model reduction of GSPNs, using structural simplification rules. Each rule induces an error in performance estimates with respect to the original model. However, we show that this error is bounded and that the reduction in model parameters incurred by the simplification rules increases the accuracy of process performance prediction. We further show how to find an optimal sequence of applying simplification rules to obtain a minimal model under a given error budget for the performance estimates. We evaluate the approach with a real-world case in the healthcare domain, showing that model simplification indeed yields significant improvements in time prediction accuracy.

Collaboration


Dive into Arik Senderovich's collaborations.

Top Co-Authors

- Avigdor Gal (Technion – Israel Institute of Technology)
- Matthias Weidlich (Humboldt University of Berlin)
- Avishai Mandelbaum (Technion – Israel Institute of Technology)
- François Schnitzler (Technion – Israel Institute of Technology)
- Andreas Rogge-Solti (Vienna University of Economics and Business)
- Jan Mendling (Vienna University of Economics and Business)
- Alexander Shleyfman (Technion – Israel Institute of Technology)
- Liron Yedidsion (Technion – Israel Institute of Technology)