
Publication


Featured research published by Segev Wasserkrug.


International Conference on Autonomic Computing | 2004

Autonomic self-optimization according to business objectives

Sarel Aiber; Dagan Gilat; Ariel Landau; Natalia Razinkov; Aviad Sela; Segev Wasserkrug

A central challenge in the runtime management of computing environments is the necessity to keep these environments continuously optimized. In this paper we introduce a new paradigm, which focuses on self-optimization according to high-level business objectives such as maximizing revenue. It replaces the more traditional optimizations that are based upon IT measures such as resource availability. A general, autonomous process is defined to enable such optimizations, and a set of technologies and methodologies is introduced to support the implementation of such a process. The paper concludes with two types of validation tests, carried out on an eCommerce site, that demonstrate the value and applicability of this approach.


Distributed Event-Based Systems | 2008

Complex event processing over uncertain data

Segev Wasserkrug; Avigdor Gal; Opher Etzion; Yulia Turchin

In recent years, there has been a growing need for active systems that can react automatically to events. Some events are generated externally and deliver data across distributed systems, while others are materialized by the active system itself. Event materialization is hampered by uncertainty that may be attributed to unreliable data sources and networks, or the inability to determine with certainty whether an event has actually occurred. Two main obstacles exist when designing a solution to the problem of event materialization with uncertainty. First, event materialization should be performed efficiently, at times under a heavy load of incoming events from various sources. The second challenge involves the generation of a correct probability space, given uncertain events. We present a solution to both problems by introducing an efficient mechanism for event materialization under uncertainty. A model for representing materialized events is presented and two algorithms for correctly specifying the probability space of an event history are given. The first provides an accurate, albeit expensive method based on the construction of a Bayesian network. The second is a Monte Carlo sampling algorithm that heuristically assesses materialized event probabilities. We experimented with both the Bayesian network and the sampling algorithms, showing the latter to be scalable under an increasing rate of explicit event delivery and an increasing number of uncertain rules (while the former is not). Finally, our sampling algorithm accurately and efficiently estimates the probability space.
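The sampling idea behind the second algorithm can be illustrated with a minimal sketch. This is not the paper's mechanism: the event names, probabilities, and rule below are hypothetical, the base events are assumed independent, and the temporal structure of an event history is elided.

```python
import random

def estimate_event_probability(base_events, rule, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the probability that a complex event
    materializes, given occurrence probabilities for base events and a
    boolean rule over them (toy sketch of the sampling approach)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Sample one possible world: which base events actually occurred.
        world = {name: rng.random() < p for name, p in base_events.items()}
        if rule(world):
            hits += 1
    return hits / n_samples

# Hypothetical rule: a "breach" event materializes when a login failure
# and a firewall alert both occur (temporal ordering omitted for brevity).
probs = {"login_failure": 0.3, "firewall_alert": 0.5}
p_breach = estimate_event_probability(
    probs, lambda w: w["login_failure"] and w["firewall_alert"])
```

The estimate converges to the exact product 0.3 × 0.5 = 0.15 as the sample count grows, which is the scalability trade-off the abstract describes: accuracy is approximate, but cost grows with samples rather than with the size of an exact Bayesian network.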


IEEE Transactions on Knowledge and Data Engineering | 2012

Efficient Processing of Uncertain Events in Rule-Based Systems

Segev Wasserkrug; Avigdor Gal; Opher Etzion; Yulia Turchin

There is a growing need for systems that react automatically to events. While some events are generated externally and deliver data across distributed systems, others need to be derived by the system itself based on available information. Event derivation is hampered by uncertainty attributed to causes such as unreliable data sources or the inability to determine with certainty whether an event has actually occurred, given available information. Two main challenges exist when designing a solution for event derivation under uncertainty. First, event derivation should scale under heavy loads of incoming events. Second, the associated probabilities must be correctly captured and represented. We present a solution to both problems by introducing a novel generic and formal mechanism and framework for managing event derivation under uncertainty. We also provide empirical evidence demonstrating the scalability and accuracy of our approach.


IEEE International Conference on e-Technology, e-Commerce and e-Service | 2004

e-CLV: a modelling approach for customer lifetime evaluation in e-commerce domains, with an application and case study for online auctions

Opher Etzion; Amit Fisher; Segev Wasserkrug

e-Commerce companies acknowledge that customers are their most important asset and that it is imperative to estimate the potential value of this asset. In conventional marketing, one of the widely accepted methods for evaluating customer value uses models known as Customer Lifetime Value (CLV). However, these existing models suffer from two major shortcomings: they either do not take into account significant attributes of customer behavior unique to e-Commerce, or they do not provide a method for generating specific models from the large body of relevant historical data that can be easily collected in e-Commerce sites. This paper describes a general modeling approach, based on Markov Chain Models, for calculating customer value in the e-Commerce domain. This approach extends existing CLV models by taking into account a new set of variables required for evaluating customers' value in an e-Commerce environment. In addition, we describe how data-mining algorithms can aid in deriving such a model, thereby taking advantage of the historical customer data available in such environments. We then present an application of this modeling approach: the creation of a model for online auctions, one of the fastest-growing and most lucrative types of e-Commerce. The article also describes a case study, which demonstrates how our model provides more accurate predictions than existing conventional CLV models regarding the future income generated by customers.
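The Markov-chain view of CLV can be sketched in a few lines: customer states, a transition matrix, per-state expected revenue, and a discount factor. The states and numbers below are hypothetical, and the paper's e-CLV model adds e-Commerce-specific variables that this toy omits.

```python
def clv(transition, revenue, discount=0.9, horizon=50):
    """Expected discounted customer lifetime value per starting state,
    for a Markov chain over customer states. Finite-horizon value
    iteration: v <- r + d * P v (illustrative sketch only)."""
    n = len(revenue)
    v = [0.0] * n
    for _ in range(horizon):
        v = [revenue[i] + discount * sum(transition[i][j] * v[j]
                                         for j in range(n))
             for i in range(n)]
    return v

# Hypothetical states: 0 = active bidder, 1 = occasional bidder, 2 = churned.
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.0, 0.0, 1.0]]   # churn is absorbing
r = [10.0, 2.0, 0.0]     # expected revenue per period, per state
values = clv(P, r)
```

Because the churned state is absorbing with zero revenue, its value is exactly zero, and an active customer is worth strictly more than an occasional one; data mining would be used, as the abstract notes, to estimate `P` and `r` from site logs.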


ACM Transactions on Modeling and Computer Simulation | 2010

Simulation optimization using the cross-entropy method with optimal computing budget allocation

Donghai He; Loo Hay Lee; Chun-Hung Chen; Michael C. Fu; Segev Wasserkrug

We propose to improve the efficiency of simulation optimization by integrating the notion of optimal computing budget allocation into the Cross-Entropy (CE) method, which is a global optimization search approach that iteratively updates a parameterized distribution from which candidate solutions are generated. This article focuses on continuous optimization problems. In the stochastic simulation setting where replications are expensive but noise in the objective function estimate could mislead the search process, the allocation of simulation replications can make a significant difference in the performance of such global optimization search algorithms. A new allocation scheme is developed based on the notion of optimal computing budget allocation. The proposed approach improves the updating of the sampling distribution by carrying out this computing budget allocation in an efficient manner, by minimizing the expected mean-squared error of the CE weight function. Numerical experiments indicate that the computational efficiency of the CE method can be substantially improved if the ideas of computing budget allocation are applied.
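A minimal cross-entropy loop for a noisy 1-D objective makes the role of replication allocation concrete. This sketch uses a *uniform* allocation (the same number of replications per candidate); the paper's contribution is precisely to replace that with an optimal computing budget allocation. The objective and all parameters below are hypothetical.

```python
import random
import statistics

def cross_entropy_minimize(noisy_f, mu=0.0, sigma=5.0,
                           n_candidates=60, elite_frac=0.2,
                           reps=5, iterations=40, seed=1):
    """Cross-Entropy search for a 1-D continuous minimum under noise.
    Each iteration samples candidates from N(mu, sigma), averages
    'reps' noisy evaluations per candidate, and refits (mu, sigma)
    to the elite fraction. Uniform replication per candidate; an
    OCBA scheme would instead concentrate replications where they
    most reduce error in selecting the elite set."""
    rng = random.Random(seed)
    n_elite = max(2, int(elite_frac * n_candidates))
    for _ in range(iterations):
        xs = [rng.gauss(mu, sigma) for _ in range(n_candidates)]
        scores = [sum(noisy_f(x, rng) for _ in range(reps)) / reps
                  for x in xs]
        elite = [x for _, x in sorted(zip(scores, xs))[:n_elite]]
        mu = statistics.mean(elite)
        sigma = max(statistics.stdev(elite), 1e-3)  # avoid collapse
    return mu

# Hypothetical noisy objective: true minimum at x = 3.
f = lambda x, rng: (x - 3.0) ** 2 + rng.gauss(0, 0.5)
x_star = cross_entropy_minimize(f)
```

Once `sigma` shrinks, objective differences among candidates fall below the evaluation noise and uniform replication wastes budget, which is the regime where the paper's allocation scheme pays off.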


Distributed Event-Based Systems | 2009

Tuning complex event processing rules using the prediction-correction paradigm

Yulia Turchin; Avigdor Gal; Segev Wasserkrug

There is a growing need for the use of active systems, systems that act automatically based on events. In many cases, providing such active functionality requires materializing (inferring) the occurrence of relevant events. A widespread paradigm for enabling such materialization is Complex Event Processing (CEP), a rule-based paradigm, which currently relies on domain experts to fully define the relevant rules. These experts need to provide the set of basic events which serve as input to a rule, their inter-relationships, and the parameters of the events for determining a new event materialization. While it is reasonable to expect that domain experts will be able to provide a partial rule specification, providing all the required details is a hard task, even for domain experts. Moreover, in many active systems, rules may change over time, due to the dynamic nature of the domain. Such changes complicate the specification task even further, as the expert must constantly update the rules. As a result, we seek additional support for the definition of rules, beyond expert opinion. This work presents a mechanism for automating both the initial definition of rules and the update of rules over time. This mechanism combines partial information provided by the domain expert with machine learning techniques, and is aimed at improving the accuracy of event specification and materialization. The proposed mechanism consists of two main repetitive stages, namely rule parameter prediction and rule parameter correction. The former updates the parameters using available expert knowledge regarding their future changes. The latter stage utilizes expert feedback, regarding the actual past occurrence of events and the events materialized by the CEP framework, to tune rule parameters. We also describe possible implementations of both stages, based on a statistical estimator, and evaluate our approach using a case study from the intrusion detection domain.
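One prediction–correction cycle for a single rule parameter (a detection threshold) can be sketched as follows. This is a toy stand-in, not the paper's estimator: the drift term, the gain, and the feedback format are all assumptions made for illustration.

```python
def tune_threshold(threshold, drift, feedback, gain=0.3):
    """One prediction-correction cycle for a single CEP rule parameter.
    Prediction applies expert-supplied drift to the threshold;
    correction nudges it using labelled feedback, given as pairs of
    (observed_value, event_really_occurred). Illustrative sketch only."""
    # Prediction stage: expert knowledge about how the parameter evolves.
    threshold += drift
    # Correction stage: react to misclassified observations.
    for value, occurred in feedback:
        fired = value >= threshold
        if fired != occurred:
            # Move the threshold toward the misclassified observation:
            # a missed event pulls it down, a false alarm pushes it up.
            threshold += gain * (value - threshold)
    return threshold
```

For example, feedback reporting a real event at value 8.0 that a threshold of 10.0 missed lowers the threshold, while correctly classified observations leave it unchanged; repeating the cycle tracks a drifting domain, as the abstract describes.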


Queueing Systems | 2009

Waiting and sojourn times in a multi-server queue with mixed priorities

Sergey Zeltyn; Zohar Feldman; Segev Wasserkrug

We consider a multi-server queue with K priority classes. In this system, customers of the P highest priorities (P<K) can preempt customers with lower priorities, ejecting them from service and sending them back into the queue. Service times are assumed exponential with the same mean for all classes. The Laplace–Stieltjes transforms of waiting times are calculated explicitly and the Laplace–Stieltjes transforms of sojourn times are provided in an implicit form via a system of functional equations. In both cases, moments of any order can be easily calculated. Specifically, we provide formulae for the steady-state means and the second moments of waiting times for all priority classes. We also study some approximations of sojourn-time distributions via their moments. In the practical part of our paper, we discuss the use of mixed priorities for different types of Service Level Agreements, including an example based on a real scheduling problem of IT support teams.
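For orientation, here is the classical single-class baseline that the paper's mixed-priority analysis generalizes: the Erlang C formula for the mean waiting time in an M/M/c queue. This is standard queueing theory, not the paper's K-class result.

```python
from math import factorial

def erlang_c_mean_wait(lam, mu, c):
    """Mean waiting time E[W] in a single-class M/M/c queue (Erlang C).
    lam: arrival rate, mu: per-server service rate, c: servers.
    The paper generalizes this setting to K classes with mixed
    preemptive/non-preemptive priorities."""
    rho = lam / (c * mu)
    assert rho < 1, "queue must be stable"
    a = lam / mu  # offered load in Erlangs
    inv_p0 = (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    p_wait = (a**c / (factorial(c) * (1 - rho))) / inv_p0  # delay probability
    return p_wait / (c * mu - lam)
```

With c = 1 this reduces to the familiar M/M/1 result E[W] = ρ/(μ − λ); priority classes change which customers pay that waiting time, which is what the paper's transforms capture per class.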


Winter Simulation Conference | 2009

Toward simulation-based real-time decision-support systems for emergency departments

Yariv N. Marmor; Segev Wasserkrug; Sergey Zeltyn; Yossi Mesika; Ohad Greenshpan; Boaz Carmeli; Avraham Shtub; Avishai Mandelbaum

Emergency Departments (EDs) require advanced support systems for monitoring and controlling their processes: clinical, operational, and financial. A prerequisite for such a system is comprehensive operational information (e.g. queueing times, busy resources,…), reliably portraying and predicting ED status as it evolves in time. To this end, simulation comes to the rescue, through a two-step procedure that is hereby proposed for supporting real-time ED control. In the first step, an ED manager infers the ED's current state, based on historical data and simulation: data is fed into the simulator (e.g. via location-tracking systems, such as RFID tags), and the simulator then completes unobservable state components. In the second step, and based on the inferred present state, simulation supports control by predicting future ED scenarios. To this end, we estimate time-varying resource requirements via a novel simulation-based technique that utilizes the notion of offered load.
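The offered-load notion the abstract ends with can be sketched numerically: the load at time t is the expected number of patients who have arrived and are still in service, here approximated by a Riemann sum with an exponential length-of-stay assumption. The arrival profile, length of stay, and square-root staffing rule below are illustrative assumptions, not the paper's simulation-based estimator.

```python
import math

def offered_load(arrival_rates, mean_los, dt=1.0):
    """Time-varying offered load R(t), approximated as
    R(t) ~ sum over past periods u of lambda(u) * P(LOS > t - u) * dt,
    with exponentially distributed length of stay (toy sketch)."""
    loads = []
    for t in range(len(arrival_rates)):
        r = sum(arrival_rates[u] * math.exp(-(t - u) * dt / mean_los) * dt
                for u in range(t + 1))
        loads.append(r)
    return loads

def sqrt_staffing(loads, beta=1.0):
    """Square-root safety staffing: c(t) = R(t) + beta * sqrt(R(t))."""
    return [math.ceil(r + beta * math.sqrt(r)) for r in loads]

# Hypothetical profile: constant 2 arrivals/hour, 4-hour mean stay.
loads = offered_load([2.0] * 40, mean_los=4.0)
```

With a constant arrival rate the load climbs toward its steady-state level of roughly λ·E[LOS] patients, and staffing follows the load curve rather than the instantaneous arrival rate, which is the point of the offered-load view.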


IEEE Transactions on Knowledge and Data Engineering | 2008

Inference of Security Hazards from Event Composition Based on Incomplete or Uncertain Information

Segev Wasserkrug; Avigdor Gal; Opher Etzion

In many security-related contexts, a quick recognition of security hazards is required. Such recognition is challenging, since available information sources are often insufficient to infer the occurrence of hazards with certainty. This requires that the recognition of security hazards be carried out using inference based on patterns of occurrences distributed over space and time. The two main existing approaches to the inference of security hazards are a) custom-coded solutions, which are tailored to specific patterns and cannot respond quickly to changes in the patterns of occurrences used for inference, and b) approaches based on direct statistical inference techniques, such as regression, which do not enable combining various kinds of evidence regarding the same hazard. In this work, we introduce a more generic formal framework which overcomes the aforementioned deficiencies, together with a case study illustrating the detection of DoS attacks.
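The core difficulty, combining several kinds of evidence about one hazard, can be illustrated with a naive-Bayes odds update. This is a deliberately simplified stand-in for the paper's formal framework: the evidence sources are assumed conditionally independent, and the DoS likelihoods below are invented numbers.

```python
def posterior_hazard(prior, evidence):
    """Combine independent evidence sources about a security hazard via
    Bayes' rule in odds form. 'evidence' is a list of pairs
    (P(observation | hazard), P(observation | no hazard))."""
    odds = prior / (1 - prior)
    for p_given_h, p_given_not in evidence:
        odds *= p_given_h / p_given_not  # multiply in each likelihood ratio
    return odds / (1 + odds)

# Hypothetical DoS evidence: a traffic spike and a SYN-flood signature.
p_dos = posterior_hazard(0.01, [(0.9, 0.1), (0.8, 0.05)])
```

Even from a 1% prior, two strong but individually inconclusive observations push the posterior close to 60%, which is exactly the kind of cross-evidence combination the abstract says regression-style approaches do not support.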


International Journal of Services Operations and Informatics | 2008

Creating operational shift schedules for third-level IT support: challenges, models and case study

Segev Wasserkrug; Shai Taub; Sergey Zeltyn; Dagan Gilat; Vladimir Lipets; Zohar Feldman; Avishai Mandelbaum

IT support can be divided into first-level support, second-level support and third-level support. Although there is a large body of existing work regarding demand forecasting and shift schedule creation for various domains such as call centres, very little work exists for second- and third-level IT support. Moreover, there is a significant difference between such support and other types of services. As a result, current best practices for scheduling such work are not based on demand, but rather on primitive rules of thumb. Due to the increasing number of people providing such support, theory and practice are sorely needed for scheduling second- and third-level support shifts according to actual demand. In this work, we present an end-to-end methodology for forecasting and scheduling this type of work. We also present a case study in which this methodology demonstrated significant potential savings in terms of manpower resources.
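The scheduling side of such a methodology can be sketched, in very reduced form, as covering a forecast demand vector with candidate shifts. The greedy heuristic and the toy demand below are illustrative assumptions; the paper's methodology pairs a forecasting step with proper optimization, not this heuristic.

```python
def greedy_schedule(demand, shifts):
    """Greedily cover an hourly staffing-demand vector with candidate
    shifts, each given as (start_hour, length); hours wrap around the
    horizon. Toy sketch of demand-driven shift scheduling."""
    remaining = list(demand)
    n = len(remaining)
    chosen = []
    while any(r > 0 for r in remaining):
        def hours_covered(s):
            # Person-hours of still-unmet demand this shift would cover.
            return sum(1 for h in range(s[0], s[0] + s[1])
                       if remaining[h % n] > 0)
        best = max(shifts, key=hours_covered)
        if hours_covered(best) == 0:
            break  # no candidate shift helps; infeasible with this set
        chosen.append(best)
        for h in range(best[0], best[0] + best[1]):
            remaining[h % n] = max(0, remaining[h % n] - 1)
    return chosen
```

For instance, demand `[1, 1, 0, 0]` with candidate shifts `(0, 2)` and `(2, 2)` is covered by a single early shift; the contrast with the rules of thumb the abstract criticizes is that the shift mix here is driven entirely by the demand vector.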
