
Publication


Featured research published by Larisa Shwartz.


international conference on cloud computing | 2010

Workload Migration into Clouds - Challenges, Experiences, Opportunities

Christopher Ward; N. Aravamudan; Kamal Bhattacharya; Karen Cheng; Robert Filepp; Robert D. Kearney; Brian Peterson; Larisa Shwartz; Christopher C. Young

The steady drumbeat of Cloud as a disruptive influence for Infrastructure Service Providers (ISPs) and the enablement vehicle for Software as a Service (SaaS) providers can be heard loud and clear in the industry today. In fact, Cloud is probably at the peak of the hype curve, and there are already identified challenges associated with effective deployment of business-critical applications (so-called Production Applications) in mature enterprises. One of these challenges is the smooth migration of workload from the previous environment to the new cloud-enabled environment in a cost-effective way, with minimal disruption and risk. In this paper we introduce extensions to an integrated automation capability called the Darwin framework that enables workload migration for this scenario, and we discuss the impact that automated migration has on the cost and risks normally associated with migration to clouds.


ieee international conference on services computing | 2007

IT service management automation - A hybrid methodology to integrate and orchestrate collaborative human centric and automation centric workflows

Naga A. Ayachitula; Melissa J. Buco; Yixin Diao; Surendra Maheswaran; Raju Pavuluri; Larisa Shwartz; Christopher Ward

People, processes, technology and information are the essential building blocks for creating a successful IT infrastructure in today's fast-paced, service-focused marketplace. ITIL, which is recognized as the de facto standard for service management, is a process-based approach. ITIL focuses on a set of integrated processes which run the gamut from highly interactive and dynamic processes, such as problem determination, to highly repeatable processes, such as patch deployment, which are best handled in a fully automated, non-interactive fashion. The ability to support and integrate the full spectrum of interactivity for these processes with the appropriate level of automation is crucial for the service provider. Also key is the ability to identify opportunities to increase the level of automation as maturity and technology permit. In this paper, we propose a conceptual methodology for IT service management process automation that leverages the ontological relationships between process artifacts and resource artifacts to develop data-aware processes, yielding an effective automated approach that integrates both highly automated and human-centric process models. The objective is a systematic approach that addresses the needs of an IT organization so that highly automated operational processes work in conjunction with collaborative, human decision-centric processes to deliver IT services effectively. In addition, we propose a complexity model to assist in identifying automation opportunities and thereby satisfy the need for continuous efficiency and cost improvement.
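As a rough illustration of what a complexity model for spotting automation opportunities could look like, the sketch below scores process activities by a weighted sum of factors that typically resist automation and flags the low-scoring ones. The factors, weights, threshold and sample activities are assumptions invented for the example, not the model proposed in the paper.

```python
# Illustrative sketch of a process-complexity score used to flag automation
# candidates. The factors, weights, and threshold are assumptions for the
# example, not the paper's actual model.

def complexity_score(activity):
    """Weighted sum of factors that typically make an activity hard to automate."""
    weights = {
        "decision_points": 3.0,   # human judgment calls in the activity
        "data_sources": 1.5,      # distinct systems that must be consulted
        "handoffs": 2.0,          # transfers between teams or roles
        "exception_rate": 4.0,    # fraction of runs that deviate from the norm
    }
    return sum(weights[k] * activity.get(k, 0) for k in weights)

def automation_candidates(activities, threshold=5.0):
    """Return activities whose complexity is low enough to automate fully."""
    return [a["name"] for a in activities if complexity_score(a) <= threshold]

if __name__ == "__main__":
    activities = [
        {"name": "patch deployment", "decision_points": 0, "data_sources": 2,
         "handoffs": 0, "exception_rate": 0.05},
        {"name": "problem determination", "decision_points": 3, "data_sources": 4,
         "handoffs": 2, "exception_rate": 0.4},
    ]
    print(automation_candidates(activities))  # ['patch deployment']
```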


service-oriented computing and applications | 2010

Image selection as a service for cloud computing environments

Robert Filepp; Larisa Shwartz; Christopher Ward; Robert D. Kearney; Karen Cheng; Christopher C. Young; Yanal Ghosheh

Customers of Cloud Services are expected to choose specific machine images to instantiate in order to host their workloads. Unfortunately, very little information is provided to users to enable them to make intelligent choices. We believe that as the number of images proliferates, it will become increasingly difficult for users to decide effectively. Cloud service providers often allow their customers to instantiate standard system images, to modify their instances, and to store images of these customized instances for public or private future use. Storing modified instances as images enables customers to avoid re-provisioning and re-configuring the required resources, thereby reducing their future costs. However, Cloud service providers generally do not expose details of image configurations in a rigorous, canonical fashion, nor do they offer services that assist clients in selecting the best target image to support client transformation objectives. Rather, they allow customers to enter a free-form description of an image on a best-effort basis. This means that in order to find a "best fit" image to instantiate, a human user must review potentially thousands of image descriptions, reading each one to evaluate its suitability as a platform to host the source application. Furthermore, the actual content of the selected image may differ greatly from its description. Finally, even images that have been customized and retained for future use may need additional provisioning and customization to accommodate specific needs. In this paper we propose a service that accumulates image configuration details in a canonical fashion, and a further service that employs an algorithm to order images by best fit and least cost in conformance with user-specified policies. Together, these services facilitate workload transformation into enterprise cloud environments.
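To make the "best fit / least cost" ordering concrete, here is a minimal sketch that ranks candidate images by the cost of the components they are missing relative to a required configuration, with an optional policy filter. Representing a canonical image description as a plain set of components and assigning unit installation costs are simplifications assumed for the example, not the paper's actual service design.

```python
# Sketch of "best fit / least cost" image ordering. It assumes a canonical
# image description is simply the set of installed components and that each
# missing component has a known installation cost; both are simplifications
# for illustration only.

def image_cost(required, image, install_cost):
    """Cost to bring an image up to the required configuration."""
    missing = required - image["components"]
    return sum(install_cost.get(c, 1.0) for c in missing)

def rank_images(required, images, install_cost, policy=None):
    """Order images by least additional cost, applying an optional policy filter."""
    candidates = [img for img in images if policy is None or policy(img)]
    return sorted(candidates, key=lambda img: image_cost(required, img, install_cost))

if __name__ == "__main__":
    required = {"rhel6", "websphere", "db2"}
    images = [
        {"id": "img-001", "components": {"rhel6", "websphere"}},
        {"id": "img-002", "components": {"rhel6"}},
    ]
    install_cost = {"db2": 5.0, "websphere": 3.0}
    for img in rank_images(required, images, install_cost):
        print(img["id"], image_cost(required, img, install_cost))
```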


congress on evolutionary computation | 2007

Management of Service Process QoS in a Service Provider - Service Supplier Environment

Genady Grabarnik; Heiko Ludwig; Larisa Shwartz

IT service providers typically must comply with service level agreements that are part of their usage contracts with customers. Not only is IT infrastructure subject to service level guarantees such as availability or response time, but so are service management processes as defined by the IT Infrastructure Library (ITIL), such as change and incident processes and the fulfillment of service requests. SLAs relating to service management processes typically address metrics such as initial response time and fulfillment time. Large service providers have a choice of which internal service delivery team or external service provider they assign to each atomic process of a service process, each of which has different costs or prices for different turn-around times at different levels of risk. This choice in QoS of different service providers can be used to manage the trade-off between penalty costs and fulfillment costs. This paper proposes a model as a basis for service provider choice at process request time. The model can be used to reduce the total service costs of IT service providers that use alternative delivery teams and external service providers.
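A toy version of the underlying trade-off helps make the idea concrete: for each candidate delivery team or supplier, the expected total cost is its price plus the SLA penalty weighted by the probability of missing the deadline, and the cheapest option in expectation wins. The prices, breach probabilities and penalty below are invented for illustration; the paper's model is more general than this.

```python
# Toy provider-choice trade-off: expected cost = price + P(breach) * penalty.
# All figures are made-up illustrations, not data from the paper.

def expected_cost(option, deadline, penalty):
    prob_breach = option["prob_exceeds"](deadline)
    return option["price"] + prob_breach * penalty

def choose_supplier(options, deadline, penalty):
    return min(options, key=lambda o: expected_cost(o, deadline, penalty))

if __name__ == "__main__":
    options = [
        # cheap internal team, slower and riskier against a 4-hour deadline
        {"name": "internal", "price": 50.0,
         "prob_exceeds": lambda d: 0.30 if d <= 4 else 0.05},
        # premium external supplier, fast but expensive
        {"name": "external", "price": 120.0,
         "prob_exceeds": lambda d: 0.02},
    ]
    best = choose_supplier(options, deadline=4, penalty=500.0)
    # internal: 50 + 0.30*500 = 200; external: 120 + 0.02*500 = 130
    print(best["name"])  # external
```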


network operations and management symposium | 2012

Optimizing system monitoring configurations for non-actionable alerts

Liang Tang; Tao Li; Florian Pinel; Larisa Shwartz; Genady Grabarnik

Today's competitive business climate and the complexity of IT environments dictate efficient and cost-effective service delivery and support of IT services. This is largely achieved through the automation of routine maintenance procedures, including problem detection, determination and resolution. System monitoring provides an effective and reliable means for problem detection. Coupled with automated ticket creation, it ensures that a degradation of the vital signs, defined by acceptable thresholds or monitoring conditions, is flagged as a problem candidate and sent to support personnel as an incident ticket. This paper describes a novel methodology and a system for minimizing non-actionable tickets while preserving all tickets which require corrective action. Our proposed method defines monitoring conditions and the corresponding optimal delay times based on an off-line analysis of historical alerts and the matching incident tickets. Potential monitoring conditions are built on a set of predictive rules which are automatically generated by a rule-based learning algorithm with coverage, confidence and rule-complexity criteria. These conditions and delay times are propagated as configurations into run-time monitoring systems.
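A minimal sketch of the off-line delay analysis, under simplifying assumptions: for one monitoring condition, pick the smallest ticket-creation delay that would have let the historical non-actionable (transient) alerts clear on their own, while never suppressing an alert that led to a real incident. The selection policy and the durations below are illustrative assumptions, not the paper's algorithm.

```python
# Simplified off-line delay analysis for a single monitoring condition.
# Durations are in minutes; data and policy are illustrative assumptions.

def optimal_delay(non_actionable_durations, actionable_durations):
    """Longest transient duration, provided it stays below every real alert."""
    if not non_actionable_durations:
        return 0
    candidate = max(non_actionable_durations)
    floor = min(actionable_durations) if actionable_durations else float("inf")
    # Never pick a delay that would also swallow a genuine incident alert.
    return candidate if candidate < floor else 0

if __name__ == "__main__":
    transient = [2, 3, 5, 4]      # alerts that cleared by themselves
    real = [30, 45, 60]           # alerts that required corrective action
    print(optimal_delay(transient, real))  # 5: wait 5 minutes before ticketing
```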


knowledge discovery and data mining | 2013

An integrated framework for optimizing automatic monitoring systems in large IT infrastructures

Liang Tang; Tao Li; Larisa Shwartz; Florian Pinel; Genady Grabarnik

The competitive business climate and the complexity of IT environments dictate efficient and cost-effective service delivery and support of IT services. These are largely achieved by automating routine maintenance procedures, including problem detection, determination and resolution. System monitoring provides an effective and reliable means for problem detection. Coupled with automated ticket creation, it ensures that a degradation of the vital signs, defined by acceptable thresholds or monitoring conditions, is flagged as a problem candidate and sent to support personnel as an incident ticket. This paper describes an integrated framework for minimizing false-positive tickets and maximizing the monitoring coverage of system faults. In particular, the integrated framework defines monitoring conditions and the corresponding optimal delay times based on an off-line analysis of historical alerts and incident tickets. Potential monitoring conditions are built on a set of predictive rules which are automatically generated by a rule-based learning algorithm with coverage, confidence and rule-complexity criteria. These conditions and delay times are propagated as configurations into run-time monitoring systems. Moreover, some misconfigured monitoring conditions can be corrected based on false-negative tickets discovered by a text classification algorithm that is also part of the framework. This paper also provides implementation details of a program product that uses this framework and shows illustrative examples of successful results.
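To illustrate the coverage/confidence selection step that both monitoring papers mention, the sketch below evaluates candidate predictive rules against historical alerts and keeps only those that are broad enough (coverage) and precise enough about non-actionable alerts (confidence), preferring simpler rules. The rule representation, thresholds and sample data are assumptions for the example; the rule generation itself is assumed to have been done by a separate learner.

```python
# Sketch of selecting predictive rules by coverage and confidence.
# Rule format, thresholds, and data are illustrative assumptions.

def evaluate_rule(rule, alerts):
    """coverage = matched / all alerts; confidence = non-actionable among matched."""
    matched = [a for a in alerts if rule["predicate"](a)]
    if not matched:
        return 0.0, 0.0
    coverage = len(matched) / len(alerts)
    confidence = sum(1 for a in matched if not a["actionable"]) / len(matched)
    return coverage, confidence

def select_rules(rules, alerts, min_coverage=0.05, min_confidence=0.95):
    kept = []
    for rule in sorted(rules, key=lambda r: r["complexity"]):  # prefer simple rules
        coverage, confidence = evaluate_rule(rule, alerts)
        if coverage >= min_coverage and confidence >= min_confidence:
            kept.append(rule["name"])
    return kept

if __name__ == "__main__":
    alerts = [{"metric": "cpu", "value": 91, "actionable": False} for _ in range(19)]
    alerts.append({"metric": "cpu", "value": 99, "actionable": True})
    rules = [{"name": "cpu>90 transient", "complexity": 1,
              "predicate": lambda a: a["metric"] == "cpu" and a["value"] <= 95}]
    print(select_rules(rules, alerts))  # ['cpu>90 transient']
```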


integrated network management | 2009

Towards an optimized model of incident ticket correlation

Patricia Marcu; Genady Grabarnik; Laura Z. Luan; Daniela Rosu; Larisa Shwartz; Christopher Ward

In recent years, IT Service Management (ITSM) has become one of the most researched areas of IT. Incident Management and Problem Management are two of the Service Operation processes in the IT Infrastructure Library (ITIL). These two processes aim to recognize, log, isolate and correct errors which occur in the environment and disrupt the delivery of services. Incident Management and Problem Management form the basis of the tooling provided by an Incident Ticket System (ITS).


network operations and management symposium | 2014

Hierarchical multi-label classification over ticket data using contextual loss

Chunqiu Zeng; Tao Li; Larisa Shwartz; Genady Grabarnik

Maximal automation of routine IT maintenance procedures is an ultimate goal of IT service management. System monitoring, an effective and reliable means for IT problem detection, generates monitoring tickets to be processed by system administrators. IT problems are naturally organized in a hierarchy by specialization, and this problem hierarchy is used to help triage tickets to the appropriate processing team for resolution. In this paper, a hierarchical multi-label classification method is proposed to classify monitoring tickets by utilizing the problem hierarchy. In order to find the most effective classification, a novel contextual hierarchy (CH) loss is introduced in accordance with the problem hierarchy. The resulting optimization problem is solved by a new greedy algorithm. An extensive empirical study over ticket data was conducted to validate the effectiveness and efficiency of our method.
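The core idea of a hierarchy-aware loss can be sketched in a few lines: treat each label as a path in the problem hierarchy and penalize a misclassification less the deeper the predicted path agrees with the true one. This only illustrates the concept of a contextual, hierarchy-dependent misclassification cost; it is not the paper's CH loss or its optimization.

```python
# Sketch of a hierarchy-aware misclassification cost. Labels are paths in the
# problem hierarchy; errors that diverge deeper in the hierarchy cost less.
# Illustrative only, not the paper's CH loss.

def hierarchy_loss(true_path, pred_path):
    """Zero if the paths match; otherwise larger the earlier they diverge."""
    depth = max(len(true_path), len(pred_path))
    common = 0
    for t, p in zip(true_path, pred_path):
        if t != p:
            break
        common += 1
    return (depth - common) / depth

if __name__ == "__main__":
    true = ("os", "filesystem", "disk_full")
    print(hierarchy_loss(true, ("os", "filesystem", "disk_full")))   # 0.0
    print(hierarchy_loss(true, ("os", "filesystem", "inode_limit"))) # ~0.33
    print(hierarchy_loss(true, ("database", "lock", "deadlock")))    # 1.0
```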


knowledge discovery and data mining | 2012

Discovering lag intervals for temporal dependencies

Liang Tang; Tao Li; Larisa Shwartz

Time lag is a key feature of hidden temporal dependencies within sequential data. In many real-world applications, time lag plays an essential role in interpreting the cause of discovered temporal dependencies. Traditional temporal mining methods either use a predefined time window to analyze the item sequence or employ statistical techniques to derive the time dependencies among items. Such paradigms cannot effectively handle data with special properties, e.g., interleaved temporal dependencies. In this paper, we study the problem of finding lag intervals for temporal dependency analysis. We first investigate the correlations between temporal dependencies and other temporal patterns, and then propose a generalized framework to resolve the problem. By utilizing a sorted table to represent the time lags among items, the proposed algorithm achieves an elegant balance between time cost and space cost. Extensive empirical evaluation on both synthetic and real data sets demonstrates the efficiency and effectiveness of the proposed algorithm in finding temporal dependencies with lag intervals in sequential data.
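The problem itself can be illustrated with a brute-force baseline: collect all pairwise lags between occurrences of event type A and later occurrences of event type B, then slide a fixed-width interval over the sorted lags to find the interval with the most supporting pairs. The paper's sorted-table algorithm is far more efficient than this; the maximum window, interval width and timestamps below are assumptions for the example.

```python
# Brute-force baseline for finding a dense lag interval between events A and B.
# The real algorithm in the paper is more efficient; data here are illustrative.

from bisect import bisect_right

def best_lag_interval(a_times, b_times, max_lag=100, width=10):
    """Return the lag interval of the given width with maximum pair support."""
    lags = sorted(b - a for a in a_times for b in b_times if 0 < b - a <= max_lag)
    if not lags:
        return None, 0
    best_start, best_support = lags[0], 0
    for i, start in enumerate(lags):
        support = bisect_right(lags, start + width) - i
        if support > best_support:
            best_start, best_support = start, support
    return (best_start, best_start + width), best_support

if __name__ == "__main__":
    a_times = [0, 50, 120, 200]
    b_times = [18, 70, 139, 221, 400]   # B tends to follow A by ~18-21 time units
    print(best_lag_interval(a_times, b_times))  # ((18, 28), 4)
```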


integrated network management | 2015

Resolution recommendation for event tickets in service management

Wubai Zhou; Liang Tang; Tao Li; Larisa Shwartz; Genady Grabarnik

In recent years, IT Service Providers have been rapidly transforming to an automated service delivery model, due to advances in technology and driven by unrelenting market pressure to reduce cost and maintain quality. Tremendous progress has been made to date towards truly automated service delivery: the ability to deliver the same service automatically, using the same process, with the same quality. However, automating Incident and Problem Management continues to be a difficult problem, particularly because of the growing complexity of IT environments. Software monitoring systems are designed to actively collect and signal event occurrences and, when necessary, automatically generate incident tickets. Repeating events generate similar tickets, which in turn have a vast number of repeated problem resolutions likely to be found in earlier tickets. In this paper we find an appropriate resolution by making use of similarities between the events and previous resolutions of similar events. The traditional KNN (k-nearest neighbor) algorithm has been used to recommend resolutions for incoming tickets; however, the effectiveness of the recommendation relies heavily on the underlying similarity measure. In this paper, we significantly improve the similarity measure used in KNN by utilizing both the event and the resolution information in historical tickets via topic-level feature extraction using the LDA (Latent Dirichlet Allocation) model. In addition, when resolution categories are available, we propose to learn a more effective similarity measure using metric learning. Extensive empirical evaluations on three ticket data sets demonstrate the effectiveness and efficiency of the proposed methods.
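A minimal sketch of such a pipeline using scikit-learn: historical ticket text is turned into LDA topic distributions, and the resolutions attached to the nearest historical tickets are recommended for a new ticket. The metric-learning refinement described in the paper is omitted, and the tickets below are invented for illustration.

```python
# Minimal resolution-recommendation sketch: LDA topic features + nearest
# neighbors over historical tickets. Tickets are invented; the paper's
# metric-learning step is omitted.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import NearestNeighbors

historical = [
    ("file system /var is 95% full on server prod-db-01", "extended /var logical volume"),
    ("disk space alert: /var usage above threshold", "cleaned up old log files"),
    ("process httpd not running on web node", "restarted httpd and verified health check"),
    ("cpu utilization above 90% for 30 minutes", "identified runaway job and killed it"),
]
texts = [t for t, _ in historical]
resolutions = [r for _, r in historical]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)                 # topic-level ticket features

knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(topics)

new_ticket = ["alert: /var file system usage exceeded 90% on prod-db-02"]
new_topics = lda.transform(vectorizer.transform(new_ticket))
_, idx = knn.kneighbors(new_topics)
for i in idx[0]:
    print(resolutions[i])                          # resolutions of the nearest tickets
```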
