Publications


Featured research published by David Loewenstern.


International Conference on Cloud Computing | 2009

Rule-Based Problem Classification in IT Service Management

Yixin Diao; Hani Jamjoom; David Loewenstern

Problem management is a critical and expensive element of delivering IT service management and touches various levels of the managed IT infrastructure. While problem management has been mostly reactive, recent work studies how to leverage large volumes of problem ticket information from similar IT infrastructures to proactively predict the onset of problems. Because of the sheer size and complexity of problem tickets, supervised learning algorithms have been the method of choice for problem ticket classification, relying on labeled (or pre-classified) tickets from one managed infrastructure to automatically create signatures for similar infrastructures. However, where there is insufficient pre-classified data, leveraging human expertise to develop classification rules can be more efficient. In this paper, we describe a rule-based crowdsourcing approach in which experts author classification rules and a social-networking-based platform (called xPad) is used to socialize and execute these rules across large practitioner communities. Using real data sets from several large IT delivery centers, we demonstrate that this approach balances two key criteria: accuracy and cost effectiveness.
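
As a rough illustration of the rule-based approach described above, the following Python sketch shows how expert-authored keyword rules might be represented and applied to ticket text. The rule format, ticket fields, and example categories are illustrative assumptions, not the rule language or execution environment of the xPad platform described in the paper.

```python
# Minimal sketch of expert-authored, rule-based ticket classification.
# The rule format and ticket fields are illustrative assumptions; the paper's
# xPad platform defines its own rule language and execution environment.
from dataclasses import dataclass

@dataclass
class Rule:
    category: str          # problem class assigned when the rule fires
    keywords: list[str]    # all keywords must appear in the ticket text
    author: str            # practitioner who contributed the rule

def classify(ticket_text: str, rules: list[Rule], default: str = "UNCLASSIFIED") -> str:
    """Return the category of the first rule whose keywords all match."""
    text = ticket_text.lower()
    for rule in rules:
        if all(kw.lower() in text for kw in rule.keywords):
            return rule.category
    return default

rules = [
    Rule("Disk", ["filesystem", "full"], "expert_a"),
    Rule("Database", ["db2", "deadlock"], "expert_b"),
]
print(classify("Alert: /var filesystem full on host xyz", rules))  # -> "Disk"
```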


IBM Systems Journal | 2007

Catalog-based service request management

Heiko Ludwig; John Hogan; Rajesh Jaluka; David Loewenstern; Santhosh Kumaran; Allen Moses Gilbert; Arijit Roy; Thirumal Nellutla; Maheswaran Surendra

To manage the delivery of services competitively on a large, global scale, an IT (information technology) service provider must efficiently use service delivery resources, in particular skilled service delivery teams. Service requests form a large and important component of the management of a client's IT infrastructure. Currently, the fulfillment of IT service requests is often managed on a per-account basis: service delivery teams fulfill service requests according to account-specific processes using an account-specific service request management environment, making it difficult to leverage the skills of the various delivery teams across multiple accounts. The service delivery management platform (SDMP) uses reusable service components that can be performed by multiple delivery teams and assembled into service compositions to which multiple clients can subscribe. The SDMP catalog is the information repository that manages service components, compositions, providers, and subscriptions and is used by the service request runtime environment to implement specific customer service requests. In this paper, we describe a catalog-based architecture for service delivery management and demonstrate how its use can provide a global service delivery organization with a platform for achieving significant productivity gains.
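
To make the catalog entities concrete, here is a minimal sketch of the kind of relationships the abstract describes: reusable service components assembled into compositions to which clients subscribe, with delivery teams resolved per component. The entity and field names are illustrative assumptions, not the SDMP catalog schema.

```python
# Illustrative data model: components -> compositions -> subscriptions.
# Names and fields are assumptions for illustration, not the SDMP schema.
from dataclasses import dataclass, field

@dataclass
class ServiceComponent:
    component_id: str
    description: str
    provider_teams: list[str] = field(default_factory=list)  # teams able to perform it

@dataclass
class ServiceComposition:
    composition_id: str
    components: list[ServiceComponent] = field(default_factory=list)

@dataclass
class Subscription:
    client: str
    composition: ServiceComposition

def teams_for_request(sub: Subscription, component_id: str) -> list[str]:
    """Resolve which delivery teams can fulfill a service request for a subscribed client."""
    for comp in sub.composition.components:
        if comp.component_id == component_id:
            return comp.provider_teams
    return []
```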


IEEE International Conference on Services Computing | 2009

Managing Faults in the Service Delivery Process of Service Provider Coalitions

Patricia Marcu; Larisa Shwartz; Genady Grabarnik; David Loewenstern

In recent years, IT Service Management (ITSM) has become one of the most researched areas of IT. Incident Management and Problem Management form the basis of the tooling provided by an Incident Ticket System (ITS). As more compound or interdependent services are offered collaboratively, the delivery of a service becomes a responsibility shared by more than one provider's organization. In the ITSs of the various providers, seemingly unrelated tickets are created, and the connection between them is not recognized automatically. Introducing automation here reduces the human involvement and time required for incident resolution. In this paper we consider a collaborative service delivery model that supports both per-request services and continuous high-availability services. In the case of high-availability services, the information stored in a provider's ITS often describes the outage of a particular service rather than the failure of a particular request. We propose an information model that consolidates and supports inter-organizational incident management, together with a probabilistic model for fault discovery.
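
The following sketch illustrates the kind of consolidation the abstract refers to: tickets opened independently in several providers' ITSs are grouped into candidate inter-organizational incidents. The ticket fields and the simple service-plus-time-window grouping key are assumptions for illustration only, not the paper's information model.

```python
# Illustrative sketch: group tickets from multiple providers' ITSs that concern the
# same service and were opened close together in time. Fields and grouping policy
# are assumptions, not the paper's information model.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Ticket:
    provider: str        # organization whose ITS holds the ticket
    ticket_id: str
    affected_service: str
    opened_at: float     # epoch seconds

def consolidate(tickets: list[Ticket], window: float = 3600.0) -> list[list[Ticket]]:
    """Group tickets on the same service opened within `window` seconds of each other."""
    by_service: dict[str, list[Ticket]] = defaultdict(list)
    for t in tickets:
        by_service[t.affected_service].append(t)
    incidents: list[list[Ticket]] = []
    for service_tickets in by_service.values():
        service_tickets.sort(key=lambda t: t.opened_at)
        group = [service_tickets[0]]
        for t in service_tickets[1:]:
            if t.opened_at - group[-1].opened_at <= window:
                group.append(t)
            else:
                incidents.append(group)
                group = [t]
        incidents.append(group)
    return incidents
```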


Service-Oriented Computing and Applications | 2010

Quality of IT service delivery — Analysis and framework for human error prevention

Larisa Shwartz; Daniela Rosu; David Loewenstern; Melissa J. Buco; Shang Guo; Rafael Lavrado; Manish Gupta; Venkateswara Reddy Madduri; Jai Kumar Singh

In this paper, we address the problem of reducing the occurrence of human errors that cause service interruptions in IT service support and delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human error as the 4 Wrongs: wrong request, wrong time, wrong configuration item, and wrong command. Analysis of change records revealed that human-error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for executing IT service delivery operations that reduces human error by addressing the 4 Wrongs through content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.
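
As a hedged sketch of how the 4 Wrongs could be checked before a command is executed, the snippet below validates a proposed execution against an approved change order. The record fields and validation logic are assumptions for illustration, not the HEP Framework itself.

```python
# Illustrative pre-execution check inspired by the "4 Wrongs"; fields and logic
# are assumptions, not the HEP Framework.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChangeOrder:
    request_id: str
    change_window: tuple[datetime, datetime]   # approved start/end
    target_ci: str                             # configuration item to be changed
    approved_commands: set[str]

def validate(order: ChangeOrder, request_id: str, now: datetime,
             target_ci: str, command: str) -> list[str]:
    """Return the list of '4 Wrongs' violations for a proposed command execution."""
    errors = []
    if request_id != order.request_id:
        errors.append("wrong request")
    if not (order.change_window[0] <= now <= order.change_window[1]):
        errors.append("wrong time")
    if target_ci != order.target_ci:
        errors.append("wrong configuration item")
    if command not in order.approved_commands:
        errors.append("wrong command")
    return errors
```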


Distributed Systems: Operations and Management | 2007

IT service management automation: an automation centric approach leveraging configuration control, audit verification and process analytics

Naga A. Ayachitula; Melissa J. Buco; Yixin Diao; Bradford Austin Fisher; David Loewenstern; Christopher Ward

People, processes, technology, and information are the service provider's resources for delivering IT services. Process automation is one way in which service providers can reduce cost and improve quality: automating routine tasks reduces human error and reserves human resources for tasks that require skill and complex decision making. In this paper we propose a conceptual methodology for IT service management process automation in the areas of configuration control, audit verification, and process analytics. We employ a complexity model to assist in identifying opportunities for process automation. We recommend and outline an automated approach to the complex task of detecting variances between the hierarchically defined Configuration Items in a Configuration Management Database (CMDB) and the Configuration Items in the IT environment, and we recommend integrating this automated detection with human-centric remediation for resolving the variances detected.
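
The sketch below illustrates one way variance detection over hierarchically defined Configuration Items could work: recursively comparing the CI tree recorded in the CMDB against the CI tree observed in the environment. The CI structure and comparison policy are illustrative assumptions, not the methodology proposed in the paper.

```python
# Illustrative recursive variance detection between a recorded CMDB CI tree and
# the CIs observed in the environment. Structure and policy are assumptions.
from dataclasses import dataclass, field

@dataclass
class CI:
    name: str
    attributes: dict[str, str] = field(default_factory=dict)
    children: dict[str, "CI"] = field(default_factory=dict)   # keyed by child CI name

def detect_variances(recorded: CI, observed: CI, path: str = "") -> list[str]:
    """Recursively compare a recorded CI tree against the observed tree."""
    path = f"{path}/{recorded.name}"
    variances = []
    for key, value in recorded.attributes.items():
        actual = observed.attributes.get(key)
        if actual != value:
            variances.append(f"{path}: attribute '{key}' expected {value!r}, found {actual!r}")
    for name, child in recorded.children.items():
        if name not in observed.children:
            variances.append(f"{path}: child CI '{name}' missing from environment")
        else:
            variances.extend(detect_variances(child, observed.children[name], path))
    for name in observed.children.keys() - recorded.children.keys():
        variances.append(f"{path}: unrecorded child CI '{name}' found in environment")
    return variances
```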


Network Operations and Management Symposium | 2012

A learning feature engineering method for task assignment

David Loewenstern; Florian Pinel; Larisa Shwartz; Maira Athanazio de Cerqueira Gatti; Ricardo Herrmann; Victor Fernandes Cavalcante

Multi-domain IT services are delivered by technicians with expert knowledge in a variety of areas. Their skills and availability are important properties of the service. However, most organizations do not have a consistent view of this information because creating and maintaining a skill model is difficult, especially in light of privacy regulations, changing service catalogs, and worker turnover. We propose a method for ranking technicians by their expected performance on, and suitability for, a given service request, without maintaining an explicit skill model describing which skills each technician possesses. We find appropriate assignees by exploiting similarities between the assignees and the previous tasks they have performed.
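
A minimal sketch of ranking without an explicit skill model follows: each technician is scored by the similarity between the incoming request and the tasks they previously resolved. The bag-of-words cosine similarity and the field names are assumptions for illustration, not the feature engineering method of the paper.

```python
# Illustrative similarity-based ranking of technicians; the bag-of-words cosine
# similarity is an assumption, not the paper's learned features.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_technicians(request_text: str, history: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Rank technicians by the best similarity between the request and their past tasks."""
    request_vec = Counter(request_text.lower().split())
    scores = {}
    for tech, past_tasks in history.items():
        scores[tech] = max(
            (cosine(request_vec, Counter(t.lower().split())) for t in past_tasks),
            default=0.0,
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

history = {
    "alice": ["reset oracle listener on prod db", "tune db2 bufferpool"],
    "bob": ["replace failed disk in raid array", "extend filesystem on aix host"],
}
print(rank_technicians("filesystem full on aix server", history))
```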


Conference on Network and Service Management | 2010

Dispatch tooling for global service delivery

David Loewenstern; Melissa J. Buco; Yixin Diao; Heiko Ludwig; Christopher Ward

Tool development to support service management is a problem of optimizing service quality and reducing redundant work subject to competing constraints, spanning both technical requirements and political realities. Many of the constraints become clear only over time, often after a series of iterations, as tooling exposes new or previously discounted constraints. We present a case study: the evolution of tooling for the global dispatch of work orders supporting incident tickets. We discuss the different approaches, how they highlighted different constraints, and how they competed with one another, and we conclude with a discussion of approaches to balancing constraints, many of which are not initially apparent and some of which shift over time.


International Conference on Autonomic Computing | 2005

PICCIL: Interactive Learning to Support Log File Categorization

David Loewenstern; Sheng Ma; Abdi Salahshour

Motivated by the real-world application of categorizing system log messages into defined situation categories, this paper describes PICCIL, an interactive text categorization method that leverages supervised machine learning to reduce the burden of assigning categories to documents in large finite data sets and, by coupling human expertise to the machine learning, does so without sacrificing accuracy. PICCIL uses keywords and keyword rules both to pre-classify documents and to assist in the manual process of grouping and reviewing documents. The reviewed documents, in turn, are used to refine the keyword rules iteratively to improve subsequent grouping and document review. We apply PICCIL to the problem of assigning semantic situation labels to the entries of a catalog of log events, to support on-line labeling of log events.
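
The snippet below sketches the pre-classify-and-refine loop in the spirit of the abstract: keyword rules suggest situation categories for log messages, a reviewer confirms or corrects the label, and confirmed examples feed back into the rules. The rule representation, categories, and naive refinement step are illustrative assumptions, not PICCIL's actual mechanism.

```python
# Illustrative keyword-rule pre-classification with a naive refinement step.
# Rule format and refinement policy are assumptions, not PICCIL's representation.
def preclassify(message: str, keyword_rules: dict[str, list[str]]) -> list[str]:
    """Return candidate situation categories whose keywords appear in the message."""
    text = message.lower()
    return [cat for cat, kws in keyword_rules.items() if any(kw in text for kw in kws)]

def refine(keyword_rules: dict[str, list[str]], message: str, confirmed_category: str) -> None:
    """Naive refinement: add unseen tokens from a reviewed message to its confirmed category."""
    tokens = set(message.lower().split())
    known = set(keyword_rules.setdefault(confirmed_category, []))
    keyword_rules[confirmed_category].extend(sorted(tokens - known))

rules = {"START": ["started", "initializing"], "STOP": ["stopped", "shutdown"]}
print(preclassify("Service xyz initializing on node 3", rules))  # -> ['START']
refine(rules, "Service xyz halted unexpectedly", "STOP")
```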


International Conference on Service-Oriented Computing | 2012

A learning method for improving quality of service infrastructure management in new technical support groups

David Loewenstern; Florian Pinel; Larisa Shwartz; Maira Athanazio de Cerqueira Gatti; Ricardo Herrmann

Service infrastructure management requires matching tasks to technicians with expert knowledge in a variety of areas. Most service delivery organizations do not have a consistent view of how technician skills evolve because, in a dynamic environment, creating and maintaining a skill model is difficult, especially in light of privacy regulations, changing service catalogs, and worker turnover. In addition, as services expand, new technical support groups are created for the same types of services, and new technicians may be added, either to a new group or to existing groups. To tackle this problem, we extend a method for ranking technicians by their expected performance on, and suitability for, a given service request; the method uses similarities between technicians and the previous tasks they have performed. We propose a strategy for incorporating new technicians and delivery team reorganizations into the method and present experimental results demonstrating its efficacy. Applying the strategy to new teams yields acceptable accuracy, on average, within 4 hours, though with wide variation across teams for the first 12 hours; accuracy and its variability approach those of older teams over 24 hours.


IBM Journal of Research and Development | 2009

Toward transforming business continuity services

Christopher Ward; S. Agassi; Kamal Bhattacharya; O. Biran; R. Cocchiara; Michael Factor; C. T. Hayashi; T. Hochberg; B. Kearney; Jim Laredo; David Loewenstern; A. E. Rodecap; J. K. Skoog; Larisa Shwartz; M. Thompson; R. Thompson; Yaron Wolfsthal

Typical businesses have limited expertise in handling disasters; thus, it makes business sense to use external business continuity services. Existing practice is for the business to determine its critical business processes, perform a risk assessment, mitigate as many of these risks as possible, define a continuity plan, and then periodically test this plan at a local IT (information technology) or work-area recovery site. This practice, however, typically imposes limitations, as discussed in this paper. We offer enhancements to mitigate these limitations, including a close interlock between the customer's and the service provider's processes, a shared representation of the customer's environment, and a recovery infrastructure integrating both automation and virtualization. We capture the customer's IT inventory as a recovery configuration and analyze the requirements and available recovery resources to optimize the test schedule of the recovery center and automatically map recovery requirements to resources. We orchestrate recovery deployment through automation. Based on the recovery configuration, we can also determine to what degree the customer can recover within a virtual environment and automatically map the customer to this environment, thereby accruing the benefits of virtualization. These capabilities enhance the client's recovery experience and provide increased flexibility and resource utilization in the recovery data centers.
