Kivanc M. Ozonat
Hewlett-Packard
Publications
Featured research published by Kivanc M. Ozonat.
International Middleware Conference | 2008
Timothy Wood; Ludmila Cherkasova; Kivanc M. Ozonat; Prashant J. Shenoy
Next Generation Data Centers are transforming labor-intensive, hard-coded systems into shared, virtualized, automated, and fully managed adaptive infrastructures. Virtualization technologies promise great opportunities for reducing energy and hardware costs through server consolidation. However, to safely transition an application running natively on real hardware to a virtualized environment, one needs to estimate the additional resource requirements incurred by virtualization overheads. In this work, we design a general approach for estimating the resource requirements of applications when they are transferred to a virtual environment. Our approach has two key components: a set of microbenchmarks to profile the different types of virtualization overhead on a given platform, and a regression-based model that maps the native system usage profile into a virtualized one. This derived model can be used for estimating the resource requirements of any application to be virtualized on that platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. We illustrate the effectiveness of our methodology using the Xen virtual machine monitor. Our evaluation shows that our automated model generation procedure effectively characterizes the different virtualization overheads of two diverse hardware platforms and that the models have a median prediction error of less than 5% for both the RUBiS and TPC-W benchmarks.
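To make the regression step concrete, the following is a minimal sketch of mapping a native resource-usage profile to a predicted virtualized CPU requirement with ordinary least squares; the feature set, the benchmark data, and the use of plain least squares are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (assumptions, not the paper's implementation): fit a linear map
# from native usage metrics collected during microbenchmark runs to the CPU
# utilization observed when the same runs execute inside a virtual machine.
import numpy as np

# Each row: native profile of one benchmark run
# [cpu_util_pct, disk_ops_per_s, net_pkts_per_s]  (feature names are assumed)
native = np.array([
    [20.0, 100.0, 1000.0],
    [40.0, 220.0, 2400.0],
    [60.0, 330.0, 4100.0],
    [80.0, 480.0, 5600.0],
    [90.0, 520.0, 6400.0],
])
# CPU utilization observed for the same runs inside the virtual machine
virt_cpu = np.array([31.0, 58.0, 86.0, 117.0, 131.0])

# Ordinary least squares with an intercept: virt_cpu ~ native @ w + b
X = np.hstack([native, np.ones((native.shape[0], 1))])
w, *_ = np.linalg.lstsq(X, virt_cpu, rcond=None)

# Predict the virtualized CPU requirement of a new application from its native profile
new_app = np.array([50.0, 260.0, 3000.0, 1.0])
print("predicted virtual CPU demand:", new_app @ w)
```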
Dependable Systems and Networks | 2008
Ludmila Cherkasova; Kivanc M. Ozonat; Ningfang Mi; Julie Symons; Evgenia Smirni
Automated tools for understanding application behavior and its changes during the application life-cycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of an enterprise-wide application can affect a large population of customers, lead to delayed projects, and ultimately result in financial loss for the company. We believe that online performance modeling should be a part of routine application monitoring. Early, informative warnings on significant changes in application performance should help service providers identify and prevent performance problems, and their negative impact on the service, in a timely manner. We propose a novel framework for automated anomaly detection and application change analysis. It is based on the integration of two complementary techniques: i) a regression-based transaction model that reflects a resource consumption model of the application, and ii) an application performance signature that provides a compact model of run-time behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: it is not intrusive and is based on monitoring data that is typically available in enterprise production environments.
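As a rough illustration of the regression-based transaction model, the sketch below relates per-interval transaction counts to measured CPU consumption and flags an interval whose observed CPU deviates too far from the model's prediction; the transaction types, the data, and the 20% tolerance are assumptions for illustration only.

```python
# Illustrative sketch, not the authors' implementation: fit per-transaction CPU
# costs from monitoring data, then compare a new interval's observed CPU against
# the model's prediction to decide whether it looks anomalous.
import numpy as np

# counts[t, i] = number of transactions of type i completed in monitoring interval t
counts = np.array([
    [120, 30, 10],
    [200, 50, 15],
    [180, 40, 12],
    [250, 60, 20],
    [210, 55, 17],
], dtype=float)
cpu_busy = np.array([25.0, 41.0, 36.5, 51.0, 44.0])  # measured CPU busy %, per interval

# Least-squares fit of per-transaction costs plus a constant background term
X = np.hstack([counts, np.ones((counts.shape[0], 1))])
cost, *_ = np.linalg.lstsq(X, cpu_busy, rcond=None)

# New interval: 70% CPU observed for these transaction counts
new_interval = np.array([190.0, 45.0, 14.0, 1.0])
predicted = new_interval @ cost
observed = 70.0
relative_error = abs(observed - predicted) / observed
print("anomaly" if relative_error > 0.20 else "consistent with model")  # 20% tolerance is assumed
```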
ACM Transactions on Computer Systems | 2009
Ludmila Cherkasova; Kivanc M. Ozonat; Ningfang Mi; Julie Symons; Evgenia Smirni
Automated tools for understanding application behavior and its changes during the application lifecycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of an enterprise-wide application can affect a large population of customers, lead to delayed projects, and ultimately result in financial loss for the company. The significantly shortened time between new software releases further exacerbates the problem of thoroughly evaluating the performance of an updated application. Our thesis is that online performance modeling should be a part of routine application monitoring. Early, informative warnings on significant changes in application performance should help service providers identify and prevent performance problems, and their negative impact on the service, in a timely manner. We propose a novel framework for automated anomaly detection and application change analysis. It is based on the integration of two complementary techniques: (i) a regression-based transaction model that reflects a resource consumption model of the application, and (ii) an application performance signature that provides a compact model of runtime behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: it is not intrusive and is based on monitoring data that is typically available in enterprise production environments. The introduced solution further enables the automation of capacity planning and resource provisioning tasks of multitier applications in rapidly evolving IT environments.
Network Operations and Management Symposium | 2008
Ningfang Mi; Ludmila Cherkasova; Kivanc M. Ozonat; Julie Symons; Evgenia Smirni
Application servers are a core component of a multi-tier architecture that has become the industry standard for building scalable client-server applications. A client communicates with a service deployed as a multi-tier application via request-reply transactions. A typical server reply consists of the Web page dynamically generated by the application server. The application server may issue multiple database calls while preparing the reply. Understanding the cascading effects of the various tasks that are spawned by a single request-reply transaction is challenging. Furthermore, the significantly shortened time between new software releases exacerbates the problem of thoroughly evaluating the performance of an updated application. We address the problem of efficiently diagnosing essential performance changes in application behavior in order to provide timely feedback to application designers and service providers. In this work, we propose a new approach based on an application signature that enables a quick performance comparison of the new application signature against the old one, while the application continues its execution in the production environment. The application signature is built on new concepts introduced here, namely the transaction latency profiles and transaction signatures. These become instrumental for creating an application signature that accurately reflects important performance characteristics. We show that such an application signature is representative and stable under different workload characteristics. We also show that application signatures are robust, as they effectively capture changes in transaction times that result from software updates. Application signatures provide a simple and powerful solution that can further be used for efficient capacity planning, anomaly detection, and provisioning of multi-tier applications in rapidly evolving IT environments.
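A toy sketch of the signature idea follows: it estimates a per-transaction-type service time from a low percentile of measured latencies and compares the signatures of two releases. The percentile-based estimator, the synthetic latency data, and the 10% change threshold are all assumptions, not the paper's exact procedure.

```python
# Toy sketch (assumptions, not the paper's exact procedure): summarize each
# transaction type by a low-percentile latency as a proxy for its intrinsic
# service time, then compare the signatures of two software releases.
import numpy as np

def signature(latencies_by_type, pct=5):
    """latencies_by_type: dict of transaction type -> array of measured latencies (ms)."""
    return {t: float(np.percentile(v, pct)) for t, v in latencies_by_type.items()}

rng = np.random.default_rng(1)
old = signature({"login":  rng.gamma(2.0, 5.0, 1000),
                 "search": rng.gamma(3.0, 8.0, 1000)})
new = signature({"login":  rng.gamma(2.0, 5.0, 1000),
                 "search": rng.gamma(3.0, 12.0, 1000)})  # the search transaction got slower

for t in old:
    change = (new[t] - old[t]) / old[t]
    if abs(change) > 0.10:  # 10% tolerance is an assumed threshold
        print(f"transaction '{t}' changed by {change:+.0%} between releases")
```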
Dependable Systems and Networks | 2008
Kivanc M. Ozonat
Existing performance management tools for large-scale distributed web services detect anomalies in performance metric behavior by thresholding on the metrics, which often leads to high false alarm rates, is hard to interpret, and misses multimodal performance behavior. We provide an information-theoretic approach to detecting anomalies in the metric behavior by taking into account the temporal and spatial relationships among the metrics. We model the metrics using a parametric mixture distribution such that each component of the mixture represents a homogeneous segment of temporally contiguous metric behavior. We discover the number, parameters, and (temporal) locations of the segments (i.e., mixture components) by minimizing an information-theoretic relative entropy between the mixture model and the unknown, underlying distribution of the metrics. We then cluster the discovered segments based on the statistical distances between them to detect anomalous performance behavior and to identify the modes of typical metric behavior.
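As a rough, simplified illustration of the segmentation idea, the sketch below fits Gaussian mixtures to a synthetic metric trace and selects the number of components with BIC, used here as a stand-in for the paper's relative-entropy criterion; the real method also enforces temporal contiguity of the segments, which this sketch does not.

```python
# Rough sketch only: a Gaussian mixture with BIC-based model selection stands in
# for the paper's relative-entropy minimization; contiguous runs of the same
# mixture component then act as candidate segments of homogeneous behavior.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic metric trace: two "normal" regimes with an anomalous burst in between
metrics = np.concatenate([rng.normal(10, 1, 300),
                          rng.normal(30, 2, 50),     # anomalous segment
                          rng.normal(12, 1, 300)]).reshape(-1, 1)

# Choose the number of mixture components by minimum BIC
models = [GaussianMixture(n_components=k, random_state=0).fit(metrics) for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(metrics))
labels = best.predict(metrics)

# Contiguous runs of the same component form candidate segments; rare components
# with unusual means are candidates for anomalous behavior
boundaries = np.flatnonzero(np.diff(labels)) + 1
print("segment boundaries at samples:", boundaries)
print("component means:", np.round(best.means_.ravel(), 1))
```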
IEEE Internet Computing | 2013
Hamid Reza Motahari-Nezhad; Susan Spence; Claudio Bartolini; Sven Graupner; Charles Edgar Bess; Marianne Hickey; Parag Joshi; Roberto Mirizzi; Kivanc M. Ozonat; Maher Rahmouni
Casebook embraces social and collaboration technology, analytics, and intelligence to advance the state of the art in case management from systems of record to a system of engagement for knowledge workers. It addresses complex, inefficient work practices, information loss during handoffs between teams, and failure to learn from previous case experience. Intelligent agents help people adapt to changing work practices by tracking process evolution and providing updates and recommendations. Social collaboration surrounding cases integrates communication with information and supports collaborative roadmapping to enable people to work as they collaborate, thus improving how quickly and accurately they handle cases.
Knowledge Discovery and Data Mining | 2009
Kivanc M. Ozonat; Donald E. Young
There is a growing number of service providers that a consumer can interact with over the web to learn their service terms. The service terms, such as price and time to completion of the service, depend on the consumer's particular specifications. For instance, a printing services provider would need specifications from its customers such as the size of paper, type of ink, proofing, and perforation. In a few sectors, there exist marketplace sites that provide consumers with specifications forms, which the consumer can fill out to learn the service terms of multiple service providers. Unfortunately, there are only a few such marketplace sites, and they cover only a few sectors. At HP Labs, we are working towards building a universal marketplace site, i.e., a marketplace site that covers thousands of sectors and hundreds of providers per sector. One issue in this domain is the automated discovery/retrieval of the specifications for each sector. We address it by extracting and analyzing content from the websites of the service providers listed in business directories. The challenge is that each service provider is often listed under multiple service categories in a business directory, making it infeasible to utilize standard supervised learning techniques. We address this challenge by employing a multilabel statistical clustering approach within an expectation-maximization framework. We implement our solution to retrieve specifications for 3000 sectors, representing more than 300,000 service providers. We discuss our results within the context of the services needed to design a marketing campaign for a small business.
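The toy sketch below conveys the flavor of multilabel clustering within an EM framework: each provider's page is reduced to word counts, each provider carries several candidate sector labels from a directory, and EM splits each provider's counts softly among only its candidate sectors while re-estimating per-sector word distributions. The vocabulary, the data, the two-sector setup, and the omission of mixing weights are all simplifying assumptions rather than the paper's model.

```python
# Simplified, assumption-laden toy of multilabel EM clustering: responsibilities
# are restricted to the sectors a provider is listed under, and the M-step
# re-estimates per-sector word distributions from the softly assigned counts.
import numpy as np

vocab = ["paper", "ink", "perforation", "logo", "banner", "seo"]
# Provider term-count vectors and their candidate sector ids from a business directory
docs = np.array([[5, 3, 2, 0, 0, 0],    # mostly printing terms
                 [4, 2, 1, 1, 1, 0],
                 [0, 0, 0, 4, 3, 2],    # mostly marketing terms
                 [1, 0, 0, 3, 2, 4]], dtype=float)
candidates = [{0}, {0, 1}, {0, 1}, {1}]
K = 2

theta = np.full((K, len(vocab)), 1.0 / len(vocab))  # per-sector word distributions
for _ in range(50):
    # E-step: responsibility of each candidate sector for each provider
    resp = np.zeros((len(docs), K))
    for d, cand in enumerate(candidates):
        logp = docs[d] @ np.log(theta.T)            # log-likelihood under each sector
        for k in cand:
            resp[d, k] = np.exp(logp[k] - logp[list(cand)].max())
        resp[d] /= resp[d].sum()
    # M-step: re-estimate word distributions from softly assigned counts (with smoothing)
    theta = resp.T @ docs + 0.1
    theta /= theta.sum(axis=1, keepdims=True)

print("top terms per sector:",
      [[vocab[i] for i in np.argsort(-theta[k])[:3]] for k in range(K)])
```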
Web Information Systems Engineering | 2010
Kivanc M. Ozonat; Sharad Singhal
Despite the widespread adoption of e-commerce and online purchasing by consumers over the last two decades, automated software agents that can negotiate the issues of an e-commerce transaction with consumers still do not exist. A major challenge in designing automated agents lies in the ability to predict the consumer's behavior adequately throughout the negotiation process. We employ switching linear dynamical systems (SLDS) within a minimum description length framework to predict the consumer's behavior. Based on the SLDS prediction model, we design software agents that negotiate e-commerce transactions with consumers on behalf of online merchants. We evaluate the agents through simulations of typical negotiation behavior models discussed in the negotiation literature and actual buyer behavior from an agent-human negotiation experiment.
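The sketch below is a heavily simplified stand-in for the SLDS/MDL idea: instead of a full switching linear dynamical system, it fits piecewise AR(1) models to a buyer's offer sequence, uses an MDL-style score (fit cost plus a parameter penalty) to decide between one and two regimes, and predicts the next counter-offer from the selected regime. The offer data, the AR(1) form, and the scoring details are illustrative assumptions.

```python
# Heavily simplified illustration (not a full SLDS): piecewise AR(1) regimes
# selected by an MDL-style score, then a one-step prediction of the next offer.
import numpy as np

offers = np.array([40.0, 44.0, 47.5, 50.5, 53.0, 60.0, 66.5, 72.5, 78.0])  # buyer's offers

def ar1_fit(x):
    """Fit x[t] ~ a*x[t-1] + b on one segment; return coefficients and a fit-cost term."""
    X = np.vstack([x[:-1], np.ones(len(x) - 1)]).T
    coef, res, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    rss = res[0] if res.size else float(np.sum((x[1:] - X @ coef) ** 2))
    return coef, 0.5 * len(x) * np.log(rss / len(x) + 1e-12)

def mdl(segments, n):
    fit_cost = sum(ar1_fit(s)[1] for s in segments)
    return fit_cost + 0.5 * 2 * len(segments) * np.log(n)  # 2 parameters per regime

one_regime = mdl([offers], len(offers))
best_split = min(range(3, len(offers) - 2),
                 key=lambda t: mdl([offers[:t], offers[t:]], len(offers)))
two_regime = mdl([offers[:best_split], offers[best_split:]], len(offers))

segment = offers if one_regime <= two_regime else offers[best_split:]
(a, b), _ = ar1_fit(segment)
print("predicted next offer:", round(a * offers[-1] + b, 2))
```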
Web Information Systems Engineering | 2009
Kivanc M. Ozonat
In a few business sectors, there exist marketplace sites that provide the consumer with specifications forms, which the consumer can fill out to learn and compare the service terms of multiple service providers. At HP Labs, we are working towards building a universal marketplace site, i.e., a marketplace site that covers thousands of sectors and hundreds to thousands of providers per sector. We automatically generate the specifications forms for the sectors through a statistical clustering algorithm that utilizes both business directories and web forms from service provider sites.
International Conference on Service-Oriented Computing | 2009
Sujoy Basu; Sven Graupner; Kivanc M. Ozonat; Sharad Singhal; Donald E. Young
A world-wide community of service providers has a presence on the web, and people seeking services typically go to the web as an initial place to search for them. Service selection consists of two steps: finding service candidates using search engines and selecting those that best meet the desired service properties. Within the context of Web Services, the service selection problem has been solved through common description frameworks that make use of ontologies and service registries. However, the majority of service providers on the web do not use such frameworks; instead, they make service descriptions available on their web sites as content targeted at human readers. This paper addresses the service selection problem under the assumption that a common service description framework does not exist, and services have to be selected using the more unstructured information available on the web. The approach described in this paper has the following steps. Search engines are employed to find service candidates from dense requirement formulations extracted from user input. Text classification techniques are used to identify services and service properties from web content retrieved from search links. Service candidates are then ranked based on how well they support the desired properties. Initial experiments have been conducted to validate the approach.
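As a minimal illustration of the final ranking step, the sketch below scores each candidate page by how many of the user's desired properties its text appears to support and orders the candidates accordingly; the keyword-matching "classifier", the property list, and the example URLs are placeholders for the text-classification techniques and data used in the paper.

```python
# Minimal sketch of the ranking step: a keyword match stands in for the paper's
# text classifiers; candidates are ordered by how many desired properties their
# page text appears to support. Properties, keywords, and URLs are hypothetical.
desired_properties = {
    "same-day turnaround": ["same day", "same-day", "24 hour"],
    "online proofing":     ["online proof", "digital proof"],
    "bulk pricing":        ["bulk", "volume discount", "wholesale"],
}

candidates = {
    "http://fastprint.example.com":   "Same-day printing with online proofing for rush jobs.",
    "http://budgetprint.example.com": "Wholesale flyers and volume discounts on bulk orders.",
    "http://artprint.example.com":    "Fine art reproduction, gallery-quality prints.",
}

def supported_properties(page_text):
    text = page_text.lower()
    return [prop for prop, keywords in desired_properties.items()
            if any(kw in text for kw in keywords)]

ranked = sorted(candidates, key=lambda url: len(supported_properties(candidates[url])),
                reverse=True)
for url in ranked:
    props = supported_properties(candidates[url])
    print(f"{url}: supports {len(props)} desired properties {props}")
```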