Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sven Graupner is active.

Publication


Featured research published by Sven Graupner.


ASME 2003 International Mechanical Engineering Congress and Exposition | 2003

Energy Aware Grid: Global Workload Placement Based on Energy Efficiency

Chandrakant D. Patel; Ratnesh Sharma; Cullen E. Bash; Sven Graupner

Computing will be pervasive, and enablers of pervasive computing will be data centers housing computing, networking and storage hardware. The data center of tomorrow is envisaged as one containing thousands of single board computing systems deployed in racks. A data center with 1,000 racks, over 30,000 square feet, would require 10 MW of power for the computing infrastructure. At this power dissipation, an additional 5 MW would be needed by the cooling resources to remove the dissipated heat. At $100/MWh, the cooling alone would cost $4 million per annum for such a data center. The concept of the Computing Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We envision a global network of data centers housing an aggregation of computing, networking and storage hardware. The increased compaction of such devices in data centers has created thermal and energy management issues that inhibit sustainability of such a global infrastructure. In this paper, we propose the framework of the Energy Aware Grid, which will provide a global utility infrastructure explicitly incorporating energy efficiency and thermal management among data centers. Designed around an energy-aware co-allocator, workload placement decisions will be made across the Grid based on data center energy efficiency coefficients. The coefficient, evaluated by the data center's resource allocation manager, is a complex function of the data center thermal management infrastructure and the seasonal and diurnal variations. A detailed procedure for implementation of a test case is provided with an estimate of energy savings to justify the economics. An example workload deployment shown in the paper seeks the most energy efficient data center in the global network of data centers. The locality-based energy efficiency in a data center is shown to arise from the use of ground-coupled loops in cold climates to lower the ambient temperature for heat rejection, e.g., computing and rejecting heat from a data center at a nighttime ambient of 20°C in New Delhi, India, while Phoenix, USA is at 45°C. The efficiency of the cooling system in the data center in New Delhi derives from the lower lift from evaporator to condenser. Besides the obvious advantage due to the external ambient, the paper also incorporates techniques that rate the efficiency arising from the internal thermo-fluids behavior of a data center in workload placement decisions.
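
The dollar figure in the abstract follows from simple arithmetic, and the placement decision reduces to ranking data centers by their energy efficiency coefficients. The Python sketch below is illustrative only; the site names, coefficient values and placement rule are assumptions, not taken from the paper.

    # Illustrative arithmetic and placement rule; site names and coefficient
    # values are hypothetical, not taken from the paper.
    HOURS_PER_YEAR = 8760

    def annual_cooling_cost(cooling_mw, price_per_mwh):
        """Annual cost of running the cooling plant at a constant load."""
        return cooling_mw * HOURS_PER_YEAR * price_per_mwh

    # The abstract's example: 5 MW of cooling at $100/MWh.
    print(annual_cooling_cost(5, 100))   # 4380000 -> roughly $4 million per annum

    def place_workload(coefficients):
        """Energy-aware co-allocation: pick the site whose (assumed)
        energy efficiency coefficient is currently highest."""
        return max(coefficients, key=coefficients.get)

    print(place_workload({"new_delhi_night": 0.9, "phoenix_day": 0.6}))  # new_delhi_night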


International Conference on Engineering of Complex Computer Systems | 2001

A framework for analyzing and organizing complex systems

Sven Graupner; Vadim E. Kotov; Holger Trinks

The paper discusses a framework and technologies enabling the quantitative analysis, organization and optimization of large-scale, globally distributed enterprise and e-services systems. The goal is to organize complex systems in such a way that traffic can be better explained, predicted and controlled at the application and service layers rather than at the network layers. Our work approaches higher system perspectives where architectural decisions are made about the overall organization of work and task flows, the global placement of data and applications, and so on; those decisions largely determine the traffic induced in the system later on. Little support is provided today for designing and evaluating large-scale systems from these perspectives, primarily because of the difficulty of developing realistic computerized models that reflect the dynamic behavior of services and applications. We reduce complex environments to uniform representations of resource demands and capacities and use them to improve the overall system organization. Case studies with earlier versions of our approach, carried out with two corporate partners, are discussed at the end.
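
The "uniform representations of resource demands and capacities" can be pictured as vectors over shared resource dimensions, so that alternative organizations can be compared numerically. A minimal sketch follows; the dimensions and the utilization measure are assumptions for illustration, not the paper's actual model.

    # Hypothetical sketch: demands and capacities as vectors over the same
    # resource dimensions, so candidate placements can be compared.
    DIMS = ("cpu", "network", "storage")

    def fits(demand, capacity):
        """A demand fits if it does not exceed capacity in any dimension."""
        return all(demand[d] <= capacity[d] for d in DIMS)

    def utilization(demands, capacity):
        """Average fraction of capacity consumed across dimensions."""
        return sum(sum(d[k] for d in demands) / capacity[k] for k in DIMS) / len(DIMS)

    app = {"cpu": 4, "network": 1.0, "storage": 200}
    site = {"cpu": 64, "network": 10.0, "storage": 10000}
    print(fits(app, site), utilization([app], site))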


Integrated Network Management | 2005

Quartermaster - a resource utility system

Sharad Singhal; Martin F. Arlitt; Dirk Beyer; Sven Graupner; Vijay Machiraju; Jim Pruyne; Jerry Rolia; Akhil Sahai; Cipriano A. Santos; Julie Ward; Xiaoyun Zhu

Utility computing is envisioned as the future of enterprise IT environments. Achieving utility computing is a daunting task, because enterprise users have diverse and complex needs. In this paper we describe Quartermaster, an integrated set of tools that addresses some of these needs. Quartermaster supports the entire lifecycle of computing tasks, including design, deployment, operation, and decommissioning of each task. Although individual components of this lifecycle have been addressed in earlier work, Quartermaster integrates them in a unified framework using model-based automation. All tools within Quartermaster are integrated using models based on the Common Information Model (CIM), an industry-standard model from the Distributed Management Task Force (DMTF). The paper discusses the Quartermaster implementation and describes two case studies using Quartermaster.
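
The model-based integration behind Quartermaster can be read as lifecycle stages that all operate on one shared model of the task. The sketch below uses a plain dictionary as a stand-in; the stage functions and fields are hypothetical, whereas the real tools integrate through CIM-based models.

    # Hypothetical stand-in for model-based automation: every lifecycle stage
    # reads and updates the same model object (the real system uses CIM models).
    def design(model):
        model["topology"] = ["web", "app", "db"]
        return model

    def deploy(model):
        model["state"] = "deployed"
        return model

    def operate(model):
        model["state"] = "running"
        return model

    def decommission(model):
        model["state"] = "retired"
        return model

    model = {"name": "three-tier-service"}
    for stage in (design, deploy, operate, decommission):
        model = stage(model)
    print(model)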


International Conference on Distributed Computing Systems | 2002

Resource-sharing and service deployment in virtual data centers

Sven Graupner; Vadim E. Kotov; Holger Trinks

The expectation of a global presence of services leads to the need for large numbers of service instances allocated across a multitude of regional data centers in order to provide sufficient service capacity close to where demand occurs. The number of service instances is anticipated to grow well beyond 10^4, raising new challenges for control and management. Pragmatically, it must become much easier to deploy service instances in data centers: allocating resources, sharing them, installing and configuring the data and software needed for service instances, and integrating them into a single service as seen by the consumer. Adjusting the number and location of service instances is seen as a basic control mechanism for following regional or temporal fluctuations in demand. The paper proposes a new concept of virtualizing whole data center environments and quickly deploying massive numbers of service instances. A virtualization layer takes care of resource allocation across different data center locations and of all specifics when service instances are allocated in a particular data center. Virtualized data centers provide a consistent operating environment spanning multiple physical data center locations for the whole family of service instances; conversely, physical data centers host several execution environments for different services. After a brief discussion of the challenges arising at the anticipated scale of service instances, the paper gives an overview of virtual data centers and discusses in more detail how massive numbers of service instances can be deployed using a recursive approach.
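
One way to picture the recursive deployment of massive numbers of service instances is a fan-out tree in which each deployer installs one instance itself and delegates the remainder to child deployers. This is an assumed reading, sketched below; it is not the paper's actual mechanism.

    # Hypothetical sketch of recursive fan-out deployment: each deployer installs
    # one instance and hands the rest to child deployers, so very large instance
    # counts are reached in few delegation rounds.
    def deploy_recursive(count, fan_out=4):
        if count <= 0:
            return 0
        deployed = 1                       # this deployer installs one instance
        remaining = count - 1
        share = -(-remaining // fan_out)   # ceiling division among children
        for _ in range(fan_out):
            take = min(share, remaining)
            deployed += deploy_recursive(take, fan_out)
            remaining -= take
        return deployed

    print(deploy_recursive(10000))  # 10000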


IEEE Computer | 2012

The Future of Enterprise IT in the Cloud

Jamie Erbes; Hamid R. Motahari Nezhad; Sven Graupner

The widespread availability and adoption of cloud services create new challenges for enterprise IT and prompt the need for new methodologies, tools, and skill sets for managing a hybrid portfolio of these services and traditional IT systems.


IEEE MultiMedia | 2002

Web E-speak: facilitating Web-based e-services

Wooyoung Kim; Sven Graupner; Akhil Sahai; Dmitry Lenkov; Chetan Chudasama; Samuel Whedbee; Yuhua Luo; Bharati Desai; Howard Mullings; Pui Wong

E-Speak, Hewlett-Packard's e-services initiative, is an open, distributed platform that lets e-services dynamically and securely advertise, discover, and interoperate with each other. Web E-Speak, the gateway to E-Speak on the Web, facilitates engineering Web-based e-services by taking into account their requirements for dynamic ad hoc discovery, secure interaction, and global accessibility.
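
The advertise/discover cycle described here can be pictured as a simple attribute-based registry. The class and method names below are invented for illustration and are not the E-Speak API.

    # Toy registry illustrating attribute-based advertise/discover; the names
    # are invented and do not correspond to the E-Speak API.
    class Registry:
        def __init__(self):
            self._services = []

        def advertise(self, name, attributes):
            self._services.append({"name": name, **attributes})

        def discover(self, **criteria):
            return [s for s in self._services
                    if all(s.get(k) == v for k, v in criteria.items())]

    reg = Registry()
    reg.advertise("translation", {"lang": "en-de", "protocol": "http"})
    print(reg.discover(lang="en-de"))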


IEEE Internet Computing | 2003

Service-centric globally distributed computing

Sven Graupner; Vadim E. Kotov; Artur Andrzejak; Holger Trinks

An automated service demand-supply control system can improve a large-scale grid infrastructure comprising a federation of distributed utility data centers.
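
A demand-supply control system of this kind can be sketched as a periodic reconciliation step that compares observed demand with allocated capacity and adjusts the allocation. The target utilization and step size below are assumptions for illustration only.

    # Illustrative demand-supply control step: grow the allocation when demand
    # pushes utilization above target, shrink it when heavily over-provisioned.
    # Threshold and step values are assumptions, not taken from the article.
    def control_step(demand, allocated, target_util=0.7, step=1.0):
        utilization = demand / allocated if allocated else float("inf")
        if utilization > target_util:
            return allocated + step
        if utilization < target_util / 2:
            return max(step, allocated - step)
        return allocated

    allocation = 4.0
    for demand in (2.0, 3.5, 5.0, 1.0):
        allocation = control_step(demand, allocation)
        print(demand, "->", allocation)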


International Conference on Service-Oriented Computing | 2012

Adaptive case management in the social enterprise

Hamid Reza Motahari-Nezhad; Claudio Bartolini; Sven Graupner; Susan Spence

In this paper, we introduce SoCaM, a framework for supporting case management in social networking environments. SoCaM makes case entities (cases, processes, artifacts, etc.) first-class, active elements in the social network and connects them to people. It enables social, collaborative and flexible definition, adaptation and enactment of case processes among people. It also offers mechanisms for capturing and formalizing feedback, from interactions in the social network, into the case, process and artifact definitions. We report on the implementation and a case management scenario for sales processes in the enterprise.
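
The core idea, making cases, processes and artifacts first-class elements next to people and folding interaction feedback back into their definitions, can be sketched as a small typed graph. The node names, edge labels and feedback helper below are hypothetical, not SoCaM's actual data model.

    # Hypothetical sketch: cases, artifacts and people as first-class nodes in
    # one graph, with social feedback captured against the case definition.
    from collections import defaultdict

    nodes = {"alice": "person", "deal-42": "case", "proposal.doc": "artifact"}
    edges = defaultdict(list)
    edges["alice"].append(("works_on", "deal-42"))
    edges["deal-42"].append(("contains", "proposal.doc"))

    case_definitions = {"deal-42": {"steps": ["qualify", "propose", "close"]}}

    def capture_feedback(case_id, comment):
        """Record feedback from the social network against the case definition."""
        case_definitions[case_id].setdefault("feedback", []).append(comment)

    capture_feedback("deal-42", "add a legal review step before close")
    print(case_definitions["deal-42"]["feedback"])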


Enterprise Distributed Object Computing | 2009

Making processes from best practice frameworks actionable

Sven Graupner; Hamid Reza Motahari-Nezhad; Sharad Singhal; Sujoy Basu



Cluster Computing and the Grid | 2004

Adaptive control system for server groups in enterprise data centers

Sven Graupner; Jean-Marc Chevrot; Nigel Cook; Ramesh Kavanappillil; Tilo Nitzsche


Collaboration


Dive into Sven Graupner's collaborations.
