Network


Latest external collaborations at the country level. Click on a dot to dive into the details.

Hotspot


Dive into the research topics where Paul Ruth is active.

Publication


Featured research published by Paul Ruth.


international conference on autonomic computing | 2006

Autonomic Live Adaptation of Virtual Computational Environments in a Multi-Domain Infrastructure

Paul Ruth; Junghwan Rhee; Dongyan Xu; Rick Kennell; Sebastien Goasguen

A shared distributed infrastructure is formed by federating computation resources from multiple domains. Such shared infrastructures are increasing in popularity and are providing massive amounts of aggregated computation resources to large numbers of users. Meanwhile, virtualization technologies, at machine and network levels, are maturing and enabling mutually isolated virtual computation environments for executing arbitrary parallel/distributed applications on top of such a shared physical infrastructure. In this paper, we go one step further by supporting autonomic adaptation of virtual computation environments as active, integrated entities. More specifically, driven by both dynamic availability of infrastructure resources and dynamic application resource demand, a virtual computation environment is able to automatically relocate itself across the infrastructure and scale its share of infrastructural resources. Such autonomic adaptation is transparent to both users of virtual environments and administrators of infrastructures, maintaining the look and feel of a stable, dedicated environment for the user. As our proof of concept, we present the design, implementation, and evaluation of a system called VIOLIN, which is composed of a virtual network of virtual machines capable of live migration across a multi-domain physical infrastructure.
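The adaptation described above can be pictured as a simple control loop: monitor host load and application demand, then relocate or rescale the virtual environment accordingly. The sketch below is illustrative only; the dictionary fields, host names, and threshold are assumptions for this example, not part of the VIOLIN implementation.

```python
# Illustrative autonomic-adaptation loop: relocate a virtual environment away
# from an overloaded host and grow its resource share toward its demand.
# All field names and the 0.8 threshold are assumptions for this sketch.
def adapt(env, hosts, load_threshold=0.8):
    """env: {'host', 'demand', 'share'}; hosts: name -> {'load', 'capacity'}."""
    if hosts[env["host"]]["load"] > load_threshold:
        # Pick the least-loaded host; in VIOLIN this would trigger a live
        # migration of the whole virtual network of virtual machines.
        target = min(hosts, key=lambda h: hosts[h]["load"])
        if target != env["host"]:
            env["host"] = target
    if env["demand"] > env["share"]:
        # Scale the environment's share up to its demand, bounded by capacity.
        env["share"] = min(env["demand"], hosts[env["host"]]["capacity"])
    return env
```

Both decisions are driven by exactly the two signals the abstract names: dynamic infrastructure availability (host load) and dynamic application demand.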


IEEE Computer | 2005

Virtual distributed environments in a shared infrastructure

Paul Ruth; Xuxian Jiang; Dongyan Xu; Sebastien Goasguen

We have developed a middleware system that integrates and extends virtual machine and virtual network technologies to support mutually isolated virtual distributed environments in shared infrastructures such as the grid and the PlanetLab overlay. By integrating virtual networking with on-demand virtual machine creation and customization, the VIOLIN-based middleware system makes such virtual distributed environments a reality.


testbeds and research infrastructures for the development of networks and communities | 2012

ExoGENI: A multi-domain infrastructure-as-a-service testbed

Ilia Baldine; Yufeng Xin; Anirban Mandal; Paul Ruth; Chris Heerman; Jeffrey S. Chase

NSF’s GENI program seeks to enable experiments that run within virtual network topologies built-to-order from testbed infrastructure offered by multiple providers (domains). GENI is often viewed as a network testbed integration effort, but behind it is an ambitious vision for multi-domain infrastructure-as-a-service (IaaS). This paper presents ExoGENI, a new GENI testbed that links GENI to two advances in virtual infrastructure services outside of GENI: open cloud computing (OpenStack) and dynamic circuit fabrics. ExoGENI orchestrates a federation of independent cloud sites and circuit providers through their native IaaS interfaces, and links them to other GENI tools and resources.


international conference on trust management | 2004

E-notebook Middleware for Accountability and Reputation Based Trust in Distributed Data Sharing Communities

Paul Ruth; Dongyan Xu; Bharat K. Bhargava; Fred E. Regnier

This paper presents the design of a new middleware which provides support for trust and accountability in distributed data sharing communities. One application is in the context of scientific collaborations: multiple researchers share individually collected data and, in turn, create new data sets by performing transformations on existing shared data sets. In data sharing communities, building trust in the data obtained from others is crucial. However, the field of data provenance does not consider malicious or untrustworthy users. By adding accountability to the provenance of each data set, this middleware ensures data integrity insofar as any errors can be identified and corrected. The user is further protected from faulty data by a trust view created from past experiences and second-hand recommendations. A trust view is based on real-world social interactions and reflects each user's own experiences within the community. By identifying the providers of faulty data and removing them from a trust view, the integrity of all data is enhanced.
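A trust view of the kind described can be sketched as a score that blends first-hand experience with second-hand recommendations. The function name, the weighting scheme, and the 0.7 weight below are assumptions for illustration, not taken from the paper.

```python
# Illustrative trust-view score: weight a user's own interaction history with
# a data provider more heavily than recommendations from other community
# members. The alpha weight is an assumption, not a value from the paper.
def trust(direct_good, direct_bad, recommendations, alpha=0.7):
    """direct_*: counts of the user's own good/bad interactions with a
    provider; recommendations: trust scores in [0, 1] reported by others."""
    total = direct_good + direct_bad
    own = direct_good / total if total else 0.5          # neutral prior
    second_hand = (sum(recommendations) / len(recommendations)
                   if recommendations else own)
    return alpha * own + (1 - alpha) * second_hand
```

A provider whose score drops below a community-chosen threshold would then be removed from the trust view, which is the mechanism the abstract credits with improving overall data integrity.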


Proceedings of the 2nd International Workshop on Virtualization Technology in Distributed Computing (VTDC '07) | 2007

Taking snapshots of virtual networked environments

Ardalan Kangarlou; Dongyan Xu; Paul Ruth; Patrick Eugster

The capture of global, consistent snapshots of a distributed computing session or system is essential to a system's reliability, manageability, and accountability. Despite the large body of work at the application, library, and operating system levels, we identify a void in the spectrum of distributed snapshot techniques: taking snapshots of the entire distributed runtime environment. Such capability has unique applicability in a number of application scenarios. In this paper, we realize such capability in the context of virtual networked environments. More specifically, by adapting and implementing a distributed snapshot algorithm, we enable the capture of causally consistent snapshots of virtual machines in a virtual networked environment. The snapshot-taking operations do not require any modification to the applications or operating systems running inside the virtual environment. Preliminary evaluation results indicate that our technique incurs acceptable overhead and small disruption to the normal operation of the virtual environment.
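The distributed snapshot algorithms this work adapts are typically marker-based, in the style of Chandy-Lamport: each process records its own state on first seeing a marker and logs any messages still in flight on a channel until that channel's marker arrives. The toy model below illustrates that idea over FIFO queues; the class and field names are assumptions for the sketch, not names from the paper's implementation.

```python
# Toy marker-based (Chandy-Lamport style) snapshot over FIFO channels.
# Processes are Python objects; channels are deques between them.
from collections import deque

MARKER = "<MARKER>"

class Proc:
    def __init__(self, name, net):
        self.name = name
        self.net = net               # name -> Proc registry
        self.state = 0               # local state: a running sum
        self.inbox = {}              # src name -> FIFO channel (deque)
        self.recorded_state = None   # local state captured at snapshot time
        self.channel_log = {}        # src name -> messages caught in transit
        self.open_channels = set()   # inbound channels still being recorded
        net[name] = self

    def link(self, other):
        self.inbox[other.name] = deque()
        other.inbox[self.name] = deque()

    def send(self, dst, msg):
        self.net[dst].inbox[self.name].append(msg)

    def snapshot(self):
        """Record local state, then put a marker on every outgoing channel."""
        self.recorded_state = self.state
        self.open_channels = set(self.inbox)
        self.channel_log = {src: [] for src in self.inbox}
        for dst in self.inbox:       # peers are symmetric in this toy model
            self.send(dst, MARKER)

    def step(self, src):
        """Consume the next message on the FIFO channel src -> self."""
        msg = self.inbox[src].popleft()
        if msg == MARKER:
            if self.recorded_state is None:
                self.snapshot()              # first marker: join the snapshot
            self.open_channels.discard(src)  # channel src->self fully recorded
        else:
            self.state += msg                # apply the application message
            if src in self.open_channels:    # in transit at snapshot time
                self.channel_log[src].append(msg)
```

The union of the recorded process states and channel logs forms a causally consistent cut: no message appears as received in the snapshot without also appearing as sent. The paper's contribution is performing this at the virtual machine level, so guest applications and operating systems need no modification.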


virtualization technologies in distributed computing | 2009

Toward dependency-aware live virtual machine migration

Anthony Nocentino; Paul Ruth

The most powerful characteristic of any machine virtualization technology is its ability to adapt to both its underlying infrastructure and the applications it supports. Possibly the most dynamic feature of machine virtualization is the ability to migrate live virtual machines between physical hosts in order to optimize performance or avoid catastrophic events. Unfortunately, the need for live migration increases during times when resources are most scarce. For example, load-balancing is only necessary when load is significantly unbalanced and impending downtime often causes many virtual machines to seek new hosts simultaneously. It is imperative that live migration mechanisms be as fast and efficient as possible in order for virtualization to provide dynamic load balancing, zero-downtime scheduled maintenance, and automatic failover during unscheduled downtime. This paper proposes a novel dependency-aware approach to live virtual machine migration and presents the results of the initial investigation into its ability to reduce migration latency and overhead. The approach uses a tainting mechanism originally developed as an intrusion detection mechanism. Dependency information is used to distinguish processes that create direct or indirect external dependencies during live migration. It is shown that the live migration process can be significantly streamlined by selectively applying a more efficient protocol when migrating processes that do not create external dependencies during migration.
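The selection step described above can be sketched as a simple partition: processes the tainting mechanism marks as creating external dependencies take the careful migration path, while the rest take the streamlined protocol. The function name and the stub predicate below are illustrative assumptions; the paper derives the actual predicate from its intrusion-detection-style tainting mechanism.

```python
# Illustrative partition of processes for dependency-aware live migration:
# untainted processes (no external dependencies created during migration)
# qualify for the more efficient migration protocol. The tainting predicate
# is supplied by the caller; here it is a stand-in for the paper's mechanism.
def partition_for_migration(processes, is_tainted):
    fast, careful = [], []
    for p in processes:
        (careful if is_tainted(p) else fast).append(p)
    return fast, careful
```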


International Journal of High Performance Computing Applications | 2017

PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

Ewa Deelman; Christopher D. Carothers; Anirban Mandal; Brian Tierney; Jeffrey S. Vetter; Ilya Baldin; Claris Castillo; Gideon Juve; Dariusz Król; V. E. Lynch; Benjamin Mayer; Jeremy S. Meredith; Thomas Proffen; Paul Ruth; Rafael Ferreira da Silva

Computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Thus, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.


cluster computing and the grid | 2003

A transport layer abstraction for peer-to-peer networks

Ronaldo A. Ferreira; Christian Grothoff; Paul Ruth

The initially unrestricted host-to-host communication model provided by the Internet Protocol has deteriorated due to political and technical changes caused by Internet growth. While this is not a problem for most client-server applications, peer-to-peer networks frequently struggle with peers that are only partially reachable. We describe how a peer-to-peer framework can hide diversity and obstacles in the underlying Internet and provide peer-to-peer applications with abstractions that hide transport specific details. We present the details of an implementation of a transport service based on SMTP. Small-scale benchmarks are used to compare transport services over UDP, TCP, and SMTP.
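The abstraction described above boils down to the peer-to-peer core coding against a uniform transport interface while concrete backends (UDP, TCP, or even SMTP) hide their delivery details behind it. The class and method names below are illustrative, not taken from the paper's framework; the in-memory backend stands in for a real transport.

```python
# Sketch of a pluggable transport abstraction for a P2P framework.
from abc import ABC, abstractmethod

class Transport(ABC):
    """Uniform interface the peer-to-peer layer codes against."""
    @abstractmethod
    def send(self, peer_addr, payload): ...
    @abstractmethod
    def receive(self): ...

class InMemoryTransport(Transport):
    """Stand-in backend. A UDP backend would sendto(); an SMTP backend would
    mail the payload to a mailbox the receiving peer polls, which is how
    partially reachable peers can still be contacted."""
    mailboxes = {}  # shared address -> message-list map, simulating the wire

    def __init__(self, addr):
        self.addr = addr
        self.mailboxes.setdefault(addr, [])

    def send(self, peer_addr, payload):
        self.mailboxes.setdefault(peer_addr, []).append((self.addr, payload))

    def receive(self):
        box = self.mailboxes[self.addr]
        return box.pop(0) if box else None

class Peer:
    """P2P node that is oblivious to which transport carries its traffic."""
    def __init__(self, transport):
        self.transport = transport

    def ping(self, peer_addr):
        self.transport.send(peer_addr, b"PING")
```

Because `Peer` only sees the `Transport` interface, swapping UDP for SMTP changes delivery latency and reachability but none of the application logic, which is the point of the abstraction.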


network aware data management | 2013

Evaluating I/O aware network management for scientific workflows on networked clouds

Anirban Mandal; Paul Ruth; Ilya Baldin; Yufeng Xin; Claris Castillo; Mats Rynge; Ewa Deelman

This paper presents a performance evaluation of scientific workflows on networked cloud systems, with particular emphasis on the effect of provisioned network bandwidth on application I/O performance. The experiments were run on ExoGENI, a widely distributed networked infrastructure-as-a-service (NIaaS) testbed. ExoGENI orchestrates a federation of independent cloud sites located around the world along with backbone circuit providers. The evaluation used a representative data-intensive scientific workflow application called Montage. The application was deployed on a virtualized HTCondor environment provisioned dynamically from the ExoGENI networked cloud testbed and managed by the Pegasus workflow manager. The results of our experiments show the effect of modifying provisioned network bandwidth on disk I/O throughput and workflow execution time. The marginal benefit perceived by the workflow diminishes as the network bandwidth allocation increases, up to the point where disk I/O saturates; there is little or no benefit from increasing network bandwidth beyond this inflection point. The results also underline the importance of network and I/O performance isolation for predictable application performance and are applicable to general data-intensive workloads. Insights from this work will also be useful for real-time monitoring, application steering, and infrastructure planning for data-intensive workloads on networked cloud platforms.
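The saturation effect the evaluation reports can be captured by a simple bottleneck model: effective workflow throughput is bounded by whichever of provisioned network bandwidth and disk I/O is slower. The functions and the numbers in the example are illustrative assumptions, not measurements from the paper.

```python
# Bottleneck model for the bandwidth-vs-disk-I/O inflection point: once the
# provisioned network bandwidth exceeds what the disks can absorb, extra
# bandwidth buys nothing. Units and values here are illustrative.
def effective_throughput(network_bw_mbps, disk_io_mbps):
    return min(network_bw_mbps, disk_io_mbps)

def inflection_point(bandwidth_steps, disk_io_mbps):
    """Smallest provisioned bandwidth beyond which more bandwidth is wasted."""
    for bw in sorted(bandwidth_steps):
        if effective_throughput(bw, disk_io_mbps) >= disk_io_mbps:
            return bw
    return None
```

This is also why the paper stresses I/O performance isolation: if disk throughput varies with co-located tenants, the inflection point moves and provisioning decisions made against a fixed disk rate become unreliable.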


international parallel and distributed processing symposium | 2016

Toward an End-to-End Framework for Modeling, Monitoring and Anomaly Detection for Scientific Workflows

Anirban Mandal; Paul Ruth; Ilya Baldin; Dariusz Król; Gideon Juve; Rajiv Mayani; Rafael Ferreira da Silva; Ewa Deelman; Jeremy S. Meredith; Jeffrey S. Vetter; V. E. Lynch; Benjamin Mayer; James Wynne; Mark P. Blanco; Christopher D. Carothers; Justin M. LaPre; Brian Tierney

Modern science is often conducted on large-scale, distributed, heterogeneous, high-performance computing infrastructures. Increasingly, the scale and complexity of both the applications and the underlying execution platforms have been growing. Scientific workflows have emerged as a flexible representation to declaratively express complex applications with data and control dependencies. However, it is extremely challenging for scientists to execute their science workflows in a reliable and scalable way due to a lack of understanding of the expected and realistic behavior of complex scientific workflows on large-scale, distributed HPC systems. This is exacerbated by failures and anomalies in large-scale systems and applications, which make detecting, analyzing, and acting on anomaly events challenging. In this work, we present a prototype of an end-to-end system for modeling and diagnosing the runtime performance of complex scientific workflows. We combined the Pegasus workflow management system, Aspen performance modeling, monitoring, and anomaly detection into an integrated framework that not only improves the understanding of complex scientific applications on large-scale, complex infrastructure, but also detects anomalies and supports adaptivity. We present a black-box modeling tool, a comprehensive online monitoring system, and anomaly detection algorithms that employ the models and monitoring data to detect anomaly events. We present an evaluation of the system with a Spallation Neutron Source workflow as a driving use case.
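The core idea of model-driven anomaly detection described above is to compare monitored metric values against a performance model's prediction and flag deviations beyond a tolerance. The function below is a minimal sketch of that comparison; the tolerance value and names are assumptions, not details of the framework's algorithms.

```python
# Minimal model-based anomaly flagging: a monitored sample is anomalous when
# it deviates from the model's predicted value by more than a relative
# tolerance. The 25% tolerance is an assumed example value.
def detect_anomalies(samples, predicted, tolerance=0.25):
    """samples: observed metric values (e.g. task runtimes);
    predicted: the performance model's expected value.
    Returns the indices of anomalous samples."""
    return [i for i, v in enumerate(samples)
            if abs(v - predicted) / predicted > tolerance]
```

In an end-to-end system like the one described, flagged indices would feed back into the workflow manager to support the adaptivity the abstract mentions, such as rescheduling or re-provisioning.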

Collaboration


Dive into Paul Ruth's collaborations.

Top Co-Authors

Anirban Mandal
University of North Carolina at Chapel Hill

Ilya Baldin
University of North Carolina at Chapel Hill

Yufeng Xin
University of North Carolina at Chapel Hill

Claris Castillo
Renaissance Computing Institute

Ewa Deelman
University of Southern California

Chris Heerman
University of North Carolina at Chapel Hill

Gideon Juve
University of Southern California