Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Sebastien Goasguen is active.

Publication


Featured research published by Sebastien Goasguen.


Proceedings of the 2nd International Workshop on Virtualization Technology in Distributed Computing (VTDC '07) | 2007

The efficacy of live virtual machine migrations over the internet

Eric Harney; Sebastien Goasguen; Jim Martin; Michael A. Murphy; Mike Westall

This paper describes a technique to enable live migration of virtual machines over the Internet. The method assumes the network supports Mobile IPv6 and that participating host devices support Xen 3.1. Unlike live migration schemes proposed by other researchers, virtual networks are not required. We describe our work in progress on developing a system that utilizes Mobile IPv6 to enable constant network connectivity through the migration. We identify the sources of delay associated with the live migration and conclude that as long as migrations occur relatively infrequently, live migration over the Internet is practical.
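
The migration mechanism itself relies on the standard Xen toolstack; a minimal sketch of invoking and timing such a transfer is shown below (the host and domain names are hypothetical, and the Mobile IPv6 binding updates discussed in the paper are handled by the network stack, not by this script).

import subprocess
import time

def live_migrate(domain, destination_host):
    # Invoke Xen 3.x live migration and report the wall-clock delay.
    start = time.time()
    subprocess.check_call(["xm", "migrate", "--live", domain, destination_host])
    return time.time() - start

if __name__ == "__main__":
    elapsed = live_migrate("guest-vm", "remote-host.example.org")
    print("live migration completed in %.1f seconds" % elapsed)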


Cluster Computing and the Grid | 2009

Dynamic Provisioning of Virtual Organization Clusters

Michael A. Murphy; Brandon Kagey; Michael Fenn; Sebastien Goasguen

Virtual Organization Clusters are systems composed of virtual machines that provide dedicated computing clusters for each individual Virtual Organization. The design of these clusters allows individual virtual machines to be independent of the underlying physical hardware, potentially allowing virtual clusters to span multiple grid sites. A major challenge in using Virtual Organization Clusters as a grid computing abstraction arises from the need to schedule and provision physical resources to run the virtual machines. This paper describes a virtual cluster scheduler implementation based on the Condor High Throughput Computing system. By means of real-time monitoring of the Condor job queue, virtual machines that belong to individual Virtual Organizations are provisioned and booted. Jobs belonging to each Virtual Organization are then run on the organization-specific virtual machines, which form a cluster dedicated to the specific organization. Once the queued jobs have executed, the virtual machines are terminated, thereby allowing the physical resources to be reclaimed. Tests of this system were conducted using synthetic workloads, demonstrating that dynamic provisioning of virtual machines preserves system throughput for all but the shortest-running grid jobs, without undue increase in scheduling latency.
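
A minimal sketch of the watchdog idea described in the abstract appears below: poll the Condor queue, boot VO-specific virtual machines while jobs are waiting, and reclaim the physical hosts once the queue drains. The queue attribute used to identify the VO and the VM start/stop helpers are hypothetical placeholders, not the authors' scheduler.

import subprocess
import time

def queued_jobs_for_vo(vo):
    # Assumes jobs carry an accounting-group style attribute naming the VO.
    out = subprocess.check_output(
        ["condor_q", "-constraint", 'AcctGroup == "%s"' % vo,
         "-format", "%d\n", "ClusterId"])
    return len(out.splitlines())

def boot_vo_cluster(vo):
    print("booting virtual cluster for", vo)       # placeholder: start VO guests

def terminate_vo_cluster(vo):
    print("terminating virtual cluster for", vo)   # placeholder: reclaim hosts

def provision_loop(vo, poll_interval=30):
    booted = False
    while True:
        waiting = queued_jobs_for_vo(vo)
        if waiting and not booted:
            boot_vo_cluster(vo)
            booted = True
        elif not waiting and booted:
            terminate_vo_cluster(vo)
            booted = False
        time.sleep(poll_interval)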


IEEE International Conference on Cloud Computing Technology and Science | 2010

Image Distribution Mechanisms in Large Scale Cloud Providers

Romain Wartel; Tony Cass; Belmiro Moreira; Ewan Roche; Manuel Guijarro; Sebastien Goasguen; U. Schwickerath

This paper presents the various mechanisms for virtual machine image distribution within a large batch farm and between sites that offer cloud computing services. The work is presented within the context of the Large Hadron Collider Computing Grid (LCG) and has two main goals. First, it presents the CERN-specific mechanisms that have been put in place to test the pre-staging of virtual machine images within a large cloud infrastructure of several hundred physical hosts. Second, it introduces the basis of a policy for trusting and distributing virtual machine images between sites of the LCG. Finally, experimental results are shown for the distribution of a 10 GB virtual machine image to over 400 physical nodes using a binary tree and a BitTorrent algorithm. Results show that images can be pre-staged within 30 minutes.
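
The scaling benefit of tree- and swarm-based distribution can be illustrated with a back-of-the-envelope comparison (not the paper's measurements): pushing the image sequentially from a single server versus relaying it along a binary tree, assuming 1 Gbit/s host-to-host transfers.

import math

IMAGE_GB = 10
HOSTS = 400
LINK_GBIT_PER_S = 1.0                                # assumed per-host link speed

single_copy_s = IMAGE_GB * 8 / LINK_GBIT_PER_S       # one host-to-host copy (80 s)
sequential_s = single_copy_s * HOSTS                 # server sends the image 400 times
tree_depth = math.ceil(math.log2(HOSTS + 1))         # depth of a binary relay tree
binary_tree_s = single_copy_s * tree_depth * 2       # each parent serves two children in turn

print("sequential:  %.0f min" % (sequential_s / 60))     # roughly 533 min
print("binary tree: %.0f min" % (binary_tree_s / 60))    # roughly 24 min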


Journal of Grid Computing | 2010

Autonomic Clouds on the Grid

Michael A. Murphy; Linton Abraham; Michael Fenn; Sebastien Goasguen

Computational clouds constructed on top of existing Grid infrastructure have the capability to provide different entities with customized execution environments and private scheduling overlays. By designing these clouds to be autonomically self-provisioned and adaptable to changing user demands, user-transparent resource flexibility can be achieved without substantially affecting average job sojourn time. In addition, the overlay environment and physical Grid sites represent disjoint administrative and policy domains, permitting cloud systems to be deployed non-disruptively on an existing production Grid. Private overlay clouds administered by, and dedicated to the exclusive use of, individual Virtual Organizations are termed Virtual Organization Clusters. A prototype autonomic cloud adaptation mechanism for Virtual Organization Clusters demonstrates the feasibility of overlay scheduling in dynamically changing environments. Commodity Grid resources are autonomically leased in response to changing private scheduler loads, resulting in the creation of virtual private compute nodes. These nodes join a decentralized private overlay network system called IPOP (IP Over P2P), enabling the scheduling and execution of end user jobs in the private environment. Negligible overhead results from the addition of the overlay, although the use of virtualization technologies at the compute nodes adds modest service time overhead (under 10%) to computationally-bound Grid jobs. By leasing additional Grid resources, a substantial decrease (over 90%) in average job queuing time occurs, offsetting the service time overhead.
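
The trade-off reported above can be made concrete with some illustrative arithmetic (the baseline numbers are assumed, not taken from the paper): a sub-10% service-time penalty is easily offset when leased resources cut queuing time by more than 90%.

baseline_queue_min = 60.0                        # assumed average wait on the shared Grid
baseline_service_min = 30.0                      # assumed CPU-bound job length

virt_service_min = baseline_service_min * 1.10   # under 10% virtualization overhead
leased_queue_min = baseline_queue_min * 0.10     # over 90% reduction via leased nodes

print("baseline sojourn:  %.0f min" % (baseline_queue_min + baseline_service_min))  # 90 min
print("autonomic sojourn: %.0f min" % (leased_queue_min + virt_service_min))        # 39 min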


ACM Southeast Regional Conference | 2009

A study of a KVM-based cluster for grid computing

Michael Fenn; Michael A. Murphy; Sebastien Goasguen

We present a performance study of a virtualized cluster based on the KVM virtualization system. We show benchmark results from the High Performance Computing Challenge (HPCC) application suite, including the High Performance Linpack (HPL) benchmark. We also present the mechanism by which this cluster is connected to the Open Science Grid (OSG). Our results show that jobs with low amounts of network communication suffer only moderate overhead (≈10%) due to virtualization, while MPI applications suffer considerable overhead in the 60% range. The KVM cluster under investigation nonetheless proves suitable for current High Throughput Computing (HTC) grid usage on the OSG, where the Condor middleware is used.
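
The overhead figures quoted here are the usual relative slowdown of the virtualized run against bare metal; the helper below shows the calculation with made-up sample values chosen only to mirror the reported ranges.

def overhead(physical, virtual):
    # Relative performance loss of the virtualized measurement, as a percent.
    return (physical - virtual) / physical * 100.0

print("MPI-heavy (HPL-like):    %.0f%%" % overhead(physical=100.0, virtual=40.0))  # ~60%
print("low-communication batch: %.0f%%" % overhead(physical=100.0, virtual=90.0))  # ~10%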


Grid Computing | 2011

A Science Driven Production Cyberinfrastructure--the Open Science Grid

Mine Altunay; P. Avery; K. Blackburn; Brian Bockelman; M. Ernst; Dan Fraser; Robert Quick; Robert Gardner; Sebastien Goasguen; Tanya Levshina; Miron Livny; John McGee; Doug Olson; R. Pordes; Maxim Potekhin; Abhishek Singh Rana; Alain Roy; Chander Sehgal; I. Sfiligoi; Frank Wuerthwein

This article describes the Open Science Grid, a large distributed computational infrastructure in the United States which supports many different high-throughput scientific applications, and partners (federates) with other infrastructures nationally and internationally to form multi-domain integrated distributed systems for science. The Open Science Grid consortium not only provides services and software to an increasingly diverse set of scientific communities, but also fosters a collaborative team of practitioners and researchers who use, support and advance the state of the art in large-scale distributed computing. The scale of the infrastructure can be expressed by the daily throughput of around seven hundred thousand jobs, just under a million hours of computing, a million file transfers, and half a petabyte of data movement. In this paper we introduce and reflect on some of the OSG capabilities, usage and activities.
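
The quoted daily figures imply some rough averages, worked out below purely as illustrative arithmetic (the inputs are the approximate numbers stated in the abstract).

jobs_per_day = 700_000
cpu_hours_per_day = 1_000_000          # "just under a million hours"
transfers_per_day = 1_000_000
data_moved_pb = 0.5

print("average job length: %.1f hours" % (cpu_hours_per_day / jobs_per_day))      # about 1.4
print("average transfer:   %.1f GB" % (data_moved_pb * 1e6 / transfers_per_day))  # about 0.5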


Virtualization Technologies in Distributed Computing | 2012

Elastic IP and security groups implementation using OpenFlow

Greg Stabler; Aaron Rosen; Sebastien Goasguen; Kuang-Ching Wang

This paper presents a reference implementation of an Elastic IP and Security Group service using the OpenFlow protocol. The implementation is the first to integrate OpenFlow within a virtual machine provisioning engine and to provide an API for enabling such services; in this paper the OpenNebula system is used. The Elastic IP and Security Group services are similar to the Amazon EC2 services and present a compatible Query API implemented by OpenNebula. The core of the implementation relies on the integration of an OpenFlow controller (NOX) with the EC2 server. Flow rules can be inserted into the OpenFlow controller using the EC2 API. These rules are then used by Open vSwitch bridges on the underlying hypervisor to manage network traffic. The reference implementation opens the door to more advanced cloud networking services that leverage principles from software-defined networking, including virtual private clouds, virtual data centers spanning multiple availability zones, and seamless migration over wide area networks.
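
The translation idea can be sketched as follows (this is not the paper's implementation, which pushes rules through a NOX controller): an EC2-style security group entry rendered as an Open vSwitch flow and installed with ovs-ofctl. The bridge name, addresses, and port are hypothetical.

import subprocess

def allow_ingress(bridge, vm_ip, protocol, port, cidr):
    # Render one security-group rule as an OpenFlow match and install it.
    flow = ("priority=100,%s,nw_src=%s,nw_dst=%s,tp_dst=%d,actions=NORMAL"
            % (protocol, cidr, vm_ip, port))
    subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])

# e.g. open SSH to a guest from a management subnet
allow_ingress("br0", vm_ip="10.0.0.5", protocol="tcp", port=22, cidr="192.168.1.0/24")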


Many Task Computing on Grids and Supercomputers | 2009

Kestrel: an XMPP-based framework for many task computing applications

Lance Stout; Michael A. Murphy; Sebastien Goasguen

This paper presents a new distributed computing framework for Many Task Computing (MTC) applications, based on the Extensible Messaging and Presence Protocol (XMPP). A lightweight, highly available system, named Kestrel, has been developed to explore XMPP-based techniques for improving MTC system tolerance to faults that result from scaling and intermittent computing agent presence. By leveraging technologies used in large instant messaging systems that scale to millions of clients, this MTC system is designed to scale to millions of agents at various levels of granularity: cores, machines, clusters, and even sensors, which makes it a good fit for MTC. Kestrel's architecture is inspired by the distributed design of pilot job frameworks on the grid as well as botnets, with the addition of a commodity instant messaging protocol for communications. Whereas botnet command-and-control systems have frequently used a combination of Internet Relay Chat (IRC), Distributed Hash Table (DHT), and other Peer-to-Peer (P2P) technologies, Kestrel utilizes XMPP for its presence notification capabilities, which allow the system to maintain continuous tracking of machine presence and state in real time. XMPP is also easily extensible with application-specific sub-protocols, which can be utilized to transfer machine profile descriptions and job requirements. These sub-protocols can be used to implement distributed matching of jobs to systems, using a mechanism similar to ClassAds in the Condor High Throughput Computing (HTC) system.
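
A minimal flavor of the worker side can be sketched with the SleekXMPP library: connect, advertise availability through presence, and acknowledge job messages. The JIDs and the status string are hypothetical; Kestrel's actual manager/worker exchange uses the application-specific sub-protocols described in the paper.

from sleekxmpp import ClientXMPP

class WorkerAgent(ClientXMPP):
    def __init__(self, jid, password):
        ClientXMPP.__init__(self, jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    def on_start(self, event):
        # Presence doubles as a heartbeat; the status text advertises capacity.
        self.send_presence(pstatus="available:4-cores")
        self.get_roster()

    def on_message(self, msg):
        if msg['type'] in ('chat', 'normal'):
            msg.reply("accepted: %s" % msg['body']).send()

if __name__ == "__main__":
    agent = WorkerAgent("worker@example.org", "secret")
    if agent.connect():
        agent.process(block=True)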


Cluster Computing and the Grid | 2012

Towards Ontology-based Data Quality Inference in Large-Scale Sensor Networks

Sam T. Esswein; Sebastien Goasguen; Christopher J. Post; Jason O. Hallstrom; David L. White; Gene Eidson

This paper presents an ontology-based approach for data quality inference on streaming observation data originating from large-scale sensor networks. We evaluate this approach in the context of an existing river basin monitoring program called the Intelligent River®. Our current methods for data quality evaluation are compared with the ontology-based inference methods described in this paper. We present an architecture that incorporates semantic inference into a publish/subscribe messaging middleware, allowing data quality inference to occur on real-time data streams. Our preliminary benchmark results indicate delays of 100 ms for basic data quality checks based on an existing semantic web software framework. We demonstrate how these results can be maintained under increasing sensor data traffic rates by allowing inference software agents to work in parallel. These results indicate that data quality inference using the semantic sensor network paradigm is a viable solution for data-intensive, large-scale sensor networks.
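
A toy version of such a quality rule is sketched below (plain Python rather than the semantic inference used in the paper): each streaming observation is flagged when it falls outside the range declared for its observed property. The property names and ranges are hypothetical.

VALID_RANGES = {
    "waterTemperature": (-5.0, 40.0),   # degrees Celsius
    "dissolvedOxygen": (0.0, 20.0),     # mg/L
}

def annotate_quality(observation):
    # Attach a quality flag without modifying the original reading.
    low, high = VALID_RANGES[observation["property"]]
    ok = low <= observation["value"] <= high
    return dict(observation, quality="good" if ok else "suspect")

print(annotate_quality({"property": "waterTemperature", "value": 55.2}))  # flagged suspect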


International Conference on Autonomic Computing | 2010

Self-provisioned hybrid clouds

Linton Abraham; Michael A. Murphy; Michael Fenn; Sebastien Goasguen

Virtual Organizations are dynamic entities that consist of individuals and/or institutions established around a set of resource-sharing rules and conditions. The VO may require the use of on-site (local) and off-site (public) compute resources that can be leased or autonomically provisioned, based on workload and site policies. Virtual Organization Clusters provide the necessary computing infrastructure by building upon existing physical grid sites without disrupting the existing infrastructure or requiring any engagement from end users. VOCs also separate the physical and virtual administrative domains and thus encourage more sites to participate in the resource sharing and hosting. The VO can relinquish the compute resources based on job completion or other operational parameters such as cost. This paper expands on previous work with the Virtual Organization Cluster Model by demonstrating its scalability across multiple grid sites with the use of a structured peer-to-peer overlay networking system. A novel approach by which the model is extended to lease-based systems, such as the Amazon Elastic Compute Cloud (EC2), is introduced.
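
A hedged sketch of the lease-based extension is shown below: grow the cluster onto EC2 when the local queue backs up, using the classic boto EC2 API. The AMI, region, instance type, and sizing policy are hypothetical, and credentials are assumed to be available in the environment.

import boto.ec2

def lease_cloud_nodes(queued_jobs, running_nodes, jobs_per_node=4,
                      ami="ami-00000000", region="us-east-1"):
    # Ceiling division to size the cluster, then lease only the shortfall.
    wanted = max(0, -(-queued_jobs // jobs_per_node) - running_nodes)
    if wanted == 0:
        return []
    conn = boto.ec2.connect_to_region(region)
    reservation = conn.run_instances(ami, min_count=wanted, max_count=wanted,
                                     instance_type="m1.small")
    return [instance.id for instance in reservation.instances]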

Collaboration


Dive into Sebastien Goasguen's collaborations.

Top Co-Authors

Paul Ruth

University of North Carolina at Chapel Hill

Alain Roy

University of Wisconsin-Madison
