Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ryan Chard is active.

Publication


Featured research published by Ryan Chard.


Journal of Parallel and Distributed Computing | 2015

Reputation systems

Ferry Hendrikx; Kris Bubendorfer; Ryan Chard

In our increasingly interconnected world, the need for reputation is becoming more important as larger numbers of people and services interact online. Reputation is a tool to facilitate trust between entities, as it increases the efficiency and effectiveness of online services and communities. As most entities will not have any direct experience of other entities, they must increasingly come to rely on reputation systems. Such systems allow the prediction of who is likely to be trustworthy based on feedback from past transactions. In this paper we introduce a new taxonomy for reputation systems, along with: a reference model for reputation context, a model of reputation systems, a substantial survey, and a comparison of existing reputation research and deployed reputation systems. The concepts for reputation systems are discussed. A survey of existing reputation systems is presented. We construct a new taxonomy for reputation systems. We identify under-represented areas for research.
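The core mechanism the abstract describes, predicting trustworthiness from feedback on past transactions, can be sketched as a time-decayed average of ratings. This is an illustrative toy model, not the taxonomy or any specific system from the paper; the `Feedback` fields and decay factor are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    rating: float  # 1.0 = fully positive, 0.0 = fully negative
    age: int       # transactions ago (0 = most recent)

def reputation_score(history: list[Feedback], decay: float = 0.9) -> float:
    """Time-decayed average of past ratings; 0.5 (neutral) with no history."""
    if not history:
        return 0.5
    weights = [decay ** f.age for f in history]
    return sum(f.rating * w for f, w in zip(history, weights)) / sum(weights)

# Two recent positive interactions outweigh one old negative one.
history = [Feedback(1.0, 0), Feedback(1.0, 1), Feedback(0.0, 5)]
score = reputation_score(history)
```

Weighting recent feedback more heavily is one common design choice; deployed systems differ widely in how they aggregate and contextualize feedback, which is exactly what the paper's taxonomy classifies.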


Concurrency and Computation: Practice and Experience | 2015

The Globus Galaxies platform: delivering science gateways as a service

Ravi K. Madduri; Kyle Chard; Ryan Chard; Lukasz Lacinski; Alex Rodriguez; Dinanath Sulakhe; David Kelly; Utpal J. Dave; Ian T. Foster

The use of public cloud computers to host sophisticated scientific data and software is transforming scientific practice by enabling broad access to capabilities previously available only to the few. The primary obstacle to more widespread use of public clouds to host scientific software ('cloud-based science gateways') has thus far been the considerable gap between the specialized needs of science applications and the capabilities provided by cloud infrastructures. We describe here a domain-independent, cloud-based science gateway platform, the Globus Galaxies platform, which overcomes this gap by providing a set of hosted services that directly address the needs of science gateway developers. The design and implementation of this platform leverages our several years of experience with Globus Genomics, a cloud-based science gateway that has served more than 200 genomics researchers across 30 institutions. Building on that foundation, we have implemented a platform that leverages the popular Galaxy system for application hosting and workflow execution; Globus services for data transfer, user and group management, and authentication; and a cost-aware elastic provisioning model specialized for public cloud resources. We describe here the capabilities and architecture of this platform, present six scientific domains in which we have successfully applied it, report on user experiences, and analyze the economics of our deployments.


international conference on e-science | 2015

Cost-Aware Cloud Provisioning

Ryan Chard; Kyle Chard; Kris Bubendorfer; Lukasz Lacinski; Ravi K. Madduri; Ian T. Foster

Cloud computing is often suggested as a low-cost and scalable model for executing and scaling scientific analyses. However, while the benefits of cloud computing are frequently touted, there are inherent technical challenges associated with scaling execution efficiently and cost-effectively. We describe here a cost-aware elastic provisioner designed to dynamically and cost-effectively provision cloud infrastructure based on the requirements of user-submitted scientific workflows. Our provisioner is used in the Globus Galaxies platform -- a Software-as-a-Service provider of scientific analysis capabilities using commercial cloud infrastructure. Using workloads from production usage of this platform, we investigate the performance of our provisioner in terms of cost, spot instance termination rate, and execution time. We demonstrate cost savings of up to 95% across six production gateways and a 12% improvement in total execution time when compared to a worst-case scenario using a single instance type in a single availability zone.


international conference on cloud computing | 2015

Cost-Aware Elastic Cloud Provisioning for Scientific Workloads

Ryan Chard; Kyle Chard; Kris Bubendorfer; Lukasz Lacinski; Ravi K. Madduri; Ian T. Foster

Cloud computing provides an efficient model to host and scale scientific applications. While cloud-based approaches can reduce costs as users pay only for the resources used, it is often challenging to scale execution both efficiently and cost-effectively. We describe here a cost-aware elastic cloud provisioner designed to elastically provision cloud infrastructure to execute analyses cost-effectively. The provisioner considers real-time spot instance prices across availability zones, leverages application profiles to optimize instance type selection, over-provisions resources to alleviate bottlenecks caused by oversubscribed instance types, and is capable of reverting to on-demand instances when spot prices exceed thresholds. We evaluate the usage of our cost-aware provisioner using four production scientific gateways and show that it can produce cost savings of up to 97.2% when compared to naive provisioning approaches.
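The selection logic this abstract describes, comparing real-time spot prices across availability zones and reverting to on-demand instances when spot prices exceed a threshold, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the prices and threshold ratio are made-up values.

```python
# Hypothetical on-demand prices ($/hour) for illustration only.
ON_DEMAND_PRICE = {"m4.large": 0.10, "m4.xlarge": 0.20}

def select_instance(spot_prices, instance_type, threshold_ratio=0.8):
    """Pick the cheapest spot offer across zones, or fall back to on-demand.

    spot_prices: {availability_zone: current_spot_price} for instance_type.
    Returns ("spot", zone, price) or ("on-demand", None, price).
    """
    on_demand = ON_DEMAND_PRICE[instance_type]
    zone, price = min(spot_prices.items(), key=lambda kv: kv[1])
    if price <= threshold_ratio * on_demand:
        return ("spot", zone, price)
    # Spot price exceeds the threshold: revert to on-demand capacity.
    return ("on-demand", None, on_demand)

choice = select_instance({"us-east-1a": 0.03, "us-east-1b": 0.05}, "m4.large")
```

A real provisioner would additionally use application profiles to choose the instance type itself and over-provision around oversubscribed types, as the abstract notes; this sketch only covers the zone/price decision.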


Future Generation Computer Systems | 2016

Network health and e-Science in commercial clouds

Ryan Chard; Kris Bubendorfer; Bryan Ng

This paper explores the potential for improving the performance of e-Science applications on commercial clouds through the detailed examination, and characterization, of the underlying cloud network using network tomography. Commercial cloud providers are increasingly offering high-performance and GPU-enabled resources that are ideal for many e-Science applications. However, the opacity of the cloud's internal network, while a necessity for elasticity, limits the options for e-Science programmers to build efficient and high-performance codes. We introduce health indicators, markers, metrics, and a score as part of a network health system that provides a model for describing the overall network health of an e-Science application. We then explore the suitability of a range of tomographic techniques to act as health indicators using two testbeds, the second of which spanned one hundred AWS instances. Finally, we evaluate our work using a real-world medical image reconstruction application. We identify and characterize network performance in commercial clouds. An overall health system is constructed using tomographic probes to establish and compare an instance's network performance. We deploy the health system over a testbed of 100 AWS instances and explore its ability to scale. We apply the health system to a medical imaging e-Science application and demonstrate performance benefits.
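The idea of combining probe measurements into an overall health score might look like the sketch below. The specific markers (latency, loss, throughput), normalization bounds, and equal weighting are assumptions for illustration; they are not the indicators or weights from the paper.

```python
def health_score(latency_ms, loss_rate, throughput_mbps,
                 max_latency=100.0, max_throughput=1000.0):
    """Fold three probe measurements into a single score in [0, 1]."""
    # Each marker is normalized so that 1.0 means "healthy".
    latency_marker = max(0.0, 1.0 - latency_ms / max_latency)
    loss_marker = 1.0 - loss_rate
    throughput_marker = min(1.0, throughput_mbps / max_throughput)
    # Equal weighting of the three markers (an arbitrary choice here).
    return (latency_marker + loss_marker + throughput_marker) / 3

score = health_score(latency_ms=20.0, loss_rate=0.01, throughput_mbps=800.0)
```

Such a score lets an application compare instances and pick well-connected ones before scheduling communication-heavy work, which is the kind of performance benefit the paper demonstrates for its medical imaging application.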


grid computing environments | 2014

PDACS: a portal for data analysis services for cosmological simulations

Ryan Chard; Saba Sehrish; Alex Rodriguez; Ravi K. Madduri; Thomas D. Uram; Marc Paterno; Katrin Heitmann; Shreyas Cholia; Jim Kowalkowski; Salman Habib

Accessing and analyzing data from cosmological simulations is a major challenge due to the prohibitive size of cosmological datasets and the diversity of the associated large-scale analysis tasks. Analysis of the simulated models requires direct access to the datasets, considerable compute infrastructure, and storage capacity for the results. Resource limitations can become serious obstacles to performing research on the most advanced cosmological simulations. The Portal for Data Analysis services for Cosmological Simulations (PDACS) is a web-based workflow service and scientific gateway for cosmology. The PDACS platform provides access to shared repositories for datasets, analytical tools, cosmological workflows, and the infrastructure required to perform a wide variety of analyses. PDACS is a repurposed implementation of the Galaxy workflow engine and supports a rich collection of cosmology-specific datatypes and tools. The platform leverages high-performance computing infrastructure at the National Energy Research Scientific Computing Center (NERSC) and Argonne National Laboratory (ANL), enabling researchers to deploy computationally intensive workflows. In this paper we present PDACS and discuss the process and challenges of developing a research platform for cosmological research.


international conference on distributed computing systems workshops | 2017

Ripple: Home Automation for Research Data Management

Ryan Chard; Kyle Chard; Jason Alt; Dilworth Y. Parkinson; Steven Tuecke; Ian T. Foster

Exploding data volumes and acquisition rates, plus ever more complex research processes, place significant strain on research data management processes. It is increasingly common for data to flow through pipelines comprised of dozens of different management, organization, and analysis steps distributed across multiple institutions and storage systems. To alleviate the resulting complexity, we propose a home automation approach to managing data throughout its lifecycle, in which users specify via high-level rules the actions that should be performed on data at different times and locations. To this end, we have developed Ripple, a responsive storage architecture that allows users to express data management tasks via a rules notation. Ripple monitors storage systems for events, evaluates rules, and uses serverless computing techniques to execute actions in response to these events. We evaluate our solution by applying Ripple to the data lifecycles of two real-world projects, in astronomy and light source science, and show that it can automate many mundane and cumbersome data management processes.
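The event-rule-action model described above can be sketched as a small evaluator: storage events are matched against user-defined rules, and each matching rule names an action to fire. The rule fields, patterns, and action names here are illustrative assumptions, not Ripple's actual rules notation.

```python
import fnmatch

# User-defined rules: on a file-creation event, files matching a glob
# pattern trigger a named action (in Ripple, run via serverless functions).
rules = [
    {"event": "created", "pattern": "*.h5", "action": "transfer_to_hpc"},
    {"event": "created", "pattern": "*.tiff", "action": "extract_metadata"},
]

def evaluate(event_type, path, rules):
    """Return the actions fired by a single storage event."""
    return [r["action"] for r in rules
            if r["event"] == event_type and fnmatch.fnmatch(path, r["pattern"])]

fired = evaluate("created", "/data/scan01.h5", rules)
```

The "home automation" analogy is apt: like a smart-home rule ("if motion after 10pm, turn on the light"), each rule couples a trigger on a monitored storage system to an action, so pipelines emerge from many small declarative rules rather than one monolithic script.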


cluster computing and the grid | 2016

An Automated Tool Profiling Service for the Cloud

Ryan Chard; Kyle Chard; Bryan Ng; Kris Bubendorfer; Alex Rodriguez; Ravi K. Madduri; Ian T. Foster

Cloud providers offer a diverse set of instance types with varying resource capacities, designed to meet the needs of a broad range of user requirements. While this flexibility is a major benefit of the cloud computing model, it also creates challenges when selecting the most suitable instance type for a given application. Sub-optimal instance selection can result in poor performance and/or increased cost, with significant impacts when applications are executed repeatedly. Yet selecting an optimal instance type is challenging, as each instance type can be configured differently, application performance is dependent on input data and configuration, and instance types and applications are frequently updated. We present a service that supports automatic profiling of application performance on different instance types to create rich application profiles that can be used for comparison, provisioning, and scheduling. This service can dynamically provision cloud instances, automatically deploy and contextualize applications, transfer input datasets, monitor execution performance, and create a composite profile with fine grained resource usage information. We use real usage data from four production genomics gateways and estimate the use of profiles in autonomic provisioning systems can decrease execution time by up to 15.7% and cost by up to 86.6%.
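One way profiles like these feed into provisioning is sketched below: given per-instance-type runtime and price, pick the cheapest or fastest type for an application. The instance types, timings, and prices are invented for illustration; the service's actual profiles include fine-grained resource usage beyond runtime.

```python
# Hypothetical application profile: instance_type -> (runtime_hours, $/hour).
profiles = {
    "c4.xlarge": (2.0, 0.199),
    "r4.xlarge": (1.5, 0.266),
    "m4.xlarge": (1.8, 0.200),
}

def cheapest(profiles):
    """Instance type minimizing total cost (runtime * hourly price)."""
    return min(profiles, key=lambda t: profiles[t][0] * profiles[t][1])

def fastest(profiles):
    """Instance type minimizing runtime."""
    return min(profiles, key=lambda t: profiles[t][0])

best_cost = cheapest(profiles)
best_time = fastest(profiles)
```

Note the two objectives disagree here: the fastest type is not the cheapest, which is why rich profiles matter when applications are executed repeatedly.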


international conference on e-science | 2012

Experiences in the design and implementation of a Social Cloud for Volunteer Computing

Ryan Chard; Kris Bubendorfer; Kyle Chard

Volunteer computing provides an alternative computing paradigm for establishing the resources required to support large scale scientific computing. The model is particularly well suited for projects that have high popularity and little available computing infrastructure. The premise of volunteer computing platforms is the contribution of computing resources by individuals for little to no gain. It is therefore difficult to attract and retain contributors to projects. The Social Cloud for Volunteer Computing aims to exploit social engineering principles and the ubiquity of social networks to increase the outreach of volunteer computing, by providing an integrated volunteer computing application and creating gamification algorithms based on social principles to encourage contribution. In this paper we present the development of a production SoCVC, detailing the architecture, implementation and performance of the SoCVC Facebook application and show that the approach proposed could have a high impact on volunteer computing projects.


international conference on distributed computing systems | 2017

Software Defined Cyberinfrastructure

Ian T. Foster; Ben Blaiszik; Kyle Chard; Ryan Chard

Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
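The if-trigger-then-action (IFTA) idea can be sketched as a rule factory: a rule pairs a trigger and a condition with an action, and fires when a matching data event arrives, analogous to a flow rule deployed to an SDN switch. The rule syntax and event shape below are assumptions for illustration, not the paper's notation.

```python
def make_rule(trigger, condition, action):
    """Build an IFTA rule: if `trigger` event and `condition`, then `action`."""
    def rule(event):
        if event["type"] == trigger and condition(event):
            return action(event)
        return None  # rule does not fire for this event
    return rule

# If a file is modified under /instrument, then index it.
index_rule = make_rule(
    trigger="modified",
    condition=lambda e: e["path"].startswith("/instrument/"),
    action=lambda e: f"index {e['path']}",
)

result = index_rule({"type": "modified", "path": "/instrument/run42.dat"})
```

Deploying many such small rules to the storage systems nearest the data is what makes the infrastructure "responsive": ingest, characterization, indexing, and sharing happen automatically as data appear, rather than through manual pipelines.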

Collaboration


Dive into Ryan Chard's collaborations.

Top Co-Authors

Kyle Chard, Argonne National Laboratory
Kris Bubendorfer, Victoria University of Wellington
Ian T. Foster, Argonne National Laboratory
Ravi K. Madduri, Argonne National Laboratory
Alex Rodriguez, Argonne National Laboratory
Bryan Ng, Victoria University of Wellington
Thomas D. Uram, Argonne National Laboratory
Katrin Heitmann, Argonne National Laboratory