Publications


Featured research published by Andrew J. Younge.


New Generation Computing | 2010

Cloud Computing: a Perspective Study

Lizhe Wang; Gregor von Laszewski; Andrew J. Younge; Xi He; M. Kunze; Jie Tao; Cheng Fu

Cloud computing is emerging as a new computing paradigm that aims to provide reliable, customized, QoS-guaranteed dynamic computing environments for end users. In this paper, we study the Cloud computing paradigm from various aspects, such as definitions, distinct features, and enabling technologies. The paper offers an introductory review of Cloud computing and surveys the state of the art of Cloud computing technologies.


international conference on cluster computing | 2009

Power-aware scheduling of virtual machines in DVFS-enabled clusters

Gregor von Laszewski; Lizhe Wang; Andrew J. Younge; Xi He

With the advent of Cloud computing, large-scale virtualized compute and data centers are becoming common in the computing industry. These distributed systems leverage commodity server hardware in mass quantity, similar in theory to many of the fastest supercomputers in existence today. However, these systems can consume a city's worth of power just to run idle, and they require equally massive cooling systems to keep the servers within normal operating temperatures. This produces CO2 emissions and contributes significantly to the growing environmental issue of global warming. Green computing, a new trend for high-end computing, attempts to alleviate this problem by delivering both high performance and reduced power consumption, effectively maximizing total system efficiency. This paper focuses on scheduling virtual machines in a compute cluster to reduce power consumption via Dynamic Voltage and Frequency Scaling (DVFS). Specifically, we present the design and implementation of an efficient scheduling algorithm that allocates virtual machines in a DVFS-enabled cluster by dynamically scaling the supplied voltages. The algorithm is studied via simulation and implementation in a multi-core cluster. Test results and performance discussion justify the design and implementation of the scheduling algorithm.
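The abstract describes DVFS-based power-aware VM scheduling only at a high level. As a rough illustration of the general idea (not the authors' actual algorithm), the sketch below places each VM on the host and frequency level with the lowest estimated dynamic power, using the standard approximation that CMOS dynamic power scales with V²f; all class names, capacities, and voltage levels are invented for the example.

```python
# Illustrative sketch of power-aware VM placement in a DVFS-enabled cluster.
# This is NOT the paper's algorithm; names and the power model are assumptions.
# Dynamic CMOS power is roughly proportional to V^2 * f, so the heuristic
# picks, for each VM, the (host, frequency level) pair that can absorb the
# VM's CPU demand at the lowest estimated power.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    levels: list            # (frequency in GHz, core voltage in V), low to high
    capacity: float = 1.0   # normalized CPU capacity at top frequency
    load: float = 0.0       # normalized load already placed

    def power(self, freq, volt):
        # relative dynamic power ~ V^2 * f (constants dropped)
        return volt * volt * freq

def place_vm(hosts, demand):
    """Place one VM (normalized CPU demand in [0,1]) on the host/level
    combination with the smallest power estimate that still fits."""
    best = None
    for h in hosts:
        for freq, volt in h.levels:
            top_freq = h.levels[-1][0]
            effective = h.capacity * freq / top_freq
            if h.load + demand <= effective:
                cost = h.power(freq, volt)
                if best is None or cost < best[0]:
                    best = (cost, h)
                break  # lowest feasible (cheapest) level found for this host
    if best is None:
        raise RuntimeError("no host can fit this VM")
    best[1].load += demand
    return best[1]

hosts = [Host("node1", [(1.2, 0.9), (1.8, 1.0), (2.4, 1.2)]),
         Host("node2", [(1.2, 0.9), (1.8, 1.0), (2.4, 1.2)])]
for vm_demand in [0.3, 0.4, 0.2]:
    h = place_vm(hosts, vm_demand)
    print(f"VM({vm_demand}) -> {h.name} (load now {h.load:.1f})")
```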


international conference on green computing | 2010

Efficient resource management for Cloud computing environments

Andrew J. Younge; Gregor von Laszewski; Lizhe Wang; Sonia Lopez-Alarcon; Warren Carithers

The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has shortcomings, such as the relatively high operating cost of both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper, a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency can be vastly improved in a data-center-based Cloud with minimal performance overhead.
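One of the techniques this framework combines, consolidating VMs onto as few hosts as possible so that idle machines can be powered down, can be sketched with a simple first-fit-decreasing packing. The function below is illustrative only and not the paper's implementation; capacities and demands are normalized, made-up values.

```python
# Hedged sketch of one green-computing technique mentioned above:
# consolidate VMs onto as few hosts as possible (first-fit decreasing)
# so that idle hosts can be suspended. Thresholds are illustrative.

def consolidate(vms, host_capacity=1.0):
    """vms: list of normalized CPU demands. Returns a list of hosts,
    each represented as the list of demands packed onto it."""
    hosts = []
    for demand in sorted(vms, reverse=True):   # largest VMs first
        for h in hosts:
            if sum(h) + demand <= host_capacity:
                h.append(demand)
                break
        else:
            hosts.append([demand])             # power on a new host
    return hosts

packed = consolidate([0.5, 0.2, 0.4, 0.3, 0.1])
print(f"{len(packed)} hosts active:", packed)
```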


international conference on cloud computing | 2011

Analysis of Virtualization Technologies for High Performance Computing Environments

Andrew J. Younge; Robert Henschel; James T. Brown; Gregor von Laszewski; Judy Qiu; Geoffrey C. Fox

As Cloud computing emerges as a dominant paradigm in distributed systems, it is important to fully understand the underlying technologies that make Clouds possible. One technology, and perhaps the most important, is virtualization. Virtualization, through the use of hypervisors, has recently become widely used and well understood by many. However, there is a wide range of hypervisors, each with its own advantages and disadvantages. This paper provides an in-depth analysis of some of today's commonly accepted virtualization technologies, from feature comparison to performance analysis, focusing on their applicability to High Performance Computing environments using FutureGrid resources. The results indicate that virtualization sometimes introduces slight performance impacts depending on the hypervisor type; however, the benefits of such technologies are profound, and not all virtualization technologies are equal. From our experience, the KVM hypervisor is the optimal choice for supporting HPC applications within a Cloud infrastructure.


international symposium on pervasive systems, algorithms, and networks | 2009

Towards Thermal Aware Workload Scheduling in a Data Center

Lizhe Wang; Gregor von Laszewski; Jai Dayal; Xi He; Andrew J. Younge; Thomas R. Furlani

High-density blade servers are a popular technology for data centers; however, the heat dissipation density of data centers increases exponentially. There is strong evidence that high temperatures in such data centers lead to higher hardware failure rates and thus increased maintenance costs. Improperly designed or operated data centers may either suffer from overheated servers and potential system failures, or from overcooled systems, incurring extraneous utility costs. Minimizing the cost of operation (utilities, maintenance, device upgrade and replacement) of data centers is one of the key issues involved in both optimizing computing resources and maximizing business outcome. This paper proposes an analytical model that describes data center resources with heat transfer properties and workloads with thermal features. A thermal-aware task scheduling algorithm is then presented that aims to reduce power consumption and temperatures in a data center. A simulation study is carried out to evaluate the performance of the algorithm. Simulation results show that our algorithm can significantly reduce temperatures in data centers at the cost of a tolerable decline in performance.
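As a toy illustration of thermal-aware scheduling (not the paper's analytical model), the sketch below sends each task to the node whose predicted temperature after accepting it is lowest, under an assumed linear load-to-temperature model; the ambient temperature and heat coefficient are invented constants.

```python
# Illustrative thermal-aware scheduling sketch (not the paper's model):
# each incoming task goes to the node whose predicted temperature after
# accepting the task is lowest. The linear heat model is an assumption.

AMBIENT = 22.0        # inlet air temperature in degrees C (assumed)
HEAT_PER_LOAD = 30.0  # temperature rise at 100% utilization (assumed)

def predicted_temp(load):
    return AMBIENT + HEAT_PER_LOAD * load

def schedule(tasks, n_nodes=4):
    loads = [0.0] * n_nodes
    assignment = []
    for t in tasks:  # t = normalized CPU demand of the task
        node = min(range(n_nodes), key=lambda i: predicted_temp(loads[i] + t))
        loads[node] += t
        assignment.append(node)
    return assignment, [round(predicted_temp(l), 1) for l in loads]

plan, temps = schedule([0.3, 0.3, 0.2, 0.4, 0.1])
print("assignment:", plan, "node temps (C):", temps)
```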


grid computing environments | 2010

Design of the FutureGrid experiment management framework

Gregor von Laszewski; Geoffrey C. Fox; Fugang Wang; Andrew J. Younge; Archit Kulshrestha; Gregory G. Pike; Warren Smith; Jens Vöckler; Renato J. O. Figueiredo; José A. B. Fortes; Kate Keahey

FutureGrid provides novel computing capabilities that enable reproducible experiments while simultaneously supporting dynamic provisioning. This paper describes the FutureGrid experiment management framework for creating and executing large-scale scientific experiments for researchers around the globe. The experiments are performed by the various users of FutureGrid, ranging from administrators to software developers and end users. The experiment management framework will consist of software tools that record user and system actions to generate a reproducible set of tasks and resource configurations. Additionally, the experiment management framework can be used to share not only the experiment setup, but also performance information for the specific instantiation of the experiment. This makes it possible to compare a variety of experiment setups and analyze the impact that Grid and Cloud software stacks have.
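The core idea here, recording user and system actions so an experiment can be reproduced later, can be sketched in a few lines. The logging format and API below are assumptions for illustration, not FutureGrid's actual tooling.

```python
# A minimal sketch of the reproducibility idea described above: record each
# action (here, shell commands) with timing and exit status so the sequence
# can be replayed or compared later. File format and API are assumptions.

import json, subprocess, time

class ExperimentRecorder:
    def __init__(self, logfile):
        self.logfile = logfile
        self.actions = []

    def run(self, cmd):
        """Execute a shell command and record it with its exit status."""
        start = time.time()
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        self.actions.append({"cmd": cmd, "returncode": result.returncode,
                             "started": start, "duration": time.time() - start})
        return result

    def save(self):
        with open(self.logfile, "w") as f:
            json.dump(self.actions, f, indent=2)

rec = ExperimentRecorder("experiment.json")
rec.run("echo provisioning node")
rec.save()
```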


international conference on cloud computing | 2014

GPU Passthrough Performance: A Comparison of KVM, Xen, VMWare ESXi, and LXC for CUDA and OpenCL Applications

John Paul Walters; Andrew J. Younge; Dong In Kang; Ke Thia Yao; Mikyung Kang; Stephen P. Crago; Geoffrey C. Fox

As more scientific workloads move into the cloud, the need for high-performance accelerators increases. Accelerators such as GPUs offer improvements in both performance and power efficiency over traditional multi-core processors; however, their use in the cloud has been limited. Today, several common hypervisors support GPU passthrough, but their performance has not been systematically characterized. In this paper we show that low-overhead GPU passthrough is achievable across four major hypervisors and two processor microarchitectures. We compare the performance of two generations of NVIDIA GPUs within the Xen, VMWare ESXi, and KVM hypervisors, and we also compare the performance to that of Linux Containers (LXC). We show that GPU passthrough to KVM achieves 98-100% of the base system's performance across two architectures, while Xen and VMWare ESXi achieve 96-99%. In addition, we describe several valuable lessons learned through our analysis and share the advantages and disadvantages of each hypervisor/GPU passthrough solution.
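On Linux hosts, whether a GPU can be passed through cleanly depends on its IOMMU grouping. The diagnostic sketch below, which is not part of the paper's benchmark suite, simply lists IOMMU groups and their PCI devices from sysfs; these paths exist only when VT-d or AMD-Vi is enabled on the host.

```python
# GPU passthrough requires the device to sit in a "clean" IOMMU group.
# This Linux-only sketch lists IOMMU groups and their PCI devices via
# sysfs. It prints nothing if the IOMMU is disabled.

import glob

def iommu_groups():
    groups = {}
    for dev in glob.glob("/sys/kernel/iommu_groups/*/devices/*"):
        parts = dev.split("/")
        group, pci_addr = parts[4], parts[6]
        groups.setdefault(int(group), []).append(pci_addr)
    return groups

for group, devices in sorted(iommu_groups().items()):
    print(f"IOMMU group {group}: {' '.join(devices)}")
```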


ieee international conference on cloud computing technology and science | 2011

FutureGrid Image Repository: A Generic Catalog and Storage System for Heterogeneous Virtual Machine Images

Javier Diaz; Gregor von Laszewski; Fugang Wang; Andrew J. Younge; Geoffrey C. Fox

FutureGrid (FG) is an experimental, high-performance test bed that supports HPC, cloud, and grid computing experiments for both application and computer scientists. FutureGrid uses virtualization technology to support a wide range of operating systems, serving as a test bed for various cloud computing Infrastructure-as-a-Service (IaaS) frameworks. Therefore, efficient management of a variety of virtual machine images becomes a key issue. Current cloud frameworks do not provide a way to manage images for different IaaS frameworks. They typically provide their own image repositories, but in general they do not allow us to store the metadata needed to handle images from other IaaS frameworks. We present a generic catalog and image repository to store images of any type. Our image repository has a convenient interface that distinguishes image types. Therefore, it is useful not only for FutureGrid, but also for any application that needs to manage images.
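A minimal sketch of such a framework-agnostic catalog might look like the following, where every image carries free-form metadata so that images for different IaaS frameworks can share one store. All field names here are assumptions rather than the repository's real schema.

```python
# Hedged sketch of a generic, framework-agnostic image catalog: images for
# different IaaS frameworks live in one store, distinguished by metadata.

from dataclasses import dataclass

@dataclass
class Image:
    image_id: str
    location: str   # where the image file actually lives
    metadata: dict  # e.g. {"os": "ubuntu", "iaas": "openstack", ...}

class ImageRepository:
    def __init__(self):
        self._images = {}

    def register(self, image):
        self._images[image.image_id] = image

    def query(self, **criteria):
        """Return images whose metadata matches all given key/value pairs."""
        return [img for img in self._images.values()
                if all(img.metadata.get(k) == v for k, v in criteria.items())]

repo = ImageRepository()
repo.register(Image("img-1", "/store/ubuntu-kvm.qcow2",
                    {"os": "ubuntu", "iaas": "openstack", "format": "qcow2"}))
repo.register(Image("img-2", "/store/centos-xen.img",
                    {"os": "centos", "iaas": "nimbus", "format": "raw"}))
print([i.image_id for i in repo.query(iaas="openstack")])
```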


Genome Research | 2016

High mutational rates of large-scale duplication and deletion in Daphnia pulex

Nathan Keith; Abraham E. Tucker; Craig Jackson; Way Sung; José Ignacio Lucas Lledó; Daniel R. Schrider; Sarah Schaack; Jeffry L. Dudycha; Matthew S. Ackerman; Andrew J. Younge; Joseph R. Shaw; Michael Lynch

Knowledge of the genome-wide rate and spectrum of mutations is necessary to understand the origin of disease and the genetic variation driving all evolutionary processes. Here, we provide a genome-wide analysis of the rate and spectrum of mutations obtained in two Daphnia pulex genotypes via separate mutation-accumulation (MA) experiments. Unlike most MA studies that utilize haploid, homozygous, or self-fertilizing lines, D. pulex can be propagated ameiotically while maintaining a naturally heterozygous, diploid genome, allowing the capture of the full spectrum of genomic changes that arise in a heterozygous state. While base-substitution mutation rates are similar to those in other multicellular eukaryotes (about 4 × 10⁻⁹ per site per generation), we find that the rates of large-scale (>100 kb) de novo copy-number variants (CNVs) are significantly elevated relative to those seen in previous MA studies. The heterozygosity maintained in this experiment allowed for estimates of gene-conversion processes. While most of the conversion tract lengths we report are similar to those generated by meiotic processes, we also find larger tract lengths that are indicative of mitotic processes. Comparison of MA lines to natural isolates reveals that a majority of large-scale CNVs in natural populations are removed by purifying selection. The mutations observed here share similarities with disease-causing, complex, large-scale CNVs, thereby demonstrating that MA studies in D. pulex serve as a system for studying the processes leading to such alterations.
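A quick sanity check of the quoted rate: multiplying the per-site rate by a genome size gives the expected number of new substitutions per genome per generation. The genome size used below is an assumption (the D. pulex assembly is roughly 200 Mb); the per-site rate comes from the abstract.

```python
# Back-of-the-envelope check of the base-substitution figure quoted above.
rate_per_site = 4e-9   # substitutions per site per generation (from abstract)
genome_size = 200e6    # sites; assumed approximate D. pulex genome size

expected = rate_per_site * genome_size
print(f"~{expected:.2f} new base substitutions per haploid genome per generation")
# -> ~0.80, i.e. roughly one new substitution per genome per generation
```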


international parallel and distributed processing symposium | 2014

Evaluating GPU Passthrough in Xen for High Performance Cloud Computing

Andrew J. Younge; John Paul Walters; Stephen P. Crago; Geoffrey C. Fox

With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for its technical computing needs. This is due to the relative scalability, ease of use, and advanced user environment customization that clouds provide, as well as the many novel computing paradigms available for data-intensive applications. However, there is concern about the performance gap between IaaS and typical high performance computing (HPC) resources, which could limit the applicability of IaaS for many potential scientific users. Most recently, general-purpose graphics processing units (GPGPUs or GPUs) have become commonplace within high performance computing. We look to bridge the gap between supercomputing and clouds by providing GPU-enabled virtual machines (VMs) and investigating their feasibility for advanced scientific computation. Specifically, the Xen hypervisor is utilized to leverage specialized hardware-assisted I/O virtualization and PCI passthrough in order to provide advanced HPC-centric NVIDIA GPUs directly to guest VMs. This methodology is evaluated by measuring the performance of two NVIDIA Tesla GPUs within Xen VMs and comparing it to bare-metal hardware. Results show that PCI passthrough of GPUs within virtual machines is a viable use case for many scientific computing workflows and could help support high performance cloud infrastructure in the near future.
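The comparison to bare metal reduces to a simple normalization: relative performance is the bare-metal runtime divided by the virtualized runtime. The benchmark names and timings in this sketch are hypothetical, not results from the paper.

```python
# Sketch of the normalization used when comparing virtualized GPU runs to
# bare metal. All numbers below are made up for illustration.

bare_metal = {"stencil": 10.0, "matmul": 12.5}   # runtime in seconds
xen_vm     = {"stencil": 10.3, "matmul": 12.9}   # runtime in seconds

for bench, t_bare in bare_metal.items():
    rel = 100.0 * t_bare / xen_vm[bench]
    print(f"{bench}: {rel:.1f}% of bare-metal performance")
```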

Collaboration


Dive into Andrew J. Younge's collaborations.

Top Co-Authors

Gregor von Laszewski, Indiana University Bloomington
Geoffrey C. Fox, Indiana University Bloomington
Fugang Wang, Indiana University Bloomington
Xi He, Rochester Institute of Technology
Lizhe Wang, China University of Geosciences
John Paul Walters, University of Southern California
Judy Qiu, Indiana University Bloomington
Stephen P. Crago, University of Southern California
Clayton A. Davis, Indiana University Bloomington
Emilio Ferrara, University of Southern California