Publication


Featured research published by Marty Humphrey.


IEEE International Conference on High Performance Computing, Data and Analytics | 2011

Auto-scaling to minimize cost and meet application deadlines in cloud workflows

Ming Mao; Marty Humphrey

A goal in cloud computing is to allocate (and thus pay for) only those cloud resources that are truly needed. To date, cloud practitioners have pursued schedule-based (e.g., time-of-day) and rule-based mechanisms to attempt to automate this matching between computing requirements and computing resources. However, most of these “auto-scaling” mechanisms only support simple resource utilization indicators and do not specifically consider both user performance requirements and budget concerns. In this paper, we present an approach whereby the basic computing elements are virtual machines (VMs) of various sizes/costs, jobs are specified as workflows, users specify performance requirements by assigning (soft) deadlines to jobs, and the goal is to ensure all jobs are finished within their deadlines at minimum financial cost. We accomplish our goal by dynamically allocating/deallocating VMs and scheduling tasks on the most cost-efficient instances. We evaluate our approach in four representative cloud workload patterns and show cost savings from 9.8% to 40.4% compared to other approaches.
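
To make the instance-selection idea concrete, here is a minimal Python sketch (not the paper's algorithm) that picks the cheapest VM type able to finish a task before its deadline; the VM catalog, relative speeds, and whole-hour billing model are all illustrative assumptions.

import math

# Hypothetical VM catalog: hourly price in USD and speed relative to "small".
VM_TYPES = {"small": (0.10, 1.0), "medium": (0.20, 2.0), "large": (0.40, 4.0)}

def cheapest_vm_meeting_deadline(base_runtime_hours, deadline_hours):
    """Pick the cheapest VM type that can finish a task before its deadline.

    base_runtime_hours: runtime on the slowest ("small") type.
    deadline_hours: time left until the task's (soft) deadline.
    """
    best = None
    for name, (price, speed) in VM_TYPES.items():
        runtime = base_runtime_hours / speed
        if runtime > deadline_hours:
            continue  # This type cannot meet the deadline.
        cost = math.ceil(runtime) * price  # Assume whole instance-hour billing.
        if best is None or cost < best[1]:
            best = (name, cost, runtime)
    return best  # None if no single instance type can meet the deadline.

# Example: a task needing 3 hours on "small" with only 2 hours to its deadline.
print(cheapest_vm_meeting_deadline(3.0, 2.0))  # ('medium', 0.4, 1.5)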


International Conference on Cloud Computing | 2012

A Performance Study on the VM Startup Time in the Cloud

Ming Mao; Marty Humphrey

One of the many advantages of the cloud is its elasticity: the ability to dynamically acquire or release computing resources in response to demand. However, this elasticity is only meaningful to cloud users when the acquired Virtual Machines (VMs) can be provisioned in time and are ready to use within the user's expectations. Long, unexpected VM startup times can result in resource under-provisioning, which will inevitably hurt application performance. A better understanding of the VM startup time is therefore needed to help cloud users plan ahead and make in-time resource provisioning decisions. In this paper, we study the startup time of cloud VMs across three real-world cloud providers -- Amazon EC2, Windows Azure and Rackspace. We analyze the relationship between the VM startup time and different factors, such as time of day, OS image size, instance type, data center location and the number of instances acquired at the same time. We also study the VM startup time of spot instances in EC2, which show longer waiting times and greater variance compared to on-demand instances.
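
A minimal sketch of how such a startup-time measurement can be taken: request an instance, poll until it is usable, and record the elapsed time. The request_vm and is_reachable callables are hypothetical placeholders for a real provider SDK; the paper's actual instrumentation is not reproduced here.

import time

def measure_startup_time(request_vm, is_reachable, poll_interval=5.0, timeout=1800.0):
    """Time a VM from the acquisition request until it is usable.

    request_vm()         -- placeholder: submits the provisioning request, returns a handle.
    is_reachable(handle) -- placeholder: True once the VM accepts e.g. SSH connections.
    """
    start = time.monotonic()
    handle = request_vm()
    while time.monotonic() - start < timeout:
        if is_reachable(handle):
            return time.monotonic() - start  # Startup time in seconds.
        time.sleep(poll_interval)
    raise TimeoutError("VM did not become reachable within the timeout")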


Grid Computing | 2010

Cloud auto-scaling with deadline and budget constraints

Ming Mao; Jie Li; Marty Humphrey

The cloud has become an attractive computing platform that offers on-demand computing power and storage capacity. Its dynamic scalability enables users to quickly scale the underlying infrastructure up and down in response to business volume, performance requirements and other dynamic behaviors. However, challenges arise when considering non-deterministic instance acquisition times, multiple VM instance types, unique cloud billing models and user budget constraints. Planning enough computing resources for the desired performance at low cost, in a way that also adapts automatically to workload changes, is not a trivial problem. In this paper, we present a cloud auto-scaling mechanism that automatically scales computing instances based on workload information and performance requirements. Our mechanism schedules VM instance startup and shut-down activities. It enables cloud applications to finish submitted jobs within their deadlines by controlling the number of underlying instances, and reduces user cost by choosing appropriate instance types. We have implemented our mechanism on the Windows Azure platform and evaluated it using both simulations and a real scientific cloud application. Results show that our cloud auto-scaling mechanism can meet user-specified performance goals at reduced cost.
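
The control decision the abstract describes, choosing how many instances to run so queued work finishes by its deadline, might be sketched roughly as follows; the work model and per-instance throughput here are made-up parameters, not the paper's mechanism.

import math

def instances_needed(pending_job_hours, hours_to_deadline, jobs_per_instance_per_hour=1.0):
    """Estimate how many instances must run so pending work finishes by the deadline.

    pending_job_hours: total remaining work, in instance-hours.
    hours_to_deadline: time left before the earliest deadline.
    """
    if hours_to_deadline <= 0:
        return math.inf  # Already late; scale out as far as the budget allows.
    required_rate = pending_job_hours / hours_to_deadline
    return math.ceil(required_rate / jobs_per_instance_per_hour)

def scaling_decision(current_instances, pending_job_hours, hours_to_deadline):
    """Return how many instances to start (positive) or stop (negative)."""
    target = instances_needed(pending_job_hours, hours_to_deadline)
    return target - current_instances

# Example: 12 instance-hours of work, 4 hours to the deadline, 2 instances running.
print(scaling_decision(2, 12.0, 4.0))  # 1 (scale out by one instance)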


IEEE Computer | 1999

Wide area computing: resource sharing on a large scale

Andrew S. Grimshaw; Adam J. Ferrari; Frederick C. Knabe; Marty Humphrey

Consider almost any computing resource today, whether hardware, software, or data, and it will invariably be networked. Computing over wide area networks has been largely ad hoc, but as needs increase, piecemeal solutions no longer make sense. The authors set out to design and build a wide-area operating system that would allow multiple organizations with diverse platforms to share and combine their resources. This system, Legion, is a network-level operating system designed from scratch to target wide-area computing demands.


High Performance Distributed Computing | 2010

Early observations on the performance of Windows Azure

Zach Hill; Jie Li; Ming Mao; Arkaitz Ruiz-Alvarez; Marty Humphrey

A significant open issue in cloud computing is performance. Few, if any, cloud providers or technologies offer quantitative performance guarantees. Regardless of the potential advantages of the cloud in comparison to enterprise-deployed applications, cloud infrastructures may ultimately fail if deployed applications cannot predictably meet behavioral requirements. In this paper, we present the results of comprehensive performance experiments we conducted on Windows Azure from October 2009 to February 2010. In general, we have observed good performance of the Windows Azure mechanisms, although the average 10-minute VM startup time and the worst-case 2x slowdown for SQL Azure in certain situations (relative to commodity hardware within the enterprise) must be accounted for in application design. In addition to a detailed performance evaluation of Windows Azure, we provide recommendations for potential users of Windows Azure based on these early observations. Although the discussion and analysis are tailored to scientific applications, the results are broadly applicable to the range of existing and future applications running in Windows Azure.
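
Measurements like these boil down to timing an operation repeatedly and summarizing the latency distribution. A generic timing harness along those lines (not the instrumentation used in the study) could look like this:

import statistics
import time

def benchmark(operation, runs=30):
    """Time an operation repeatedly and summarize its latency distribution."""
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        operation()  # e.g. a storage upload, a SQL query, or a VM start request.
        samples.append(time.monotonic() - start)
    return {
        "mean_s": statistics.mean(samples),
        "p95_s": sorted(samples)[int(0.95 * (len(samples) - 1))],
        "worst_s": max(samples),
    }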


Lawrence Berkeley National Laboratory | 2005

Security for Grids

Marty Humphrey; Mary R. Thompson; Keith Jackson

Securing a Grid environment presents a distinctive set of challenges. This work groups the activities that need to be secured into four categories: naming and authentication; secure communication; trust, policy, and authorization; and enforcement of access control. It examines the current state of the art in securing these activities and introduces new technologies that promise to meet the security requirements of Grids more completely.


Compilers, Architecture, and Synthesis for Embedded Systems | 2002

Control-theoretic dynamic frequency and voltage scaling for multimedia workloads

Zhijian Lu; Jason J. Hein; Marty Humphrey; Mircea R. Stan; John Lach; Kevin Skadron

This paper describes a formal feedback-control algorithm for dynamic voltage/frequency scaling (DVS) in a portable multimedia system to save power while maintaining a desired playback rate. Our algorithm is similar in complexity to the previously proposed change-point detection algorithm [19] but does a better job of maintaining stable throughput and does not depend on the assumption of an exponential distribution of the frame decoding rate. For approximately the same energy savings as reported by [19], our controller is able to keep the average frame delay within 10% of the target more than 90% of the time, whereas the change-point detection algorithm kept the average frame delay within 10% of the target only 70% or less of the time executing the same workload.
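
As a rough illustration of the feedback idea (not the paper's controller, and with made-up gains and frequency limits), a discrete PI controller can raise the requested CPU frequency when frames are decoded later than the target delay and lower it when there is slack:

def make_dvs_controller(target_delay_s, f_min_hz, f_max_hz, kp=5e9, ki=1e9):
    """Return a PI-style controller mapping observed frame delay to a CPU frequency.

    target_delay_s: desired average per-frame decoding delay.
    kp, ki: illustrative proportional/integral gains (Hz per second of delay error).
    """
    state = {"integral": 0.0}
    baseline_hz = (f_min_hz + f_max_hz) / 2.0

    def update(observed_delay_s):
        error = observed_delay_s - target_delay_s   # Positive when frames are late.
        state["integral"] += error
        freq = baseline_hz + kp * error + ki * state["integral"]
        return min(f_max_hz, max(f_min_hz, freq))   # Clamp to the legal frequency range.

    return update

# Example: a 33 ms per-frame target; frames arriving 5 ms late raise the frequency.
controller = make_dvs_controller(target_delay_s=0.033, f_min_hz=3e8, f_max_hz=1e9)
print(controller(0.038))  # 680000000.0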


High Performance Distributed Computing | 2005

State and events for Web services: a comparison of five WS-resource framework and WS-notification implementations

Marty Humphrey; Glenn S. Wasson; K. Jackson; J. Boverhof; M. Rodriguez; Jarek Gawor; J. Bester; S. Lang; Ian T. Foster; Sam Meder; S. Pickles; M. McKeown

The WS-Resource Framework defines conventions for managing state in distributed systems based on Web services, and WS-Notification defines topic-based publish/subscribe mechanisms. We analyze five independent and quite different implementations of these specifications from the perspectives of architecture, functionality, standards compliance, performance, and interoperability. We identify both commonalities among the different systems (e.g., similar dispatching and SOAP processing mechanisms) and differences (e.g., security, programming models, and performance). Our results provide insights into effective implementation approaches. They may also give application developers, system architects, and deployers guidance in identifying the right implementation for their requirements, in determining how best to use that implementation, and in setting expectations for performance and interoperability.
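
To make the topic-based publish/subscribe idea concrete, here is a generic in-process sketch; it is not the wire-level WS-Notification protocol and deliberately omits SOAP, subscription management, and security.

from collections import defaultdict

class TopicBroker:
    """Minimal topic-based publish/subscribe broker (in-process, no SOAP/WS layer)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)  # Deliver the notification to each subscriber.

# Example: a subscriber on a "resource/state" topic receiving a state-change event.
broker = TopicBroker()
broker.subscribe("resource/state", lambda msg: print("notified:", msg))
broker.publish("resource/state", {"resource": "job-42", "state": "Done"})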


High Performance Distributed Computing | 2001

Security implications of typical Grid Computing usage scenarios

Marty Humphrey; Mary R. Thompson

Grid Computing consists of a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.


Conference on High Performance Computing (Supercomputing) | 2001

LegionFS: A Secure and Scalable File System Supporting Cross-Domain High-Performance Applications

Brian S. White; Michael Pittman Walker; Marty Humphrey; Andrew S. Grimshaw

Realizing that current file systems cannot cope with the diverse requirements of wide-area collaborations, researchers have developed data access facilities to meet their needs. Recent work has focused on comprehensive data access architectures. In order to fulfill the evolving requirements in this environment, we suggest a more fully integrated architecture built upon the fundamental tenets of naming, security, scalability, extensibility, and adaptability. These form the underpinning of the Legion File System (LegionFS). This paper motivates the need for these requirements and presents benchmarks that highlight the scalability of LegionFS. The aggregate throughput of LegionFS grows linearly with the network, yielding an aggregate read bandwidth of 193.8 MB/s on a 100 Mbps Ethernet backplane with 50 simultaneous readers. The serverless architecture of LegionFS is shown to benefit important scientific applications, such as those accessing the Protein Data Bank, within both local- and wide-area environments.

Collaboration


Dive into Marty Humphrey's collaborations.

Top Co-Authors


In Kee Kim

University of Virginia


Deborah A. Agarwal

Lawrence Berkeley National Laboratory


Keith Jackson

Lawrence Berkeley National Laboratory
