Publication


Featured research published by James Broberg.


Ministry of Education Award and Subsidy 102 | 2011

Cloud Computing Principles and Paradigms

Rajkumar Buyya; James Broberg; Andrzej M. Goscinski

The primary purpose of this book is to capture the state of the art in Cloud Computing technologies and applications. The book also aims to identify potential research directions and technologies that will facilitate the creation of a global marketplace of cloud computing services supporting scientific, industrial, business, and consumer applications. We expect the book to serve as a reference for a larger audience, such as systems architects, practitioners, developers, new researchers, and graduate-level students. This area of research is relatively recent and, as such, has no existing reference book that addresses it. This book will be a timely contribution to a field that is gaining considerable research interest and momentum, and is expected to be of increasing interest to commercial developers. The book is targeted at professional computer science developers and graduate students, especially at the Masters level. As Cloud Computing is recognized as one of the top five emerging technologies that will have a major impact on the quality of science and society over the next 20 years, knowledge of it will help position our readers at the forefront of the field.


Journal of Grid Computing | 2008

Market-oriented Grids and Utility Computing: The State-of-the-art and Future Directions

James Broberg; Srikumar Venugopal; Rajkumar Buyya

Traditional resource management techniques (resource allocation, admission control and scheduling) have been found to be inadequate for many shared Grid and distributed systems that consist of autonomous and dynamic distributed resources contributed by multiple organisations. They provide no incentive for users to request resources judiciously and appropriately, and do not accurately capture the true value, importance and deadline (the utility) of a user’s job. Furthermore, they provide no compensation for resource providers to contribute their computing resources to shared Grids, as traditional approaches have a user-centric focus on maximising throughput and minimising waiting time rather than maximising a provider’s own benefit. Consequently, researchers and practitioners have been examining the appropriateness of ‘market-inspired’ resource management techniques to address these limitations. Such techniques aim to smooth out access patterns and reduce the chance of transient overload by providing a framework for users to be truthful about their resource requirements and job deadlines, and offering incentives for service providers to prioritise urgent, high-utility jobs over low-utility jobs. We examine the recent innovations in these systems (from 2000–2007), looking at the state of the art in price setting and negotiation, Grid economy management, and utility-driven scheduling and resource allocation, and identify the advantages and limitations of these systems. We then look to the future of these systems, examining the emerging ‘Catallaxy’ market paradigm. Finally, we consider the future directions that need to be pursued to address the limitations of the current generation of market-oriented Grids and Utility Computing systems.
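
As a rough illustration of the utility-driven prioritisation these systems share, the sketch below orders jobs by a declared value that decays once the deadline is missed. The Job fields, the decay rule, and the greedy ordering are illustrative assumptions for this sketch, not a mechanism taken from any of the surveyed systems.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    value: float      # what the user is willing to pay if finished by the deadline
    deadline: float   # seconds from now
    runtime: float    # estimated run time in seconds

def utility(job: Job, start_time: float) -> float:
    """Value delivered if the job starts at start_time: full value before the
    deadline, decaying linearly to zero as the finish time slips past it."""
    finish = start_time + job.runtime
    if finish <= job.deadline:
        return job.value
    lateness = finish - job.deadline
    return max(0.0, job.value * (1.0 - lateness / job.runtime))

def schedule(jobs: list[Job]) -> list[Job]:
    """Greedy utility-driven ordering: repeatedly pick the job whose utility
    per unit of requested time is highest at the current queue position."""
    order, clock = [], 0.0
    remaining = list(jobs)
    while remaining:
        best = max(remaining, key=lambda j: utility(j, clock) / j.runtime)
        order.append(best)
        clock += best.runtime
        remaining.remove(best)
    return order

if __name__ == "__main__":
    jobs = [Job("render", 10.0, 30.0, 20.0),
            Job("backup", 2.0, 120.0, 40.0),
            Job("urgent-fix", 50.0, 15.0, 10.0)]
    print([j.name for j in schedule(jobs)])   # urgent, high-utility work first
```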


Journal of Parallel and Distributed Computing | 2005

A least flow-time first load sharing approach for distributed server farm

Zahir Tari; James Broberg; Albert Y. Zomaya; Roberto Baldoni

The most critical property exhibited by a heavy-tailed workload distribution (found in many WWW workloads) is that a very small fraction of tasks make up a large fraction of the workload, making the load very difficult to distribute in a distributed system. Load balancing and load sharing are the two predominant load distribution strategies used in such systems. Load sharing generally has better response time than load balancing because the latter can exhibit excessive overheads in selecting servers and partitioning tasks. We therefore further explored the least-loaded-first (LLF) load sharing approach and found two important limitations: (a) LLF does not consider the order of processing, and (b) when it assigns a task, LLF does not consider the processing capacity of servers. The high task size variation that exists in heavy-tailed workloads often causes smaller tasks to be severely delayed by large tasks. This paper proposes a size-based approach, called least flow-time first (LFF-SIZE), which reduces the delay caused by size variation while maintaining a balanced load in the system. LFF-SIZE takes the relative processing time of a task into account and dynamically assigns a task to the fittest server, that is, the one with a lighter load and higher processing capacity. LFF-SIZE also uses a multi-section queue to separate larger tasks from smaller ones. This arrangement effectively reduces the delay of smaller tasks by larger ones, as small tasks are given higher priority for processing. Performance results obtained with the LFF-SIZE implementation show a substantial improvement over existing load sharing and static size-based approaches under realistic heavy-tailed workloads.
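
The sketch below captures the two ingredients described above: a least-flow-time-first dispatch decision that weighs a server's backlog against its processing capacity, and a multi-section queue that keeps small tasks apart from large ones. The size bands, data layout, and numbers are illustrative assumptions, not the paper's actual parameters.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    capacity: float                 # work units processed per second
    queued_work: float = 0.0        # work units currently waiting
    # One queue section per size band, so small tasks can later be served
    # ahead of large ones (dequeuing itself is not modelled here).
    sections: list = field(default_factory=lambda: [[], [], []])

# Illustrative size cut-offs for the multi-section queue (not from the paper).
SIZE_BANDS = [10.0, 100.0, float("inf")]

def band(size: float) -> int:
    return next(i for i, cap in enumerate(SIZE_BANDS) if size <= cap)

def assign(task_size: float, servers: list) -> Server:
    """Least-flow-time-first: send the task to the server where its expected
    flow time (waiting work plus its own size, divided by capacity) is least."""
    best = min(servers, key=lambda s: (s.queued_work + task_size) / s.capacity)
    best.queued_work += task_size
    best.sections[band(task_size)].append(task_size)
    return best

if __name__ == "__main__":
    farm = [Server(capacity=1.0), Server(capacity=2.0)]
    for size in [5, 200, 8, 50, 3]:
        s = assign(size, farm)
        print(f"task {size:>4} -> capacity {s.capacity}, backlog {s.queued_work}")
```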


Web Information Systems Engineering | 2009

Maximizing Utility for Content Delivery Clouds

Mukaddim Pathan; James Broberg; Rajkumar Buyya

A content delivery cloud, such as MetaCDN, is an integrated overlay that utilizes cloud computing to provide content delivery services to Internet end-users. While it ensures satisfactory user-perceived performance, it also aims to improve traffic distribution across its worldwide network and to increase the usefulness of its replicas. To realize this objective, in this paper we measure the utility of content delivery via MetaCDN, capturing the system-specific perceived benefits. We use this utility measure to devise a request-redirection policy that ensures high-performance content delivery. We also quantify a content provider's benefits from using MetaCDN based on its user-perceived performance. We conduct a proof-of-concept testbed experiment for MetaCDN to demonstrate the performance of our approach and report our observations on MetaCDN utility and a content provider's benefits from using MetaCDN.
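
A minimal sketch of utility-maximising request redirection in the spirit described above: each replica gets a score that rewards low latency and spare capacity, and requests go to the highest-scoring replica. The scoring function, weights, and replica names are assumptions for illustration, not MetaCDN's actual utility measure.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    rtt_ms: float        # measured round-trip time from the client's region
    utilisation: float   # fraction of the replica's capacity in use (0..1)

def utility(r: Replica, w_latency: float = 0.7, w_load: float = 0.3) -> float:
    """Toy utility score: reward low latency and spare capacity."""
    latency_score = 1.0 / (1.0 + r.rtt_ms / 100.0)
    load_score = 1.0 - r.utilisation
    return w_latency * latency_score + w_load * load_score

def redirect(replicas: list) -> Replica:
    """Utility-maximising redirection: pick the replica with the best score."""
    return max(replicas, key=utility)

if __name__ == "__main__":
    replicas = [Replica("sydney", 25, 0.9),
                Replica("singapore", 80, 0.3),
                Replica("frankfurt", 250, 0.1)]
    print(redirect(replicas).name)   # lightly loaded, reasonably close replica wins
```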


Parallel Computing | 2006

Task assignment with work-conserving migration

James Broberg; Zahir Tari; Panlop Zeephongsekul

In this paper we present a task assignment policy suited to environments (such as high-volume web serving clusters) where local centralised dispatchers are utilised to distribute tasks amongst back-end hosts offering mirrored services, with negligible-cost work-conserving migration available between hosts. The TAPTF-WC (Task Assignment based on Prioritising Traffic Flows with Work-Conserving Migration) policy was specifically created to exploit such environments. As such, TAPTF-WC exhibits consistently good performance over a wide range of task distribution scenarios due to its flexible nature, spreading the work over multiple hosts when prudent and separating short task flows from large task flows via the use of dual queues. Tasks are migrated in a work-conserving manner, reducing the penalty associated with task migration found in many existing policies, such as TAGS and TAPTF, which restart tasks upon migration. We find that the TAPTF-WC policy is well suited for load distribution under a wide range of different workloads in environments where task sizes are not known a priori and negligible-cost work-conserving migration is available.
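
The toy function below illustrates the difference the abstract highlights: under work-conserving migration the work already completed travels with the task, whereas restart-based migration (as in TAGS and TAPTF) discards it. The numbers are purely illustrative.

```python
def remaining_after_migration(total_size: float, done: float,
                              work_conserving: bool) -> float:
    """Work left on a task when it is moved to another host: keep the
    completed work under work-conserving migration, lose it on a restart."""
    return total_size - done if work_conserving else total_size

if __name__ == "__main__":
    # A large task migrated after 70 of its 100 work units are done:
    print(remaining_after_migration(100, 70, work_conserving=True))   # 30
    print(remaining_after_migration(100, 70, work_conserving=False))  # 100
```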


International Conference on Principles of Distributed Systems | 2004

Task assignment based on prioritising traffic flows

James Broberg; Zahir Tari; Panlop Zeephongsekul

We consider the issue of task assignment in a distributed system under heavy-tailed (i.e., highly variable) workloads. A new adaptable approach called TAPTF (Task Assignment based on Prioritising Traffic Flows) is proposed, which improves performance under heavy-tailed workloads for certain classes of traffic. TAPTF controls the influx of tasks to each host, enables service differentiation through the use of dual queues, and prevents large tasks from unduly delaying small tasks via task migration. Analytical results show that TAPTF performs significantly better than existing approaches where task sizes are unknown and tasks are non-preemptive (run-to-completion). As system load increases, the scope and the magnitude of the performance gain expand, exhibiting improvements of more than six times in some cases.
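
A toy cascade in the spirit of this family of policies for unknown task sizes: each host runs a task for at most its cutoff, and an unfinished task is restarted at the next host. The cutoffs are made-up values, and real TAPTF additionally splits each host's work across dual queues and controls the influx of new tasks.

```python
CUTOFFS = [10.0, 100.0, float("inf")]   # illustrative processing allowance per host

def total_work(task_size: float) -> float:
    """Total work expended (including restarts) before the task completes."""
    spent = 0.0
    for cutoff in CUTOFFS:
        if task_size <= cutoff:
            return spent + task_size          # finishes on this host
        spent += cutoff                       # ran to the cutoff, then restarted
    raise AssertionError("unreachable: the last cutoff is infinite")

if __name__ == "__main__":
    for size in [5.0, 60.0, 500.0]:
        print(size, "->", total_work(size))   # restarts add 10 and 110 extra units
```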


International Symposium on Computers and Communications | 2003

Task assignment strategy for overloaded systems

Bin Fu; James Broberg; Zahir Tari

Size-based load distribution approaches have been proposed to deal with high variation in task size. One of the most critical problems with these approaches is that they do not consider task deadlines (which, if not met, may cause task starvation). This paper proposes an extension of our earlier work on dynamic load balancing [E.L. Hahne et al., June 2002, M. Mirhakkak et al., Aug. 2001, A.S. Tanenbaum, 1996] (called LFF), which takes the relative processing time of a task into account and dynamically assigns it to the fittest server, with a lighter load and higher processing capacity. LFF-PRIORITY dynamically computes the task size priority and task deadline priority and puts tasks in a priority-based multi-section queue. The testing results clearly show that LFF-PRIORITY outperforms existing load distribution strategies. More importantly, more than 80% of tasks meet their deadlines under the LFF-PRIORITY strategy.
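
The sketch below combines a size priority and a deadline priority into a single score and serves tasks from a priority queue, in the spirit of LFF-PRIORITY's multi-section queue. The weights and normalising constants are illustrative assumptions, not values from the paper.

```python
import heapq
import itertools

# Illustrative weights and normalising constants (not from the paper).
W_SIZE, W_DEADLINE = 0.5, 0.5
MAX_SIZE, MAX_DEADLINE = 1000.0, 300.0

def priority(size: float, deadline: float) -> float:
    """Composite priority: smaller tasks and tighter deadlines sort first."""
    size_pri = size / MAX_SIZE              # 0 (tiny) .. 1 (huge)
    deadline_pri = deadline / MAX_DEADLINE  # 0 (urgent) .. 1 (relaxed)
    return W_SIZE * size_pri + W_DEADLINE * deadline_pri  # lower = served sooner

queue, counter = [], itertools.count()

def submit(name: str, size: float, deadline: float) -> None:
    heapq.heappush(queue, (priority(size, deadline), next(counter), name))

if __name__ == "__main__":
    submit("big-report", 800, 250)
    submit("tiny-urgent", 5, 10)
    submit("medium", 200, 120)
    while queue:
        print(heapq.heappop(queue)[2])   # tiny-urgent, medium, big-report
```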


Archive | 2008

Internetworking of CDNs

Mukaddim Pathan; Rajkumar Buyya; James Broberg

The current deployment approach of commercial Content Delivery Network (CDN) providers involves placing their Web server clusters in numerous geographical locations worldwide. However, the requirements for providing high-quality service through global coverage might be an obstacle for new CDN providers, as well as affecting the commercial viability of existing ones. This is evident from the major consolidation of the CDN market, down to a handful of key players, that has occurred in recent years. Unfortunately, due to their proprietary nature, existing commercial CDN providers do not cooperate in delivering content to the end users in a scalable manner. In addition, content providers typically subscribe to one CDN provider and thus cannot use multiple CDNs at the same time. Such a closed, non-cooperative model results in disparate CDNs. Enabling coordinated and cooperative content delivery via internetworking among distinct CDNs could allow providers to rapidly “scale-out” to meet both flash crowds [2] and anticipated increases in demand, and remove the need for a given CDN to provision resources. CDN services are often priced out of reach for all but large enterprise customers. Further, commercial CDNs make specific commitments with their customers by signing Service Level Agreements (SLAs), which outline specific penalties if they fail to meet those commitments. Hence, if a particular CDN is unable to provide Quality of Service (QoS) to end-user requests, it may result in an SLA violation and end up costing the CDN provider. Economies of scale, in terms of cost effectiveness and performance for both providers and end users, could be achieved by


Measurement and Modeling of Computer Systems | 2006

A multicommodity flow model for distributed stream processing

James Broberg; Zhen Liu; Cathy H. Xia; Li Zhang

The rapid development of computer technology has enabled streaming applications to emerge from many areas of the IT industry. A fundamental problem in such stream processing systems is how to best utilize the available resources so that the overall system performance is optimized. In most real-time systems, applications are often running in a decentralized, distributed environment. At any given time, no server has global information about all the servers in the system. It is thus desirable to have distributed solutions capable of adapting to or absorbing local changes in production rates and in network conditions. This paper addresses the fundamental resource allocation question in the distributed stream processing setting. Specifically, we consider a generic model for stream processing systems. Each stream is required to complete a series of tasks to become a finished product. Each server is responsible for a subset of the processing tasks for possibly a subset of the streams. All servers have finite computing resources and all communication links have finite available bandwidth. We consider the resource allocation problem so as to maximize the total throughput of the system. Our problem can be described as a generalization of the traditional multicommodity flow problem [1]. The traditional multicommodity flow problem looks for the best way to use the link capacities to deliver the maximum throughput for the flows in the whole network. In our problem, in addition to the link bandwidth constraints, we also have processing power constraints for each server. Furthermore, we allow flow shrinkage or expansion so as to model operations such
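
One way to write down the kind of program the abstract describes is the optimisation sketched below, which maximises total stream throughput subject to per-link bandwidth and per-server processing constraints. The notation is ours, not necessarily the paper's, and the shrinkage/expansion factors are only indicated rather than modelled in detail.

```latex
% lambda_k : finished throughput of stream k
% f^k_e    : rate of stream k carried on link e        (bandwidth B_e)
% x^k_{t,s}: rate at which server s performs task t of stream k
% c_{kt}   : per-unit processing cost of task t of stream k (capacity C_s)
\begin{align*}
\max\quad & \sum_{k} \lambda_k
  && \text{total delivered throughput}\\
\text{s.t.}\quad
  & \sum_{k} f^{k}_{e} \le B_e
  && \text{bandwidth limit on every link } e\\
  & \sum_{k} \sum_{t} c_{kt}\, x^{k}_{t,s} \le C_s
  && \text{processing limit on every server } s\\
  & \text{flow conservation for each stream at each node,}
  && \text{with shrinkage/expansion factors where tasks transform the stream}\\
  & f^{k}_{e} \ge 0,\quad x^{k}_{t,s} \ge 0,\quad \lambda_k \ge 0
  && \text{for all } k,\, e,\, t,\, s
\end{align*}
```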


International Symposium on Computers and Communications | 2009

On the performance of multi-level time sharing policy under heavy-tailed workloads

Malith Jayasinghe; Zahir Tari; Panlop Zeephongsekul; James Broberg

Many existing works on multi-level time sharing policies have assumed infinitely small quanta, infinite levels or exponential service time distributions. In this paper, we investigate the performance of a multi-level time sharing policy under heavy-tailed workloads with finite levels when quanta are not infinitely small. Such a policy is consistent with those implemented on modern computer systems, and these findings will enable system designers to better understand how various factors (e.g. system load, task size variability and number of levels) affect the overall performance of a given system. First, we obtain the performance metrics for a multi-level time sharing policy with a finite number of levels. Second, for the case of 2 and 3 levels (queues), we show that the optimal quantum multi-level time sharing policy (MLOQTP) can result in significant performance improvements over other policies under certain traffic and workload conditions. Finally, we investigate the impact of the number of levels on the overall performance and propose a simple statistical regression model that can accurately estimate the overall performance of a multi-level time sharing system.
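
The toy simulation below shows the policy shape the paper analyses: a finite number of levels, each with a finite quantum, where unfinished jobs drop to the next level and the last level is recycled round-robin. The quanta, the Pareto job sizes, and the batch arrivals are illustrative assumptions, and the FCFS figure is included only as a baseline for comparison.

```python
import random
from collections import deque

QUANTA = [1.0, 4.0, 16.0]          # finite quanta, one per level (illustrative)

def multilevel_mean_flow_time(sizes):
    """Single-server multi-level feedback: serve the highest non-empty level,
    give the job that level's quantum, and demote it if it is unfinished."""
    levels = [deque() for _ in QUANTA]
    for j, s in enumerate(sizes):
        levels[0].append([j, s])                        # [job id, remaining work]
    clock, finish = 0.0, {}
    while any(levels):
        i = next(i for i, q in enumerate(levels) if q)  # highest non-empty level
        job = levels[i].popleft()
        run = min(QUANTA[i], job[1])
        clock += run
        job[1] -= run
        if job[1] <= 1e-12:
            finish[job[0]] = clock
        else:                                           # demote; last level recycles
            levels[min(i + 1, len(levels) - 1)].append(job)
    return sum(finish.values()) / len(sizes)            # batch arrivals at time 0

if __name__ == "__main__":
    random.seed(0)
    sizes = [random.paretovariate(1.5) for _ in range(200)]   # heavy-tailed sizes
    fcfs = sum(sum(sizes[: k + 1]) for k in range(len(sizes))) / len(sizes)
    print(f"mean flow time  FCFS: {fcfs:.1f}   "
          f"multi-level: {multilevel_mean_flow_time(sizes):.1f}")
```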
