Rob Simmonds
University of Calgary
Publication
Featured research published by Rob Simmonds.
workshop on parallel and distributed simulation | 1999
Zhonge Xiao; Brian W. Unger; Rob Simmonds; John G. Cleary
This paper introduces the Critical Channel Traversing (CCT) algorithm, a new scheduling algorithm for both sequential and parallel discrete event simulation. CCT is a general conservative algorithm that is aimed at the simulation of low-granularity network models on shared-memory multiprocessor computers. An implementation of the CCT algorithm within a kernel called TasKit has demonstrated excellent performance for large ATM network simulations when compared to previous sequential, optimistic and conservative kernels. TasKit has achieved two to three times speedup on a single processor with respect to a splay tree central-event-list based sequential kernel. On a 16 processor (R8000) Silicon Graphics PowerChallenge, TasKit has achieved an event-rate of 1.2 million events per second and a speedup of 26 relative to the sequential kernel for a large ATM network model. Performance is achieved through a multi-level scheduling scheme that supports the scheduling of large grains of computation even with low-granularity events. Performance is also enhanced by supporting good cache behavior and automatic load balancing. The paper describes the algorithm and its motivation, proves its correctness and briefly presents performance results for TasKit.
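The abstract does not spell out the scheduling rule, so the following is only a minimal sketch of the conservative safe-event idea that critical channel traversing builds on: each logical process advances only as far as the smallest lower bound over its input channels, and the channel holding that bound is the "critical" one worth scheduling around. The `Channel` class, its attributes, and the event representation are illustrative assumptions, not the TasKit API.

```python
class Channel:
    """A timestamped FIFO link feeding a logical process (LP)."""
    def __init__(self, lookahead):
        self.queue = []          # (timestamp, event) pairs in timestamp order
        self.clock = 0.0         # timestamp of the last event seen here
        self.lookahead = lookahead

    def lower_bound(self):
        # Earliest timestamp any future event on this channel could carry.
        return self.queue[0][0] if self.queue else self.clock + self.lookahead

def critical_channel(in_channels):
    """The input channel with the smallest lower bound: it alone limits
    how far the LP may advance, so work that feeds it unblocks the most
    computation."""
    return min(in_channels, key=lambda ch: ch.lower_bound())

def safe_events(in_channels):
    """Drain events that are provably safe to execute: nothing with an
    earlier timestamp can still arrive on any input channel."""
    horizon = min(ch.lower_bound() for ch in in_channels)
    ready = []
    for ch in in_channels:
        while ch.queue and ch.queue[0][0] <= horizon:
            ready.append(ch.queue.pop(0))
    ready.sort(key=lambda te: te[0])
    return ready
```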
workshop on parallel and distributed simulation | 2003
Cameron Kiddle; Rob Simmonds; Carey L. Williamson; Brian W. Unger
Packet level discrete event network simulators use an event to model the movement of each packet in the network. This results in accurate models, but requires that many events be executed to simulate large, high-bandwidth networks. Fluid-based network simulators abstract the model to consider only changes in the rates of traffic flows. This can yield large performance advantages, though information about individual packets is lost, making the approach inappropriate for many simulation and emulation studies. We present a hybrid model in which packet flows and fluid flows coexist and interact. This enables studies in which background traffic is modelled using fluid flows while foreground traffic is modelled at the packet level. Results presented show up to 20 times speedup using this technique. Accuracy is within 4% for latency and 15% for jitter in many cases.
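As a concrete illustration of how packet and fluid models can interact at a shared link, the sketch below computes the delay a foreground packet would see when the background load is represented only as an aggregate fluid rate. The formula and parameter names are assumptions for illustration, not the paper's hybrid equations.

```python
def packet_delay(packet_bits, link_bps, fluid_rates_bps, backlog_bits=0.0):
    """Delay seen by a foreground packet at a link whose background
    traffic is fluid: the fluid flows consume part of the link rate,
    leaving only the residual capacity to serve the packet and any
    fluid backlog queued ahead of it."""
    fluid_rate = sum(fluid_rates_bps)
    residual = max(link_bps - fluid_rate, 1e-9)   # guard against overload
    return (backlog_bits + packet_bits) / residual

# e.g. a 12,000-bit packet on a 100 Mb/s link carrying 60 Mb/s of fluid
# background traffic: packet_delay(12_000, 100e6, [60e6]) -> 0.0003 s
```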
workshop on parallel and distributed simulation | 2000
Rob Simmonds; Russell J. Bradford; Brian W. Unger
The simulation of wide area computer networks is one area where the benefits of parallel simulation have been clearly demonstrated. We describe a system that uses a parallel discrete event simulator as a high speed network emulator. With this, real Internet Protocol (IP) traffic generated by application programs running on user workstations can interact with modelled traffic in the emulator, providing a controlled test environment for distributed applications. The network emulator uses the TasKit conservative parallel discrete event simulation (PDES) kernel. TasKit has been shown to achieve improved parallel performance over existing conservative and optimistic PDES kernels, as well as improved sequential performance over an existing central-event-list based kernel. This paper explains the modifications made to TasKit to enable real-time operation, along with the emulator interface that allows the IP network simulation running in the TasKit kernel to interact with real IP clients. Initial emulator performance data is included.
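Real-time operation means simulated timestamps must be paced against the wall clock so that modelled packets and real IP packets share one timeline. The loop below is a minimal sketch of that pacing constraint under simple assumptions (a pre-sorted event list, seconds as the time unit); it is not TasKit's scheduler.

```python
import time

def run_real_time(events, start_wall=None):
    """Pace discrete events against the wall clock: an event with
    simulation time t fires no earlier than start_wall + t seconds.
    events is a list of (sim_time_seconds, handler) sorted by time."""
    if start_wall is None:
        start_wall = time.monotonic()
    for sim_time, handler in events:
        lag = (start_wall + sim_time) - time.monotonic()
        if lag > 0:
            time.sleep(lag)   # wait for the event's wall-clock deadline
        handler()             # late events run immediately, best effort
```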
2012 International Green Computing Conference (IGCC) | 2012
David Aikema; Rob Simmonds; Hamidreza Zareipour
Ancillary services are the mechanisms power grids use to address short-term variability in supply and demand as well as the impact of power plant or transmission line failures. Organizations providing such services can earn revenue, or at least reduce their energy costs. This paper explores options for large data centres to reduce costs in this way. Simulation results are presented for a system that models the processing of a workload and the resulting energy use, focusing on the impact of providing specific types of ancillary services. Trace data recording the workload from three supercomputing facilities along with pricing information from a US-based electrical grid are used. Results presented show energy costs reduced by up to 12% with only a small impact on the quality of service provided to users of the data centre. Further reductions in energy costs are shown for data centres willing to cede more control over short-term energy consumption.
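To make the economics concrete, here is a back-of-envelope sketch of the trade the paper studies: a data centre pays for the energy it consumes but earns revenue by holding flexible capacity in reserve for the grid operator. The flat prices and single-period structure are illustrative assumptions, not the paper's market model.

```python
def net_energy_cost(load_mwh, energy_price, reserve_mw, reserve_price, hours):
    """Energy bill minus ancillary-service revenue over one period.
    energy_price is $/MWh; reserve_price is $/MW per hour of the
    reserve commitment."""
    energy_cost = load_mwh * energy_price
    ancillary_revenue = reserve_mw * reserve_price * hours
    return energy_cost - ancillary_revenue

# e.g. 1000 MWh at $40/MWh while offering 2 MW of reserve at $3/MW-h
# over a 720-hour month: net_energy_cost(1000, 40, 2, 3, 720) = 35680,
# roughly an 11% reduction on the $40,000 energy bill (numbers made up).
```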
Computer Communications | 2003
Rob Simmonds; Brian W. Unger
The Internet protocol traffic and network emulator (IP-TNE) enables real hosts and a real network to interact with a virtual network. It combines a real-time network simulator with a mechanism to capture packets from and write packets to a real network. Packets generated by external hosts interact with synthetic traffic within the virtual network, providing a controlled environment for testing real Internet applications. The IP-TNE can also generate simulated traffic internally, enabling its use as a sophisticated workload generator for stress testing real Web servers. This paper focuses on two issues related to the scalability of network emulators such as the IP-TNE: the scalability of the virtual network within the emulator, and the scalability of the real-time I/O interface used to interoperate with the physical network. The virtual network is scaled using parallel discrete event simulation techniques; scaling the real-time interfaces requires handling varying amounts of network I/O and mapping packets into the simulator efficiently.
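The I/O-scalability point invites a small sketch: rather than re-entering the event loop for every captured packet, an emulator can drain everything pending on the capture socket in one batch and schedule the batch as simulator events. The socket handling below is generic Python, and `inject()` is a hypothetical stand-in for the simulator's scheduling call; none of this is the IP-TNE interface.

```python
import select

def drain_packets(raw_sock, inject, max_batch=64):
    """Read every frame currently pending on a non-blocking capture
    socket, then map the whole batch into the virtual network, so
    per-packet overhead is amortized under bursty load."""
    batch = []
    while len(batch) < max_batch:
        ready, _, _ = select.select([raw_sock], [], [], 0)  # poll, no wait
        if not ready:
            break
        batch.append(raw_sock.recv(65535))
    for frame in batch:
        inject(frame)   # schedule the frame as a virtual-network event
    return len(batch)
```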
Future Generation Computer Systems | 2007
A. Agarwal; Mohamed Ahmed; A. Berman; B. L. Caron; A. Charbonneau; D. Deatrich; R. Desmarais; A. Dimopoulos; I. Gable; L. S. Groer; R. Haria; Roger Impey; L. Klektau; C. Lindsay; Gabriel Mateescu; Q. Matthews; A. Norton; W. Podaima; Darcy Quesnel; Rob Simmonds; Randall Sobie; B. St Arnaud; C. Usher; D. C. Vanderster; M. Vetterli; R. Walker; M. Yuen
The present paper discusses the design and application of GridX1, a computational grid project that uses shared resources at several Canadian research institutions. The infrastructure of GridX1 is built from off-the-shelf Globus Toolkit 2 middleware, a MyProxy credential server, and a resource broker based on Condor-G that manages the distributed computing environment. The broker-based job scheduling and management functionality is exposed as a Globus GRAM job service. Resource brokering is based on the Condor matchmaking mechanism, whereby job and resource attributes are expressed as ClassAds, with the Requirements and Rank attributes defining, respectively, the constraints and preferences that a matched entity must meet. Various strategies for ranking resources are presented, including an Estimated-Waiting-Time (EWT) algorithm, a throttled load-balancing strategy, and a novel external ranking strategy based on data location. A unique feature is a mechanism that transparently presents the GridX1 resources as a single compute element to the LHC Computing Grid (LCG), based at the CERN Laboratory in Geneva. This interface was used during ATLAS Data Challenge 2 to federate the Canadian resources into the LCG without the overhead of maintaining separate LCG sites. Further, the BaBar particle physics simulation was adapted to execute on GridX1, simplifying management of its production runs. The throttled EWT and load-balancing strategies, combined with external data ranking, proved very effective at improving efficiency and reducing the job failure rate.
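Since the abstract names the EWT and ranking mechanics without detail, the sketch below shows the general shape of estimated-waiting-time resource ranking in the spirit of Condor matchmaking: filter resources on the job's requirements, then order them by a wait-time estimate. The attribute names and the estimate itself are hypothetical, not GridX1's ClassAd definitions.

```python
def rank_resources(resources, job):
    """Return eligible resources ordered best-first by estimated
    waiting time (EWT). resources and job are plain dicts standing in
    for ClassAd attribute sets."""
    def ewt(r):
        # Queued work divided by capacity approximates the wait to start.
        return r["queued_jobs"] * r["mean_job_minutes"] / r["cpus"]

    eligible = [r for r in resources
                if r["cpus"] >= job["min_cpus"]
                and r["mem_gb"] >= job["min_mem_gb"]]   # the Requirements role
    return sorted(eligible, key=ewt)                    # the Rank role
```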
enterprise distributed object computing | 2008
Roger Curry; Cameron Kiddle; Nayden Markatchev; Rob Simmonds; Tingxi Tan; Martin F. Arlitt; Bruce Walker
“Web 2.0” and “cloud computing” are revolutionizing the way IT infrastructure is accessed and managed. Web 2.0 technologies such as blogs, wikis and social networking platforms provide Internet users with easier mechanisms to produce Web content and to interact with each other. Cloud computing technologies are aimed at running applications as services over the Internet on a scalable infrastructure. In this paper we explore the advantages of using Web 2.0 and cloud computing technologies in an enterprise setting to provide employees with a comprehensive and transparent environment for utilizing applications. To demonstrate the effectiveness of this approach we have developed an environment that uses a social networking platform to provide access to a legacy application. The application is hosted on an internal cloud computing infrastructure that adapts dynamically to user demands. Initial feedback suggests this approach provides an improved user experience while simplifying management and increasing effective utilization of the underlying IT resources.
modeling analysis and simulation on computer and telecommunication systems | 2000
Russell J. Bradford; Rob Simmonds; Brian W. Unger
Testing distributed applications over the Internet is fraught with problems: because a wide area network cannot be controlled, consistent, reproducible performance experiments are not possible. A system is described that uses a parallel discrete event simulator as a real-time network emulator. Real Internet Protocol (IP) traffic generated by application programs running on user workstations can interact with modelled traffic in the emulator, providing a controlled test environment for distributed applications. Parallel execution enables the emulator to simulate large virtual networks and to model traffic interactions that could not be handled in real time sequentially. This paper gives an overview of the emulator and explores the various external data routing methods that it supports. These routing methods allow the emulator to be operated in shared environments, with certain constraints, as well as in dedicated test environments. Preliminary performance results are included.
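As one way to picture the routing problem, the sketch below classifies each captured frame: packets addressed into the virtual network enter the simulation, while everything else must pass untouched so the emulator can coexist with real traffic on a shared LAN. The prefix-based decision is an illustrative simplification, not one of the paper's specific routing methods.

```python
import ipaddress

def classify_frame(dst_ip, emulated_hosts, virtual_prefixes):
    """Decide what the emulator should do with a captured frame, based
    on its destination address. emulated_hosts is a set of addresses of
    modelled endpoints; virtual_prefixes is a list of ip_network
    objects covering the virtual topology."""
    if dst_ip in emulated_hosts:
        return "deliver"      # terminates at a modelled host
    addr = ipaddress.ip_address(dst_ip)
    if any(addr in net for net in virtual_prefixes):
        return "simulate"     # traverses the virtual network
    return "ignore"           # real traffic on the shared LAN, not ours

# e.g. classify_frame("10.1.2.3", {"10.1.0.1"},
#                     [ipaddress.ip_network("10.1.0.0/16")]) -> "simulate"
```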
international conference on e-science | 2009
Nayden Markatchev; Roger Curry; Cameron Kiddle; Andrey Mirtchovski; Rob Simmonds; Tingxi Tan
Accessing, running and sharing applications and data presents researchers with many challenges. Cloud computing and social networking technologies have the potential to simplify or eliminate many of these challenges. Cloud computing technologies can provide scientists with transparent and on-demand access to applications served over the Internet in a dynamic and scalable manner. Social networking technologies provide a means for easily sharing applications and data. In this paper we present an on-line/on-demand interactive application service. The service is built on a cloud computing infrastructure that dynamically provisions virtualized application servers based on user demand. An open source social networking platform is leveraged to establish a portal front end that enables applications and results to be easily shared between researchers. Furthermore, the service works with existing/legacy applications without requiring any modifications.
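The abstract's "dynamically provisions virtualized application servers based on user demand" suggests a simple capacity rule; the one below is purely hypothetical, since the paper does not specify its provisioning policy at this level.

```python
def servers_needed(active_sessions, sessions_per_server, warm_spares=1):
    """Enough virtual application servers to host the current sessions,
    plus warm spares so a newly arriving user never waits for a VM to
    boot."""
    hosting = -(-active_sessions // sessions_per_server)  # ceiling division
    return hosting + warm_spares

# e.g. 23 active sessions at 8 per server: servers_needed(23, 8) -> 4
```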
Performance Evaluation | 2002
Carey L. Williamson; Rob Simmonds; Martin F. Arlitt
This paper describes the use of a parallel discrete-event network emulator called the Internet Protocol Traffic and Network Emulator (IP-TNE) for Web server benchmarking. The experiments in this paper demonstrate the feasibility of high-performance wide area network (WAN) emulation using parallel discrete-event simulation (PDES) techniques on a single shared-memory multiprocessor. Our experiments with an Apache Web server achieve up to 8000 HTTP/1.1 transactions/s for static document retrieval across emulated WAN topologies with up to 4096 concurrent Web/TCP clients. The results show that WAN characteristics, including round-trip delays, packet losses, and bandwidth asymmetry, all have significant impacts on Web server performance, as do client protocol behaviors. WAN emulation using the IP-TNE enables stress testing and benchmarking of Web servers in ways that may not be possible in simple local area network (LAN) test scenarios.
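A quick sanity check on numbers like these comes from Little's law: with N concurrent clients each keeping one request outstanding, throughput is bounded by N divided by the per-request time (round-trip delay plus server time). The sketch below applies that bound; the RTT and service-time values are made up for illustration, not measurements from the paper.

```python
def max_tps(clients, rtt_s, server_time_s):
    """Upper bound on transactions/s for closed-loop clients that each
    issue one request at a time (Little's law)."""
    return clients / (rtt_s + server_time_s)

# e.g. 4096 clients over a 100 ms emulated RTT with 5 ms of server time:
# max_tps(4096, 0.100, 0.005) is roughly 39,000 req/s, so at that delay
# the client population would not be the limit on an 8000 transactions/s
# server.
```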