Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Tim Brecht is active.

Publication


Featured research published by Tim Brecht.


International Conference on Mobile Systems, Applications, and Services | 2007

Vehicular opportunistic communication under the microscope

David Hadaller; Srinivasan Keshav; Tim Brecht; Shubham Agarwal

We consider the problem of providing vehicular Internet access using roadside 802.11 access points. We build on previous work in this area [18, 8, 5, 11] with an extensive experimental analysis of protocol operation at a level of detail not previously explored. We report on data gathered with four capture devices from nearly 50 experimental runs conducted with vehicles on a rural highway. Our three primary contributions are: (1) we experimentally demonstrate that, on average, current protocols achieve only 50% of the overall throughput possible in this scenario; in particular, even with a streamlined connection setup procedure that does not use DHCP, high losses early in a vehicular connection are responsible for the loss of nearly 25% of overall throughput, 15% of the time; (2) we quantify the effects of ten problems caused by the mechanics of existing protocols that are responsible for this throughput loss; and (3) we recommend best practices for using vehicular opportunistic connections. Moreover, we show that overall throughput could be significantly improved if environmental information were made available to the 802.11 MAC and to TCP. The central message of this paper is that wireless conditions in the vicinity of a roadside access point are predictable, and by exploiting this information, vehicular opportunistic access can be greatly improved.


ACM SIGOPS European Workshop | 1996

ParaWeb: towards world-wide supercomputing

Tim Brecht; Harjinder S. Sandhu; Meijuan Shan; Jimmy Talbot

In this paper, we describe the design of a system, called ParaWeb, for utilizing Internet or intranet computing resources in a seamless fashion. The goal is to allow users to execute serial programs on faster compute servers or parallel programs on a variety of possibly heterogeneous hosts. ParaWeb provides extensions to the Java programming environment (through a parallel class library) and the Java runtime system that allow programmers to develop new Java applications with parallelism in mind, or to execute existing Java applications written using Java's multithreading facilities in parallel. Some experimental results from our prototype implementation are used to demonstrate the potential of this approach.


Discrete Applied Mathematics | 1988

Lower bounds on two-terminal network reliability

Tim Brecht; Charles J. Colbourn

Computing the probability that two nodes in a probabilistic network are connected is a well-known computationally difficult problem. Two strategies are devised for obtaining lower bounds on the connection probability for two terminals. The first improves on the Kruskal-Katona bound by using efficient computations of small pathsets. The second strategy employs efficient algorithms for finding edge-disjoint paths. The resulting bounds are compared; while the edge-disjoint path bounds typically outperform the Kruskal-Katona bounds, they do not always do so. Finally, a method is outlined for developing a uniform bound which combines both strategies.
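The edge-disjoint-path strategy admits a compact illustration (a sketch of the bound itself, not the paper's algorithm): if the chosen s-t paths share no edges and each edge e operates independently with probability p_e, the events "path i is fully operational" are independent, so the two-terminal reliability is at least 1 - prod_i (1 - prod_{e in P_i} p_e).

```python
from math import prod

def edge_disjoint_path_lower_bound(paths):
    """Lower-bound two-terminal reliability from edge-disjoint s-t paths.

    `paths` is a list of paths, each given as a list of independent edge
    operation probabilities.  Because the paths share no edges, the events
    "path i works" are independent, hence
        Rel(s, t) >= 1 - prod_i (1 - prod_{e in P_i} p_e).
    """
    fail_all_paths = prod(1 - prod(path) for path in paths)
    return 1 - fail_all_paths
```

With two disjoint two-edge paths of per-edge reliability 0.9, each path works with probability 0.81, so the bound is 1 - 0.19^2 = 0.9639; the true reliability can only be higher, since other pathsets may also connect the terminals.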


European Conference on Computer Systems | 2007

Comparing the performance of web server architectures

David Pariag; Tim Brecht; Ashif S. Harji; Peter A. Buhr; Amol Shukla; David R. Cheriton

In this paper, we extensively tune and then compare the performance of web servers based on three different server architectures. The μserver utilizes an event-driven architecture, Knot uses the highly efficient Capriccio thread library to implement a thread-per-connection model, and WatPipe uses a hybrid of events and threads to implement a pipeline-based server that is similar in spirit to a staged event-driven architecture (SEDA) server like Haboob. We describe modifications made to the Capriccio thread library to use Linux's zero-copy sendfile interface. We then introduce the SYmmetric Multi-Processor Event Driven (SYMPED) architecture, in which relatively minor modifications are made to a single process event-driven (SPED) server (the μserver) to allow it to continue processing requests in the presence of blocking due to disk accesses. Finally, we describe our C++ implementation of WatPipe, which, although utilizing a pipeline-based architecture, excludes the dynamic controls over event queues and thread pools used in SEDA. When comparing the performance of these three server architectures on the workload used in our study, we arrive at different conclusions than previous studies. In spite of recent improvements to threading libraries and our further improvements to Capriccio and Knot, both the event-based μserver and the pipeline-based WatPipe server provide better throughput (by about 18%). We also observe that when using blocking sockets to send data to clients, the performance obtained with some architectures is quite good and in one case is noticeably better than when using non-blocking sockets.


Measurement and Modeling of Computer Systems | 1991

Processor-pool-based scheduling for large-scale NUMA multiprocessors

Songnian Zhou; Tim Brecht

Large-scale Non-Uniform Memory Access (NUMA) multiprocessors are gaining increased attention due to their potential for achieving high performance through the replication of relatively simple components. Because of the complexity of such systems, scheduling algorithms for parallel applications are crucial in realizing the performance potential of these systems. In particular, scheduling methods must consider the scale of the system, with the increased likelihood of creating bottlenecks, along with the NUMA characteristics of the system, and the benefits to be gained by placing threads close to their code and data. We propose a class of scheduling algorithms based on processor pools. A processor pool is a software construct for organizing and managing a large number of processors by dividing them into groups called pools. The parallel threads of a job are run in a single processor pool, unless there are performance advantages for a job to span multiple pools. Several jobs may share one pool. Our simulation experiments show that processor-pool-based scheduling may effectively reduce the average job response time. The performance improvements attained by using processor pools increase with the average parallelism of the jobs, the load level of the system, the differentials in memory access costs, and the likelihood of having system bottlenecks. As the system size increases, while maintaining the workload composition and intensity, we observed that processor pools can be used to provide significant performance improvements. We therefore conclude that processor-pool-based scheduling may be an effective and efficient technique for scalable systems.


Workshop on Challenged Networks | 2006

MV-MAX: improving wireless infrastructure access for multi-vehicular communication

David Hadaller; Srinivasan Keshav; Tim Brecht

When a roadside 802.11-based wireless access point is shared by more than one vehicle, the vehicle with the lowest transmission rate reduces the effective transmission rate of all other vehicles. This performance anomaly [9] degrades both individual and overall throughput in such multi-vehicular environments. Observing that every vehicle eventually receives good performance when it is near the access point, we propose MV-MAX (Multi-Vehicular Maximum), a medium access protocol that opportunistically grants wireless access to vehicles with the maximum transmission rate. Mathematical analysis and trace-driven simulations based on real data show that MV-MAX not only improves overall system throughput, compared to 802.11, by a factor of almost 4, but also improves on the previously proposed time-fairness scheme [20, 22, 15] by a factor of more than 2. Moreover, despite being less fair than 802.11, almost every vehicle benefits by using MV-MAX over the more equitable 802.11 access mechanism. Finally, we show that our results are consistent across different data sets.
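The core scheduling idea lends itself to a small sketch (for intuition only, not the paper's model or analysis): per time slot, MV-MAX grants the medium to whichever vehicle currently has the highest achievable rate, whereas an equal-airtime (time-fairness) policy splits each slot among all vehicles.

```python
def throughput(rates_per_slot, policy):
    """Compare per-slot medium-access policies at a shared roadside AP.

    `rates_per_slot[t][v]` is vehicle v's achievable rate in slot t
    (vehicles near the AP see high rates, distant ones low rates).
    'mvmax'     : grant the whole slot to the fastest vehicle.
    'timeshare' : give every vehicle an equal share of airtime in the
                  slot, a stand-in for the time-fairness schemes the
                  paper compares against.
    Returns aggregate system throughput over all slots.
    """
    total = 0.0
    for rates in rates_per_slot:
        if policy == "mvmax":
            total += max(rates)
        elif policy == "timeshare":
            total += sum(r / len(rates) for r in rates)
        else:
            raise ValueError(f"unknown policy: {policy}")
    return total
```

Since every vehicle eventually passes close to the AP, each vehicle gets its high-rate slots under MV-MAX, which is why aggregate throughput improves without starving (almost) anyone.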


International Conference on Network Protocols | 2000

Time-lined TCP for the TCP-friendly delivery of streaming media

Biswaroop Mukherjee; Tim Brecht

This paper introduces Time-Lined TCP (TLTCP). TLTCP is a protocol designed to provide TCP-friendly delivery of time-sensitive data to applications that are loss-tolerant, such as streaming media players. Previous work on unicast delivery of streaming media over the Internet proposes using UDP and performs congestion control at the user level by regulating the application's sending rate. TLTCP, on the other hand, is intended to be implemented at the transport level and is based on TCP with modifications to support time-lines. Instead of treating all data as a byte stream, TLTCP allows the application to associate data with deadlines. TLTCP sends data in a similar fashion to TCP until the deadline for a section of data has elapsed, at which point the now obsolete data is discarded in favor of new data. As a result, TLTCP supports TCP-friendly delivery of streaming media by retaining much of TCP's congestion control functionality. We describe an API for TLTCP that involves augmenting the recvmsg and sendmsg socket calls. We also describe how streaming media applications that use various encoding schemes, such as MPEG-1, can associate data with deadlines and use TLTCP's API. We use simulations to examine the behavior of TLTCP under a wide range of networks and workloads. We find that it indeed performs time-lined data delivery and that, under most circumstances, bandwidth is shared equitably among competing TLTCP and TCP flows. Moreover, those scenarios under which TLTCP appears to be unfriendly are those under which TCP flows competing only with other TCP flows do not share bandwidth equitably.
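The time-lined send buffer can be modeled with a toy sketch (an illustration of the idea only, not TLTCP's kernel implementation or API): each section of data carries an application-supplied deadline, and before anything is sent, sections whose deadlines have elapsed are silently discarded, so obsolete media frames never consume the TCP-friendly sending rate.

```python
import heapq

class TimeLinedQueue:
    """Toy model of a time-lined send buffer.

    Sections are sent in deadline order (earliest first); any section
    whose deadline has already passed is dropped rather than sent,
    which is the behavior that lets a streaming sender skip stale
    frames instead of retransmitting them.
    """
    def __init__(self):
        self._heap = []   # (deadline, insertion_order, data)
        self._seq = 0     # tie-breaker so equal deadlines stay FIFO
    def enqueue(self, data, deadline):
        heapq.heappush(self._heap, (deadline, self._seq, data))
        self._seq += 1
    def next_to_send(self, now):
        # Discard sections whose deadlines have elapsed.
        while self._heap and self._heap[0][0] <= now:
            heapq.heappop(self._heap)
        if self._heap:
            return heapq.heappop(self._heap)[2]
        return None
```

In a real MPEG-1 stream, the application would attach each frame's playout time as its deadline, exactly the kind of mapping the paper's API discussion describes.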


Journal of Scheduling | 2003

Non-clairvoyant multiprocessor scheduling of jobs with changing execution characteristics

Jeff Edmonds; Donald Chinn; Tim Brecht; Xiaotie Deng

This work theoretically proves that Equi-partition efficiently schedules multiprocessor batch jobs with different execution characteristics. Motwani, Phillips, and Torng (Proc. 4th Annu. ACM/SIAM Symp. on Discrete Algorithms, pp. 422–431, Austin, 1993) show that the mean response time of jobs is within two of optimal for fully parallelizable jobs. We extend this result by considering jobs with multiple phases of arbitrary nondecreasing and sublinear speedup functions. Having no knowledge of the jobs being scheduled (non-clairvoyant), one would not expect it to perform well. However, our main result shows that the mean response time obtained with Equi-partition is no more than 2 + √3 ≈ 3.73 times optimal.
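Equi-partition itself is trivially simple, which is what makes the bound striking: with P processors and n unfinished jobs, every job gets P/n processors, with no knowledge of the jobs' speedup curves. A minimal sketch of the allocation rule (in the theoretical model, fractional allocations are allowed):

```python
def equi_partition(num_processors, active_jobs):
    """Equi-partition allocation: split the processors evenly among the
    currently unfinished jobs, non-clairvoyantly ignoring each job's
    phases and speedup functions."""
    if not active_jobs:
        return {}
    share = num_processors / len(active_jobs)
    return {job: share for job in active_jobs}

# The competitive ratio proved for mean response time in this paper:
EQUI_BOUND = 2 + 3 ** 0.5   # = 2 + sqrt(3), about 3.73
```

The allocation is recomputed whenever a job arrives or completes, so the schedule adapts to the set of live jobs even though it never inspects any individual job.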


Performance Evaluation | 1996

Using parallel program characteristics in dynamic processor allocation policies

Tim Brecht; Kaushik Guha


International Conference on Data Engineering | 2010

Q-Cop: Avoiding bad query mixes to minimize client timeouts under heavy loads

Sean Tozer; Tim Brecht; Ashraf Aboulnaga


Collaboration


Dive into Tim Brecht's collaborations.

Top Co-Authors:

Derek L. Eager (University of Saskatchewan)
Jim Summers (University of Waterloo)
Ali Abedi (University of Waterloo)