
Publication


Featured research published by Faye A. Briggs.


International Symposium on Computer Architecture | 1986

Memory access buffering in multiprocessors

Michel Dubois; Christoph Scheurich; Faye A. Briggs

In highly-pipelined machines, instructions and data are prefetched and buffered in both the processor and the cache. This is done to reduce the average memory access latency and to take advantage of memory interleaving. Lock-up free caches are designed to avoid processor blocking on a cache miss. Write buffers are often included in a pipelined machine to avoid processor waiting on writes. In a shared memory multiprocessor, there are more advantages in buffering memory requests, since each memory access has to traverse the memory-processor interconnection and has to compete with memory requests issued by different processors. Buffering, however, can cause logical problems in multiprocessors. These problems are aggravated if each processor has a private memory in which shared writable data may be present, such as in a cache-based system or in a system with a distributed global memory. In this paper, we analyze the benefits and problems associated with the buffering of memory requests in shared memory multiprocessors. We show that the logical problem of buffering is directly related to the problem of synchronization. A simple model is presented to evaluate the performance improvement resulting from buffering.
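
The interaction between buffering and synchronization described above can be made concrete with a short C11 sketch (an illustration, not code from the paper): unless the two stores are explicitly ordered, a write sitting in a buffer may become visible to another processor after the write to the synchronization flag that is supposed to publish it.

    /* Minimal C11 sketch (not from the paper): why buffered writes interact
     * with synchronization.  If the store to `data` may sit in a write
     * buffer, the consumer could observe flag == 1 before the new value of
     * data unless the producer orders the two stores (here with
     * release/acquire semantics). */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static int data;                /* shared datum, written before the flag */
    static atomic_int flag;         /* synchronization variable              */

    static void *producer(void *arg) {
        (void)arg;
        data = 42;                                             /* may be buffered          */
        atomic_store_explicit(&flag, 1, memory_order_release); /* orders the store above   */
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
            ;                                                  /* spin until flag is set   */
        printf("data = %d\n", data);                           /* guaranteed to print 42   */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

On a machine with strongly ordered memory the release/acquire annotations cost nothing; on a buffered, weakly ordered machine they are what forces the buffered store to drain before the flag becomes visible.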


IEEE Computer | 1988

Synchronization, coherence, and event ordering in multiprocessors

Michel Dubois; Christoph Scheurich; Faye A. Briggs

The problems addressed apply to both throughput-oriented and speedup-oriented multiprocessor systems, either at the user level or the operating-system level. Basic definitions are provided. Communication and synchronization are briefly explained, and hardware-level and software-level synchronization mechanisms are discussed. The cache coherence problem is examined, and solutions are described. Strong and weak ordering of events is considered. The user interface is discussed.
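
As a small companion example, here is a hardware-level synchronization mechanism of the kind the article surveys, written as a C11 test-and-set spin lock (a generic sketch, not taken from the article; sequentially consistent atomics are used for simplicity).

    /* Test-and-set spin lock built on an atomic read-modify-write
     * operation, protecting a shared counter. */
    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    static void acquire(void) {
        /* spin until the flag was previously clear, i.e. we set it first */
        while (atomic_flag_test_and_set(&lock))
            ;
    }

    static void release(void) {
        atomic_flag_clear(&lock);
    }

    static long shared_counter;

    void increment_shared(void) {
        acquire();            /* mutual exclusion protects the shared counter */
        shared_counter++;
        release();
    }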


IEEE Transactions on Software Engineering | 1982

Performance of Synchronized Iterative Processes in Multiprocessor Systems

Michel Dubois; Faye A. Briggs

A general methodology for studying the degree of matching between an architecture and an algorithm is introduced and applied to the case of synchronized iterative algorithms in MIMD machines.
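
The class of algorithms studied can be pictured with a minimal pthreads skeleton (a generic illustration, not the paper's methodology or example): each process updates its own slice of the iterate and then waits at a barrier, so every iteration advances at the pace of the slowest process, which is the kind of mismatch between architecture and algorithm the methodology is meant to quantify.

    /* Skeleton of a synchronized iterative computation on an MIMD machine.
     * The update rule and the sizes are placeholders. */
    #include <pthread.h>

    #define NPROC 4
    #define NITER 100
    #define N     1024

    static double x[N], x_new[N];
    static pthread_barrier_t barrier;

    static void *worker(void *arg) {
        int id = (int)(long)arg;
        int lo = id * (N / NPROC);
        int hi = lo + (N / NPROC);

        for (int it = 0; it < NITER; it++) {
            for (int i = lo; i < hi; i++)        /* local update of this slice */
                x_new[i] = 0.5 * x[i];           /* placeholder update rule    */
            pthread_barrier_wait(&barrier);      /* wait for all slices        */
            for (int i = lo; i < hi; i++)
                x[i] = x_new[i];                 /* publish for next iteration */
            pthread_barrier_wait(&barrier);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[NPROC];
        pthread_barrier_init(&barrier, NULL, NPROC);
        for (long i = 0; i < NPROC; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NPROC; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }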


Measurement and Modeling of Computer Systems | 1981

Performance of cache-based multiprocessors

Faye A. Briggs; Michel Dubois

A possible design alternative to improve the performance of a multiprocessor system is to insert a private cache between each processor and the shared memory. The caches act as high-speed buffers that reduce the memory access time and affect the delays caused by memory conflicts. In this paper, we study the performance of a multiprocessor system with caches. The shared memory is pipelined and interleaved to improve the block transfer rate, and is assumed to have an L-M organization, previously studied under random word access. An approximate model is developed to estimate the processor utilization and the speedup improvement provided by the caches. These two parameters are essential to a cost-effective design. An example of a design is treated to illustrate the usefulness of this investigation.
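
For orientation only, here is a back-of-envelope calculation in C of the kind of quantity the paper models. This is a generic miss-penalty estimate with assumed parameter values, not the paper's approximate model, and it ignores the memory conflicts the paper accounts for.

    /* Crude estimate of processor utilization when every cache miss stalls
     * the processor for a fixed block-transfer time.  All values are
     * assumptions for illustration. */
    #include <stdio.h>

    int main(void) {
        double miss_ratio    = 0.05;   /* fraction of references that miss          */
        double refs_per_inst = 1.3;    /* memory references per instruction          */
        double t_exec        = 1.0;    /* cycles per instruction with ideal memory   */
        double t_block       = 12.0;   /* cycles to fetch a block from the pipelined,
                                          interleaved shared memory                  */

        /* average stall cycles added per instruction by cache misses */
        double stall = refs_per_inst * miss_ratio * t_block;

        /* fraction of time the processor does useful work */
        double utilization = t_exec / (t_exec + stall);

        printf("estimated processor utilization: %.2f\n", utilization);
        return 0;
    }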


International Symposium on Computer Architecture | 1978

Performance of memory configurations for parallel-pipelined computers

Faye A. Briggs

The performance of various memory configurations for parallel-pipelined computers which execute multiple instruction streams on multiple data streams is investigated. For a parallel-pipelined processor of order (s, p), which consists of p parallel processors, each of which is a pipelined processor with s degrees of multiprogramming, there can be up to s·p memory requests in each instruction cycle. The memory, which consists of N (= 2^n) identical memory modules, is organized such that there are l (= 2^i) lines and m (= 2^(n-i)) modules per line, where each module is characterized by an address cycle (address hold time) of a time units and a memory cycle of c time units. The performance, which is affected by memory interference, is evaluated as a function of the memory configuration (l, m), the module characteristics (a, c), and the processor order (s, p). Design considerations are discussed and an example is given to illustrate possible design options.
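
One plausible way to picture the (l, m) organization is the address mapping sketched below in C (an assumption for illustration; the paper's exact mapping may differ): low-order address bits select the line and the next bits select the module within the line.

    /* Address-to-(line, module) mapping for N = l * m modules with
     * low-order interleaving across lines. */
    #include <stdio.h>

    #define LOG_L 2                  /* l = 2^LOG_L lines              */
    #define LOG_N 4                  /* N = 2^LOG_N modules in total   */

    int main(void) {
        unsigned l = 1u << LOG_L;            /* number of lines   */
        unsigned m = 1u << (LOG_N - LOG_L);  /* modules per line  */

        for (unsigned addr = 0; addr < 16; addr++) {
            unsigned line   = addr & (l - 1);            /* low bits pick the line       */
            unsigned module = (addr >> LOG_L) & (m - 1); /* next bits pick module in line */
            printf("addr %2u -> line %u, module %u\n", addr, line, module);
        }
        return 0;
    }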


National Computer Conference | 1981

Engineering computer network (ECN): a hardwired network of UNIX computer systems

Kai Hwang; Benjamin W. Wah; Faye A. Briggs

This paper reports the design and operational experiences of a packet-switched local computer network developed at Purdue University. Hardwired communication links (1 megabaud) are used to interconnect seven UNIX computer systems (two PDP-11/70s, one VAX-11/780, and four PDP-11/45s). Over 20 microprocessors and 210 timesharing CRT terminals are connected to the seven hosts. Instead of using the UUCP protocols for dial-up UNIX networks, several protocol programs are developed locally to make hardwired UNIX networking possible. The network provides the capabilities of virtual terminal access, remote process execution, file transfer, load balancing, and user-programmed network I/O. Only at the lowest protocol level is the DDCMP of DECNET used. The network is expandable and provides appreciable bandwidth with moderate cost and low system overhead. Described in this paper are the network architecture, system components, protocol hierarchy, local UNIX extension, load-balancing methods, and performance evaluation of the Purdue ECN network.


Measurement and Modeling of Computer Systems | 1979

Effects of buffered memory requests in multiprocessor systems

Faye A. Briggs

A simulation model is developed and used to study the effect of buffering memory requests on the performance of multiprocessor systems. A multiprocessor system is generalized as a parallel-pipelined processor of order (s, p), which consists of p parallel processors, each of which is a pipelined processor with s degrees of multiprogramming, so there can be up to s·p memory requests in each instruction cycle. The memory, which consists of N (= 2^n) identical memory modules, is organized such that there are l (= 2^i) lines and m (= 2^(n-i)) modules per line, where each module is characterized by an address cycle (address hold time) of a time units and a memory cycle of c time units. Too large an l is undesirable in a multiprocessor system because of the cost of the processor-memory interconnection network. Hence, we show how buffering can be used to reduce the system cost while maintaining a high level of performance.
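
A toy Monte Carlo in C (an illustration only, not the paper's simulation model) shows why buffering matters when l is kept small: of the s·p requests issued in a cycle, only those falling on distinct lines can be accepted immediately, and the shortfall must either stall processors or be absorbed by buffers.

    /* Estimate how many of s*p uniformly random requests per cycle land on
     * distinct memory lines.  All parameter values are assumptions. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int s = 4, p = 4;          /* processor order (s, p): s*p requests */
        const int l = 8;                 /* number of memory lines               */
        const int trials = 100000;

        double accepted = 0.0;
        srand(1);

        for (int t = 0; t < trials; t++) {
            int hit[64] = {0};           /* marks lines referenced this cycle    */
            int distinct = 0;
            for (int r = 0; r < s * p; r++) {
                int line = rand() % l;   /* assume independent uniform requests  */
                if (!hit[line]) { hit[line] = 1; distinct++; }
            }
            accepted += distinct;
        }
        printf("requests issued per cycle : %d\n", s * p);
        printf("accepted without buffering: %.2f (average)\n", accepted / trials);
        return 0;
    }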


IEEE Transactions on Computers | 1991

The run-time efficiency of parallel asynchronous algorithms

Michel Dubois; Faye A. Briggs

The problem studied is similar to the problems found in multiprocessor operating systems. The lockout problem in multiprocessor operating systems is a direct result of multiple processors attempting to process common data structures asynchronously. There are numerous such shared data structures. The models developed are applicable to the study of contention for software and hardware resources in multiprocessor operating systems. The authors introduce an approximate analytical model to evaluate the performance of asynchronous processes found in asynchronous algorithms, including the combined effects of software lockout on critical sections and on job queues, and of shared-memory access conflicts. Because of the strong similarities between the two effects, the same model can be used for both, leading to a uniform and elegant formulation. The models are combined to find the run-time efficiency of asynchronous iterations.
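
For readers unfamiliar with software lockout, a crude finite-source ("machine repairman") queueing calculation in C gives a feel for the effect. This is a standard textbook approximation with assumed parameter values, not the authors' analytical model.

    /* N processes alternate exponential useful work (request rate lambda)
     * and an exponential critical section (service rate mu); the lock
     * serves one process at a time.  Steady-state probabilities of the
     * finite-source M/M/1 queue give the expected lockout. */
    #include <stdio.h>

    int main(void) {
        const int    N      = 8;     /* number of processes                 */
        const double lambda = 0.05;  /* critical-section request rate       */
        const double mu     = 1.0;   /* critical-section service rate       */
        const double r      = lambda / mu;

        /* p[k]: probability that k processes are at the lock */
        double p[64], norm = 0.0, term = 1.0;
        for (int k = 0; k <= N; k++) {
            p[k] = term;             /* term = N!/(N-k)! * r^k, unnormalized */
            norm += term;
            term *= (N - k) * r;
        }

        double at_lock = 0.0;
        for (int k = 0; k <= N; k++) {
            p[k] /= norm;
            at_lock += k * p[k];
        }

        double busy    = 1.0 - p[0];     /* lock utilization                */
        double waiting = at_lock - busy; /* blocked but not being served    */

        printf("average processes at the lock   : %.3f\n", at_lock);
        printf("fraction of processes locked out: %.3f\n", waiting / N);
        return 0;
    }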


Archive | 1990

Virtual-Address Caches in Multiprocessors

Michel Cekleov; Michel Dubois; Jin-Chin Wang; Faye A. Briggs

Most general-purpose computers support virtual memory. Generally, the cache associated with each processor is accessed with a physical address obtained after translation of the virtual address in a Translation Lookaside Buffer (TLB). Since today’s uniprocessors are very fast, it becomes increasingly difficult to include the TLB in the cache access path and still avoid wait states in the processor. The alternative is to access the cache with virtual addresses and to access the TLB on misses only. This configuration reduces the average memory access time, but it is a source of consistency problems which must be solved in hardware or software. The basic causes of these problems are the demapping and remapping of virtual addresses, the presence of synonyms, and the maintenance of protection and statistical bits. Some of these problems are addressed in this paper and solutions are compared.
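
The synonym problem mentioned above can be shown numerically with a small C sketch (the cache geometry and addresses are assumptions for illustration, not taken from the paper): when the cache index extends above the page-offset bits, two virtual addresses backed by the same physical page can select different cache sets, so the same datum can reside in the cache twice and a write through one synonym is not seen through the other without extra hardware or software support.

    /* Set-index computation for a virtually indexed cache whose index bits
     * overlap the virtual page number. */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_BITS  12            /* 4 KiB pages                          */
    #define LINE_BITS   5            /* 32-byte cache lines                  */
    #define INDEX_BITS  9            /* 512 sets: index uses bits 5..13,     */
                                     /* i.e. two bits above the page offset  */

    static unsigned set_index(uint32_t va) {
        return (va >> LINE_BITS) & ((1u << INDEX_BITS) - 1);
    }

    int main(void) {
        /* two hypothetical virtual pages assumed to map to the same physical page */
        uint32_t va1    = 0x00010000;   /* virtual page 0x10                       */
        uint32_t va2    = 0x00025000;   /* virtual page 0x25, same physical page   */
        uint32_t offset = 0x1A4;        /* same offset within the page             */

        printf("index bits %d..%d, page offset bits 0..%d\n",
               LINE_BITS, LINE_BITS + INDEX_BITS - 1, PAGE_BITS - 1);
        printf("synonym 1: va=0x%08x -> set %u\n", va1 + offset, set_index(va1 + offset));
        printf("synonym 2: va=0x%08x -> set %u\n", va2 + offset, set_index(va2 + offset));
        return 0;
    }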


International Conference on Parallel Processing | 1986

Trace-Driven Simulations of Parallel and Distributed Algorithms in Multiprocessors

Michel Dubois; Faye A. Briggs; Indira Patil; Meera Balakrishnan

Collaboration


Dive into Faye A. Briggs's collaborations.

Top Co-Authors

Michel Dubois, University of Southern California
Christoph Scheurich, University of Southern California
Jin-Chin Wang, University of Southern California
Kai Hwang, University of Southern California