Geoffrey Brown
Indiana University Bloomington
Publications
Featured research published by Geoffrey Brown.
international symposium on computer architecture | 2000
Paolo Faraboschi; Geoffrey Brown; Joseph A. Fisher; Giuseppe Desoli; Fred Homewood
Lx is a scalable and customizable VLIW processor technology platform designed by Hewlett-Packard and STMicroelectronics that allows variations in instruction issue width, the number and capabilities of structures and the processor instruction set. For Lx we developed the architecture and software from the beginning to support both scalability (variable numbers of identical processing resources) and customizability (special purpose resources). In this paper we consider the following issues. When is customization or scaling beneficial? How can one determine the right degree of customization or scaling for a particular application domain? What architectural compromises were made in the Lx project to contain the complexity inherent in a customizable and scalable processor family? The experiments described in the paper show that specialization for an application domain is effective, yielding large gains in price/performance ratio. We also show how scaling machine resources scales performance, although not uniformly across all applications. Finally we show that customization on an application-by-application basis is today still very dangerous and much remains to be done for it to become a viable solution.
ACM Transactions on Programming Languages and Systems | 1993
Yehuda Afek; Geoffrey Brown; Michael Merritt
This paper examines cache consistency conditions for multiprocessor shared memory systems. It states and motivates a weaker condition than is normally implemented. An algorithm is presented that exploits the weaker condition to achieve greater concurrency. The algorithm is shown to satisfy the weak consistency condition. Other properties of the algorithm and possible extensions are discussed.
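The weaker condition motivates letting a processor observe other processors' writes later than a strictly consistent memory would allow. The sketch below is a hedged Python toy of that general idea, not the algorithm from the paper: writes are broadcast into per-processor queues in one global order, and each processor drains its queue lazily, so reads may briefly return stale values. The names Processor and LazyMemory and the single broadcast order are illustrative assumptions.

```python
# Illustrative toy (not the paper's algorithm): each processor keeps a local
# copy of memory plus a FIFO of updates it has not yet applied.  Writes are
# broadcast into every processor's queue in one global order; a processor may
# serve reads from its possibly stale copy and drain its queue lazily, so all
# processors still see the writes in the same order, just at different times.
from collections import deque

class Processor:
    def __init__(self, memory_size):
        self.copy = [0] * memory_size      # local (possibly stale) copy
        self.pending = deque()             # updates not yet applied locally

    def read(self, addr):
        return self.copy[addr]             # may lag behind the global order

    def apply_one(self):
        if self.pending:
            addr, value = self.pending.popleft()
            self.copy[addr] = value

class LazyMemory:
    def __init__(self, n_procs, memory_size=4):
        self.procs = [Processor(memory_size) for _ in range(n_procs)]

    def write(self, addr, value):
        # one global broadcast order; each processor applies it when it drains
        for p in self.procs:
            p.pending.append((addr, value))

if __name__ == "__main__":
    mem = LazyMemory(n_procs=2)
    p0, p1 = mem.procs
    mem.write(0, 42)
    p0.apply_one()                         # p0 has applied the write ...
    print(p0.read(0), p1.read(0))          # ... while p1 still reads the old 0
```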
IEEE Transactions on Computers | 1989
Geoffrey Brown; Mohamed G. Gouda; Chuan-lin Wu
Presents a novel class of mutual exclusion systems in which processes circulate one token, and each process enters its critical section when it receives the token. Each system in the class is self-stabilizing; i.e., if it starts at any state, possibly one in which many tokens exist in the system, it is guaranteed to converge to a good state in which exactly one token exists. The systems improve on previous systems in that their state transitions are noninterfering; i.e., if any state transition is enabled at any instant, it remains enabled until it is executed. This makes the systems easier to implement as delay-insensitive circuits.
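The abstract does not reproduce the systems themselves. As a reference point for self-stabilizing token circulation, the following Python toy implements Dijkstra's classic K-state token ring, a different and well-known system, started from an arbitrary state; it only illustrates convergence to a single circulating token, not the paper's noninterfering, delay-insensitive construction.

```python
# Dijkstra's classic K-state token ring, shown only as a well-known example of
# self-stabilizing token circulation; it is NOT the noninterfering,
# delay-insensitive class of systems presented in the paper.
import random

N, K = 5, 6                                   # K >= N ensures stabilization
x = [random.randrange(K) for _ in range(N)]   # arbitrary, possibly bad, start

def has_token(i):
    # process 0 is privileged when its value equals its predecessor's;
    # every other process is privileged when its value differs
    return x[0] == x[N - 1] if i == 0 else x[i] != x[i - 1]

def pass_token(i):
    # the privileged process "enters its critical section" and moves the token
    if i == 0:
        x[0] = (x[0] + 1) % K
    else:
        x[i] = x[i - 1]

if __name__ == "__main__":
    for _ in range(200):                      # at least one process is always enabled
        pass_token(random.choice([i for i in range(N) if has_token(i)]))
    print("tokens held:", sum(has_token(i) for i in range(N)))   # 1 after stabilization
```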
Distributed Computing | 1993
Yehuda Afek; Geoffrey Brown
A self-stabilizing system has the property that it will converge to a desirable state when started from any state. Most previous researchers assumed that processes in self-stabilizing systems may communicate through shared variables, while those who studied message-passing systems allowed messages of unbounded size. This paper discusses the development of self-stabilizing systems that communicate through message passing and in which messages may be lost in transit. The systems presented all use fixed-size message headers. First, a self-stabilizing version of the Alternating Bit Protocol, a fundamental communication protocol for transmitting data across an unreliable communication medium, is presented. Second, the Alternating Bit Protocol is used to construct a self-stabilizing token ring.
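For readers unfamiliar with the base protocol, here is a minimal Python simulation of a plain, non-self-stabilizing alternating bit protocol over a lossy channel. The paper's contribution, making such a protocol self-stabilizing while keeping fixed-size headers, is not captured by this sketch; the channel model and loss probability are assumptions.

```python
# A plain (non-self-stabilizing) alternating bit protocol over a lossy
# channel.  The loss model and the retransmit-every-round policy are
# simplifying assumptions made for this sketch.
import random

def lossy(msg, p_loss=0.3):
    """Deliver msg with probability 1 - p_loss, otherwise drop it."""
    return None if random.random() < p_loss else msg

def transmit(data):
    delivered = []
    send_bit, expect_bit = 0, 0               # sender's and receiver's bits
    for item in data:
        while True:                           # retransmit until acknowledged
            frame = lossy((send_bit, item))   # send <bit, payload>
            ack = None
            if frame is not None:
                fbit, payload = frame
                if fbit == expect_bit:        # new frame: deliver and flip
                    delivered.append(payload)
                    expect_bit ^= 1
                ack = lossy(fbit)             # receiver always acks the bit it saw
            if ack == send_bit:               # matching ack: advance the sender
                send_bit ^= 1
                break
    return delivered

if __name__ == "__main__":
    print("".join(transmit(list("hello"))))   # prints "hello" despite losses
```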
acm symposium on parallel algorithms and architectures | 1989
Yehuda Afek; Geoffrey Brown; Michael Merritt
This paper examines cache consistency conditions (safety conditions) for multiprocessor shared memory systems. It states and motivates a weaker condition than is normally required. An algorithm is presented that exploits the weaker condition to achieve greater concurrency. The paper concludes with a proof that the algorithm satisfies the safety condition.
IEEE Computer | 1993
Alan S. Wenban; John W. O'Leary; Geoffrey Brown
A codesign process using Promela, a concurrent programming language, is under development. A description is given of Promela, the software compiler, and the hardware compiler. As an example, the method is applied to a simple communication system using the alternating bit protocol.
tools and algorithms for construction and analysis of systems | 2006
Geoffrey Brown; Lee Pike
The Biphase Mark Protocol (BMP) and 8N1 Protocol are physical layer protocols for data transmission. We present a generic model in which timing and error values are parameterized by linear constraints, and then we use this model to verify these protocols. The verifications are carried out using SRI's SAL model checker, which combines a satisfiability modulo theories decision procedure with a bounded model checker for highly automated induction proofs of safety properties over infinite-state systems. Previously, parameterized formal verification of real-time systems required mechanical theorem proving or specialized real-time model checkers; we describe a compelling case study demonstrating a simpler and more general approach. The verification reveals a significant error in the parameter ranges for 8N1 given in a published application note [1].
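As a back-of-the-envelope illustration of the kind of timing parameter the paper verifies formally, and emphatically not the SAL model itself, the Python sketch below brute-forces the largest relative clock error under which a receiver that samples each of the ten 8N1 cells at the nominal midpoint of its own clock still samples inside the correct sender cell. Edge-detection latency and jitter are ignored, so the bound it prints (just over 5%) is only indicative.

```python
# Back-of-the-envelope 8N1 timing check, NOT the paper's SAL model: assume the
# receiver samples bit cell i at time (i + 0.5) of its own clock; with relative
# clock error rho the sample must still land inside sender cell i for all
# 10 cells (start + 8 data + stop).  Edge-detection latency and jitter are
# ignored, so the printed bound is only indicative.
def max_tolerable_error(cells=10, steps=100_000):
    for k in range(steps):
        rho = k / steps                       # sweep the relative clock error
        for sign in (+1, -1):                 # receiver clock fast or slow
            scale = 1 + sign * rho
            for i in range(cells):
                t = (i + 0.5) * scale         # sample time in sender bit units
                if not (i < t < i + 1):       # sample escaped cell i: failure
                    return rho
    return None

if __name__ == "__main__":
    print(f"tolerable relative clock error < {max_tolerable_error():.4f}")
    # roughly 0.5 / 9.5, i.e. just over 5%, under these simplifying assumptions
```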
convention of electrical and electronics engineers in israel | 1989
Yehuda Afek; Geoffrey Brown
It was recently shown that no data link protocol with finite sequence numbers and a deterministic finite-state transmitter and receiver can tolerate transmitter and/or receiver crashes and recoveries, unless they have non-volatile memory. Furthermore, it was proved that no self-stabilizing data link protocol exists in such a model. In this note we relax these assumptions and present two simple self-stabilizing data link protocols that use three sequence numbers and do not resort to non-volatile memory. The first is a randomized protocol (i.e., relaxing the deterministic transmitter and/or receiver assumption), and the second is a deterministic protocol that requires the transmitter to generate an infinite, aperiodic sequence of sequence numbers (i.e., relaxing the finite-state assumption).
IEEE Transactions on Very Large Scale Integration Systems | 2008
Sherif Yusuf; Wayne Luk; Morris Sloman; Naranker Dulay; Emil Lupu; Geoffrey Brown
This paper describes a reconfigurable architecture based on field-programmable gate-array (FPGA) technology for monitoring and analyzing network traffic at increasingly high network data rates. Our approach maps the performance-critical tasks of packet classification and flow monitoring into reconfigurable hardware, such that multiple flows can be processed in parallel. We explore the scalability of our system, showing that it can support flows at multi-gigabit rates; this is faster than most software-based solutions, whose acceptable data rates are typically no more than 100 million bits per second.
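The two performance-critical tasks named above, packet classification and per-flow accounting, can be sketched in software to make the data path concrete. The Python toy below is an illustrative model only: the rule format, the Rule and FlowTable names, and the first-match semantics are assumptions, and a sequential loop like this is precisely what the parallel FPGA pipeline is designed to outperform.

```python
# Software sketch of the two tasks the paper maps into FPGA hardware:
# classifying packets against a small rule set and updating per-flow counters.
# Illustrative model only; the rule format and names are assumptions.
from collections import namedtuple
from dataclasses import dataclass, field

Packet = namedtuple("Packet", "src dst proto sport dport length")

@dataclass
class Rule:
    proto: str = None                  # None acts as a wildcard
    dport: int = None
    action: str = "allow"

    def matches(self, pkt):
        return ((self.proto is None or pkt.proto == self.proto) and
                (self.dport is None or pkt.dport == self.dport))

@dataclass
class FlowTable:
    flows: dict = field(default_factory=dict)

    def update(self, pkt):
        key = (pkt.src, pkt.dst, pkt.proto, pkt.sport, pkt.dport)
        pkts, octets = self.flows.get(key, (0, 0))
        self.flows[key] = (pkts + 1, octets + pkt.length)

def classify(pkt, rules):
    for rule in rules:                 # first matching rule wins
        if rule.matches(pkt):
            return rule.action
    return "deny"

if __name__ == "__main__":
    rules = [Rule(proto="tcp", dport=80, action="monitor"), Rule()]
    table = FlowTable()
    pkt = Packet("10.0.0.1", "10.0.0.2", "tcp", 1234, 80, 1500)
    if classify(pkt, rules) == "monitor":
        table.update(pkt)              # count packets and bytes per 5-tuple
    print(table.flows)
```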
acm special interest group on data communication | 1989
Geoffrey Brown; Mohamed G. Gouda; Raymond E. Miller
We describe a new version of the window protocol in which message sequence numbers are taken from a finite domain and in which both message disorder and loss can be tolerated. Most existing window protocols achieve only one of these two goals. Our protocol is based on a new method of acknowledgement, called block acknowledgement, where each acknowledgement message carries two numbers m and n to acknowledge the reception of all data messages with sequence numbers ranging from m to n. Using this method of acknowledgement, the proposed protocol achieves both goals while maintaining the same data transmission capability as the traditional window protocol.
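The following Python toy illustrates the block acknowledgement idea in isolation, not the protocol from the paper: the receiver acknowledges a contiguous range with a single <m, n> message, and the sender releases every buffered message in that range. Sequence numbers are left unbounded here for simplicity, whereas the paper's protocol draws them from a finite domain; the class names and window handling are assumptions.

```python
# Toy illustration of block acknowledgement (not the paper's protocol): one
# <m, n> acknowledgement releases every buffered message with m <= seq <= n.
# Sequence numbers are unbounded here for simplicity; the paper's protocol
# works over a finite domain and also tolerates disorder and loss.
class BlockAckSender:
    def __init__(self, window=8):
        self.window = window
        self.next_seq = 0
        self.unacked = {}                        # seq -> payload awaiting ack

    def send(self, payload):
        if len(self.unacked) >= self.window:
            raise RuntimeError("window full")
        self.unacked[self.next_seq] = payload    # keep a copy for retransmit
        frame = (self.next_seq, payload)
        self.next_seq += 1
        return frame

    def on_block_ack(self, m, n):
        for seq in range(m, n + 1):              # release the whole range
            self.unacked.pop(seq, None)

class BlockAckReceiver:
    def __init__(self):
        self.received = set()

    def on_frame(self, seq, payload):
        self.received.add(seq)

    def make_block_ack(self):
        # acknowledge the longest contiguous prefix of received messages
        n = -1
        while n + 1 in self.received:
            n += 1
        return (0, n) if n >= 0 else None

if __name__ == "__main__":
    tx, rx = BlockAckSender(), BlockAckReceiver()
    for seq, payload in [tx.send(c) for c in "abc"]:
        rx.on_frame(seq, payload)
    tx.on_block_ack(*rx.make_block_ack())        # a single <0, 2> ack
    print(tx.unacked)                            # {} : all three released
```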