A. Udaya Shankar
University of Maryland, College Park
Publications
Featured research published by A. Udaya Shankar.
ACM Computing Surveys | 1993
A. Udaya Shankar
This is a tutorial introduction to assertional reasoning based on temporal logic. The objective is to provide a working familiarity with the technique. We use a simple system model and a simple proof system, and we keep to a minimum the treatment of issues such as soundness, completeness, compositionality, and abstraction. We model a concurrent system by a state transition system and fairness requirements. We reason about such systems using Hoare logic and a subset of linear-time temporal logic, specifically, invariant assertions and leads-to assertions. We apply the method to several examples.
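To make the technique concrete, here is a minimal sketch (not from the tutorial itself) of a state transition system whose invariant assertion is checked by exhaustive reachability; the token-passing example and all names are illustrative, and leads-to (progress) assertions, which additionally require fairness, are not shown.

```python
# Minimal sketch: a state transition system plus an invariant assertion,
# checked by breadth-first search over the reachable states.
from collections import deque

# A state is (holder, count): which process holds the token, how many passes so far.
INITIAL = {("A", 0)}

def transitions(state):
    holder, count = state
    # Pass the token to the other process; cap the count so the state space is finite.
    if count < 5:
        yield ("B" if holder == "A" else "A", count + 1)

def invariant(state):
    holder, _ = state
    return holder in ("A", "B")        # invariant assertion: the token is never lost

def check_invariant(initial, step, inv):
    """Return a reachable state violating inv, or None if the invariant holds."""
    seen, frontier = set(initial), deque(initial)
    while frontier:
        s = frontier.popleft()
        if not inv(s):
            return s
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

if __name__ == "__main__":
    print("violation:", check_invariant(INITIAL, transitions, invariant))
```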
International Conference on Network Protocols | 1999
Catalin T. Popescu; A. Udaya Shankar
We characterize a TCP implementation by a function, called a profile, that expresses the instantaneous throughput at the source in terms of the instantaneous round-trip time and instantaneous loss rate for bulk transfers. We empirically obtain profiles of several TCP implementations, accurately enough to distinguish not only the TCP version but also the implementation (BSD, Windows, etc.). Profiles have several uses: comparing different TCP implementations, diagnosing a TCP implementation, quantifying TCP-friendly flows, etc. We devise a method that uses profiles to compute the time-evolution of instantaneous performance metrics (throughput, queue size, loss rate, etc.) of TCP networks. Comparison against ns simulations shows the method to be accurate and fast.
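As an illustration of what a profile looks like, the sketch below uses the well-known square-root throughput approximation as a stand-in for an empirically measured profile; the constants and function names are illustrative, not the paper's.

```python
# Illustrative sketch: a "profile" maps instantaneous round-trip time and loss
# rate to instantaneous source throughput. The standard square-root
# approximation stands in here for a measured, implementation-specific profile.
import math

MSS = 1460  # assumed segment size in bytes

def example_profile(rtt_s: float, loss_rate: float) -> float:
    """Return throughput in bytes/second for a given RTT (seconds) and loss rate."""
    if loss_rate <= 0.0:
        return float("inf")            # no loss: throughput is limited elsewhere
    return (MSS / rtt_s) * math.sqrt(1.5 / loss_rate)

# Usage: compare operating points, e.g. 50 ms RTT at 1% loss vs 0.1% loss.
print(example_profile(0.05, 0.01), example_profile(0.05, 0.001))
```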
Technical Symposium on Computer Science Education | 2005
Tamer Elsharnouby; A. Udaya Shankar
Networking course projects are usually described by an informal specification and a collection of test cases. Students often misunderstand the specification or oversimplify it to fit just the test cases. Using formal methods eliminates these misunderstandings and allows the students to test their projects thoroughly, but at the expense of learning a new language. SeSF (Services and Systems Framework) is one way to overcome this obstacle. In SeSF, both implementations and services are defined by programs in conventional languages, thereby eliminating the need to teach the students a new language. SeSF is a markup language that can be integrated with any conventional language. The integration of SeSF and Java is called SeSFJava. SeSFJava provides a technique to mechanically test whether student projects conform to their corresponding specifications, thereby providing the instructors with a technique for semi-automated grading. We present a four-phase transport protocol project, and describe how SeSFJava is used in specifying, testing and grading the different phases of this project. The use of SeSF significantly (1) increased the percentage of students who completed the projects, (2) reduced their email queries about the specification, and (3) reduced the grading time.
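The conformance-testing idea can be sketched in a few lines; the following toy harness is not SeSFJava's actual API, only an illustration of checking an implementation's event trace against a service defined as a program.

```python
# Toy sketch of conformance testing: a service program says which interface
# events are currently allowed; a harness replays an implementation's trace.
class AlternatingService:
    """Illustrative service: send(k) and deliver(k) must strictly alternate."""
    def __init__(self):
        self.in_flight = None

    def allowed(self, event):
        kind, seq = event
        if kind == "send":
            return self.in_flight is None
        if kind == "deliver":
            return self.in_flight == seq
        return False

    def advance(self, event):
        kind, seq = event
        self.in_flight = seq if kind == "send" else None

def conforms(trace, service):
    """Check every event of an implementation trace against the service."""
    for event in trace:
        if not service.allowed(event):
            return False, event
        service.advance(event)
    return True, None

print(conforms([("send", 0), ("deliver", 0), ("send", 1)], AlternatingService()))
```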
Archive | 2013
A. Udaya Shankar
This chapter presents several message-passing services, or channels for short. A channel allows a user at one location to transmit a message to be received by a user at another location. Because a channel is a service that is spread over different locations, users access the service via multiple systems, one at each location (unlike the previous lock service and bounded-buffer service). Specifically, a channel has a set of addresses, each identifying a location. (MAC addresses, IP addresses, and URLs are examples of addresses.) At each address there is a system within the channel, referred to as an access system, with which users interact. Figure 4.1 illustrates a channel where each access system provides functions tx(k,msg), to transmit message msg to address k, and rx(), to receive a message. Messages are sequences. We require channels to have at least one address. Although a channel with one address doesn’t do anything, it can be convenient for writing programs that use the channel.
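A minimal in-memory sketch of this interface (not the book's SeSF programs) might look as follows; the class names are illustrative, and a shared queue per destination address stands in for the underlying network.

```python
# Sketch of a channel with one access system per address, offering tx(k, msg)
# and rx() as described above.
from collections import deque

class Channel:
    def __init__(self, addresses):
        assert addresses, "a channel has at least one address"
        self._queues = {k: deque() for k in addresses}
        self.access = {k: AccessSystem(self, k) for k in addresses}

class AccessSystem:
    """The system inside the channel at one address, used by local users."""
    def __init__(self, channel, address):
        self._channel = channel
        self.address = address

    def tx(self, k, msg):
        """Transmit message msg (a sequence) to address k."""
        self._channel._queues[k].append(msg)

    def rx(self):
        """Receive a message addressed here, or None if none is available."""
        q = self._channel._queues[self.address]
        return q.popleft() if q else None

# Usage: two addresses, one message.
ch = Channel({"a", "b"})
ch.access["a"].tx("b", [1, 2, 3])
print(ch.access["b"].rx())   # -> [1, 2, 3]
```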
Archive | 2013
A. Udaya Shankar
This chapter presents a “multi-copy” version of the single-copy distributed shared memory implementation given in Chap. 18. The multi-copy version maintains, for each page, one write copy and zero or more read-only copies. All the copies have the same value. Each copy resides at a different component system. The write copy is accompanied by a so-called copyset, which is the set of addresses of component systems that have read-only copies. A component system can read from the write copy or a read-only copy. It can write only to the write copy, and only when there are no read-only copies anywhere. When a component system attempts to read or write a page that is not locally present, it acquires the write copy of the page, leaving a read copy at the previous location of the write copy. The component system can then read from the page. To write to the page, the component system informs all component systems in the page’s copyset to “invalidate” (i.e., delete) their copies, after which it can update its (write) copy [1].
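The bookkeeping for one page can be sketched as follows; this is illustrative only, with the message passing elided into direct method calls, not the chapter's program.

```python
# Sketch of multi-copy bookkeeping for a single page: one write copy, a
# copyset of read-only holders, and invalidation before a write.
class MultiCopyPage:
    def __init__(self, owner, value=0):
        self.owner = owner          # system holding the write copy
        self.value = value
        self.copyset = set()        # addresses of systems with read-only copies

    def _acquire_write_copy(self, system):
        """Move the write copy to `system`, leaving a read copy at the old owner."""
        if system != self.owner:
            self.copyset.add(self.owner)
            self.copyset.discard(system)
            self.owner = system

    def read(self, system):
        if system != self.owner and system not in self.copyset:
            self._acquire_write_copy(system)
        return self.value

    def write(self, system, value):
        self._acquire_write_copy(system)
        self.copyset.clear()        # "invalidate": delete all read-only copies
        self.value = value

page = MultiCopyPage(owner="s1")
page.read("s2")                     # s2 acquires the page; s1 keeps a read copy
page.write("s3", 42)                # write copy moves to s3, other copies invalidated
print(page.owner, page.copyset, page.value)
```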
Archive | 2013
A. Udaya Shankar
This chapter presents a distributed system program that implements the sequentially-consistent distributed shared memory service (in Chap. 17). It employs a straightforward algorithm. Initially, the shared memory pages are arbitrarily distributed among the component systems of the implementation. When a user at a component system attempts to access a page that is not locally present, the page is moved to the component system, after which the access proceeds as usual. We refer to this as a “single-copy” implementation because at any time there is exactly one copy of each page in the distributed system. The component systems use the object-transfer service (Chap. 15) to move pages. (Any implementation of the object-transfer service can be used, e.g., the one in Chap. 16.)
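A minimal sketch of the single-copy rule, with the object-transfer service elided into a dictionary update (names are illustrative, not the chapter's program):

```python
# Sketch: exactly one copy of each page exists; an access at a component
# system first moves the page there, then proceeds as usual.
class SingleCopyDSM:
    def __init__(self, placement):
        # placement: page id -> system currently holding the page's only copy
        self.location = dict(placement)
        self.memory = {p: 0 for p in placement}

    def _ensure_local(self, system, page):
        if self.location[page] != system:
            self.location[page] = system     # stands in for the object-transfer service

    def read(self, system, page):
        self._ensure_local(system, page)
        return self.memory[page]

    def write(self, system, page, value):
        self._ensure_local(system, page)
        self.memory[page] = value

dsm = SingleCopyDSM({0: "s1", 1: "s2"})
dsm.write("s2", 0, 7)          # page 0 moves from s1 to s2, then is written
print(dsm.read("s1", 0))       # page 0 moves back to s1; prints 7
```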
Archive | 2013
A. Udaya Shankar
This chapter presents a distributed program that implements the object-transfer service in Chap. 15 over the addresses of a fifo channel. The component systems of the program employ a distributed “path-reversal” algorithm. Each system maintains for each object a “last” pointer that is either nil or points to another system. When no request is ongoing, the path of last pointers leading out of any system ends at the system holding the object, i.e., the last pointers form a distributed in-tree. To acquire the object, a system j sends a request that gets “forwarded” along the last pointer path leading out of j; at each hop, the system receiving j’s request sets its last pointer to j. Evolutions in which at most one request is ongoing at any time are simple to characterize: each request induces a “path reversal” in the in-tree of last pointers (and its amortized cost is logarithmic in the number of systems). But evolutions in which multiple requests are ongoing at the same time are rather complex. We prove that the program implements the service. We also prove a “serializability” property: every evolution is equivalent to an evolution in which at most one request is ongoing at any time. Hence the amortized cost remains logarithmic.
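For the simple case of a single ongoing request, the path-reversal step can be sketched as follows; this is an illustrative, centralized rendering, not the chapter's distributed program.

```python
# Sketch of "last pointer" path reversal: the pointers form an in-tree rooted
# at the holder; a request is forwarded along the path, and every system on
# the path redirects its last pointer to the requester.
class PathReversal:
    def __init__(self, systems, holder):
        self.holder = holder
        # last[j] is None at the holder, else points toward the holder.
        self.last = {j: (None if j == holder else holder) for j in systems}

    def acquire(self, j):
        """System j requests the object; returns the systems the request visited."""
        path = []
        cur = self.last[j]
        self.last[j] = None          # j will become the new root and holder
        while cur is not None:       # forward the request along last pointers
            path.append(cur)
            nxt = self.last[cur]
            self.last[cur] = j       # path reversal: point at the requester
            cur = nxt
        self.holder = j              # the old holder hands over the object
        return path

t = PathReversal(["a", "b", "c", "d"], holder="a")
print(t.acquire("d"))    # request forwarded to a; d becomes the holder
print(t.last)            # a -> d, b -> a, c -> a, d -> None
```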
Archive | 2013
A. Udaya Shankar
This chapter presents a distributed program that implements the distributed lock service in Chap. 11 over a fifo channel. It first solves a “distributed request scheduling” problem using Lamport’s timestamp mechanism, and then refines the solution to the distributed lock implementation. The request scheduling problem is as follows: potentially-conflicting requests arrive at different systems; they have to be served so that conflicting requests are not served simultaneously and every request is eventually served. Given a fifo channel connecting the systems, the timestamp mechanism provides a solution to this problem. The solution is easily refined to implement the distributed lock service, because the latter is a special case of the distributed request scheduling problem. The implementation is then further refined to use cyclic timestamps.
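The timestamp mechanism at the heart of the solution can be sketched as follows (illustrative names, not the chapter's program): each system keeps a logical clock, tags its requests with (clock, system id), and all systems serve requests in that total order, which respects causality.

```python
# Sketch of Lamport's logical clocks and the total order they induce on requests.
class LamportClock:
    def __init__(self, system_id):
        self.system_id = system_id
        self.clock = 0

    def tick(self):
        """Local event, e.g. issuing a request; returns the request's timestamp."""
        self.clock += 1
        return (self.clock, self.system_id)

    def receive(self, msg_timestamp):
        """Merge the clock carried on an incoming message."""
        self.clock = max(self.clock, msg_timestamp[0]) + 1

# Requests from different systems are served in (clock, id) order:
a, b = LamportClock("a"), LamportClock("b")
r1 = a.tick()
b.receive(r1)            # b hears about a's request over the fifo channel
r2 = b.tick()
print(sorted([r2, r1]))  # r1 precedes r2 at every system
```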
Archive | 2013
A. Udaya Shankar
This chapter presents a distributed lock service, that is, a lock whose users may be spread over different locations (e.g., users at different computers of a network). At each location there is an access system through which users access the lock. (In contrast, SimpleLockService (in Chap. 2) is centralized because all users access the service at one system.)
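An interface sketch of such a service (the names are illustrative, not the book's) might look as follows, with a single process-level lock standing in for the distributed implementation behind the access systems.

```python
# Sketch: a lock service with one access system per location; users call
# acq() and rel() locally, and the service serializes the grants.
import threading

class DistributedLockService:
    def __init__(self, locations):
        self._lock = threading.Lock()   # centralized stand-in for the service internals
        self.access = {loc: LockAccess(self._lock) for loc in locations}

class LockAccess:
    """Access system at one location; all users there interact with it."""
    def __init__(self, lock):
        self._lock = lock

    def acq(self):
        self._lock.acquire()    # blocks until a user at this location holds the lock

    def rel(self):
        self._lock.release()

svc = DistributedLockService(["site1", "site2"])
svc.access["site1"].acq()
svc.access["site1"].rel()
```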
Archive | 2013
A. Udaya Shankar
The component systems of a distributed system can interact by message passing or by shared memory. In the former, their programs have send and receive calls. In the latter, their programs read and write shared memory locations. Component systems that interact over a channel typically interact by message passing because that is the natural way to use the channel. Distributed shared memory provides an alternative [6, 8]. Here, a distributed system uses message passing to implement a shared memory accessible to component systems on different computers (perhaps on the same chip). The memory address space is divided into pages, and the pages are allocated among the component systems. When a component system attempts to access a page that is not locally present, the distributed shared memory implementation brings the page to the component system.
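A small sketch of the paging arithmetic described above (the page size is illustrative): an address is split into a page number, which determines which component system currently holds the page, and an offset within the page.

```python
# Sketch: splitting a byte address into (page number, offset within page).
PAGE_SIZE = 4096  # assumed page size in bytes

def page_of(addr):
    """Return (page number, offset) for a byte address."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

print(page_of(5000))   # -> (1, 904): address 5000 lies on page 1
```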