Theodore Johnson
University of Florida
Publications
Featured research published by Theodore Johnson.
IEEE Transactions on Computers | 1994
Sundeep Prakash; Yann-Hang Lee; Theodore Johnson
Nonblocking algorithms for concurrent objects guarantee that an object is always accessible, in contrast to blocking algorithms in which a slow or halted process can render part or all of the data structure inaccessible to other processes. A number of algorithms have been proposed for shared FIFO queues, but nonblocking implementations are few and either limit the concurrency or provide inefficient solutions. The authors present a simple and efficient nonblocking shared FIFO queue algorithm with O(n) system latency, no additional memory requirements, and enqueuing and dequeuing times independent of the size of the queue. They use the compare & swap operation as the basic synchronization primitive. They model their algorithm analytically and with a simulation, and compare its performance with that of a blocking FIFO queue. They find that the nonblocking queue has better performance if processors are occasionally slow, but worse performance if some processors are always slower than others.
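The abstract stops short of pseudocode, so here is a minimal Java sketch of the general technique it describes: a FIFO queue whose enqueue and dequeue are built from compare&swap retries instead of locks. The code follows the widely known Michael-Scott formulation and only illustrates the nonblocking style; it is not the authors' algorithm, and all names are ours.

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal lock-free FIFO queue sketch using compare-and-set (CAS).
public class LockFreeQueue<T> {
    private static final class Node<T> {
        final T value;
        final AtomicReference<Node<T>> next = new AtomicReference<>(null);
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head;
    private final AtomicReference<Node<T>> tail;

    public LockFreeQueue() {
        Node<T> dummy = new Node<>(null);          // sentinel node
        head = new AtomicReference<>(dummy);
        tail = new AtomicReference<>(dummy);
    }

    public void enqueue(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> last = tail.get();
            Node<T> next = last.next.get();
            if (next == null) {
                // Tail really is last: try to link the new node.
                if (last.next.compareAndSet(null, node)) {
                    tail.compareAndSet(last, node); // swing tail; may fail harmlessly
                    return;
                }
            } else {
                // Tail is lagging: help advance it, then retry.
                tail.compareAndSet(last, next);
            }
        }
    }

    public T dequeue() {
        while (true) {
            Node<T> first = head.get();
            Node<T> last = tail.get();
            Node<T> next = first.next.get();
            if (first == last) {
                if (next == null) return null;      // queue is empty
                tail.compareAndSet(last, next);     // help a lagging tail
            } else if (head.compareAndSet(first, next)) {
                return next.value;                  // old sentinel is retired
            }
        }
    }
}
```

Because every update is a single successful CAS, a thread that stalls between steps delays only itself; other threads either succeed or help finish the half-completed operation, which is exactly the accessibility guarantee the abstract contrasts with blocking queues.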
ACM Transactions on Database Systems | 1993
Theodore Johnson; Dennis Shasha
Many concurrent B-tree algorithms have been proposed, but their performance has not yet been analyzed satisfactorily. When transaction processing systems require high levels of concurrency, a restrictive serialization technique on the B-tree index can cause a bottleneck. In this paper we present a framework for constructing analytical performance models of concurrent B-tree algorithms. The models can predict the response time and maximum throughput. We analyze a variety of locking algorithms including naive lock-coupling, optimistic descent, two-phase locking, and the Lehman-Yao algorithm. The analyses are validated by simulations of the algorithms on actual B-trees, as well as by simulations done by other researchers.
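As a point of reference for the most restrictive descent protocol analyzed, the sketch below shows naive lock-coupling (hand-over-hand locking), reduced to a plain binary search tree for brevity; the structure and names are ours, not the paper's model.

```java
import java.util.concurrent.locks.ReentrantLock;

// Naive lock-coupling (hand-over-hand) descent, sketched on a binary search
// tree instead of a B-tree. The serialization the paper identifies as a
// bottleneck is visible here: every operation must pass through the root's
// lock before fanning out.
public class LockCoupledTree {
    private static final class Node {
        final int key;
        volatile Node left, right;
        final ReentrantLock lock = new ReentrantLock();
        Node(int key) { this.key = key; }
    }

    // Sentinel root simplifies the sketch (never matched by real keys).
    private final Node root = new Node(Integer.MIN_VALUE);

    public boolean contains(int key) {
        Node cur = root;
        cur.lock.lock();
        try {
            while (true) {
                if (cur.key == key) return true;
                Node child = key < cur.key ? cur.left : cur.right;
                if (child == null) return false;
                child.lock.lock();   // couple: acquire the child first,
                cur.lock.unlock();   // then release the parent
                cur = child;
            }
        } finally {
            cur.lock.unlock();       // release whichever node we stopped at
        }
    }
}
```

Insertions and splits follow the same coupling discipline with exclusive locks, which is what makes the root a serialization point; the less restrictive algorithms the paper models, such as optimistic descent and Lehman-Yao, relax exactly this.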
international conference on management of data | 1993
Theodore Johnson; Padmashree Krishna
Very large database systems require distributed storage, which means that they need distributed search structures for fast and efficient access to the data. In this paper, we present an approach to maintaining distributed data structures that uses lazy updates, which take advantage of the semantics of the search structure operations to allow for scalable and low-overhead replication. Lazy updates can be used to design distributed search structures that support very high levels of concurrency. The alternatives to lazy update algorithms (eager updates) use synchronization to ensure consistency, while lazy update algorithms avoid blocking. Since lazy updates avoid the use of synchronization, they are much easier to implement than eager update algorithms. We demonstrate the application of lazy updates to the dB-tree, which is a distributed B+ tree that replicates its interior nodes for highly parallel access. We develop a correctness theory for lazy updates so that our algorithms can be applied to other distributed search structures.
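A minimal sketch of the lazy-update idea, with invented names and message format: a site holding a replica of a dB-tree interior node applies split notifications whenever they arrive, in any order and without cross-site locking, relying on the commutativity of the updates that the paper's correctness theory formalizes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical replica of a dB-tree interior node. Each split notification
// inserts a distinct separator key, so notifications commute and can be
// applied lazily, in arrival order, with no eager synchronization.
public class InteriorNodeReplica {
    // separator key -> child node id
    private final ConcurrentSkipListMap<Integer, Long> routing =
            new ConcurrentSkipListMap<>();

    // Applied whenever a split notification arrives; order does not matter.
    public void applyLazyUpdate(int separatorKey, long newChildId) {
        routing.put(separatorKey, newChildId);
    }

    // Route a search toward the child responsible for the key.
    public long route(int key) {
        Map.Entry<Integer, Long> entry = routing.floorEntry(key);
        return entry == null ? -1L : entry.getValue();  // -1: leftmost child
    }
}
```

A replica that has not yet seen a split may route a search to the old child; the dB-tree tolerates this because the out-of-date child can forward the request, so no search ever blocks waiting for replicas to synchronize.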
Information Systems | 1996
Eric N. Hanson; Theodore Johnson
A new, efficient selection predicate indexing scheme for active database systems is introduced. The proposed scheme uses an interval index on an attribute of a relation or object collection when one or more rule condition clauses are defined on that attribute. It is built on a new type of interval index called the interval skip list (IS-list), which is designed to allow efficient retrieval of all intervals that overlap a point while allowing dynamic insertion and deletion of intervals. IS-list algorithms are described in detail. The IS-list allows efficient on-line searches, insertions, and deletions, yet is much simpler to implement than other comparable interval index data structures such as the priority search tree and the balanced interval binary search tree (IBS-tree); IS-lists require only one third as much code to implement as balanced IBS-trees. The combination of simplicity, performance, and dynamic updateability of the IS-list is unmatched by any other interval index data structure, which makes the IS-list a good interval index structure for implementation in an active database predicate index.
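The IS-list itself is too intricate to reproduce here, so the stand-in below only pins down the interface the abstract describes (dynamic insertion and deletion of intervals plus stabbing queries) with a naive linear scan; the actual IS-list answers the same query in roughly O(log n + L) expected time for L reported intervals, where this scan is O(n).

```java
import java.util.ArrayList;
import java.util.List;

// Naive interval index with the IS-list's interface, for illustration only.
public class NaiveIntervalIndex {
    public record Interval(double lo, double hi) {
        boolean stabs(double point) { return lo <= point && point <= hi; }
    }

    private final List<Interval> intervals = new ArrayList<>();

    public void insert(Interval iv) { intervals.add(iv); }
    public void delete(Interval iv) { intervals.remove(iv); }

    // Stabbing query: all intervals overlapping the given point. In the
    // active-database setting each interval is a rule condition clause such
    // as 50 <= temperature <= 80, and stab(newValue) finds the rules that a
    // newly written attribute value could trigger.
    public List<Interval> stab(double point) {
        List<Interval> hits = new ArrayList<>();
        for (Interval iv : intervals)
            if (iv.stabs(point)) hits.add(iv);
        return hits;
    }
}
```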
international conference on management of data | 1993
D. Hong; Theodore Johnson; Sharma Chakravarthy
Real-time databases are an important component of embedded real-time systems. In a real-time database context, transactions must not only maintain the consistency constraints of the database but must also satisfy the timing constraints specified for each transaction. Although several approaches have been proposed to integrate real-time scheduling and database concurrency control methods, none of them takes into account the dynamic cost of scheduling a transaction. In this paper, we propose a new cost-conscious real-time transaction scheduling algorithm which considers the dynamic costs associated with a transaction. Our dynamic priority assignment algorithm adapts to changes in the system load without causing excessive numbers of transaction restarts. Our simulations show its superiority over the EDF-HP algorithm.
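The abstract names the policy but not its formula, so the sketch below is an invented illustration of what cost-conscious priority assignment can look like: earliest-deadline-first ordering, except that transactions whose remaining work can no longer fit before their deadline are demoted rather than being allowed to win the processor and later restart. The record fields and the weighting are assumptions, not the paper's policy.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class CostConsciousScheduler {
    public record Txn(String id, long deadline, long workDone, long workLeft) {}

    // Illustrative priority: feasible transactions first (those that can
    // still finish by their deadline), then earliest deadline first.
    public static PriorityQueue<Txn> makeQueue(long now) {
        Comparator<Txn> byFeasibleDeadline = Comparator
                .comparing((Txn t) -> t.deadline() - now < t.workLeft()) // infeasible sort last
                .thenComparingLong(Txn::deadline);                       // then plain EDF
        return new PriorityQueue<>(byFeasibleDeadline);
    }
}
```

A plain EDF scheduler would order by Comparator.comparingLong(Txn::deadline) alone; the extra feasibility key is the kind of dynamic, load-dependent information the paper argues a scheduler should consult.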
international parallel processing symposium | 1992
Theodore Johnson; Adrian Colbrook
Many concurrent dictionary data structures have been proposed, but usually in the context of shared memory multiprocessors. The paper presents an algorithm for a concurrent distributed B-tree that can be implemented on message passing parallel computers. This distributed B-tree (the dB-tree) replicates the interior nodes in order to improve parallelism and reduce message passing. It is shown how the dB-tree algorithm can be used to build an efficient, highly parallel, data-balanced distributed dictionary, the dE-tree.
parallel computing | 1996
Theodore Johnson; Timothy A. Davis; Steven M. Hadfield
Task graphs are used for scheduling tasks on parallel processors when the tasks have dependencies. If the execution of the program is known ahead of time, then the tasks can be statically and optimally allocated to the processors. If the tasks and task dependencies aren't known ahead of time (the case in some analyze-factor sparse matrix algorithms), then task scheduling must be performed on the fly. We present simple algorithms for a concurrent dynamic task graph. A processor that needs to execute a new task can query the task graph for a new task, and new tasks can be added to the task graph on the fly. We present several alternatives for allocating tasks to processors and compare their performance.
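The abstract maps naturally onto a small data structure, so here is a hedged Java sketch (names and details ours, not the paper's): each task carries a count of unfinished predecessors, finishing a task decrements its successors' counts, and tasks whose count reaches zero enter a shared ready queue that worker processors poll.

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal concurrent dynamic task graph: tasks and edges are added on the
// fly, and a shared ready queue hands out runnable tasks.
public class DynamicTaskGraph {
    public static final class Task {
        final Runnable work;
        final AtomicInteger pendingPreds = new AtomicInteger();
        final List<Task> successors = new CopyOnWriteArrayList<>();
        Task(Runnable work) { this.work = work; }
    }

    private final ConcurrentLinkedQueue<Task> ready = new ConcurrentLinkedQueue<>();

    // Add a task whose predecessors are already known; it becomes ready
    // immediately if it has none. (For brevity this assumes each
    // predecessor is linked before it completes; the paper's algorithms
    // must handle that race in general.)
    public Task addTask(Runnable work, List<Task> preds) {
        Task t = new Task(work);
        t.pendingPreds.set(preds.size());
        for (Task p : preds) p.successors.add(t);
        if (preds.isEmpty()) ready.add(t);
        return t;
    }

    // A worker asks the graph for something to run; null means nothing is
    // ready right now (more tasks may appear later).
    public Task pollReady() { return ready.poll(); }

    // Called by a worker after running t.work: successors whose last
    // predecessor just finished become ready.
    public void complete(Task t) {
        for (Task s : t.successors)
            if (s.pendingPreds.decrementAndGet() == 0)
                ready.add(s);
    }
}
```

A worker loop is then just: poll a task, run its work, call complete, repeat; the alternatives the paper compares concern how ready tasks are matched to processors.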
international parallel processing symposium | 1995
Theodore Johnson
Several fast and low-overhead distributed mutual exclusion algorithms have been proposed. Each of these algorithms requires O(log n) messages per critical section entry and O(log n) bits of storage per processor. In this paper, we make a comparative performance study of four distributed mutual exclusion algorithms. Since the algorithms we study are the basis for distributed synchronization, distributed virtual memory, coherent caches, and distributed object systems, our results have implications for the best methods for their implementation. We find that the distributed synchronization algorithm of Chang, Singhal, and Liu (1990) has the overall best performance, though other algorithms are more efficient in special cases. In a system of 350 processors, the CSL algorithm requires only six messages per critical section entry, including the initial request and the token response messages.
Real-time Systems | 1998
Sharma Chakravarthy; Dong Kweon Hong; Theodore Johnson
Real-time databases are poised to be an important component of complex embedded real-time systems. In real-time databases (as opposed to real-time systems), transactions must satisfy the ACID properties in addition to satisfying the timing constraints specified for each transaction (or task). Although several approaches have been proposed to combine real-time scheduling and database concurrency control methods, to the best of our knowledge, none of them provide a framework for taking into account the dynamic cost associated with aborts, rollbacks, and restarts of transactions. In this paper, we propose a framework in which both static and dynamic costs of transactions can be taken into account. Specifically, we present: i) a method for pre-analyzing transactions based on the notion of branch points, using the data accessed up to a branch point to predict the data accesses still to be incurred in completing the transaction, ii) a formulation of cost that includes static and dynamic factors for prioritizing transactions, iii) a scheduling algorithm which uses the above two, and iv) a simulation of the algorithm for several operating conditions and workloads. Our dynamic priority assignment policy (termed the cost-conscious approach, or CCA) adapts well to fluctuations in the system load without causing excessive numbers of transaction restarts. Our simulations indicate that i) CCA performs better than the EDF-HP algorithm for both soft and firm deadlines, ii) CCA is fairer than EDF-HP, iii) CCA is better than EDF-CR for soft deadlines, even though CCA requires and uses less information, and iv) CCA is especially good for disk-resident data.
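The abstract does not give the cost formulation, so the fragment below is only one plausible reading of item i): at a branch point, the expected remaining data-access cost is the probability-weighted sum over the possible branches. The names and the formula are assumptions for illustration, not the paper's definition.

```java
public class BranchPointCost {
    public record Branch(double probability, double accessCost) {}

    // Expected remaining cost at a branch point: sum over possible branches
    // of P(branch) * cost(branch). For branches {(0.7, 10), (0.3, 40)} this
    // yields 0.7*10 + 0.3*40 = 19.0 expected data-access units, a dynamic
    // quantity a cost-conscious scheduler can fold into a priority.
    public static double expectedRemainingCost(Branch[] branches) {
        double expected = 0.0;
        for (Branch b : branches)
            expected += b.probability() * b.accessCost();
        return expected;
    }
}
```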
Information Sciences | 2000
Panos E. Livadas; Theodore Johnson
Program slicing can be used to aid in a variety of software maintenance activities including code understanding, code testing, debugging, and program reengineering. Program slicing (as well as other program analysis functions, including forward slicing) can be efficiently performed on an internal program representation called a system dependence graph (SDG). The construction of the SDG depends primarily on the calculation of the transitive dependences, which in turn depends on the calculation of the data dependences. In this paper we demonstrate the correctness and the optimality (with respect to the number of iterations required) of our method of calculating the transitive data dependences. We make a worst-case analysis of our algorithm and find that it is faster than the Horwitz, Reps, and Binkley (HRB) algorithm, and comparable to the Reps, Horwitz, Sagiv, and Rosay (RHSR) algorithm. Furthermore, our method requires neither the (explicit) calculation of the GMOD and GREF sets nor the construction of a linkage grammar and the corresponding subordinate characteristic graphs of the linkage grammar's non-terminals. Unlike both the HRB and RHSR algorithms, our algorithm treats recursive procedures separately, permitting a faster (linear-time) solution for non-recursive procedures. Additionally, a beneficial side effect of this method is that it provides a new method for performing interprocedural, flow-sensitive data flow analysis.
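To make "the calculation of the data dependences" concrete, here is a textbook iterative reaching-definitions solver, the standard ingredient from which definition-use (data) dependences are read off; it is background material, not the authors' transitive-dependence algorithm.

```java
import java.util.BitSet;

// Iterative reaching-definitions analysis over a control-flow graph.
// in[n]  = union of out[p] for predecessors p of n
// out[n] = gen[n] | (in[n] & ~kill[n])
// A data dependence exists from node d to node n when a definition made at d
// is in in[n] and n uses the defined variable.
public class ReachingDefinitions {
    public static BitSet[] solve(int[][] preds, BitSet[] gen, BitSet[] kill) {
        int n = gen.length;
        BitSet[] in = new BitSet[n], out = new BitSet[n];
        for (int i = 0; i < n; i++) { in[i] = new BitSet(); out[i] = new BitSet(); }
        boolean changed = true;
        while (changed) {                       // iterate to a fixed point
            changed = false;
            for (int i = 0; i < n; i++) {
                BitSet newIn = new BitSet();
                for (int p : preds[i]) newIn.or(out[p]);
                BitSet newOut = (BitSet) newIn.clone();
                newOut.andNot(kill[i]);
                newOut.or(gen[i]);
                if (!newOut.equals(out[i])) { out[i] = newOut; changed = true; }
                in[i] = newIn;
            }
        }
        return in;                              // definitions reaching each node
    }
}
```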