Marilynn Livingston
Southern Illinois University Edwardsville
Publications
Featured research published by Marilynn Livingston.
Hypercube Concurrent Computers and Applications | 1988
Marilynn Livingston; Quentin F. Stout
Given a type of resource such as disk units, extra memory modules, connections to the host processor, or software modules, we consider the problem of distributing the resource units to processors in a hypercube computer so that certain performance requirements are met at minimal cost. Typical requirements include the condition that every processor is within a given distance of a resource unit, that every processor is within a given distance of each of several resources, and that every m-dimensional subcube contains a resource unit. The latter is particularly important in a multiuser system in which different users are given their own subcubes. In this setting, we also consider the problem of meeting the performance requirements at minimal cost when the subcube allocation system cannot allocate all possible subcubes and the requirements apply only to allocable subcubes. We also analyze the problem of partitioning processors with resources into different classes, requiring that every processor is within a given distance of, or in a subcube of given dimension with, a member of each class. Efficient constructive techniques for distributing or partitioning a resource are given for several performance requirements, along with upper and lower bounds on the total number of resource units required.
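The distance-based requirement described above can be phrased as a covering check: a placement is feasible when every hypercube node lies within Hamming distance d of some resource node. A minimal brute-force sketch of that check (the function names and the exhaustive search are illustrative only, not the paper's constructive techniques, and are practical only for tiny cubes):

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two hypercube node labels."""
    return bin(a ^ b).count("1")

def covers(n, resources, d):
    """True if every node of the n-cube lies within Hamming
    distance d of at least one resource node."""
    return all(any(hamming(v, r) <= d for r in resources)
               for v in range(1 << n))

def min_placement_size(n, d):
    """Smallest placement meeting the distance-d requirement,
    found by exhaustive search (exponential; tiny n only)."""
    nodes = range(1 << n)
    for k in range(1, (1 << n) + 1):
        for cand in combinations(nodes, k):
            if covers(n, cand, d):
                return k
```

For example, in the 3-cube the two antipodal nodes 000 and 111 together cover every node at distance 1, so two resource units suffice there.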
Information & Computation | 1993
Niall Graham; Frank Harary; Marilynn Livingston; Quentin F. Stout
We consider the problem of determining the minimum number of faulty processors, K(n, m), and of faulty links, λ(n, m), in an n-dimensional hypercube computer so that every m-dimensional subcube is faulty. Best known lower bounds for K(n, m) and λ(n, m) are proved, several new recursive inequalities and new upper bounds are established, their asymptotic behavior for fixed m and for fixed n − m is analyzed, and their exact values are determined for small n and m. Most of the methods employed show how to construct sets of faults attaining the bounds. An extensive survey of related work is also included, showing connections to resource allocation, k-independent sets, and exhaustive testing.
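For very small n and m, K(n, m) can be computed directly: enumerate the m-dimensional subcubes of the n-cube and search for the smallest vertex set meeting all of them. A brute-force sketch (illustrative only; the paper's bounds come from explicit constructions, not search):

```python
from itertools import combinations, product

def subcubes(n, m):
    """All m-dimensional subcubes of the n-cube, each as a
    frozenset of vertex labels. A subcube is determined by a
    choice of m free coordinates plus fixed bits elsewhere."""
    for free in combinations(range(n), m):
        fixed = [i for i in range(n) if i not in free]
        for bits in product((0, 1), repeat=len(fixed)):
            base = sum(b << i for i, b in zip(fixed, bits))
            yield frozenset(
                base | sum(fb << f for fb, f in zip(pat, free))
                for pat in product((0, 1), repeat=m))

def K(n, m):
    """Minimum number of faulty vertices meeting every
    m-subcube of the n-cube (exhaustive; tiny n only)."""
    cubes = list(subcubes(n, m))
    for k in range(1, (1 << n) + 1):
        for faults in combinations(range(1 << n), k):
            fs = set(faults)
            if all(fs & c for c in cubes):
                return k
```

With m = 1 this is a minimum vertex cover of the hypercube's edges, and with n = 3, m = 2 two antipodal vertices already meet all six faces.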
Mathematical and Computer Modelling | 1988
Marilynn Livingston; Quentin F. Stout
One important aspect of efficient use of a hypercube computer to solve a given problem is the assignment of subtasks to processors in such a way that the communication overhead is low. The subtasks and their inter-communication requirements can be modeled by a graph, and the assignment of subtasks to processors viewed as an embedding of the task graph into the graph of the hypercube network. We survey the known results concerning such embeddings, including expansion/dilation tradeoffs for general graphs, embeddings of meshes and trees, packings of multiple copies of a graph, the complexity of finding good embeddings, and critical graphs which are minimal with respect to some property. In addition, we describe several open problems.
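One standard embedding from this literature is the binary reflected Gray code, which maps a path into the hypercube with dilation 1, since consecutive codewords differ in exactly one bit. A small sketch verifying this (function names are illustrative):

```python
def gray(i):
    """Binary reflected Gray code of index i."""
    return i ^ (i >> 1)

def path_dilation(n):
    """Dilation of embedding the path on 2^n nodes into the
    n-cube via Gray code: the maximum Hamming distance between
    images of adjacent path nodes."""
    return max(bin(gray(i) ^ gray(i + 1)).count("1")
               for i in range((1 << n) - 1))
```

The same idea extends coordinate-wise to embed multidimensional meshes with power-of-two side lengths into hypercubes with dilation 1.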
Applied Mathematics Letters | 1993
Frank Harary; Marilynn Livingston
The use of hypercube graphs as the underlying architecture in many commercial parallel computers has stimulated interest in this family of graphs. We hope to further stimulate this interest by introducing a tantalizing unsolved problem that is based on dominating sets for this very regularly structured family.
Distributed Memory Computing Conference | 1991
Marilynn Livingston; Quentin F. Stout
This paper examines the problem of locating large fault-free subcubes in multiuser hypercube systems. We analyze a new location strategy, the cyclic buddy system, and compare its performance to the buddy system, the gray-coded buddy system, and several variants of them. We show that the cyclic buddy system gives a striking improvement in expected fault tolerance over the above schemes and, since it can easily be implemented in parallel with little overhead, it provides an attractive alternative to these schemes. We also investigate the behavior of these location systems in the folded, or projective, hypercube, and find that the cyclic buddy system, which adapts naturally to this enhancement, significantly outperforms the other schemes. A combination of analytic techniques and simulation is used to examine both worst case and expected case performance.
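Under one simplified reading (an assumption for illustration, not the paper's exact definitions), the buddy system recognizes only k-subcubes whose free coordinates are the k lowest-order positions, while the cyclic buddy system lets that run of k free coordinates start at any position modulo n. The following sketch counts how many recognizable k-subcubes remain fault-free under each rule:

```python
from itertools import product

def run_subcubes(n, k, cyclic):
    """k-subcubes whose free coordinates form a run of consecutive
    bit positions: starting only at position 0 (plain buddy) or at
    any position mod n (cyclic buddy). Yields vertex frozensets."""
    starts = range(n) if cyclic else [0]
    for s in starts:
        free = [(s + j) % n for j in range(k)]
        fixed = [i for i in range(n) if i not in free]
        for bits in product((0, 1), repeat=len(fixed)):
            base = sum(b << i for i, b in zip(fixed, bits))
            yield frozenset(
                base | sum(fb << f for fb, f in zip(pat, free))
                for pat in product((0, 1), repeat=k))

def fault_free_count(n, k, faults, cyclic):
    """Number of distinct recognizable k-subcubes avoiding all faults."""
    faults = set(faults)
    return sum(1 for c in set(run_subcubes(n, k, cyclic))
               if not faults & c)
```

Even in this toy form the effect is visible: with one faulty node in a 4-cube, the cyclic rule recognizes four times as many 2-subcubes and loses proportionally fewer of them to the fault.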
The Journal of Supercomputing | 2004
Bader F. AlBdaiwi; Marilynn Livingston
A perfect distance-d placement distributes a limited number of resources in a multicomputer parallel system so that every non-resource node is within a distance d or less from exactly one resource node. In this paper, we prove a necessary and sufficient condition for the existence of perfect distance-d placements in 2D toroidal networks. Furthermore, we describe how to generate these placements when they exist.
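One classical family of perfect placements comes from perfect Lee codes (in the sense of Golomb and Welch): on the q × q torus with q = 2d² + 2d + 1, the nodes (x, y) satisfying x + (2d + 1)y ≡ 0 (mod q) are within Lee distance d of every node exactly once. A sketch that builds and verifies this diagonal construction (function names are illustrative):

```python
def lee(a, b, q):
    """Lee (wraparound) distance between coordinates a, b in Z_q."""
    diff = abs(a - b) % q
    return min(diff, q - diff)

def is_perfect(q, resources, d):
    """True if every node of the q x q torus is within Lee
    distance d of exactly one resource node."""
    return all(sum(1 for rx, ry in resources
                   if lee(x, rx, q) + lee(y, ry, q) <= d) == 1
               for x in range(q) for y in range(q))

def diagonal_placement(d):
    """Perfect distance-d placement on the q x q torus,
    q = 2d^2 + 2d + 1, via the diagonal construction."""
    q = 2 * d * d + 2 * d + 1
    return q, [(x, y) for x in range(q) for y in range(q)
               if (x + (2 * d + 1) * y) % q == 0]
```

The count works out because the Lee sphere of radius d contains exactly 2d² + 2d + 1 nodes, so q resource nodes tile the q² nodes of the torus with no overlap.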
Lecture Notes in Computer Science | 1999
Marilynn Livingston; Virginia Mary Lo; Daniel Zappala; Kurt J. Windisch
This paper presents a new hierarchical multicast address allocation scheme for use in interdomain multicast. Our scheme makes use of masks that are contiguous but not prefix-based to provide significant improvements in performance. Our Cyclic Block Allocation (CBA) scheme shares some similarities with both Reverse Bit Expansion and kampai, but overcomes many shortcomings associated with these earlier techniques by exploiting techniques from the area of subcube allocation for hypercubes. Through static analysis and dynamic simulations, we show that CBA has the following characteristics that make it an excellent candidate for practical use in interdomain multicast protocols: better address utilization under dynamic requests and releases than other schemes; low blocking time; efficient routing tables; addresses reflect domain hierarchy; and compatibility with MASC architecture.
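The phrase "contiguous but not prefix-based" can be illustrated with bit masks: the run of free (wildcard) bits may sit anywhere in the address, not only at the low-order end, yet block membership remains a single mask-and-compare. A minimal sketch (names are illustrative and not the CBA implementation):

```python
def block_mask(width, free_lo, free_len):
    """Mask whose contiguous run of free (0) bits starts at bit
    free_lo; all other bits are fixed (1). This is prefix-based
    only when free_lo == 0."""
    free = ((1 << free_len) - 1) << free_lo
    return ((1 << width) - 1) & ~free

def in_block(addr, base, mask):
    """True if addr belongs to the block (base, mask):
    all fixed bit positions of addr match base."""
    return (addr & mask) == (base & mask)
```

For an 8-bit address space, freeing bits 2 through 4 gives the mask 0b11100011, which no prefix scheme can express, yet the membership test is the same one-instruction compare.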
Symposium on Frontiers of Massively Parallel Computation | 1988
Marilynn Livingston; Quentin F. Stout
The authors examine the problem of locating and allocating large fault-free subsystems in multiuser massively parallel computer systems. Since the allocation schemes used in such large systems cannot allocate all possible subsystems, a reduction in fault tolerance is experienced. The effects of different allocation methods, including the buddy and Gray-coded buddy schemes, on the allocation of subsystems in the hypercube and in the two-dimensional mesh and torus are analyzed. Both worst-case and expected-case performance are studied. Generalizing the buddy and Gray-coded systems, a family of allocation schemes is introduced which exhibits a significant improvement in fault tolerance over the existing schemes and which uses relatively few additional resources. For purposes of comparison, the behavior of the various schemes on the allocation of subsystems of 2^18 processors in a hypercube, mesh, and torus consisting of 2^20 processors is studied. The methods involve a combination of analytical techniques and simulation.
Workshop on Parallel and Distributed Simulation | 1997
Kevin Glass; Marilynn Livingston; John S. Conery
Large-scale ecological simulations are natural candidates for distributed discrete event simulation. In optimistic simulation of spatially explicit models, a difficult problem arises when individuals migrate between physical regions simulated by different logical processes. We present a solution to this problem that uses shared object states. Shared states allow for efficient communication between LPs and for early detection of canceled events. We briefly describe an optimistic simulation environment called EcoKit, which operates on top of the WarpKit implementation of Time Warp. Our experiments with this system on a shared memory multiprocessor show that EcoKit promises to scale well both with the number of processors and the number of individuals simulated.
Workshop on I/O in Parallel and Distributed Systems | 1999
Jens Mache; Virginia Mary Lo; Marilynn Livingston; Sharad K. Garg
Input/output is a major obstacle to effective use of teraflops-scale computing systems. Motivated by earlier parallel I/O measurements on an Intel TFLOPS machine, we conduct studies to determine the sensitivity of parallel I/O performance on multi-programmed mesh-connected machines with respect to the number of I/O nodes, the number of compute nodes, network link bandwidth, I/O node bandwidth, the spatial layout of jobs, and the read or write demands of applications. Our extensive simulations and analytical modeling yield important insights into the limitations on parallel I/O performance due to network contention, and into the possible gains in parallel I/O performance that can be achieved by tuning the spatial layout of jobs. Applying these results, we devise a new processor allocation strategy that is sensitive to parallel I/O traffic and the resulting network contention. In performance evaluations driven by synthetic workloads and by a real workload trace captured at the San Diego Supercomputer Center, the new strategy improves the average response time of parallel I/O-intensive jobs by up to a factor of 4.5.