Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Vikram A. Saletore is active.

Publication


Featured research published by Vikram A. Saletore.


Conference on High Performance Computing (Supercomputing) | 1993

Self-scheduling on distributed-memory machines

Jie Liu; Vikram A. Saletore

The authors present a general approach to self-scheduling a non-uniform parallel loop on a distributed-memory machine. The approach has two phases: a static scheduling phase and a dynamic scheduling phase. In addition to reducing scheduling overhead, the static scheduling phase allows the data needed by the statically scheduled iterations to be prefetched. The dynamic scheduling phase balances the workload. Data distribution methods for self-scheduling are also a focus of this paper. The authors classify data distribution methods into four categories and present partial duplication, a method that allows the problem size to grow linearly in the number of processors. Experiments conducted on a 64-node NCUBE show that as much as a 79% improvement is achieved over static scheduling on the generation of a false-color image.
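The two-phase scheme the abstract describes can be sketched in a few lines. This is a toy model, not the paper's implementation; the 50/50 static split and round-robin claiming of the dynamic pool are assumptions made here for illustration:

```python
from collections import deque

def two_phase_schedule(n_iters, n_procs, static_fraction=0.5):
    """Toy two-phase loop scheduler: static blocks first, then a shared
    dynamic pool from which processors claim leftover iterations."""
    static_count = int(n_iters * static_fraction)
    chunk = static_count // n_procs
    # Static phase: contiguous blocks, so their data could be prefetched.
    assignment = {p: list(range(p * chunk, (p + 1) * chunk))
                  for p in range(n_procs)}
    # Dynamic phase: remaining iterations sit in a shared work pool.
    pool = deque(range(chunk * n_procs, n_iters))
    # Simulate self-scheduling: processors claim one iteration at a time.
    p = 0
    while pool:
        assignment[p].append(pool.popleft())
        p = (p + 1) % n_procs
    return assignment

sched = two_phase_schedule(16, 4)
```

Every iteration is scheduled exactly once; the static half avoids per-iteration scheduling overhead while the dynamic half absorbs load imbalance.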


Proceedings of the US/Japan Workshop on Parallel Symbolic Computing: Languages, Systems, and Applications | 1992

Prioritization in Parallel Symbolic Computing

Laxmikant V. Kalé; Balkrishna Ramkumar; Vikram A. Saletore; Amitabh Sinha

It is argued that scheduling is an important determinant of performance for many parallel symbolic computations, in addition to the issues of dynamic load balancing and grain size control. We propose associating unbounded levels of priorities with tasks and messages as the mechanism of choice for specifying scheduling strategies. We demonstrate how priorities can be used in parallelizing computations in different search domains, and show how priorities can be implemented effectively in parallel systems. Priorities have been implemented in the Charm portable parallel programming system. Performance results on shared-memory machines with tens of processors and nonshared-memory machines with hundreds of processors are given. Open problems for prioritization in specific domains are also given, which constitute a fertile area for future research.
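The "unbounded levels of priorities" idea can be illustrated with a small sketch. This is not the Charm implementation; encoding priorities as arbitrary-length integer tuples compared lexicographically is an assumption used here to show how the level hierarchy can be unbounded:

```python
import heapq

class PrioritizedScheduler:
    """Toy task scheduler: priorities are tuples of any length, compared
    lexicographically, so new sub-levels can be inserted without bound."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal priorities stay FIFO

    def submit(self, priority, task):
        heapq.heappush(self._heap, (tuple(priority), self._seq, task))
        self._seq += 1

    def next_task(self):
        return heapq.heappop(self._heap)[2]
```

A task with priority `(0, 0, 5)` runs before one with `(0, 1)`, which runs before `(1,)`: refining a priority appends components rather than exhausting a fixed numeric range.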


International Journal of Parallel Programming | 1991

Parallel state-space search for a first solution with consistent linear speedups

Laxmikant V. Kalé; Vikram A. Saletore

Consider the problem of exploring a large state-space for a goal state where, although many such states may exist, finding any one state satisfying the requirements is sufficient. All methods known until now for conducting such a search in parallel on multiprocessors fail to provide consistent linear speedups over sequential execution. The speedups vary from sublinear to superlinear and from one execution to another. Further, adding more processors may sometimes lead to a slowdown rather than a speedup, giving rise to the speedup anomalies reported in the literature. We present a prioritizing strategy which yields consistent speedups close to P with P processors, and which increase monotonically with the addition of processors. This is achieved by keeping the total number of nodes expanded during parallel search very close to that of a sequential search. In addition, the strategy requires substantially less memory than other methods. The performance of this strategy is demonstrated on a multiprocessor with several state-space search problems.
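The key idea, expanding nodes in an order that tracks the sequential search, can be sketched with priorities derived from each node's path from the root. This is a toy single-threaded model, not the paper's strategy; the state and priority representations are assumptions:

```python
import heapq

def prioritized_search(children_of, root, is_goal):
    """Toy prioritized search: a node's priority is its path of child
    indices from the root, so lexicographic order reproduces the
    sequential (leftmost-first, depth-first) expansion order."""
    frontier = [((), root)]  # (path-priority, state)
    expanded = 0
    while frontier:
        path, state = heapq.heappop(frontier)
        expanded += 1
        if is_goal(state):
            return state, expanded
        for i, child in enumerate(children_of(state)):
            heapq.heappush(frontier, (path + (i,), child))
    return None, expanded
```

Because the pop order matches sequential depth-first order, a parallel version claiming nodes from this frontier would expand nearly the same node set as a sequential search, which is what keeps speedups consistent.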


High Performance Distributed Computing | 1994

Parallel computations on the CHARM heterogeneous workstation cluster

Vikram A. Saletore; J. Jacob; M. Padala

In recent years, parallel computing on a fast network of high-performance, low-cost workstations has become a viable and economical option, compared to an expensive high-performance parallel supercomputer, for solving large Grand Challenge scientific problems. This paper focuses on how to efficiently exploit the computing resources of a set of heterogeneous Unix workstations. We have further developed the CHARM parallel programming environment to allow programs written in the CHARM language to execute on such a cluster. We have also developed a new scheme to schedule tasks statically and balance the load dynamically to achieve high effective utilization. Performance results for several application programs, including ray tracing, all-pairs shortest path, and matrix multiply, on a heterogeneous cluster of Sun SPARCs, IBM RS/6000s, and HP PA-7100s show a significant improvement in execution time over sequential execution.
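On a heterogeneous cluster, the static half of such a scheme can weight each workstation's share of the work by its relative speed, leaving a remainder for dynamic balancing. A hypothetical illustration; the host names and speed ratios below are invented, not from the paper:

```python
def weighted_static_shares(n_tasks, speeds):
    """Split n_tasks statically in proportion to per-host speed ratings;
    rounding leftovers are returned for a later dynamic phase."""
    total = sum(speeds.values())
    shares = {host: int(n_tasks * s / total) for host, s in speeds.items()}
    leftover = n_tasks - sum(shares.values())  # balanced dynamically later
    return shares, leftover

# Invented speed ratings: the two faster machines get twice the work.
shares, leftover = weighted_static_shares(
    100, {"sparc": 1.0, "rs6000": 2.0, "hp-pa": 2.0})
```

Faster machines receive proportionally larger static blocks, so all hosts finish their static work at roughly the same time before the dynamic phase takes over.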


International Journal of Parallel Programming | 1994

Safe Self-Scheduling: A Parallel Loop Scheduling Scheme for Shared-Memory Multiprocessors

Jie Liu; Vikram A. Saletore; Theodore G. Lewis

In this paper we present Safe Self-Scheduling (SSS), a new scheduling scheme that schedules parallel loops whose variable-length iteration execution times are not known at compile time. The scheme assumes a shared memory space. SSS combines static scheduling with dynamic scheduling and draws favorable advantages from each. First, it reduces dynamic scheduling overhead by statically scheduling a major portion of the loop iterations. Second, the workload is balanced with a simple and efficient self-scheduling scheme by applying a new measure, the smallest critical chore size. Experimental results comparing SSS with other scheduling schemes indicate that SSS surpasses them. In the experiment on Gauss-Jordan, an application that is suitable for static scheduling schemes, SSS is the only self-scheduling scheme that outperforms static scheduling. This indicates that SSS achieves a balanced workload with a very small amount of overhead.
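A simplified sketch in the spirit of SSS, not the paper's exact chunk-size formula: after the static portion, remaining iterations are handed out in shrinking chunks, never smaller than a minimum "chore" size that bounds per-chunk scheduling overhead:

```python
def chunk_sizes(remaining, n_procs, min_chore=4):
    """Toy decreasing-chunk self-scheduler: each claim takes a fraction of
    what remains, floored at min_chore (a stand-in for the paper's
    smallest critical chore size)."""
    chunks = []
    while remaining > 0:
        size = max(min_chore, remaining // (2 * n_procs))
        size = min(size, remaining)  # never over-claim the tail
        chunks.append(size)
        remaining -= size
    return chunks
```

Early chunks are large (cheap to schedule), later chunks shrink to smooth out imbalance, and the floor keeps the number of dynamic claims, and hence overhead, small.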


IEEE International Conference on High Performance Computing, Data, and Analytics | 1995

Message-driven parallel computations on the MEIKO CS-2 parallel supercomputer

Vikram A. Saletore; Tony F. Neff

In this paper we focus on how to efficiently program and use the resources of the MEIKO CS-2, a distributed-memory parallel and vector supercomputer, for scientific applications using CHARM message-driven parallel programming. Distributed-memory parallel computers incur communication overhead and thus perform poorly on applications requiring a large amount of communication. We show that with CHARM message-driven parallel programming one can efficiently overlap communication and computation. Performance data on applications such as matrix multiplication and Gaussian elimination show that we achieve good performance.
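The overlap of communication and computation can be illustrated with a thread-based toy. This is not the CHARM runtime; the message contents, latencies, and function names are invented for the sketch:

```python
import queue
import threading
import time

def overlapped(messages, compute_chunks):
    """Toy message-driven overlap: a background thread 'receives' messages
    while the main thread keeps computing, hiding communication latency."""
    inbox = queue.Queue()

    def receiver():
        for m in messages:
            time.sleep(0.001)   # stand-in for network latency
            inbox.put(m)
        inbox.put(None)         # sentinel: no more messages

    threading.Thread(target=receiver, daemon=True).start()
    # Computation proceeds while messages arrive in the background.
    results = [work() for work in compute_chunks]
    received = []
    while (m := inbox.get()) is not None:
        received.append(m)
    return results, received
```

The computation never blocks waiting for an individual message; it drains the inbox only after doing useful work, which is the essence of hiding communication latency behind computation.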


International Conference on Parallel Processing | 1991

Supporting Machine Independent Programming on Diverse Parallel Architectures.

Wayne Fenton; Balkrishna Ramkumar; Vikram A. Saletore; Amitabh Sinha; Laxmikant V. Kalé


IEEE Transactions on Parallel and Distributed Systems | 1994

The CHARM Parallel Programming Language and System: Part II: The Runtime System

Laxmikant V. Kalé; Balkrishna Ramkumar; Amitabh Sinha; Vikram A. Saletore


NACLP | 1989

Obtaining First Solutions Faster in AND-OR Parallel Execution of Logic Programs.

Vikram A. Saletore; Laxmikant V. Kalé


Archive | 1992

Scheduling parallel loops with variable length iteration execution times on parallel computers

Jianchu N. Liu; Vikram A. Saletore; Ted G. Lewis

Collaboration


Dive into Vikram A. Saletore's collaboration.

Top Co-Authors

Jie Liu
Western Oregon University

J. Jacob
Oregon State University

M. Padala
Oregon State University

Tony F. Neff
Oregon State University