Amitabh Sinha
University of Illinois at Urbana–Champaign
Publications
Featured research published by Amitabh Sinha.
International Parallel Processing Symposium | 1993
Amitabh Sinha; Laxmikant V. Kalé
Load balancing is a critical factor in achieving good performance in parallel applications where tasks are created dynamically. In many computations, such as state-space search problems, tasks have priorities, and the computation may complete more efficiently if these priorities are adhered to during parallel execution. For such tasks, a load balancing scheme that seeks only to balance load, without also spreading high-priority tasks over the entire system, can concentrate high-priority tasks on a few processors (even in a balanced-load environment), leaving the remaining processors to perform low-priority work. In such situations a load balancing scheme is needed that balances both load and high-priority tasks across the system. The authors describe the development of a more efficient prioritized load balancing strategy.
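The failure mode described in the abstract, and one way around it, can be sketched with a toy scheduler. This is only an illustration of the problem, not the authors' actual strategy; the function names and the unit-cost task model are hypothetical:

```python
import heapq
from collections import defaultdict

def naive_balance(tasks, n_procs):
    """Balance total load only: each (priority, cost) task goes to the
    currently least-loaded processor, ignoring priorities entirely."""
    heap = [(0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assign = defaultdict(list)
    for prio, cost in tasks:
        load, p = heapq.heappop(heap)
        assign[p].append((prio, cost))
        heapq.heappush(heap, (load + cost, p))
    return dict(assign)

def prioritized_balance(tasks, n_procs):
    """Balance load *and* priorities: deal tasks out in priority order,
    so every processor receives its share of the highest-priority work."""
    assign = defaultdict(list)
    for i, task in enumerate(sorted(tasks)):  # smaller number = higher priority
        assign[i % n_procs].append(task)
    return dict(assign)
```

With the prioritized dealer, no processor can end up holding only low-priority tasks while another hoards the urgent ones, which is the situation the paper's strategy is designed to avoid.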
Proceedings of the US/Japan Workshop on Parallel Symbolic Computing: Languages, Systems, and Applications | 1992
Laxmikant V. Kalé; Balkrishna Ramkumar; Vikram A. Saletore; Amitabh Sinha
It is argued that scheduling is an important determinant of performance for many parallel symbolic computations, in addition to the issues of dynamic load balancing and grain size control. We propose associating unbounded levels of priorities with tasks and messages as the mechanism of choice for specifying scheduling strategies. We demonstrate how priorities can be used in parallelizing computations in different search domains, and show how priorities can be implemented effectively in parallel systems. Priorities have been implemented in the Charm portable parallel programming system. Performance results on shared-memory machines with tens of processors and on nonshared-memory machines with hundreds of processors are given. Open problems for prioritization in specific domains are identified, which constitute a fertile area for future research.
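One way to realize "unbounded levels of priorities" is to encode a priority as a bit string compared lexicographically, so any priority level can be subdivided indefinitely (a child in a search tree extends its parent's string). The class below is a hypothetical stand-in for a prioritized message queue, not the Charm implementation:

```python
import heapq

class PrioritizedScheduler:
    """Delivers messages in order of bit-string priority, compared
    lexicographically; equal priorities are delivered FIFO via a sequence
    counter. Illustrative sketch only."""

    def __init__(self):
        self._q = []
        self._seq = 0  # tie-breaker preserving send order

    def send(self, priority_bits, message):
        heapq.heappush(self._q, (priority_bits, self._seq, message))
        self._seq += 1

    def deliver(self):
        return heapq.heappop(self._q)[2]
```

Because "01" sorts before "010" and after "001", a node can spawn arbitrarily many descendants between any two existing priority levels without renumbering anything.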
International Parallel Processing Symposium | 1994
Laxmikant V. Kalé; Amitabh Sinha
Most parallel programming models provide a single generic mode in which processes can exchange information with each other. However, empirical observation of parallel programs suggests that processes share data in a few distinct and specific modes. We argue that such modes should be identified and explicitly supported in parallel languages and their associated models. The paper describes a set of information sharing abstractions that have been identified and incorporated in the parallel programming language Charm. Using these abstractions leads to improved clarity, expressiveness, efficiency, and portability of user programs. In addition, the specificity provided by these abstractions can be exploited at compile time and at run time to provide the user with highly refined performance feedback.
International Conference on Parallel Processing | 1996
Amitabh Sinha; Laxmikant V. Kalé
Most existing performance tools provide generic measurements and visual displays. It is then the responsibility of the users to analyze the performance of their programs using the displayed information. This can be a non-trivial task, because one needs to identify specific pieces of information needed for such analysis. A good performance analysis tool should be able to provide intelligent analysis, and not just feedback, about the performance of a parallel program. Such automatic performance analysis is feasible for programming paradigms that expose sufficient information about program behavior. Charm, a portable, object-based, and message-driven parallel programming language is one such paradigm. We describe the design and implementation of Projections:Expert, a framework for automatic performance analysis for Charm programs.
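The difference between displaying measurements and analyzing them can be made concrete with a single toy rule that turns per-processor utilization data into a diagnosis. The thresholds and wording here are hypothetical, not the Projections:Expert rule base:

```python
def analyze_utilization(busy_fraction):
    """Given each processor's fraction of time spent busy, return
    human-readable findings instead of raw numbers (toy analysis rule)."""
    findings = []
    mean = sum(busy_fraction) / len(busy_fraction)
    if mean < 0.5:
        findings.append("low overall utilization: check grain size or overhead")
    if max(busy_fraction) - min(busy_fraction) > 0.2:
        findings.append("load imbalance: consider a dynamic balancing strategy")
    return findings
```

A generic tool would plot the three utilizations and stop; an expert-style tool applies rules like this, which is feasible in Charm because the message-driven model exposes the per-method, per-processor data the rules need.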
Computer Physics Communications | 1994
Amitabh Sinha; Klaus Schulten; H. Heller
EGO is a parallel molecular dynamics program running on Transputers. We conducted a performance analysis of the EGO program to determine whether it was using the computational resources of the Transputers effectively. Our first concern was whether communication was overlapped with computation, so that the overhead of communication not hidden by computation remained small. With the assistance of performance tools such as UPSHOT, and with instrumentation of the EGO program itself, we determined that only 8% of the execution time of the EGO program was spent in non-overlapped communication. Our next concern was that the MFLOPS rating of the EGO program was 0.25 MFLOPS, while the Transputers have a sustained rating of 1.5 MFLOPS. We measured the MFLOPS ratings of small blocks of OCCAM code and determined that they matched the performance of the EGO code.
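The quantity behind the 8% figure is the exposed (non-overlapped) communication time as a fraction of total runtime. The helper and the sample times below are illustrative, not measurements from the paper:

```python
def exposed_comm_fraction(total_time, comm_time, overlapped_time):
    """Fraction of total runtime lost to communication that is NOT hidden
    behind computation: (communication - overlapped portion) / total."""
    return (comm_time - overlapped_time) / total_time

# Hypothetical timings (seconds): 30s of communication, 22s of it hidden.
fraction = exposed_comm_fraction(100.0, 30.0, 22.0)  # -> 0.08, i.e. 8%
```

A small exposed fraction is why the investigation then shifted from communication to per-node floating-point throughput.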
International Conference on Parallel Processing | 1991
Wayne Fenton; Balkrishna Ramkumar; Vikram A. Saletore; Amitabh Sinha; Laxmikant V. Kalé
IEEE Transactions on Parallel and Distributed Systems | 1994
Laxmikant V. Kalé; Balkrishna Ramkumar; Amitabh Sinha; Attila Gursoy
IEEE Transactions on Parallel and Distributed Systems | 1994
Laxmikant V. Kalé; Balkrishna Ramkumar; Amitabh Sinha; Vikram A. Saletore
Archive | 1993
Laxmikant V. Kalé; Amitabh Sinha
Archive | 1995
Amitabh Sinha