Publication


Featured research published by Kaushik Ghosh.


Operating Systems Review | 1994

A machine independent interface for lightweight threads

Bodhisattwa Mukherjee; Greg Eisenhauer; Kaushik Ghosh

Recently, lightweight thread libraries have become a common means of supporting concurrent programming on shared memory multiprocessors. However, the disparity between the primitives offered by different operating systems creates a challenge for those who wish to create portable lightweight thread packages. What should be the interface between the machine-independent and machine-dependent parts of the thread library? We have implemented a portable lightweight thread library on top of Unix on a KSR-1 supercomputer, BBN Butterfly multiprocessor, SGI multiprocessor, Sequent multiprocessor, and the Sun 3/4 family of uniprocessors. This paper first compares the nature and performance of the OS primitives offered by these machines. We then present a procedure-level abstraction that is efficiently implementable on all of these architectures and is a sufficient base upon which a user-level thread package can be built.
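To make the interface question concrete, here is a minimal sketch, assuming a split like the one the abstract describes; it is not the paper's actual API. POSIX ucontext stands in for the machine-dependent layer, and the md_ctx_* names and the two-thread scheduler are purely illustrative.

```c
/* Minimal sketch of a machine-independent/machine-dependent split for a
 * user-level thread package.  Names are illustrative, not the paper's API;
 * POSIX ucontext plays the role of the per-architecture context code. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

/* ---- machine-dependent layer: the only code that changes per port ---- */
typedef ucontext_t md_ctx_t;

static void md_ctx_create(md_ctx_t *ctx, md_ctx_t *ret_to, void (*entry)(void))
{
    getcontext(ctx);                          /* initialize the context */
    ctx->uc_stack.ss_sp   = malloc(STACK_SIZE);
    ctx->uc_stack.ss_size = STACK_SIZE;
    ctx->uc_link          = ret_to;           /* where to go if entry returns */
    makecontext(ctx, entry, 0);
}

static void md_ctx_switch(md_ctx_t *from, md_ctx_t *to)
{
    swapcontext(from, to);                    /* save 'from', resume 'to' */
}

/* ---- machine-independent layer: a trivial two-thread "scheduler" ---- */
static md_ctx_t main_ctx, worker_ctx;

static void worker(void)
{
    puts("worker: running on its own stack");
    md_ctx_switch(&worker_ctx, &main_ctx);    /* yield back to main */
}

int main(void)
{
    md_ctx_create(&worker_ctx, &main_ctx, worker);
    md_ctx_switch(&main_ctx, &worker_ctx);    /* dispatch the worker */
    puts("main: worker yielded, done");
    return 0;
}
```

Porting such a package then amounts to reimplementing md_ctx_create and md_ctx_switch against each machine's primitives, which is essentially the portability question the abstract raises.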


Workshop on Parallel and Distributed Simulation | 1994

PORTS: a parallel, optimistic, real-time simulator

Kaushik Ghosh; Richard M. Fujimoto; Karsten Schwan

This paper describes issues concerning the design of an optimistic parallel discrete event simulation system that executes in environments imposing real-time constraints on the simulator's execution. Two key problems must be addressed by such a system. First, the timing characteristics of the parallel simulator must be sufficiently predictable to allow one to guarantee that real-time deadlines for completing simulation computations will be met. Second, the optimistic computation must be able to interact with its surrounding environment with as little latency as possible, necessitating rapid commitment of I/O operations. To address the first problem, we show that optimistic simulators that never send incorrect messages (sometimes called “aggressive-no-risk” simulators) provide sufficient predictability to allow traditional schedulability analysis techniques commonly used in real-time systems to be applied. We show that incremental state saving techniques introduce enough unpredictability that they are not well suited to real-time environments. We observe that the traditional “lowest timestamp first” scheduling policy used in many optimistic parallel simulation systems is an optimal (in the real-time sense) scheduling algorithm when event timestamps and real-time deadlines are the same. Finally, to address the second problem, rapid commitment of I/O operations, we use a continuous GVT computation scheme for shared-memory multiprocessors in which a new value of GVT is computed after processing each event in the simulation. These ideas are incorporated in a parallel, optimistic, real-time simulation system called PORTS. Initial performance measurements of the shared-memory PORTS system executing on a Kendall Square Research multiprocessor are presented; the results are encouraging, demonstrating that PORTS achieves performance approaching that of a conventional Time Warp system for the benchmark programs tested.
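As a small illustration of the scheduling observation (a sketch, not PORTS code): when an event's timestamp doubles as its real-time deadline, dispatching the pending event with the lowest timestamp is exactly earliest-deadline-first scheduling. The event_t layout and dispatch loop below are assumptions for illustration.

```c
/* Toy "lowest timestamp first" dispatcher.  When timestamp == deadline,
 * choosing the minimum timestamp coincides with earliest-deadline-first. */
#include <stdio.h>

typedef struct {
    double timestamp;   /* simulation time; here it also acts as the deadline */
    int    lp;          /* logical process that must handle the event */
} event_t;

/* Return the index of the pending event with the smallest timestamp. */
static int pick_lowest_timestamp(const event_t *pending, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (pending[i].timestamp < pending[best].timestamp)
            best = i;
    return best;
}

int main(void)
{
    event_t pending[] = { {12.5, 0}, {3.0, 1}, {7.25, 2} };
    int n = 3;
    while (n > 0) {
        int i = pick_lowest_timestamp(pending, n);
        printf("dispatch event for LP %d at t=%.2f\n",
               pending[i].lp, pending[i].timestamp);
        pending[i] = pending[--n];            /* remove the dispatched event */
    }
    return 0;
}
```

If deadlines could differ from timestamps, the dispatcher would need a separate deadline field; the abstract's point is that for this class of simulations the two coincide, so existing real-time schedulability analysis applies.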


International Conference on Supercomputing | 1996

Evaluating the limits of message passing via the shared attraction memory on CC-COMA machines: experiences with TCGMSG and PVM

Kaushik Ghosh; Stephen R. Breit

We discuss schemes for efficiently implementing the primitives of two commonly used message-passing packages, PVM and TCGMSG, for cache-coherent cache-only memory access (CC-COMA) machines, using the attraction memory on these machines to advantage. We first describe a generic interface for message passing and buffering, and map the specific calls of the two packages onto this generic interface. We derive analytical results about the achievable bandwidth for message passing via the shared memory on CC-COMA machines, pointing out the problems that arise from the use of such architectures. We then show how these problems can be tackled by latency-hiding techniques available on most CC-COMA machines. We report the performance of our implementation of each of the two libraries. Finally, we suggest some new features for the system software on multiprocessors that might support such packages more efficiently, and point out some drawbacks in the interfaces of the packages which hinder their efficient implementation on multiprocessors. The KSR supercomputer is used as a running example throughout the paper.

Authors' current addresses: Kaushik Ghosh, Silicon Graphics Inc., Mountain View, CA 94043; Stephen R. Breit, Dragon Systems Inc., Newton, MA 02160. This work was performed under the supervision of Stephen Breit while Kaushik Ghosh was on an internship at Kendall Square Research (July-September '93). Dennis Marsa implemented an earlier version of TCGMSG on the KSR1.
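For flavor, here is a hypothetical fragment in the spirit of the generic message-passing and buffering interface the abstract mentions, not the authors' code: a bounded message channel kept in shared memory, the kind of buffer onto which PVM- or TCGMSG-style send and receive calls could be mapped. Pthread synchronization stands in for whatever primitives the target machine provides, and channel initialization is omitted.

```c
/* Hypothetical shared-memory message channel (not the paper's interface).
 * A bounded ring buffer stands in for a message buffer living in the
 * machine's attraction memory; chan_send/chan_recv are the generic calls
 * a PVM or TCGMSG send/receive would be mapped onto. */
#include <pthread.h>
#include <string.h>

#define SLOTS   8
#define MSG_MAX 256

typedef struct {
    char            data[SLOTS][MSG_MAX];
    size_t          len[SLOTS];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
} channel_t;

void chan_send(channel_t *c, const void *buf, size_t len)
{
    pthread_mutex_lock(&c->lock);
    while (c->count == SLOTS)                 /* wait for a free slot */
        pthread_cond_wait(&c->not_full, &c->lock);
    memcpy(c->data[c->tail], buf, len);
    c->len[c->tail] = len;
    c->tail = (c->tail + 1) % SLOTS;
    c->count++;
    pthread_cond_signal(&c->not_empty);
    pthread_mutex_unlock(&c->lock);
}

size_t chan_recv(channel_t *c, void *buf)
{
    pthread_mutex_lock(&c->lock);
    while (c->count == 0)                     /* wait for a message */
        pthread_cond_wait(&c->not_empty, &c->lock);
    size_t len = c->len[c->head];
    memcpy(buf, c->data[c->head], len);
    c->head = (c->head + 1) % SLOTS;
    c->count--;
    pthread_cond_signal(&c->not_full);
    pthread_mutex_unlock(&c->lock);
    return len;
}
```

On a CC-COMA machine the interesting part is where such a buffer lives and how its cache lines migrate; blocking condition variables are used here only to keep the sketch self-contained, whereas the paper's discussion concerns the latency-hiding techniques those machines offer.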


International Conference on Parallel Processing | 1994

Parallel Discrete Event Simulation Using Space-Time Memory

Kaushik Ghosh; Richard M. Fujimoto

In recent years, distributed computing has been widely used for high throughput at low cost. Message passing is a low-level communication layer on a network of computers. While powerful high-level abstractions like Distributed Shared Memory can be built over a layer of message-passing software, raw, low-level message passing usually produces the best performance. Several message-passing libraries are available today: CPS [4], Linda [1], TCGMSG [6] and PVM [11], to name a few. Most of these libraries are complete parallel programming environments, rather than mere low-level communication software. There is now an increasing number of application programs that have been parallelized using these “standard” message-passing interfaces. Further, the comparative overheads of message passing vs. shared memory differ from one application to another [3, 8]. Therefore, it is desirable to support common message-passing interfaces even on shared-memory multiprocessors. It is well known that uniform-memory-access machines, which are typically bus-based, do not scale well. Two main types of non-uniform-memory-access shared-memory architectures have been proposed. Cache-coherent NUMA (CC-NUMA) machines have a permanent ‘home’ address for each location. Memory locations can be cached in individual processor caches, and a cache coherence scheme is used to keep memory in a valid state. In comparison, cache-coherent cache-only memory access (CC-COMA) architectures have no fixed home associated with memory locations. Locations are replicated and migrated at the main-memory level, which is structured as a large attraction memory [7]. CC-NUMA can provide good performance through a combination of page-level migration and replication (done in system software) if data structures can be partitioned across the available processor memories so that there is little or no remote data access, and such sharing happens at a coarse granularity. However, if the shared accesses are fine-grained and access patterns are dynamic, system software typically finds it difficult to cope. As a result, CC-COMA tends to perform better than CC-NUMA under such circumstances [10]. There is a large class of irregular applications that belongs to this latter category.


Archive | 1993

Experimentation with Configurable, Lightweight Threads on a KSR Multiprocessor

Kaushik Ghosh; Bodhisattwa Mukherjee


Archive | 1993

A testbed for optimistic execution of real-time simulations

Kaushik Ghosh; Richard M. Fujimoto; Karsten Schwan


Archive | 1993

A Survey of Real-Time Operating Systems -- Draft

Bodhisattwa Mukherjee; Karsten Schwan; Kaushik Ghosh


Archive | 1993

A Survey of Real-Time Operating Systems -- Preliminary Draft

Bodhisattwa Mukherjee; Karsten Schwan; Kaushik Ghosh


Archive | 1994

PORTS: Experiences with a Scheduler for Dynamic Real-Time Systems

Kaushik Ghosh; Richard M. Fujimoto; Karsten Schwan


Concurrency and Computation: Practice and Experience | 1999

Composing high-performance schedulers: a case study from real-time simulation

Kaushik Ghosh; Richard M. Fujimoto; Karsten Schwan

Collaboration


Dive into Kaushik Ghosh's collaborations.

Top Co-Authors

Karsten Schwan, Georgia Institute of Technology

Richard M. Fujimoto, Georgia Institute of Technology

Bodhisattwa Mukherjee, Georgia Institute of Technology

Greg Eisenhauer, Georgia Institute of Technology