
Publication


Featured research published by Rakesh Jha.


IEEE Transactions on Software Engineering | 1989

Ada program partitioning language: a notion for distributing Ada programs

Rakesh Jha; J. M. Kamrad; Dennis Cornhill

Ada Program Partitioning Language (APPL) has been designed as part of Honeywell's Distributed Ada project. The goal of the project is to develop an approach for reducing the complexity of building distributed applications in Ada. In the proposed approach, an application is written as a single Ada program using the full capabilities of the Ada language. It is not necessary to factor the underlying hardware configuration into the program design. Once the program has been completed and tested in the host development environment, it is partitioned into fragments and mapped onto the distributed hardware. The partitioning and mapping are expressed in APPL and do not require changes to the Ada source. The main thrusts of the project include the design of APPL and the development of language translation tools and the run-time system to support Ada and APPL for a distributed target. The authors present an overview of APPL, the goals considered in the design, and issues that impact its implementation.
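
APPL's own notation is not reproduced in the abstract, so the C sketch below only illustrates the underlying idea: a partition map, kept apart from the application source, assigns named program fragments to target nodes. All identifiers and the table layout are hypothetical; this is not APPL syntax.

    #include <stdio.h>

    /* Hypothetical partition map (not APPL syntax): named program
     * fragments are assigned to nodes in a table kept separate from
     * the application source, so the mapping can change without
     * touching the program itself. */
    struct partition_entry {
        const char *fragment;   /* name of an application fragment */
        int         node;       /* target node in the distributed system */
    };

    static const struct partition_entry partition_map[] = {
        { "sensor_input",   0 },
        { "track_manager",  1 },
        { "display_server", 2 },
    };

    int main(void)
    {
        size_t i;
        for (i = 0; i < sizeof partition_map / sizeof partition_map[0]; i++)
            printf("fragment %-14s -> node %d\n",
                   partition_map[i].fragment, partition_map[i].node);
        return 0;
    }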


International Parallel Processing Symposium | 1995

SPI: an instrumentation development environment for parallel/distributed systems

Devesh Bhatt; Rakesh Jha; Todd Steeves; Rashmi Bhatt; David Wills

This paper presents an overview of the Scalable Parallel Instrumentation (SPI) tool being developed at Honeywell. SPI provides a complete development and execution environment for developing real-time instrumentation functions for heterogeneous parallel/distributed systems. This includes C-extensions and development tools for the event-action programming model, run-time support for transparent event-action execution on a heterogeneous distributed platform, and a library of primitives (actions) ranging from real-time data collection and analysis to graphic display. Concurrent instrumentation functions can be flexibly parallelized/distributed over the heterogeneous platform to selectively analyze and display desired activity at the hardware, OS, IPC, and application levels. SPI is currently operational on a heterogeneous platform of SUN workstations and an Intel Paragon.
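
The abstract does not show SPI's actual C-extensions or API. The following is a minimal, hypothetical event-action dispatcher in plain C that only illustrates the general programming model described above: instrumentation actions registered against named events. Every identifier is invented for illustration.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical event-action registry: instrumentation "actions" are
     * callbacks bound to named events raised by the instrumented system.
     * Illustrates the general model only, not the SPI interfaces. */
    typedef void (*action_fn)(const char *event, double value);

    struct binding {
        const char *event;
        action_fn   action;
    };

    static void log_action(const char *event, double value)
    {
        printf("[log] %s = %.3f\n", event, value);
    }

    static void threshold_action(const char *event, double value)
    {
        if (value > 100.0)
            printf("[alert] %s exceeded threshold: %.3f\n", event, value);
    }

    static const struct binding bindings[] = {
        { "msg_latency_ms", log_action },
        { "msg_latency_ms", threshold_action },
        { "queue_depth",    log_action },
    };

    /* Raise an event: run every action bound to it. */
    static void raise_event(const char *event, double value)
    {
        size_t i;
        for (i = 0; i < sizeof bindings / sizeof bindings[0]; i++)
            if (strcmp(bindings[i].event, event) == 0)
                bindings[i].action(event, value);
    }

    int main(void)
    {
        raise_event("msg_latency_ms", 42.0);
        raise_event("msg_latency_ms", 140.0);
        raise_event("queue_depth", 7.0);
        return 0;
    }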


Conference on High Performance Computing (Supercomputing) | 1996

The C3I parallel benchmark suite - introduction and preliminary results

Rakesh Jha; Richard C. Metzger; Brian VanVoorst; Luiz Pires; Wing Au; Minesh B. Amin; David A. Castanon; Vipin Kumar

Current parallel benchmarks, while appropriate for scientific applications, lack the defense relevance and representativeness needed by developers who are considering parallel computers for their Command, Control, Communication, and Intelligence (C3I) systems. We present a new set of compact application benchmarks that are specific to the C3I application domain. The C3I Parallel Benchmark Suite (C3IPBS) program addresses the evaluation not only of machine performance but also of software implementation effort. Our methodology draws heavily from PARKBENCH [2] and the NAS Parallel Benchmarks [1]. The paper presents the benchmarking methodology, introduces the benchmarks, and reports initial results and analysis. Finally, we describe the lessons that we have learned so far from formulating and implementing the C3I benchmarks.
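
The suite's kernels and its measure of implementation effort are only named, not specified, in the abstract. The short C harness below merely illustrates the wall-clock timing side of benchmarking a stand-in kernel; the kernel, problem size, and use of POSIX clock_gettime are assumptions for illustration.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical compute kernel standing in for a C3I compact
     * application benchmark; the real kernels are not reproduced here. */
    static double kernel(int n)
    {
        double acc = 0.0;
        int i, j;
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                acc += (double)i * j * 1e-9;
        return acc;
    }

    int main(void)
    {
        struct timespec t0, t1;
        double elapsed, result;

        /* POSIX wall-clock timing (may need -lrt on older systems). */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        result = kernel(2000);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        printf("result=%g elapsed=%.3f s\n", result, elapsed);
        return 0;
    }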


IEEE Computer | 2001

Real-time adaptive resource management

Allalaghatta Pavan; Rakesh Jha; Lee Graba; Saul Cooper; Ionut Cardei; Mihaela Cardei; Vipin Gopal; Sanjay Parthasarathy; Saad Bedros

Distributed mission-critical environments employ a mixture of hard and soft real-time applications that usually expect a guaranteed range of quality of service (QoS). These applications have different levels of criticality and varied structures ranging from periodic independent tasks to distributed pipelines or event-driven modules. The underlying distributed system must evolve and adapt to the high variability in resource demands that competing applications impose. The current industry trend is to use commercial off-the-shelf (COTS) hardware and software components to build distributed environments for mission-critical applications. The paper considers how adding a middleware layer above the COTS components facilitates consistent management of system resources, decreases system complexity, and reduces development costs.
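
The abstract does not give the middleware's interfaces. As a purely illustrative sketch, the C fragment below models a QoS request as a desired range that a resource manager can admit anywhere inside, which is the kind of contract such a layer might negotiate; the names, fields, and numbers are all invented.

    #include <stdio.h>

    /* Hypothetical QoS contract: an application asks for a service rate
     * somewhere in [min_rate, max_rate]; the resource manager grants a
     * value in that range based on current load. Illustrative only. */
    struct qos_request {
        const char *app;
        double min_rate_hz;   /* lowest acceptable service rate */
        double max_rate_hz;   /* rate the application would like */
    };

    static double negotiate(const struct qos_request *req, double available_hz)
    {
        if (available_hz < req->min_rate_hz)
            return 0.0;                  /* reject: cannot meet the minimum */
        if (available_hz > req->max_rate_hz)
            return req->max_rate_hz;     /* cap at the requested maximum */
        return available_hz;             /* admit somewhere in the range */
    }

    int main(void)
    {
        struct qos_request req = { "tracker", 5.0, 30.0 };
        double granted = negotiate(&req, 12.5);

        if (granted > 0.0)
            printf("%s admitted at %.1f Hz\n", req.app, granted);
        else
            printf("%s rejected\n", req.app);
        return 0;
    }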


International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing | 2000

Hierarchical feedback adaptation for real time sensor-based distributed applications

Mihaela Cardei; Ionut Cardei; Rakesh Jha; Allalaghatta Pavan

The paper presents an innovative hierarchical feedback adaptation method that efficiently controls the dynamic QoS behavior of real-time distributed data-flow applications, such as sensor-based data streams or mission-critical command and control applications. We applied this method in the context of the Real Time Adaptive Resource Management (RTARM) system, a middleware for integrated services developed at the Honeywell Technology Center. We present the analytical model for an Automatic Target Recognition pipeline application and the impact of hierarchical feedback adaptation on the application behavior and its QoS parameters.
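
The paper's analytical model is not reproduced in the abstract. The short C loop below is only a generic sketch of feedback adaptation, nudging a pipeline's CPU share when its measured rate drifts outside a contracted range; the step size, thresholds, and sample values are made up for illustration.

    #include <stdio.h>

    /* Hypothetical feedback adaptation step: compare the measured frame
     * rate of a sensor pipeline against its contracted range and adjust
     * its share of the CPU accordingly. Illustrative only. */
    static double adapt_cpu_share(double share, double measured_hz,
                                  double low_hz, double high_hz)
    {
        if (measured_hz < low_hz && share < 1.0)
            share += 0.05;            /* falling behind: grant more CPU */
        else if (measured_hz > high_hz && share > 0.05)
            share -= 0.05;            /* overshooting: release some CPU */
        return share;
    }

    int main(void)
    {
        double share = 0.30;
        double samples[] = { 8.0, 9.5, 12.0, 26.0, 24.0 };  /* measured Hz */
        size_t i;

        for (i = 0; i < sizeof samples / sizeof samples[0]; i++) {
            share = adapt_cpu_share(share, samples[i], 10.0, 25.0);
            printf("measured %.1f Hz -> cpu share %.2f\n", samples[i], share);
        }
        return 0;
    }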


Lecture Notes in Computer Science | 2000

Hierarchical architecture for real-time adaptive resource management

Ionut Cardei; Rakesh Jha; Mihaela Cardei; Allalaghatta Pavan

This paper presents the Real Time Adaptive Resource Management system (RTARM), developed at the Honeywell Technology Center. RTARM supports the provision of integrated services for real-time distributed applications and offers management services for end-to-end QoS negotiation, QoS adaptation, real-time monitoring, and hierarchical QoS feedback adaptation. In this paper, we focus on the hierarchical architecture of RTARM, its flexibility, and the internal mechanisms and protocols that enable management of resources for integrated services. The extensibility of the architecture is emphasized with the description of several service managers, including an object wrapper built around the NetEx real-time network resource manager. We use practical experiments with a distributed Automatic Target Recognition application and a synthetic pipeline application to illustrate the impact of RTARM on application behavior and to evaluate the system performance.
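
RTARM's actual service-manager interfaces are not given in the abstract. As a minimal sketch of the hierarchical idea, assuming invented names throughout, the C fragment below has an end-to-end manager admit a request only if every lower-level resource manager (CPU, network) can admit its share.

    #include <stdio.h>

    /* Hypothetical hierarchy: an end-to-end service manager delegates
     * admission to lower-level resource managers and accepts a request
     * only if all of them do. Illustrative of the idea, not RTARM's
     * actual protocols. */
    typedef int (*admit_fn)(double demand);

    static int cpu_admit(double demand)     { return demand <= 0.6; }
    static int network_admit(double demand) { return demand <= 0.4; }

    static const admit_fn children[] = { cpu_admit, network_admit };

    static int end_to_end_admit(const double demands[], size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            if (!children[i](demands[i]))
                return 0;   /* any child refusing rejects the whole request */
        return 1;
    }

    int main(void)
    {
        double demands[] = { 0.5, 0.3 };   /* CPU share, network share */
        printf("request %s\n",
               end_to_end_admit(demands, 2) ? "admitted" : "rejected");
        return 0;
    }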


IEEE International Conference on High Performance Computing, Data, and Analytics | 1996

Adaptive resource allocation for embedded parallel applications

Rakesh Jha; Mustafa Muhammad; Sudhakar Yalamanchili; Karsten Schwan; Daniela Ivan-Rosu; Chris deCastro

Parallel and distributed computer architectures are increasingly being considered for application in a wide variety of computationally intensive embedded systems. Many such applications impose highly dynamic demands for resources (processors, memory, and communication network), because their computations are data-dependent, or because the applications must constantly interact with a rapidly changing physical environment, or because the applications themselves are adaptive. This paper presents a set of dynamic resource allocation techniques aimed at maintaining high levels of application performance in the presence of varying resource demands. It focuses on a class of applications structured as multiple pipelines of data-parallel stages, as this structure is common to many sensor-based applications. We discuss the issues involved in resource management for such applications, and present preliminary results from our implementations on the Intel Paragon. Our approach uses feedback control: a real-time monitoring system is used to detect significant performance shortfalls, and resources are reallocated among the application components in an attempt to improve performance. The main contribution of this work is that it combines real-time monitoring of an application's performance with dynamic resource allocation, and focuses on practical implementations rather than simulation and analysis.
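
As a rough sketch of the idea of shifting processors between data-parallel stages when monitoring detects a bottleneck, the C fragment below moves one processor from the fastest stage to the slowest. It is illustrative only, not the authors' algorithm; stage names and rates are invented.

    #include <stdio.h>

    /* Hypothetical reallocation step for a pipeline of data-parallel
     * stages: move one processor from the stage with the most slack to
     * the stage with the worst measured throughput. Illustrative only. */
    struct stage {
        const char *name;
        int         procs;        /* processors currently assigned */
        double      rate_hz;      /* measured per-stage throughput */
    };

    static void rebalance(struct stage s[], size_t n)
    {
        size_t slow = 0, fast = 0, i;
        for (i = 1; i < n; i++) {
            if (s[i].rate_hz < s[slow].rate_hz) slow = i;
            if (s[i].rate_hz > s[fast].rate_hz) fast = i;
        }
        if (slow != fast && s[fast].procs > 1) {
            s[fast].procs--;   /* take a processor from the fastest stage */
            s[slow].procs++;   /* give it to the bottleneck stage */
        }
    }

    int main(void)
    {
        struct stage pipeline[] = {
            { "filter",   4, 22.0 },
            { "detect",   4,  9.0 },
            { "classify", 4, 15.0 },
        };
        size_t i;

        rebalance(pipeline, 3);
        for (i = 0; i < 3; i++)
            printf("%-8s procs=%d rate=%.1f Hz\n",
                   pipeline[i].name, pipeline[i].procs, pipeline[i].rate_hz);
        return 0;
    }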


ACM SIGAda Ada Letters | 1989

An implementation supporting distributed execution of partitioned Ada programs

Rakesh Jha; Greg Eisenhauer; J. M. Kamrad; Dennis Cornhill

This paper describes the implementation of a novel paradigm for building distributed application software in Ada. The entire application is written as a single program, which is partitioned for distributed execution after its design. The partitioning is expressed in a separate notation called the Ada Program Partitioning Language (APPL). A modified compilation system accepts an Ada program and an APPL specification for it as input, and produces a separate executable image for each node. There is considerable freedom in what Ada entities may be placed on different nodes. The two-phase design approach helps reduce design complexity, and allows experimentation with different strategies for allocating software to hardware without requiring software redesign. Our implementation is currently in the final stages of testing using a modified version of the Ada compiler validation test suite. Section 1 presents an overview of our paradigm and the rationale behind it. Section 2 introduces APPL. Section 3 describes our current implementation. Section 4 discusses the question of what Ada entities should be distributable, and compares several alternatives. Finally, Section 5 concludes the paper with pointers to future work.


International Parallel Processing Symposium | 1997

Implementation and results of hypothesis testing from the C3I parallel benchmark suite

B. Van Voorst; Rakesh Jha; Luiz Pires; M. Muhammad

This paper describes the implementation of the hypothesis testing benchmark, one of ten kernels from the C3I (Command, Control, Communications and Intelligence) Parallel Benchmark Suite (C3IPBS). The benchmark was implemented and executed on a variety of parallel environments. This paper details the run times obtained with these implementations, and offers an analysis of the results.


International Workshop on Real-Time Ada Issues | 1990

Parallel Ada: issues in programming and implementation

Rakesh Jha

The class of applications of interest is that in which parallelism is exploited for the sole purpose of execution speedup. For example, large-scale numerical problems need a high degree of parallelism to reduce computation time. It is argued that it is difficult to program such applications in Ada, owing to the nature of its mechanisms for intertask communication. The major problems are the lack of adequate support for shared data, and that the one-to-one explicit communication forced by rendezvous leads to needlessly complex programs.

Collaboration


Dive into Rakesh Jha's collaborations.

Top Co-Authors

Ionut Cardei
Florida Atlantic University

Mihaela Cardei
Florida Atlantic University

Karsten Schwan
Georgia Institute of Technology