Publication


Featured research published by Mohan Rajagopalan.


Programming Language Design and Implementation | 2009

Programming model for a heterogeneous x86 platform

Bratin Saha; Xiaocheng Zhou; Hu Chen; Ying Gao; Shoumeng Yan; Mohan Rajagopalan; Jesse Fang; Peinan Zhang; Ronny Ronen; Avi Mendelson

The client computing platform is moving towards a heterogeneous architecture consisting of a combination of cores focused on scalar performance and a set of throughput-oriented cores. The throughput-oriented cores (e.g., a GPU) may be connected over both coherent and non-coherent interconnects, and have different ISAs. This paper describes a programming model for such heterogeneous platforms. We discuss the language constructs, runtime implementation, and the memory model for such a programming environment. We implemented this programming environment in an x86 heterogeneous platform simulator. We ported a number of workloads to the environment and report their performance.
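
As a rough illustration only (the paper defines its own language constructs and memory model, which are not reproduced here), the sketch below mimics an offload-style call in Python: a kernel marked for the throughput-oriented cores runs over a buffer that both sides access through a shared address space. The `offload` decorator and thread-pool "device" are invented for this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

DEVICE = ThreadPoolExecutor(max_workers=8)      # stand-in for throughput cores

def offload(kernel):
    """Run `kernel` element-wise on the 'device' over a shared buffer."""
    def run(data):
        # Host and "device" code operate on the same underlying buffer,
        # mimicking a shared virtual memory between the two kinds of cores.
        list(DEVICE.map(lambda i: data.__setitem__(i, kernel(data[i])),
                        range(len(data))))
        return data
    return run

@offload
def scale(x):
    return 2 * x

buf = list(range(10))       # buffer visible to both host and "device"
print(scale(buf))           # [0, 2, 4, ..., 18]
```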


IEEE Transactions on Dependable and Secure Computing | 2006

System Call Monitoring Using Authenticated System Calls

Mohan Rajagopalan; Matti A. Hiltunen; Trevor Jim; Richard D. Schlichting

System call monitoring is a technique for detecting and controlling compromised applications by checking at runtime that each system call conforms to a policy that specifies the program's normal behavior. Here, we introduce a new approach to implementing system call monitoring based on authenticated system calls. An authenticated system call is a system call augmented with extra arguments that specify the policy for that call, and a cryptographic message authentication code that guarantees the integrity of the policy and the system call arguments. This extra information is used by the kernel to verify the system call. The version of the application in which regular system calls have been replaced by authenticated calls is generated automatically by an installer program that reads the application binary, uses static analysis to generate policies, and then rewrites the binary with the authenticated calls. This paper presents the approach, describes a prototype implementation based on Linux and the PLTO binary rewriting system, and gives experimental results suggesting that the approach is effective in protecting against compromised applications at modest cost.
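
As a rough sketch of the mechanism only (the real installer rewrites the binary with PLTO and verification happens inside the Linux kernel), the fragment below binds a per-call policy to the call with an HMAC; the helper names are invented for illustration.

```python
# Illustrative sketch of an authenticated system call: the "installer" attaches
# a policy plus a MAC over the call name, policy, and fixed arguments; the
# "kernel" recomputes the MAC and rejects any call whose policy or arguments
# were tampered with.
import hmac, hashlib

KERNEL_KEY = b"per-installation secret key"   # known only to installer + kernel

def install_policy(syscall, policy, args):
    """Installer side: compute the MAC the rewritten binary will carry."""
    msg = repr((syscall, policy, args)).encode()
    return hmac.new(KERNEL_KEY, msg, hashlib.sha256).hexdigest()

def verify_syscall(syscall, policy, args, mac):
    """Kernel side: recompute and compare before executing the call."""
    expected = install_policy(syscall, policy, args)
    return hmac.compare_digest(expected, mac)

# The installer authenticates an open() restricted to one file, read-only.
mac = install_policy("open", {"path": "/etc/app.conf", "mode": "r"},
                     ("/etc/app.conf", "r"))

# A legitimate call passes; a compromised application that changes the
# arguments (e.g. to /etc/shadow) cannot forge a matching MAC.
assert verify_syscall("open", {"path": "/etc/app.conf", "mode": "r"},
                      ("/etc/app.conf", "r"), mac)
assert not verify_syscall("open", {"path": "/etc/app.conf", "mode": "r"},
                          ("/etc/shadow", "r"), mac)
```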


Dependable Systems and Networks | 2005

Authenticated system calls

Mohan Rajagopalan; Matti A. Hiltunen; Trevor Jim; Richard D. Schlichting

System call monitoring is a technique for detecting and controlling compromised applications by checking at runtime that each system call conforms to a policy that specifies the program's normal behavior. A new approach to system call monitoring based on authenticated system calls is introduced. An authenticated system call is a system call augmented with extra arguments that specify the policy for that call and a cryptographic message authentication code (MAC) that guarantees the integrity of the policy and the system call arguments. This extra information is used by the kernel to verify the system call. The version of the application in which regular system calls have been replaced by authenticated calls is generated automatically by an installer program that reads the application binary, uses static analysis to generate policies, and then rewrites the binary with the authenticated calls. This paper presents the approach, describes a prototype implementation based on Linux and the PLTO binary rewriting system, and gives experimental results suggesting that the approach is effective in protecting against compromised applications at modest cost.


Programming Language Design and Implementation | 2002

Profile-directed optimization of event-based programs

Mohan Rajagopalan; Saumya K. Debray; Matti A. Hiltunen; Richard D. Schlichting

Events are used as a fundamental abstraction in programs ranging from graphical user interfaces (GUIs) to systems for building customized network protocols. While providing a flexible structuring and execution paradigm, events have the potentially serious drawback of extra execution overhead due to the indirection between modules that raise events and those that handle them. This paper describes an approach to addressing this issue using static optimization techniques. This approach, which exploits the underlying predictability often exhibited by event-based programs, is based on first profiling the program to identify commonly occurring event sequences. A variety of techniques that use the resulting profile information are then applied to the program to reduce the overheads associated with such mechanisms as indirect function calls and argument marshaling. In addition to describing the overall approach, experimental results are given that demonstrate the effectiveness of the techniques. These results are from event-based programs written for X Windows, a system for building GUIs, and Cactus, a system for constructing highly configurable distributed services and network protocols.
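
A toy sketch of the profiling step (not the paper's tooling, which operates on binaries): count which event pairs dominate a recorded trace, then give the hot pairs a fused, direct dispatch path in place of two indirect handler lookups. All names here are invented for illustration.

```python
from collections import Counter

def profile_event_pairs(trace):
    """Count consecutive event pairs in a recorded trace."""
    return Counter(zip(trace, trace[1:]))

def build_fast_paths(pair_counts, handlers, threshold):
    """For frequent pairs (a, b), fuse the two handlers into one direct call."""
    fast = {}
    for (a, b), n in pair_counts.items():
        if n >= threshold:
            ha, hb = handlers[a], handlers[b]
            # One specialized call replaces two indirect handler dispatches.
            fast[(a, b)] = lambda arg, ha=ha, hb=hb: hb(ha(arg))
    return fast

handlers = {"Expose": lambda x: x + ["redraw"],
            "Flush":  lambda x: x + ["flush"]}
trace = ["Expose", "Flush"] * 50 + ["KeyPress"]
hot = build_fast_paths(profile_event_pairs(trace), handlers, threshold=10)
print(hot[("Expose", "Flush")]([]))    # ['redraw', 'flush']
```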


Workshop on Declarative Aspects of Multicore Programming | 2007

A proposal for parallel self-adjusting computation

Matthew A. Hammer; Umut A. Acar; Mohan Rajagopalan; Anwar M. Ghuloum

We present an overview of our ongoing work on parallelizing self-adjusting-computation techniques. In self-adjusting computation, programs can respond to changes to their data (e.g., inputs, outcomes of comparisons) automatically by running a change-propagation algorithm. This ability is important in applications where inputs change slowly over time. All previously proposed self-adjusting computation techniques assume a sequential execution model. We describe techniques for writing parallel self-adjusting programs and a change propagation algorithm that can update computations in parallel. We describe a prototype implementation and present preliminary experimental results.
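
The following minimal sketch (invented for illustration, not the authors' change-propagation algorithm) conveys the shape of the idea: derived values record which inputs they read, and when an input changes only the affected values are recomputed, with independent recomputations eligible to run in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

class Cell:
    def __init__(self, value):
        self.value = value
        self.readers = set()          # derived cells that depend on this input

class Derived:
    def __init__(self, fn, inputs):
        self.fn, self.inputs = fn, inputs
        for c in inputs:
            c.readers.add(self)
        self.value = fn(*(c.value for c in inputs))

    def recompute(self):
        self.value = self.fn(*(c.value for c in self.inputs))

def propagate(changed_cells):
    """Recompute only the derived cells that read the changed inputs."""
    dirty = {d for c in changed_cells for d in c.readers}
    with ThreadPoolExecutor() as pool:      # independent cells in parallel
        list(pool.map(Derived.recompute, dirty))

a, b, c = Cell(1), Cell(2), Cell(3)
s = Derived(lambda x, y: x + y, [a, b])     # depends on a, b
p = Derived(lambda x, y: x * y, [b, c])     # depends on b, c
a.value = 10                                # change one input...
propagate([a])                              # ...only s is recomputed
print(s.value, p.value)                     # 12 6
```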


International XML Database Symposium | 2009

A Data Parallel Algorithm for XML DOM Parsing

Bhavik Shah; Praveen Rao; Bongki Moon; Mohan Rajagopalan

The Extensible Markup Language (XML) has become the de facto standard for information representation and interchange on the Internet. XML parsing is a core operation performed on an XML document for it to be accessed and manipulated. This operation is known to cause performance bottlenecks in applications and systems that process large volumes of XML data. We believe that parallelism is a natural way to boost performance. Leveraging multicore processors can offer a cost-effective solution, because future multicore processors will support hundreds of cores, and will offer a high degree of parallelism in hardware. We propose a data parallel algorithm called ParDOM for XML DOM parsing that builds an in-memory tree structure for an XML document. ParDOM has two phases. In the first phase, an XML document is partitioned into chunks and parsed in parallel. In the second phase, partial DOM node tree structures created during the first phase are linked together (in parallel) to build a complete DOM node tree. ParDOM offers fine-grained parallelism by adopting a flexible chunking scheme --- each chunk can contain an arbitrary number of start and end XML tags that are not necessarily matched. ParDOM can be conveniently implemented using a data parallel programming model that supports map and sort operations. Through empirical evaluation, we show that ParDOM yields better scalability than PXP [23] --- a recently proposed parallel DOM parsing algorithm --- on commodity multicore processors. Furthermore, ParDOM can process a wide variety of XML datasets with complex structures which PXP fails to parse.
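
A heavily simplified sketch of the two-phase structure (not the published ParDOM algorithm): phase one scans chunks in parallel and emits tag events without requiring tags to match within a chunk; phase two links the partial results into one tree. Attributes, text nodes, and namespaces are ignored, and the linking step here is sequential for brevity.

```python
import re
from concurrent.futures import ThreadPoolExecutor

TAG = re.compile(r"<(/?)(\w+)>")

def scan_chunk(chunk):
    """Phase 1: tag events for one chunk; tags need not match within it."""
    return [("end" if m.group(1) else "start", m.group(2))
            for m in TAG.finditer(chunk)]

def parse(xml, n_chunks=4):
    size = max(1, len(xml) // n_chunks)
    # Chunk boundaries are aligned to '<' so no tag is split across chunks.
    bounds = [0] + [xml.index("<", i) for i in range(size, len(xml), size)
                    if "<" in xml[i:]] + [len(xml)]
    chunks = [xml[a:b] for a, b in zip(bounds, bounds[1:])]
    with ThreadPoolExecutor() as pool:
        events = [e for evs in pool.map(scan_chunk, chunks) for e in evs]
    # Phase 2 (sequential here for brevity): link events into a node tree.
    root = {"tag": None, "children": []}
    stack = [root]
    for kind, name in events:
        if kind == "start":
            node = {"tag": name, "children": []}
            stack[-1]["children"].append(node)
            stack.append(node)
        else:
            stack.pop()
    return root["children"]

print(parse("<a><b></b><c><d></d></c></a>"))
```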


Languages and Compilers for Parallel Computing | 2007

Pillar: A Parallel Implementation Language

Todd A. Anderson; Neal Glew; Peng Guo; Brian T. Lewis; Wei Liu; Zhanglin Liu; Leaf Petersen; Mohan Rajagopalan; James M. Stichnoth; Gansha Wu; Dan Zhang

As parallelism in microprocessors becomes mainstream, new programming languages and environments are emerging to meet the challenges of parallel programming. To support research on these languages, we are developing a low-level language infrastructure called Pillar (derived from Parallel Implementation Language). Although Pillar programs are intended to be automatically generated from source programs in each parallel language, Pillar programs can also be written by expert programmers. The language is defined as a small set of extensions to C. As a result, Pillar is familiar to C programmers, but more importantly, it is practical to reuse an existing optimizing compiler like gcc [1] or Open64 [2] to implement a Pillar compiler. Pillar's concurrency features include constructs for threading, synchronization, and explicit data-parallel operations. The threading constructs focus on creating new threads only when hardware resources are idle, and otherwise executing parallel work within existing threads, thus minimizing thread creation overhead. In addition to the usual synchronization constructs, Pillar includes transactional memory. Its sequential features include stack walking, second-class continuations, support for precise garbage collection, tail calls, and seamless integration of Pillar and legacy code. This paper describes the design and implementation of the Pillar software stack, including the language, compiler, runtime, and high-level converters (that translate high-level language programs into Pillar programs). It also reports on early experience with three high-level languages that target Pillar.
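
The spawn-only-when-idle policy is easy to caricature outside of Pillar's C-based syntax; the Python sketch below (invented for illustration) creates a task only if a worker token is available and otherwise runs the work inline, so parallel calls stay cheap when the machine is already busy.

```python
import os, threading
from concurrent.futures import ThreadPoolExecutor

WORKERS = os.cpu_count() or 4
_pool = ThreadPoolExecutor(max_workers=WORKERS)
_idle = threading.Semaphore(WORKERS)       # tokens for idle hardware threads

def pcall(fn, *args):
    """Parallel call: spawn only if a worker is free, else run inline.
    Returns a thunk; calling it joins the result."""
    if _idle.acquire(blocking=False):
        fut = _pool.submit(fn, *args)
        fut.add_done_callback(lambda _: _idle.release())
        return fut.result
    value = fn(*args)                      # no idle worker: execute inline
    return lambda: value

def fib(n):
    if n < 2:
        return n
    left = pcall(fib, n - 1)               # may run in another thread
    return left() + fib(n - 2)

print(fib(20))                             # 6765
```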


Software - Practice and Experience | 2003

QoS customization in distributed object systems

Jun He; Matti A. Hiltunen; Mohan Rajagopalan; Richard D. Schlichting

Applications built on networked collections of computers are increasingly using distributed object platforms such as CORBA, Java Remote Method Invocation (RMI), and DCOM to standardize object interactions. With this increased use comes the increased need for enhanced quality of service (QoS) attributes related to fault tolerance, security, and timeliness. This paper describes an architecture called CQoS (configurable QoS) for implementing such enhancements in a transparent, highly customizable, and portable manner. CQoS consists of two parts: application- and platform-dependent interceptors and generic QoS components. The generic QoS components are implemented using Cactus, a system for building highly configurable protocols and services in distributed systems. The CQoS architecture and the interfaces between the different components are described, together with implementations of QoS attributes using Cactus and interceptors for CORBA and Java RMI. Experimental results are given for a test application executing on a Linux cluster using Cactus/J, the Java implementation of Cactus. Compared with other approaches, CQoS emphasizes portability across different distributed object platforms, while the use of Cactus allows custom combinations of fault-tolerance, security, and timeliness attributes to be realized on a per-object basis in a straightforward way.
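
A compact sketch of the interceptor/component split (component names invented; the real system hooks into the CORBA and Java RMI request paths): a platform-dependent proxy intercepts each invocation and threads it through a configurable chain of generic QoS components before dispatching to the target object.

```python
class QoSComponent:
    def invoke(self, call, proceed):
        return proceed(call)               # default: pass through unchanged

class AuthComponent(QoSComponent):
    def invoke(self, call, proceed):
        if call.get("token") != "secret":
            raise PermissionError("unauthenticated invocation")
        return proceed(call)

class LoggingComponent(QoSComponent):
    def invoke(self, call, proceed):
        print("invoking", call["method"])
        return proceed(call)

class Interceptor:
    """Platform-dependent part: wraps a target object, runs the QoS chain."""
    def __init__(self, target, components):
        self._target, self._components = target, components

    def call(self, method, *args, token=None):
        request = {"method": method, "args": args, "token": token}
        def dispatch(req):
            return getattr(self._target, req["method"])(*req["args"])
        chain = dispatch
        for comp in reversed(self._components):
            chain = (lambda c, nxt: lambda req: c.invoke(req, nxt))(comp, chain)
        return chain(request)

class Account:
    def balance(self):
        return 42

proxy = Interceptor(Account(), [AuthComponent(), LoggingComponent()])
print(proxy.call("balance", token="secret"))    # logs, then returns 42
```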


Lecture Notes in Computer Science | 2001

Providing QoS Customization in Distributed Object Systems

Jun He; Matti A. Hiltunen; Mohan Rajagopalan; Richard D. Schlichting

Applications built on networked collections of computers are increasingly using distributed object platforms such as CORBA, Java RMI, and DCOM to standardize object interactions. With this increased use comes the increased need for enhanced Quality of Service (QoS) attributes related to fault tolerance, security, and timeliness. This paper describes an architecture called CQoS (Configurable QoS) for implementing such enhancements in a transparent, highly customizable, and portable manner. CQoS consists of two parts: application- and platform-dependent interceptors and generic QoS components. The generic QoS components are implemented using Cactus, a system for building highly configurable protocols and services in distributed systems. The CQoS architecture and the interfaces between the different components are described, together with implementations of QoS attributes using Cactus and interceptors for CORBA and Java RMI. Experimental results are given for a test application executing on a Linux cluster using Cactus/J, the Java implementation of Cactus. Compared with other approaches, CQoS emphasizes portability across different distributed object platforms, while the use of Cactus allows custom combinations of fault-tolerance, security, and timeliness attributes to be realized on a per-object basis in a straightforward way.


Archive | 2014

Shared virtual memory

Hu Chen; Ying Gao; Xiaocheng Zhou; Shoumeng Yan; Peinan Zhang; Mohan Rajagopalan; Jesse Fang; Avi Mendelson; Bratin Saha
