Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nipun Arora is active.

Publication


Featured research published by Nipun Arora.


Automated Software Engineering | 2011

BEST: A symbolic testing tool for predicting multi-threaded program failures

Malay K. Ganai; Nipun Arora; Chao Wang; Aarti Gupta; Gogul Balakrishnan

We present BEST (Binary instrumentation-based Error-directed Symbolic Testing), a tool for predicting concurrency violations. We automatically infer potential concurrency violations, such as atomicity violations, from an observed run of a multi-threaded program, and use precise modeling and constraint-based symbolic (non-enumerative) search to find feasible violating schedules in a generalization of the observed run. We specifically focus on tool scalability by devising POR-based simplification steps that reduce the formula and the search space by several orders of magnitude. We have successfully applied the tool to several publicly available C/C++/Java programs and found several previously known and unknown concurrency-related bugs. The tool also has extensive visual support for debugging.
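The abstract turns on finding a feasible violating schedule among the interleavings of an observed run. BEST itself does this with constraint-based symbolic search rather than enumeration; purely as a toy illustration of what a "violating schedule" means (all names and event tuples here are invented for the example), a naive enumerative sketch might look like:

```python
from itertools import combinations

def interleavings(t1, t2):
    """Yield every merge of two per-thread event lists that preserves
    each thread's program order."""
    n, m = len(t1), len(t2)
    for pos in combinations(range(n + m), n):  # slots taken by thread 1
        it1, it2 = iter(t1), iter(t2)
        yield [next(it1) if i in pos else next(it2) for i in range(n + m)]

def has_atomicity_violation(sched, region):
    """True if a remote write to the same variable lands between the local
    read and local write of an intended-atomic region (non-serializable)."""
    read_evt, write_evt = region
    i, j = sched.index(read_evt), sched.index(write_evt)
    return any(e[0] != read_evt[0] and e[1] == 'write' and e[2] == read_evt[2]
               for e in sched[i + 1:j])

# Observed run: thread A performs an intended-atomic read-then-write on x,
# while thread B writes x concurrently.
A = [('A', 'read', 'x'), ('A', 'write', 'x')]
B = [('B', 'write', 'x')]
violating = [s for s in interleavings(A, B)
             if has_atomicity_violation(s, (A[0], A[1]))]
```

Of the three order-preserving interleavings, exactly one places B's write inside A's region; a symbolic tool encodes the same feasibility question as constraints instead of enumerating schedules, which is what makes it scale.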


Proceedings of the 2009 ICSE Workshop on Multicore Software Engineering | 2009

COMPASS: A Community-driven Parallelization Advisor for Sequential Software

Simha Sethumadhavan; Nipun Arora; Ravindra Babu Ganapathi; John Demme; Gail E. Kaiser

The widespread adoption of multicores has renewed the emphasis on the use of parallelism to improve performance. The present and growing diversity in hardware architectures and software environments, however, continues to pose difficulties in the effective use of parallelism, thus delaying a quick and smooth transition to the concurrency era. In this paper, we describe the research being conducted at Columbia University on a system called COMPASS that aims to simplify this transition by providing advice to programmers while they reengineer their code for parallelism. The advice proffered to the programmer is based on the wisdom collected from programmers who have already parallelized some similar code. The utility of COMPASS rests not only on its ability to collect the wisdom unintrusively, but also on its ability to automatically seek, find, and synthesize this wisdom into advice that is tailored to the task at hand, i.e., the code the user is considering parallelizing and the environment in which the optimized program is planned to execute. COMPASS provides a platform and an extensible framework for sharing human expertise about code parallelization - widely, and on diverse hardware and software. By leveraging the “wisdom of crowds” model [30], which has been conjectured to scale exponentially and which has successfully worked for wikis, COMPASS aims to enable rapid propagation of knowledge about code parallelization in the context of the actual parallelization reengineering, and thus continue to extend the benefits of Moore's law scaling to science and society.


Automated Software Engineering | 2010

weHelp: A Reference Architecture for Social Recommender Systems

Swapneel Sheth; Nipun Arora; Christian Murphy; Gail E. Kaiser

Recommender systems have become increasingly popular. Most of the research on recommender systems has focused on recommendation algorithms. There has been relatively little research, however, in the area of generalized system architectures for recommendation systems. In this paper, we introduce weHelp: a reference architecture for social recommender systems - systems where recommendations are derived automatically from the aggregate of logged activities conducted by the system's users. Our architecture is designed to be application and domain agnostic. We feel that a good reference architecture will make designing a recommendation system easier; in particular, weHelp aims to provide a practical design template to help developers design their own well-modularized systems.


Archive | 2011

Towards Diversity in Recommendations Using Social Networks

Swapneel Sheth; Jonathan Bell; Nipun Arora; Gail E. Kaiser

While there has been a lot of research towards improving the accuracy of recommender systems, the resulting systems have tended to become increasingly narrow in suggestion variety. An emerging trend in recommendation systems is to actively seek out diversity in recommendations, where the aim is to provide unexpected, varied, and serendipitous recommendations to the user. Our main contribution in this paper is a new approach to diversity in recommendations called "Social Diversity," a technique that uses social network information to diversify recommendation results. Social Diversity utilizes social networks in recommender systems to leverage the diverse underlying preferences of different user communities to introduce diversity into recommendations. This form of diversification ensures that users in different social networks (who may not collaborate in real life, since they are in different networks) share information, helping to prevent siloization of knowledge and recommendations. We describe our approach and show its feasibility in providing diverse recommendations for the MovieLens dataset.
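The abstract describes leveraging community structure to diversify results but does not spell out the algorithm. A minimal re-ranking sketch in that spirit (the function, field names, and quota scheme below are invented for illustration, not the paper's method) could be:

```python
def socially_diverse_topk(candidates, user_community, k, quota):
    """Re-rank scored candidates so that `quota` of the k result slots are
    reserved for items favored outside the user's own community.

    candidates: (item, score, community) tuples, sorted by descending score.
    """
    own   = [c for c in candidates if c[2] == user_community]
    other = [c for c in candidates if c[2] != user_community]
    picked = other[:quota] + own[:k - quota]
    return sorted(picked, key=lambda c: -c[1])[:k]  # highest score first

# Toy candidate pool: without diversification, the top 3 would all come
# from the user's own "friends" community.
cands = [('m1', 0.9, 'friends'), ('m2', 0.8, 'friends'),
         ('m4', 0.6, 'friends'), ('m5', 0.5, 'strangers'),
         ('m3', 0.3, 'coworkers')]
recs = socially_diverse_topk(cands, 'friends', k=3, quota=1)
# 'm5' from the "strangers" community displaces the lower-ranked 'm4'.
```

The serendipity comes from the reserved slot: an item popular in a disjoint community surfaces even though pure accuracy ranking would never show it.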


Archive | 2015

Parikshan: Live Debugging of Production Systems in Isolation

Nipun Arora; Franjo Ivancic; Gail E. Kaiser

Modern 24x7 SOA applications rely on short deployment cycles and fast bug resolution to maintain their services. Hence, time-to-bug-localization is extremely important for any SOA application. We present live debugging, a mechanism that allows debugging of production systems (running test cases, debugging, profiling, etc.) on the fly. We leverage user-space virtualization technology (OpenVZ/LXC) to launch containers cloned and migrated from running instances of an application, thereby having two containers: production (which provides the real output) and debug (for debugging). The debug container provides a sandbox environment for debugging without any perturbation to the production environment. Customized network proxy agents replicate or replay network inputs from clients to both the production and debug containers, and safely discard all network output from the debug container. We used our system, called Parikshan, to do live debugging on several real-world bugs, and effectively reduced debugging complexity and time.
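The proxying step described above - replicate client input to both containers, return only the production response, discard debug output - can be sketched at the function level. Real Parikshan proxies operate on network traffic between containers; the handler names and signatures here are invented for illustration:

```python
def duplicating_proxy(request, production, debug, debug_log=None):
    """Forward one client request to both the production and debug handlers.
    Only the production response reaches the client; the debug container's
    output is discarded, and its failures never surface to clients."""
    response = production(request)       # the real, client-visible path
    try:
        discarded = debug(request)       # replicated input for live debugging
        if debug_log is not None:
            debug_log.append(discarded)  # kept only for the developer
    except Exception:
        pass                             # a debug-side crash stays invisible
    return response

# Toy handlers standing in for the two containers:
log = []
reply = duplicating_proxy('GET /health', str.upper, str.lower, log)
```

The essential design point survives even in this sketch: the debug path is strictly best-effort, so attaching heavyweight analysis to the debug container cannot perturb production correctness, only (in a real asynchronous proxy) lag behind it.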


Archive | 2015

Setting budgets for live debugging

Nipun Arora; Abhishek Sharma; Gail E. Kaiser

Debugging large-scale distributed systems is a well-documented, complex problem. Live debugging aims to automate the process of isolating bugs by providing a framework to pinpoint the likely causes of program errors within the production environment. It does so by cloning production application containers and allowing on-the-fly sandboxed debugging of user input, without impacting the actual production system. In this paper, we investigate the use of overhead budgets for practical on-the-fly debugging. We formulate our problem using queuing theory and show how it yields approximate budget limits. We evaluate our approach by running simulations using our model. Our results indicate that using budget allocations as an upper limit for debugging gives substantial improvements in the debugging time available to the developer.
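The abstract mentions a queuing-theoretic formulation but gives no formulas. One plausible back-of-the-envelope version - an M/M/1-style sketch under assumed parameters, not the paper's actual model - bounds the instrumentation overhead that keeps the debug clone's queue stable:

```python
def max_overhead_budget(arrival_rate, service_rate, max_utilization=0.9):
    """Largest tolerable instrumentation overhead, as a fraction of the base
    service time, for a debug container modeled as an M/M/1 queue.

    With overhead o, utilization is rho = (arrival_rate / service_rate) * (1 + o);
    solving rho <= max_utilization for o gives the budget."""
    base_util = arrival_rate / service_rate
    if base_util >= max_utilization:
        return 0.0                       # no headroom: debugging budget is zero
    return max_utilization / base_util - 1.0

# 50 req/s against a clone that handles 100 req/s uninstrumented
# (base utilization 0.5) leaves an 80% overhead budget at a 0.9 cap.
budget = max_overhead_budget(50, 100)
```

Past the budget, the debug clone's queue grows without bound and replayed traffic diverges from production timing, which is why an explicit upper limit on debugging overhead is useful at all.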


Automated Software Engineering | 2018

Replay without recording of production bugs for service oriented applications

Nipun Arora; Jonathan Bell; Franjo Ivancic; Gail E. Kaiser; Baishakhi Ray

Short time-to-localize and time-to-fix for production bugs are extremely important for any 24x7 service-oriented application (SOA). Debugging buggy behavior in deployed applications is hard, as it requires careful reproduction of a similar environment and workload. Prior approaches for automatically reproducing production failures do not scale to large SOA systems. Our key insight is that for many failures in SOA systems (e.g., many semantic and performance bugs), a failure can automatically be reproduced solely by relaying network packets to replicas of suspect services, an insight that we validated through a manual study of 16 real bugs across five different systems. This paper presents Parikshan, an application monitoring framework that leverages user-space virtualization and network proxy technologies to provide a sandbox “debug” environment. In this “debug” environment, developers are free to attach debuggers and analysis tools without impacting the performance or correctness of the production environment. In comparison to existing monitoring solutions that can slow down production applications, Parikshan allows application monitoring at significantly lower overhead.


Proceedings of the 2nd International Workshop on Recommendation Systems for Software Engineering | 2010

The weHelp reference architecture for community-driven recommender systems

Swapneel Sheth; Nipun Arora; Christian Murphy; Gail E. Kaiser

Recommender systems have become increasingly popular. Most research on recommender systems has focused on recommendation algorithms. There has been relatively little research, however, in the area of generalized system architectures for recommendation systems. In this paper, we introduce weHelp - a reference architecture for social recommender systems. Our architecture is designed to be application and domain agnostic, but we briefly discuss here how it applies to recommender systems for software engineering.


Archive | 2014

Offline queries in software defined networks

Hui Zhang; Behnaz Arzani; Franjo Ivancic; Junghwan Rhee; Nipun Arora; Guofei Jiang


Archive | 2011

POWER: Parallel Optimizations With Executable Rewriting

Nipun Arora; Jonathan Bell; Martha A. Kim; Vishal Singh; Gail E. Kaiser

Collaboration


Dive into Nipun Arora's collaborations.

Top Co-Authors

Christian Murphy

University of Pennsylvania

Chao Wang

University of Southern California
