
Publications


Featured research published by Aritra Sengupta.


Ecology Letters | 2013

Modelling dendritic ecological networks in space: an integrated network perspective

Erin E. Peterson; Jay M. Ver Hoef; Dan Isaak; Jeffrey A. Falke; Marie-Josée Fortin; Chris E. Jordan; Kristina McNyset; Pascal Monestiez; Aaron S. Ruesch; Aritra Sengupta; Nicholas A. Som; E. Ashley Steel; David M. Theobald; Christian E. Torgersen; Seth J. Wenger

Dendritic ecological networks (DENs) are a unique form of ecological networks that exhibit a dendritic network topology (e.g. stream and cave networks or plant architecture). DENs have a dual spatial representation: as points within the network and as points in geographical space. Consequently, some analytical methods used to quantify relationships in other types of ecological networks, or in 2-D space, may be inadequate for studying the influence of structure and connectivity on ecological processes within DENs. We propose a conceptual taxonomy of network analysis methods that account for DEN characteristics to varying degrees and provide a synthesis of the different approaches within the context of stream ecology. Within this context, we summarise the key innovations of a new family of spatial statistical models that describe spatial relationships in DENs. Finally, we discuss how different network analyses may be combined to address more complex and novel research questions. While our main focus is streams, the taxonomy of network analyses is also relevant anywhere spatial patterns in both network and 2-D space can be used to explore the influence of multi-scale processes on biota and their habitat (e.g. plant morphology and pest infestation, or preferential migration along stream or road corridors).
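
For context, one widely used family of such spatial statistical models is the "tail-up" moving-average construction, which defines covariance using hydrologic (in-network) distance rather than Euclidean distance. A hedged sketch, with notation introduced here rather than taken from the abstract:

```latex
% Illustrative tail-up exponential covariance between sites s_i and s_j on a
% stream network: h is hydrologic (in-network) distance, \alpha > 0 a range
% parameter, \sigma^2 a variance parameter, and W_{ij} a product of spatial
% weights that split covariance at stream confluences.
C(s_i, s_j) =
  \begin{cases}
    \sigma^2 \,\sqrt{W_{ij}}\; \exp(-h/\alpha), & s_i, s_j \text{ flow-connected},\\
    0, & \text{otherwise}.
  \end{cases}
```

Flow-unconnected pairs receive zero covariance under this construction; companion components in this model family (e.g. tail-down or Euclidean terms) are typically mixed in to capture those relationships.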


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 2013

OCTET: capturing and controlling cross-thread dependences efficiently

Michael D. Bond; Milind Kulkarni; Man Cao; Minjia Zhang; Meisam Fathi Salmi; Swarnendu Biswas; Aritra Sengupta; Jipeng Huang

Parallel programming is essential for reaping the benefits of parallel hardware, but it is notoriously difficult to develop and debug reliable, scalable software systems. One key challenge is that modern languages and systems provide poor support for ensuring concurrency correctness properties - atomicity, sequential consistency, and multithreaded determinism - because all existing approaches are impractical. Dynamic, software-based approaches slow programs by up to an order of magnitude because capturing and controlling cross-thread dependences (i.e., conflicting accesses to shared memory) requires synchronization at virtually every access to potentially shared memory. This paper introduces a new software-based concurrency control mechanism called OCTET that soundly captures cross-thread dependences and can be used to build dynamic analyses for concurrency correctness. OCTET achieves low overheads by tracking the locality state of each potentially shared object. Non-conflicting accesses conform to the locality state and require no synchronization; only conflicting accesses require a state change and heavyweight synchronization. This optimistic tradeoff leads to significant efficiency gains in capturing cross-thread dependences: a prototype implementation of OCTET in a high-performance Java virtual machine slows real-world concurrent programs by only 26% on average. A dependence recorder, suitable for record & replay, built on top of OCTET adds an additional 5% overhead on average. These results suggest that OCTET can provide a foundation for developing low-overhead analyses that check and enforce concurrency correctness.
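
The locality-state mechanism lends itself to a compact illustration. Below is a minimal, hypothetical Java sketch (class, state, and method names are invented for this example); the real OCTET protocol has additional states and a coordination protocol that lets threads change each other's states safely, which this sketch omits:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical, simplified sketch of OCTET-style locality states. The real
// protocol has more states (e.g. a read-shared counter) and requires a
// coordination roundtrip before changing a state another thread relies on.
final class OctetState {
    enum Kind { WR_EX, RD_EX, RD_SH }   // write-exclusive, read-exclusive, read-shared

    static final class State {
        final Kind kind;
        final Thread owner;             // owning thread for WR_EX / RD_EX
        State(Kind kind, Thread owner) { this.kind = kind; this.owner = owner; }
    }

    private final AtomicReference<State> state =
        new AtomicReference<>(new State(Kind.WR_EX, Thread.currentThread()));

    // Fast path: a conforming access proceeds with no synchronization.
    void beforeWrite() {
        State s = state.get();
        if (s.kind == Kind.WR_EX && s.owner == Thread.currentThread())
            return;                     // same-thread write: no sync needed
        slowPath(new State(Kind.WR_EX, Thread.currentThread()));
    }

    void beforeRead() {
        State s = state.get();
        if (s.kind == Kind.RD_SH || s.owner == Thread.currentThread())
            return;                     // conforming read: no sync needed
        slowPath(new State(Kind.RD_EX, Thread.currentThread()));
    }

    // Slow path: only conflicting accesses pay for a state change. The old
    // state observed here is exactly the captured cross-thread dependence,
    // which an analysis (e.g. a dependence recorder) can consume.
    private void slowPath(State wanted) {
        while (true) {
            State s = state.get();
            if (state.compareAndSet(s, wanted))
                return;
        }
    }
}
```

The point of the design is visible in the fast paths: a conforming access reads the state and proceeds with no atomic operations, so the common case stays cheap.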


Architectural Support for Programming Languages and Operating Systems (ASPLOS) | 2015

Hybrid Static–Dynamic Analysis for Statically Bounded Region Serializability

Aritra Sengupta; Swarnendu Biswas; Minjia Zhang; Michael D. Bond; Milind Kulkarni

Data races are common. They are difficult to detect, avoid, or eliminate, and programmers sometimes introduce them intentionally. However, shared-memory programs with data races have unexpected, erroneous behaviors. Intentional and unintentional data races lead to atomicity and sequential consistency (SC) violations, and they make it more difficult to understand, test, and verify software. Existing approaches for providing stronger guarantees for racy executions add high run-time overhead and/or rely on custom hardware. This paper shows how to provide stronger semantics for racy programs while providing relatively good performance on commodity systems. A novel hybrid static–dynamic analysis called EnfoRSer provides end-to-end support for a memory model called statically bounded region serializability (SBRS) that is not only stronger than weak memory models but is strictly stronger than SC. EnfoRSer uses static compiler analysis to transform regions, and dynamic analysis to detect and resolve conflicts at run time. By demonstrating commodity support for a reasonably strong memory model with reasonable overheads, we show its potential as an always-on execution model.
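
As a rough illustration of the idea, consider acquiring per-object locks for everything a statically bounded region touches and re-executing the region from its start on conflict. The sketch below is hypothetical (the lock table and helper names are invented), and it corresponds loosely to EnfoRSer's idempotent-re-execution strategy rather than its speculation strategy:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: execute a statically bounded region atomically by
// acquiring per-object locks up front and re-executing on conflict. Assumes
// the compiler has established that the retried prefix performs no stores.
final class RegionRuntime {
    // Invented lock table: object -> owning thread.
    private static final ConcurrentHashMap<Object, Thread> owners =
        new ConcurrentHashMap<>();

    private static boolean tryAcquire(Object o) {
        Thread me = Thread.currentThread();
        Thread prev = owners.putIfAbsent(o, me);
        return prev == null || prev == me;
    }

    private static void releaseAll(Object[] objs, int count) {
        for (int i = 0; i < count; i++)
            owners.remove(objs[i], Thread.currentThread());
    }

    static void runRegion(Object[] accessedObjects, Runnable regionBody) {
        retry:
        while (true) {
            int held = 0;
            for (Object o : accessedObjects) {
                if (!tryAcquire(o)) {                  // cross-thread conflict
                    releaseAll(accessedObjects, held);
                    continue retry;    // safe: no stores before all locks held
                }
                held++;
            }
            regionBody.run();          // region executes serializably
            releaseAll(accessedObjects, accessedObjects.length);
            return;
        }
    }
}
```

A production implementation would also need backoff or explicit conflict resolution to avoid livelock; this sketch only shows the retry structure.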


Programming Language Design and Implementation (PLDI) | 2014

DoubleChecker: efficient sound and precise atomicity checking

Swarnendu Biswas; Jipeng Huang; Aritra Sengupta; Michael D. Bond

Atomicity is a key correctness property that allows programmers to reason about code regions in isolation. However, programs often fail to enforce atomicity correctly, leading to atomicity violations that are difficult to detect. Dynamic program analysis can detect atomicity violations based on an atomicity specification, but existing approaches slow programs substantially. This paper presents DoubleChecker, a novel sound and precise atomicity checker whose key insight lies in its use of two new cooperating dynamic analyses. Its imprecise analysis tracks cross-thread dependences soundly but imprecisely with significantly better performance than a fully precise analysis. Its precise analysis is more expensive but only needs to process a subset of the execution identified as potentially involved in atomicity violations by the imprecise analysis. If DoubleChecker operates in single-run mode, the two analyses execute in the same program run, which guarantees soundness and precision but requires logging program accesses to pass from the imprecise to the precise analysis. In multi-run mode, the first program run executes only the imprecise analysis, and a second run executes both analyses. Multi-run mode trades accuracy for performance; each run of multi-run mode outperforms single-run mode, but can potentially miss violations. We have implemented DoubleChecker and an existing state-of-the-art atomicity checker called Velodrome in a high-performance Java virtual machine. DoubleChecker's single-run mode significantly outperforms Velodrome, while still providing full soundness and precision. DoubleChecker's multi-run mode improves performance further, without significantly impacting soundness in practice. These results suggest that DoubleChecker's approach is a promising direction for improving the performance of dynamic atomicity checking over prior work.
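
The staged design can be sketched as a simple filter pipeline. The types and method names below are hypothetical stand-ins for the paper's analyses; the real imprecise analysis performs sound but coarse tracking of cross-thread dependences, and the precise analysis re-examines only the flagged transactions:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of DoubleChecker's staged checking: a cheap, sound,
// imprecise pass over-approximates which transactions might participate in
// an atomicity violation; an expensive precise pass inspects only those.
final class StagedAtomicityChecker {
    interface Tx {
        boolean impreciselyFlagged();   // sound: never misses a real violation
        boolean preciselyViolates();    // precise: eliminates false positives
    }

    static List<Tx> findViolations(List<Tx> executedTransactions) {
        List<Tx> suspects = new ArrayList<>();
        for (Tx t : executedTransactions)
            if (t.impreciselyFlagged())     // usually a small subset survives
                suspects.add(t);

        List<Tx> violations = new ArrayList<>();
        for (Tx t : suspects)
            if (t.preciselyViolates())
                violations.add(t);
        return violations;
    }
}
```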


PLOS ONE | 2012

Accounting for Location Error in Kalman Filters: Integrating Animal Borne Sensor Data into Assimilation Schemes

Aritra Sengupta; Scott D. Foster; Toby A. Patterson; Mark V. Bravington

Data assimilation is a crucial aspect of modern oceanography. It allows forecasting and backward smoothing of the ocean state from noisy observations. Statistical methods are employed to perform these tasks and are often based on, or related to, the Kalman filter. Typically, Kalman filters assume that the locations associated with observations are known with certainty. This is reasonable for typical oceanographic measurement methods. Recently, however, an alternative and abundant source of data has emerged from the deployment of ocean sensors on marine animals. This source of data has some attractive properties: unlike traditional oceanographic collection platforms, it is relatively cheap to collect, plentiful, has multiple scientific uses and users, and samples areas of the ocean that are often difficult or costly to sample. However, inherent uncertainty in the location of the observations is a barrier to the full utilisation of animal-borne sensor data in data-assimilation schemes. In this article we examine this issue and suggest a simple approximation to explicitly incorporate the location uncertainty while staying within the scope of Kalman-filter-like methods. The approximation stems from a Taylor-series approximation to elements of the updating equation.
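
A hedged sketch of the flavor of such an approximation, with all symbols introduced here rather than taken from the paper: linearizing the measurement operator around the estimated location moves the location error into an inflated observation-error covariance.

```latex
% Sketch (all symbols introduced here): an observation y taken at location s,
% with state x, measurement operator H(s), and noise covariance R. The true
% location is uncertain: s = \hat{s} + \delta, with \delta \sim N(0, \Sigma_s).
\begin{align*}
  y &= H(s)\,x + \varepsilon, & \varepsilon &\sim N(0, R) \\
  H(s)\,x &\approx H(\hat{s})\,x + J\,\delta, & J &= \nabla_s\!\bigl[H(s)\,x\bigr]\Big|_{s=\hat{s}} \\
  y &\approx H(\hat{s})\,x + \tilde{\varepsilon}, & \tilde{\varepsilon} &\sim N\bigl(0,\; R + J\,\Sigma_s\,J^{\top}\bigr)
\end{align*}
% The Kalman update then proceeds as usual, with the inflated covariance
% R + J \Sigma_s J^T standing in for R in the gain computation.
```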


Symposium on Code Generation and Optimization (CGO) | 2017

Legato: end-to-end bounded region serializability using commodity hardware transactional memory

Aritra Sengupta; Man Cao; Michael D. Bond; Milind Kulkarni

Shared-memory languages and systems provide strong guarantees only for well-synchronized (data-race-free) programs. Prior work introduces support for memory consistency based on region serializability of executing code regions, but all approaches incur serious limitations such as adding high run-time overhead or relying on complex custom hardware. This paper explores the potential for leveraging widely available, commodity hardware transactional memory to provide an end-to-end memory consistency model called dynamically bounded region serializability (DBRS). To amortize high per-transaction costs, yet mitigate the risk of unpredictable, costly aborts, we introduce dynamic runtime support called Legato that executes multiple dynamically bounded regions (DBRs) in a single transaction. Legato varies the number of DBRs per transaction on the fly, based on the recent history of committed and aborted transactions. Legato outperforms existing commodity enforcement of DBRS, and its costs are less sensitive to a program's shared-memory communication patterns. These results demonstrate the potential for providing always-on strong memory consistency using commodity transactional hardware.
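
The adaptive batching policy can be caricatured as a small feedback controller. The rule below is an illustrative additive-increase, multiplicative-decrease scheme with invented names and constants, not Legato's actual algorithm:

```java
// Hypothetical sketch: choose how many dynamically bounded regions (DBRs)
// to merge into one hardware transaction, adapting to commits and aborts.
final class MergePolicy {
    private int regionsPerTx = 1;
    private static final int MAX = 64;   // illustrative cap

    int regionsForNextTransaction() { return regionsPerTx; }

    void onCommit() {
        // Commits amortize per-transaction start/commit costs: grow slowly.
        if (regionsPerTx < MAX) regionsPerTx++;
    }

    void onAbort() {
        // An abort wastes all merged regions' work: back off quickly.
        regionsPerTx = Math.max(1, regionsPerTx / 2);
    }
}
```

On an abort, a real runtime would also need a non-transactional fallback path so that a region that repeatedly aborts still makes progress.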


Principles and Practice of Programming in Java (PPPJ) | 2015

Toward Efficient Strong Memory Model Support for the Java Platform via Hybrid Synchronization

Aritra Sengupta; Man Cao; Michael D. Bond; Milind Kulkarni

The Java memory model provides strong behavior guarantees for data-race-free executions. However, it provides very weak guarantees for racy executions, leading to unexpected, unintuitive behaviors. This paper focuses on how to provide a memory model, called statically bounded region serializability (SBRS), that is substantially stronger than the Java memory model. Our prior work introduces SBRS, as well as compiler and runtime support for enforcing SBRS called EnfoRSer. EnfoRSer modifies the dynamic compiler to insert instrumentation to acquire a lock on each object accessed by the program. For most programs, EnfoRSer's primary run-time cost is executing this instrumentation at essentially every memory access. This paper focuses on reducing the run-time overhead of enforcing SBRS by avoiding instrumentation at every memory access that acquires a per-object lock. We experiment with an alternative approach for providing SBRS that instead acquires a single static lock before each executed region; all regions that potentially race with each other, according to a sound whole-program static analysis, must acquire the same lock. This approach slows most programs dramatically by needlessly serializing regions that do not actually conflict with each other. We thus introduce a hybrid approach that judiciously combines the two locking strategies, using a cost model and run-time profiling. Our implementation and evaluation in a Java virtual machine use offline profiling and recompilation, thus demonstrating the potential of the approach without incurring online profiling costs. The results show that although the overall performance benefit is modest, our hybrid approach never significantly worsens performance, and for two programs, it significantly outperforms both approaches that each use only one kind of locking. These results demonstrate the potential of a technique based on combining synchronization mechanisms to provide a strong end-to-end memory model for Java and other JVM languages.
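
The choice between the two locking strategies reduces to a per-region cost comparison. A minimal sketch with an invented cost model and parameter names (the paper's model and its profiling inputs differ):

```java
// Hypothetical sketch of the hybrid decision: per region, compare the
// profiled cost of fine-grained per-object locking (instrumentation at
// every access) against one static lock (possible needless serialization).
final class HybridLockChooser {
    enum Strategy { PER_OBJECT_LOCKS, SINGLE_STATIC_LOCK }

    // Illustrative cost model; all parameters are invented for this example.
    static Strategy choose(long accessesInRegion,
                           double contentionOnStaticLock,  // in [0, 1]
                           double perAcquireCost,          // one lock acquire
                           double serializationPenalty) {  // waiting on others
        double perObjectCost = accessesInRegion * perAcquireCost;
        double staticCost = perAcquireCost                 // one acquire per region
                          + contentionOnStaticLock * serializationPenalty;
        return staticCost < perObjectCost ? Strategy.SINGLE_STATIC_LOCK
                                          : Strategy.PER_OBJECT_LOCKS;
    }
}
```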


Parallel Computing | 2017

Hybridizing and Relaxing Dependence Tracking for Efficient Parallel Runtime Support

Man Cao; Minjia Zhang; Aritra Sengupta; Swarnendu Biswas; Michael D. Bond

It is notoriously challenging to develop parallel software systems that are both scalable and correct. Runtime support for parallelism—such as multithreaded record and replay, data race detectors, transactional memory, and enforcement of stronger memory models—helps achieve these goals, but existing commodity solutions slow programs substantially to track (i.e., detect or control) an execution’s cross-thread dependencies accurately. Prior work tracks cross-thread dependencies either “pessimistically,” slowing every program access, or “optimistically,” allowing for lightweight instrumentation of most accesses but dramatically slowing accesses that are conflicting (i.e., involved in cross-thread dependencies). This article presents two novel approaches that seek to improve the performance of dependence tracking. Hybrid tracking (HT) hybridizes pessimistic and optimistic tracking by overcoming a fundamental mismatch between these two kinds of tracking. HT uses an adaptive, profile-based policy to make runtime decisions about switching between pessimistic and optimistic tracking. Relaxed tracking (RT) attempts to reduce optimistic tracking’s overhead on conflicting accesses by tracking dependencies in a “relaxed” way—meaning that not all dependencies are tracked accurately—while still preserving both program semantics and runtime support’s correctness. To demonstrate the usefulness and potential of HT and RT, we build runtime support based on the two approaches. Our evaluation shows that both approaches offer performance advantages over existing approaches, but there exist challenges and opportunities for further improvement. HT and RT are distinct solutions to the same problem. It is easier to build runtime support based on HT than on RT, although RT does not incur the overhead of online profiling. This article presents the two approaches together to inform and inspire future designs for efficient parallel runtime support.
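
HT's switching decision can be sketched as a thresholded policy over profile counts. The threshold, counters, and names below are illustrative assumptions, not the paper's actual policy:

```java
// Hypothetical sketch of hybrid tracking's adaptive policy: objects that
// conflict often move to pessimistic tracking (a fixed cost on every access,
// but cheap conflicting accesses); objects that rarely conflict stay
// optimistic (near-free non-conflicting accesses, expensive conflicts).
final class TrackingPolicy {
    enum Mode { OPTIMISTIC, PESSIMISTIC }

    private static final double CONFLICT_THRESHOLD = 0.01; // illustrative

    static Mode modeFor(long profiledAccesses, long profiledConflicts) {
        if (profiledAccesses == 0) return Mode.OPTIMISTIC;
        double conflictRate = (double) profiledConflicts / profiledAccesses;
        return conflictRate > CONFLICT_THRESHOLD ? Mode.PESSIMISTIC
                                                 : Mode.OPTIMISTIC;
    }
}
```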


International Conference on Systems | 2015

Efficient support for strong semantics in transactional and non-transactional programs

Aritra Sengupta

Transactional programs, which use transactional memory (TM), and non-transactional (non-TM) programs (e.g., those using locks) provide weak semantics under commonly used memory models. Strong memory models incur high implementation overhead and yet prove to be insufficient. TM programs and non-TM programs have different semantics depending on the memory model. Adding new atomic blocks to lock-based code is difficult without adding high overhead or introducing weak semantics. A system in which users can seamlessly add atomic blocks or lock-based critical sections to existing TM or lock-based code would facilitate incremental deployment. A unified, strong memory model, enforced efficiently by a single runtime for both kinds of programs, is therefore desirable.


Spatial Statistics | 2013

Hierarchical statistical modeling of big spatial datasets using the exponential family of distributions

Aritra Sengupta; Noel A. Cressie

Collaboration


Aritra Sengupta's most frequent co-authors, with affiliations.

Man Cao
Ohio State University

Noel A. Cressie
University of Wollongong

Brian H. Kahn
California Institute of Technology

Chris E. Jordan
National Oceanic and Atmospheric Administration

Christian E. Torgersen
United States Geological Survey