
Publication


Featured research published by Sandeep N. Bhatt.


IEEE Communications Magazine | 1998

Parallel simulation techniques for large-scale networks

Sandeep N. Bhatt; Richard M. Fujimoto; Andy Ogielski; Kalyan S. Perumalla

Simulation has always been an indispensable tool in the design and analysis of telecommunication networks. Due to the performance limitations of most simulators, network simulations have usually been restricted to rather small network models and short timescales. In contrast, many difficult design problems facing today's network engineers concern the behavior of very large hierarchical multihop networks carrying millions of multiprotocol flows over long timescales. Examples include the scalability and stability of routing protocols, packet losses in core routers, and long-lasting transient behavior due to the observed self-similarity of traffic patterns. Simulation of such systems would greatly benefit from the application of parallel computing technologies, especially now that multiprocessor workstations and servers have become commonly available. However, parallel simulation has not yet been widely embraced by the telecommunications community due to a number of difficulties. Based on our accumulated experience in parallel network simulation projects, we believe that parallel simulation technology has matured to the point that it is ready for use in the industrial practice of network simulation. This article highlights work in parallel simulations of networks and their promise.


ACM Symposium on Parallel Algorithms and Architectures | 1993

An atomic model for message-passing

Pangfeng Liu; William Aiello; Sandeep N. Bhatt

This paper presents a simple atomic model of message-passing network systems. Within one synchronous time step each processor can receive one atomic message, perform local computation, and send one message. When several messages are destined to the same processor, one is transmitted and the rest are blocked. Blocked messages cannot be retrieved by their sending processors; each processor must wait for its blocked message to clear before sending more messages into the network. Depending on the traffic pattern, messages can remain blocked for arbitrarily long periods. The model is conservative when compared with existing message-passing systems. Nonetheless, we prove linear speedup for backtrack and branch-and-bound searches using simple randomized algorithms.
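
The blocking semantics of the model can be illustrated with a small simulation of one synchronous step. This sketch is not from the paper; the message layout (`dest`/`payload` dictionaries) and the random choice of which conflicting message gets through are assumptions made for illustration.

```python
import random

def atomic_step(pending, inboxes):
    """One synchronous step of the atomic message-passing model:
    each destination processor accepts at most one message per step;
    the remaining messages to that destination stay blocked."""
    still_blocked = []
    # Group pending messages by destination processor.
    by_dest = {}
    for msg in pending:
        by_dest.setdefault(msg["dest"], []).append(msg)
    for dest, msgs in by_dest.items():
        winner = random.choice(msgs)           # one message gets through
        inboxes[dest].append(winner["payload"])
        # The losers remain blocked; their senders must wait.
        still_blocked.extend(m for m in msgs if m is not winner)
    return still_blocked
```

Running the step on two messages contending for the same processor delivers exactly one of them and leaves the other blocked, matching the model's one-message-per-step guarantee.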


Journal of the ACM | 1993

Taking random walks to grow trees in hypercubes

Sandeep N. Bhatt; Jin-Yi Cai

Many parallel computations are tree structured; as the computation proceeds, new processes are recursively created while others die out. Algorithms for maintaining dynamically evolving trees on fine-grain parallel architectures must have minimal overhead and must distribute processes evenly among processors at run time. A simple randomized strategy for maintaining dynamically evolving binary trees on hypercube networks is presented. The algorithm is distributed and does not require any global information. The algorithm guarantees that every pair of nodes adjacent in the tree are within distance O(log log N) in an N-processor hypercube. Furthermore, if M is the number of active nodes in the tree at any instant, then, with overwhelming probability, no hypercube processor is assigned more than O(1 + M/N) active nodes. The active nodes in a tree may constitute only the leaves of the tree, or all nodes. As a corollary, with high probability, the load is evenly distributed throughout a computation whose running time is polynomial in N, the number of processors. The results can be generalized to bounded-degree trees. Our techniques justify the use of simple algorithms to efficiently parallelize any tree-based computation, such as divide-and-conquer, backtrack, and functional expression evaluation, and to efficiently maintain dynamic data structures, such as quad-trees, that arise in scientific applications. A novel technique, tree surgery, is introduced to deal with dependencies inherent in trees. Together with tree surgery, the study of random walks is used to analyze the algorithm.
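
The flavor of a random-walk placement strategy can be suggested with a toy sketch (this is not the paper's actual algorithm): a newly spawned tree node is placed by taking a short random walk from its parent's processor, where each step flips one hypercube address bit. The function name and parameters here are hypothetical.

```python
import random

def random_walk_placement(parent_proc, dim, steps):
    """Place a newly spawned tree node by taking a short random walk
    from its parent's hypercube processor: each step flips one of the
    'dim' address bits, i.e., moves to a hypercube neighbor."""
    proc = parent_proc
    for _ in range(steps):
        proc ^= 1 << random.randrange(dim)     # move along one dimension
    return proc
```

A walk of `steps` moves keeps the child within Hamming distance `steps` of its parent, which is the intuition behind keeping tree-adjacent nodes close in the hypercube.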


Journal of the ACM | 1996

Optimal emulations by butterfly-like networks

Sandeep N. Bhatt; Fan R. K. Chung; Jia-Wei Hong; F. Thomson Leighton; Bojana Obrenic; Arnold L. Rosenberg; Eric J. Schwabe

The power of butterfly-like networks as multicomputer interconnection networks is studied by considering how efficiently the butterfly can emulate other networks. Emulations are studied formally via graph embeddings, so the topic here becomes: How efficiently can one embed the graph underlying a given interconnection network in the graph underlying the butterfly network? Within this framework, the slowdown incurred by an emulation is measured by the sum of the dilation and the congestion of the corresponding embedding (respectively, the maximum amount that the embedding stretches an edge of the guest graph, and the maximum traffic across any edge of the host graph); the efficiency of resource utilization in an emulation is measured by the expansion of the corresponding embedding (the ratio of the sizes of the host to guest graph). Three main results expose a number of optimal emulations by butterfly networks. Call a family of graphs balanced if complete binary trees can be embedded in the family with simultaneous dilation, congestion, and expansion O(1). (1) The family of butterfly graphs is balanced. (2) (a) Any graph G from a family of maxdegree-d graphs having a recursive separator of size S(x) can be embedded in any balanced graph family with simultaneous dilation O(log(d Σ_i S(2^-i |G|))) and expansion O(1). (b) Any dilation-D embedding of a maxdegree-d graph in a butterfly graph can be converted to an embedding having simultaneous dilation O(D) and congestion O(dD). (3) Any embedding of a planar graph G in a butterfly graph must have dilation Ω(log(Σ(G)/Φ(G))), where Σ(G) is the size of the smallest (1/3, 2/3)-node-separator of G, and Φ(G) is the size of G's largest interior face. Applications of these results include: (1) The n-node X-tree network can be emulated by the butterfly network with slowdown O(log log n) and expansion O(1); no embedding has dilation smaller than Ω(log log n), independent of expansion. (2) Every embedding of the n x n mesh in the butterfly graph has dilation Ω(log n); an expansion-O(1) embedding in the butterfly graph achieves dilation O(log n). These applications provide the first examples of networks that can be embedded more efficiently in hypercubes than in butterflies. We also show that analogues of these results hold for networks that are structurally related to the butterfly network. The upper bounds hold for the hypercube and the de Bruijn networks, possibly with altered constants. The lower bounds hold, at least in weakened form, for the de Bruijn network.


International Colloquium on Automata, Languages and Programming | 2000

Fast Verification of Any Remote Procedure Call: Short Witness-Indistinguishable One-Round Proofs for NP

William Aiello; Sandeep N. Bhatt; Rafail Ostrovsky; Sivaramakrishnan Rajagopalan

Under a computational assumption, and assuming that both Prover and Verifier are computationally bounded, we show a one-round (i.e., Verifier speaks and then Prover answers) witness-indistinguishable interactive proof for NP with poly-logarithmic communication complexity. A major application of our main result is that we show how to check in an efficient manner and without any additional interaction the correctness of the output of any remote procedure call.


IEEE Symposium on Security and Privacy | 2014

The Operational Role of Security Information and Event Management Systems

Sandeep N. Bhatt; Pratyusa K. Manadhata; Loai Zomlot

An integral part of an enterprise computer security incident response team (CSIRT), the security operations center (SOC) is a centralized unit tasked with real-time monitoring and identification of security incidents. Security information and event management (SIEM) systems are an important tool used in SOCs; they collect security events from many diverse sources in enterprise networks, normalize the events to a common format, store the normalized events for forensic analysis, and correlate the events to identify malicious activities in real time. In this article, the authors discuss the critical role SIEM systems play in SOCs, highlight the current operational challenges in effectively using SIEM systems, and describe future technical challenges that SIEM systems must overcome to remain relevant.
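
The normalize-then-correlate pipeline the abstract describes can be sketched in miniature. The schema fields and the failed-login rule below are illustrative assumptions, not a real SIEM schema or rule language.

```python
def normalize(raw):
    """Map a source-specific event dict into a minimal common schema.
    Field names here are illustrative, not a real SIEM schema."""
    return {
        "ts": raw.get("time") or raw.get("timestamp"),
        "src": raw.get("ip") or raw.get("source"),
        "action": raw.get("event", "unknown").lower(),
    }

def correlate(events, threshold=3):
    """Toy correlation rule: flag any source that accumulates
    'threshold' failed logins across the normalized event stream."""
    fails, alerts = {}, []
    for e in events:
        if e["action"] == "login_failed":
            fails[e["src"]] = fails.get(e["src"], 0) + 1
            if fails[e["src"]] == threshold:
                alerts.append(e["src"])
    return alerts
```

Real SIEM deployments apply many such rules over normalized events from heterogeneous sources; this sketch only shows the shape of the normalize/correlate split.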


ACM Symposium on Parallel Algorithms and Architectures | 1994

Experiences with parallel N-body simulation

Pangfeng Liu; Sandeep N. Bhatt

This paper describes our experiences developing high-performance code for astrophysical N-body simulations. Recent N-body methods are based on an adaptive tree structure. The tree must be built and maintained across physically distributed memory; moreover, the communication requirements are irregular and adaptive. Together with the need to balance the computational workload among processors, these issues pose interesting challenges and tradeoffs for high-performance implementation.

Our implementation was guided by the need to keep solutions simple and general. We use a technique for implicitly representing a dynamic global tree across multiple processors which substantially reduces the programming complexity as well as the performance overheads of distributed memory architectures. The contributions include methods to vectorize the computation and minimize communication time which are theoretically and experimentally justified.

The code has been tested by varying the number and distribution of bodies on different configurations of the Connection Machine CM-5. The overall performance on instances with 10 million bodies is typically over 30% of the peak machine rate. Preliminary timings compare favorably with other approaches.


IEEE Transactions on Parallel and Distributed Systems | 2001

Augmented ring networks

William Aiello; Sandeep N. Bhatt; Fan R. K. Chung; Arnold L. Rosenberg; Ramesh K. Sitaraman

We study four augmentations of ring networks which are intended to enhance a ring's efficiency as a communication medium significantly, while increasing its structural complexity only modestly. Chordal rings add shortcut edges, which can be viewed as chords, to the ring. Express rings are chordal rings whose chords are routed outside the ring. Multirings append subsidiary rings to edges of a ring and, recursively, to edges of appended subrings. Hierarchical ring networks (HRNs) append subsidiary rings to nodes of a ring and, recursively, to nodes of appended subrings. We show that these four modes of augmentation are very closely related: 1) Planar chordal rings, planar express rings, and multirings are topologically equivalent families of networks, with the cutwidth of an express ring translating into the tree depth of its isomorphic multiring and vice versa. 2) Every depth-d HRN is a spanning subgraph of a depth-(2d-1) multiring. 3) Every depth-d multiring M can be embedded into a d-dimensional mesh with dilation 3 in such a way that some node of M resides at a corner of the mesh. 4) Every depth-d HRN H can be embedded into a d-dimensional mesh with dilation 2 in such a way that some node of H resides at a corner of the mesh. In addition to demonstrating that these four augmented ring networks are grid graphs, our embedding results afford us close bounds on how much decrease in diameter is achievable for a given increase in structural complexity for the networks. Specifically, we derive upper and lower bounds on the optimal diameters of N-node depth-d multirings and HRNs that are asymptotically tight for large N and d.
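
As a toy illustration of how chords cut a ring's diameter (not an example from the paper), a BFS diameter check on a 16-node ring with antipodal chords shows the diameter dropping from 8 to 4:

```python
from collections import deque

def diameter(n, edges):
    """Diameter of an undirected graph on nodes 0..n-1 via BFS from every node."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    best = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    q.append(w)
        best = max(best, max(dist))
    return best

n = 16
ring = [(i, (i + 1) % n) for i in range(n)]              # plain 16-cycle
chords = [(i, (i + n // 2) % n) for i in range(n // 2)]  # antipodal chords
```

With only n/2 extra edges the chordal ring's diameter is halved, which is the kind of diameter-versus-complexity trade-off the paper quantifies precisely for chordal rings, express rings, multirings, and HRNs.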


Journal of Parallel and Distributed Computing | 1996

Scheduling Tree-Dags Using FIFO Queues

Sandeep N. Bhatt; Fan R. K. Chung; F. Thomson Leighton; Arnold L. Rosenberg

We study a combinatorial problem that is motivated by "client-server" schedulers for parallel computations. Such schedulers are often used, for instance, when computations are being done by a cooperating network of workstations. Our results expose and quantify a control-memory trade-off for such schedulers, when the computation being scheduled has the structure of a binary tree, with all arcs oriented either root-toward-leaves or leaves-toward-root. The combinatorial problem for the root-toward-leaves case takes the following form. (The leaves-toward-root case gives rise to a dual formulation, which yields the same trade-offs.) Consider, for integers k, N > 0, an algorithm that employs k FIFO queues in order to schedule an N-leaf binary tree in such a way that each nonleaf node of the tree is executed before its children. We establish a trade-off between the number of queues used by the algorithm, which we view as measuring the control complexity of the algorithm, and the memory requirements of the algorithm, as embodied in the required capacity of the largest-capacity queue. Specifically, for each integer k ∈ {1, 2, ..., log_2 N}, let Q_k(N) denote the minimax per-queue capacity for a k-queue algorithm that schedules all N-leaf binary trees; let Q*_k(N) denote the analogous quantity for complete binary trees. We establish the following bounds. For general N-leaf binary trees, for all k, [formula] ≤ Q_k(N) ≤ 2N^{1/k} + 1. For complete binary trees, we derive tighter bounds: for all constant k, Q*_k(N) = [formula]; for general k, [formula] ≤ Q*_k(N) ≤ (4k)^{1-1/k} [formula]. Similar trade-offs are readily established for trees of any fixed branching factor.
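
The k = 1 case makes the setup concrete: a single FIFO queue processes a complete binary tree root-toward-leaves, and the peak queue occupancy measures the per-queue capacity. This sketch is illustrative, not the paper's algorithm; the implicit heap indexing of tree nodes is an assumption.

```python
from collections import deque

def one_queue_schedule(n_leaves):
    """Schedule a complete binary tree root-toward-leaves with a single
    FIFO queue (k = 1), tracking the peak queue occupancy. Nodes are the
    implicit heap indices 1..2*n_leaves-1; node i has children 2i, 2i+1."""
    n_nodes = 2 * n_leaves - 1
    q = deque([1])
    peak, executed = 0, []
    while q:
        peak = max(peak, len(q))
        node = q.popleft()
        executed.append(node)              # parent runs before its children
        for child in (2 * node, 2 * node + 1):
            if child <= n_nodes:
                q.append(child)
    return executed, peak
```

For a complete tree on 8 leaves this is just breadth-first order, and the queue peaks at the leaf level (8 nodes), comfortably within the general bound Q_1(N) ≤ 2N + 1; the paper's interest is in how much smaller the peak can be made once k > 1 queues are available.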


Theoretical Computer Science | 2008

Area-time tradeoffs for universal VLSI circuits

Sandeep N. Bhatt; Gianfranco Bilardi; Geppino Pucci

An area-universal VLSI circuit can be programmed to emulate every circuit of a given area, but at the cost of lower area-time performance. In particular, if a circuit with area-time bounds (A, T) is emulated by a universal circuit with bounds (A_u, T_u), we say that the universal circuit has blowup A_u/A and slowdown T_u/T. A central question in VLSI theory is to investigate the inherent costs and tradeoffs of universal circuit designs. Prior to this work, universal designs were known for area-A circuits with O(1) blowup and O(log A) slowdown. Universal designs for the family of area-A circuits containing O(A^{1+ε} log A) vertices, with O(A^ε) blowup and O(log log A) slowdown, had also been developed. However, the existence of universal circuits with O(1) slowdown and relatively small blowup was an open question. In this paper, we settle this question by designing an area-universal circuit U_A^ε with O(1/ε) slowdown and O(A^ε) blowup, for any value of the parameter ε with 4 log log A / log A ≤ ε ≤ 1. By varying ε, we obtain universal circuits which operate at different points in the spectrum of the slowdown-blowup tradeoff. In particular, when ε is chosen to be a constant, our universal circuit yields O(1) slowdown.

Collaboration


Dive into Sandeep N. Bhatt's collaboration.

Top Co-Authors

Arnold L. Rosenberg

University of Massachusetts Amherst

Kalyan S. Perumalla

Oak Ridge National Laboratory

Pangfeng Liu

National Taiwan University
