Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Prabhakar Ragde is active.

Publication


Featured research published by Prabhakar Ragde.


Algorithmica | 2008

Faster Fixed-Parameter Tractable Algorithms for Matching and Packing Problems

Michael R. Fellows; Christian Knauer; Naomi Nishimura; Prabhakar Ragde; Frances A. Rosamond; Ulrike Stege; Dimitrios M. Thilikos; Sue Whitesides

We obtain faster algorithms for problems such as r-dimensional matching and r-set packing when the size k of the solution is considered a parameter. We first establish a general framework for finding and exploiting small problem kernels (of size polynomial in k). This technique lets us combine Alon, Yuster and Zwick's color-coding technique with dynamic programming to obtain faster fixed-parameter algorithms for these problems. Our algorithms run in time O(n + 2^O(k)), an improvement over previous algorithms for some of these problems running in time O(n + k^O(k)). The flexibility of our approach allows tuning of algorithms to obtain smaller constants in the exponent.
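
As a rough illustration of the color-coding idea used above (not the paper's kernelization or its O(n + 2^O(k)) algorithm), the following Python sketch applies random coloring plus dynamic programming over color subsets to 3-set packing; the trial count and function names are assumptions made for this example.

import math
import random

def colorful_packing_exists(sets, coloring, k):
    # Dynamic programming over color subsets: after i rounds, `reachable` holds
    # every set of colors used by some packing of i pairwise disjoint 3-sets
    # whose elements all received distinct colors.
    reachable = {frozenset()}
    for _ in range(k):
        nxt = set()
        for used in reachable:
            for s in sets:
                cols = frozenset(coloring[x] for x in s)
                if len(cols) == 3 and cols.isdisjoint(used):
                    nxt.add(used | cols)
        reachable = nxt
        if not reachable:
            return False
    return True

def set_packing(sets, k, trials=None):
    # A fixed solution is "colorful" with probability roughly e^(-3k), so on the
    # order of e^(3k) random colorings suffice; we take a generous multiple.
    universe = sorted({x for s in sets for x in s})
    if trials is None:
        trials = math.ceil(20 * math.exp(3 * k))
    for _ in range(trials):
        coloring = {x: random.randrange(3 * k) for x in universe}
        if colorful_packing_exists(sets, coloring, k):
            return True
    return False

if __name__ == "__main__":
    family = [{1, 2, 3}, {3, 4, 5}, {4, 5, 6}, {7, 8, 9}]
    print(set_packing(family, 2))                             # True: {1,2,3} and {7,8,9} are disjoint
    print(set_packing([{1, 2, 3}, {2, 3, 4}, {1, 4, 5}], 2))  # False: every pair of sets intersects

Repeating the random coloring only affects the chance of missing a packing; the dynamic program never reports one that does not exist.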


European Symposium on Algorithms | 2001

On the Parameterized Complexity of Layered Graph Drawing

Vida Dujmović; Michael R. Fellows; Michael Hallett; Matthew Kitching; Giuseppe Liotta; Catherine McCartin; Naomi Nishimura; Prabhakar Ragde; Frances A. Rosamond; Matthew Suderman; Sue Whitesides; David R. Wood

We consider graph drawings in which vertices are assigned to layers and edges are drawn as straight line-segments between vertices on adjacent layers. We prove that graphs admitting crossing-free h-layer drawings (for fixed h) have bounded pathwidth. We then use a path decomposition as the basis for a linear-time algorithm to decide if a graph has a crossing-free h-layer drawing (for fixed h). This algorithm is extended to solve related problems, including allowing at most k crossings, or removing at most r edges to leave a crossing-free drawing (for fixed k or r). If the number of crossings or deleted edges is a non-fixed parameter then these problems are NP-complete. For each setting, we can also permit downward drawings of directed graphs and drawings in which edges may span multiple layers, in which case either the total span or the maximum span of edges can be minimized. In contrast to the Sugiyama method for layered graph drawing, our algorithms do not assume a preassignment of the vertices to layers.


Journal of Algorithms | 1996

Parallel Algorithms with Processor Failures and Delays

Jonathan F. Buss; Paris C. Kanellakis; Prabhakar Ragde; Alexander A. Shvartsman

We study efficient deterministic parallel algorithms on two models: restartable fail-stop CRCW PRAMs and asynchronous PRAMs. In the first model, synchronous processes are subject to arbitrary stop failures and restarts determined by an on-line adversary and involving loss of private but not shared memory; the complexity measures are completed work (where processors are charged for completed fixed-size update cycles) and overhead ratio (completed work amortized over necessary work and failures). In the second model, the result of the computation is a serialization of the actions of the processors determined by an on-line adversary; the complexity measure is total work (the number of steps taken by all processors). Despite their differences, the two models share key algorithmic techniques. We present new algorithms for the Write-All problem (in which P processors write ones into an array of size N) for the two models. These algorithms can be used to implement a simulation strategy for any N-processor PRAM on a restartable fail-stop P-processor CRCW PRAM that guarantees a terminating execution of each simulated N-processor step, with O(log^2 N) overhead ratio and O(min{N + P log^2 N + M log N, N · P^0.59}) (subquadratic) completed work, where M is the number of failures during this step's simulation. This strategy has a range of optimality. We also show that Write-All requires N + Ω(P log P) completed/total work on these models for P ≤ N.
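
As a toy illustration of the Write-All problem itself (a deliberately naive strategy with a crude work counter, not the algorithms or the exact completed-work accounting of the paper), consider the following Python sketch.

import random

def naive_write_all(N, P, fail_prob=0.2, seed=0):
    # Shared array of N cells, P processors; in each round the adversary may
    # stop any processor, and each surviving processor scans for the first cell
    # still containing 0 and writes a 1 there.  Every cell inspection is charged
    # one unit of "work" (a crude stand-in for the paper's completed-work measure).
    rng = random.Random(seed)
    shared = [0] * N
    work = 0
    while not all(shared):
        for p in range(P):
            if rng.random() < fail_prob:   # this processor is stopped for the round
                continue
            for i in range(N):
                work += 1
                if shared[i] == 0:
                    shared[i] = 1
                    break
    return work

if __name__ == "__main__":
    print(naive_write_all(N=32, P=8))      # total work charged under one random schedule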


Journal of Computer and System Sciences | 2004

Approximation algorithms for classes of graphs excluding single-crossing graphs as minors

Erik D. Demaine; Mohammad Taghi Hajiaghayi; Naomi Nishimura; Prabhakar Ragde; Dimitrios M. Thilikos

Many problems that are intractable for general graphs allow polynomial-time solutions for structured classes of graphs, such as planar graphs and graphs of bounded treewidth. In this paper, we demonstrate structural properties of larger classes of graphs and show how to exploit the properties to obtain algorithms. The classes considered are those formed by excluding as a minor a graph that can be embedded in the plane with at most one crossing. We show that graphs in these classes can be decomposed into planar graphs and graphs of small treewidth; we use the decomposition to show that all such graphs have locally bounded treewidth (all subgraphs of a certain form are graphs of bounded treewidth). Finally, we make use of the structural properties to derive polynomial-time algorithms for approximating treewidth within a factor of 1.5 and branchwidth within a factor of 2.25 as well as polynomial-time approximation schemes for both minimization and maximization problems and fixed-parameter algorithms for problems such as vertex cover, edge-dominating set, feedback vertex set, and others.


Symposium on the Theory of Computing | 1985

One, two, three . . . infinity: lower bounds for parallel computation

Faith E. Fich; F. Meyer auf der Heide; Prabhakar Ragde; Avi Wigderson

In this paper we compare the power of the two most commonly used concurrent-write models of parallel computation, the COMMON PRAM and the PRIORITY PRAM. These models differ in the way they resolve write conflicts. If several processors want to write into the same shared memory cell at the same time, in the COMMON model they have to write the same value. In the PRIORITY model, they may attempt to write different values; the processor with the smallest index succeeds. We consider PRAMs with n processors, each having arbitrary computational power. We provide the first separation results between these two models in two extreme cases: when the size m of the shared memory is small (m ≤ n^ε, ε < 1), and when it is infinite. In the case of small memory, the PRIORITY model can be faster than the COMMON model by a factor of Θ(log n), and this lower bound holds even if the COMMON model is probabilistic. In the case of infinite memory, the gap between the models can be a factor of Ω(log log log n). We develop new proof techniques to obtain these results. The technique used for the second lower bound is strong enough to establish the first tight time bounds for the PRIORITY model, which is the strongest parallel computation model. We show that finding the maximum of n numbers requires Θ(log log n) steps, generalizing a result of Valiant for parallel computation trees.
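
For readers unfamiliar with the two conflict-resolution rules, the following minimal Python sketch shows how a single concurrent-write step resolves under each model; it assumes nothing beyond the definitions in the abstract, and the function names are illustrative.

def resolve_common(requests):
    # requests: list of (processor_index, cell, value).
    # COMMON rule: a step is legal only if all writers to a cell agree on the value.
    by_cell = {}
    for _, cell, value in requests:
        by_cell.setdefault(cell, set()).add(value)
    result = {}
    for cell, values in by_cell.items():
        if len(values) != 1:
            raise ValueError(f"illegal COMMON step: conflicting values at cell {cell}")
        result[cell] = values.pop()
    return result

def resolve_priority(requests):
    # PRIORITY rule: the lowest-indexed writer wins each cell.
    result = {}
    winner = {}
    for proc, cell, value in requests:
        if cell not in winner or proc < winner[cell]:
            winner[cell] = proc
            result[cell] = value
    return result

if __name__ == "__main__":
    step = [(3, 0, 7), (1, 0, 5), (2, 4, 9)]
    print(resolve_priority(step))   # {0: 5, 4: 9}: processor 1 beats processor 3 at cell 0
    # resolve_common(step) would raise: processors 1 and 3 disagree at cell 0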


Graph Drawing | 2001

A Fixed-Parameter Approach to Two-Layer Planarization

Vida Dujmović; Michael R. Fellows; Michael Hallett; Matthew Kitching; Giuseppe Liotta; Catherine McCartin; Naomi Nishimura; Prabhakar Ragde; Frances A. Rosamond; Matthew Suderman; Sue Whitesides; David R. Wood

A bipartite graph is biplanar if the vertices can be placed on two parallel lines (layers) in the plane such that there are no edge crossings when edges are drawn as line segments between the layers. In this paper we study the 2-Layer Planarization problem: Can k edges be deleted from a given graph G so that the remaining graph is biplanar? This problem is NP-complete, and remains so if the permutation of the vertices in one layer is fixed (the 1-Layer Planarization problem). We prove that these problems are fixed-parameter tractable by giving linear-time algorithms for their solution (for fixed k). In particular, we solve the 2-Layer Planarization problem in O(k · 6^k + |G|) time and the 1-Layer Planarization problem in O(3^k · |G|) time. We also show that there are polynomial-time constant-approximation algorithms for both problems.
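
To make the problem concrete, the following Python sketch solves tiny instances of 2-Layer Planarization by brute force, relying on the standard characterization of biplanar graphs as forests of caterpillars; it is nothing like the paper's O(k · 6^k + |G|) algorithm, and the helper names are invented for illustration.

from itertools import combinations

def is_caterpillar_forest(vertices, edges):
    # Biplanar = forest of caterpillars: acyclic, and in each component the
    # vertices of degree >= 2 (the "spine") induce a path.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    parent = {v: v for v in vertices}      # union-find for the acyclicity check
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                   # cycle: not a forest
        parent[ru] = rv
    # In a tree the spine is connected, so it is a path exactly when no spine
    # vertex has more than two spine neighbours.
    spine = {v for v in vertices if len(adj[v]) >= 2}
    return all(len(adj[v] & spine) <= 2 for v in spine)

def two_layer_planarization(vertices, edges, k):
    # Brute force: try every set of at most k edges to delete.
    for r in range(k + 1):
        for removed in combinations(range(len(edges)), r):
            kept = [e for i, e in enumerate(edges) if i not in removed]
            if is_caterpillar_forest(vertices, kept):
                return True
    return False

if __name__ == "__main__":
    # A 4-cycle needs one edge deletion to become biplanar (a path is a caterpillar).
    v4 = [0, 1, 2, 3]
    c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(two_layer_planarization(v4, c4, 0))  # False
    print(two_layer_planarization(v4, c4, 1))  # True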


Algorithmica | 1988

Simulations among concurrent-write PRAMs

Faith E. Fich; Prabhakar Ragde; Avi Wigderson

This paper is concerned with the relative power of the two most popular concurrent-write models of parallel computation, the PRIORITY PRAM [G] and the COMMON PRAM [K]. Improving the trivial and seemingly optimal O(log n) simulation, we show that one step of a PRIORITY machine can be simulated by O(log n / log log n) steps of a COMMON machine with the same number of processors (and more memory). We further prove that this is optimal, if processor communication is restricted in a natural way.


Algorithmica | 1989

A bidirectional shortest-path algorithm with good average-case behavior

Michael Luby; Prabhakar Ragde

The two-terminal shortest-path problem asks for the shortest directed path from a specified node s to a specified node d in a complete directed graph G on n nodes, where each edge has a nonnegative length. We show that if the length of each edge is chosen independently from the exponential distribution, and adjacency lists at each node are sorted by length, then a priority-queue implementation of Dijkstra's unidirectional search algorithm has expected running time Θ(n log n). We present a bidirectional search algorithm that has expected running time Θ(√n log n). These results are generalized to apply to a wide class of edge-length distributions, and to sparse graphs. If adjacency lists are not sorted, bidirectional search has expected running time Θ(a√n) on graphs of average degree a, as compared with Θ(an) for unidirectional search.
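
As a point of reference for the bidirectional approach, here is a minimal Python sketch of standard bidirectional Dijkstra search using ordinary binary-heap priority queues; it illustrates only the mechanics of meeting in the middle, not the paper's average-case analysis or its use of length-sorted adjacency lists.

import heapq

def bidirectional_dijkstra(adj, radj, s, d):
    # adj[u]: list of (v, length) out-edges; radj[u]: list of (v, length) in-edges.
    if s == d:
        return 0.0
    dist = [dict(), dict()]                 # settled distances: forward from s, backward from d
    pq = [[(0.0, s)], [(0.0, d)]]
    graph = [adj, radj]
    best = float("inf")
    while pq[0] and pq[1]:
        # Standard stopping rule: no undiscovered s-d path can be shorter than
        # the sum of the two smallest queue keys.
        if pq[0][0][0] + pq[1][0][0] >= best:
            break
        side = 0 if pq[0][0][0] <= pq[1][0][0] else 1    # expand the cheaper frontier
        du, u = heapq.heappop(pq[side])
        if u in dist[side]:
            continue
        dist[side][u] = du
        if u in dist[1 - side]:                          # the two searches met at u
            best = min(best, du + dist[1 - side][u])
        for v, w in graph[side][u]:
            if v not in dist[side]:
                heapq.heappush(pq[side], (du + w, v))
                if v in dist[1 - side]:
                    best = min(best, du + w + dist[1 - side][v])
    return best

if __name__ == "__main__":
    # 0 -> 1 -> 3 has length 3; 0 -> 2 -> 3 has length 4.
    adj = {0: [(1, 1), (2, 2)], 1: [(3, 2)], 2: [(3, 2)], 3: []}
    radj = {0: [], 1: [(0, 1)], 2: [(0, 2)], 3: [(1, 2), (2, 2)]}
    print(bidirectional_dijkstra(adj, radj, 0, 3))       # 3.0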


Principles of Distributed Computing | 1984

Relations between concurrent-write models of parallel computation

Faith E. Fich; Prabhakar Ragde; Avi Wigderson

Shared-memory models for parallel computation (e.g. parallel RAMs) are very natural and already widely used for parallel algorithm design. The various models differ from each other mainly in the way they restrict simultaneous processor access to a shared memory cell. Understanding the relative power of these models is important for understanding the power of parallel computation. Two recent pioneering works shed some light on this question. Cook and Dwork [CD] (resp. Snir [S]) present problems that, for instances of size n, can be solved in O(1) time on an n-processor PRAM that allows simultaneous write (resp. read) access to shared memory, but require Ω(log n) time on a PRAM that forbids simultaneous write (resp. read) access, regardless of the number of processors. When allowing simultaneous write access, the model must include a write-conflict resolution scheme. Three such schemes were suggested in the literature, and in this paper we study their relative power. Here the situation is more sensitive, as a small increase in the number of processors allows constant-time simulation of the strongest by the weakest. By fixing the number of processors and parametrizing the number of shared memory cells, we obtain tight separation results between the models, thereby partially answering open questions of Vishkin [V]. New lower bound techniques are developed for this purpose.


Acta Informatica | 2007

Solving #SAT using vertex covers

Naomi Nishimura; Prabhakar Ragde; Stefan Szeider

We propose an exact algorithm for counting the models of propositional formulas in conjunctive normal form. Our algorithm is based on the detection of strong backdoor sets of bounded size; each instantiation of the variables of a strong backdoor set puts the given formula into a class of formulas for which models can be counted in polynomial time. For the backdoor set detection we utilize an efficient vertex cover algorithm applied to a certain “obstruction graph” that we associate with the given formula. This approach gives rise to a new hardness index for formulas, the clustering-width. Our algorithm runs in uniform polynomial time on formulas with bounded clustering-width. It is known that the number of models of formulas with bounded clique-width, bounded treewidth, or bounded branchwidth can be computed in polynomial time; these graph parameters are applied to formulas via certain (hyper)graphs associated with formulas. We show that clustering-width and the other parameters mentioned are incomparable: there are formulas with bounded clustering-width and arbitrarily large clique-width, treewidth, and branchwidth. Conversely, there are formulas with arbitrarily large clustering-width and bounded clique-width, treewidth, and branchwidth.
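
The vertex cover subroutine mentioned above can be illustrated with the textbook bounded search tree for k-Vertex Cover; the sketch below is a generic Python solver of that kind, not the paper's obstruction-graph construction or its clustering-width machinery.

def vertex_cover_at_most_k(edges, k):
    # Classic 2^k bounded search tree: if any edge remains, one of its two
    # endpoints must be in the cover; branch on both choices with budget k - 1.
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for chosen in (u, v):
        rest = [(a, b) for a, b in edges if chosen not in (a, b)]
        sub = vertex_cover_at_most_k(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

if __name__ == "__main__":
    triangle_plus = [(1, 2), (2, 3), (1, 3), (3, 4)]
    print(vertex_cover_at_most_k(triangle_plus, 2))   # a cover of size 2, e.g. {1, 3}
    print(vertex_cover_at_most_k(triangle_plus, 1))   # None: the triangle alone needs 2 vertices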

Collaboration


Dive into Prabhakar Ragde's collaborations.

Top Co-Authors

Dimitrios M. Thilikos (National and Kapodistrian University of Athens)
Avi Wigderson (Institute for Advanced Study)
Mirosław Kutyłowski (University of Science and Technology)