
Publications


Featured research published by Faisal N. Abu-Khzam.


Journal of Computer and System Sciences | 2010

A kernelization algorithm for d-Hitting Set

Faisal N. Abu-Khzam

For a given parameterized problem Π, a kernelization algorithm is a polynomial-time pre-processing procedure that transforms an arbitrary instance of Π into an equivalent one whose size depends only on the input parameter(s). The resulting instance is called a problem kernel. In this paper, a kernelization algorithm for the 3-Hitting Set problem is presented along with a general kernelization for d-Hitting Set. For 3-Hitting Set, an arbitrary instance is reduced to an equivalent one that contains at most 5k^2 + k elements. This kernelization is an improvement over previously known methods that guarantee cubic-order kernels. Our method is also used to obtain quadratic kernels for several other problems. For a constant d ≥ 3, a kernelization of d-Hitting Set is achieved by a non-trivial generalization of the 3-Hitting Set method, and guarantees a kernel whose order does not exceed (2d - 1)k^(d-1) + k.
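
For a concrete flavor of such pre-processing, here is a minimal sketch of two folklore reduction rules for 3-Hitting Set. It is an illustration only; the paper's kernelization achieving the 5k^2 + k bound rests on a finer analysis.

```python
from collections import Counter
from itertools import combinations

def reduce_3hs(sets, k):
    """One pass of two folklore 3-Hitting Set reduction rules.
    Illustrative sketch; not the paper's 5k^2 + k construction.
    sets: iterable of sets of size <= 3 with sortable elements."""
    sets = {frozenset(s) for s in sets}  # deduplicate
    changed = True
    while changed and k > 0:
        changed = False
        # Pair rule: distinct size-3 sets sharing the pair {x, y} have
        # distinct third elements, so if {x, y} lies in more than k
        # sets, a hitting set avoiding both x and y needs > k elements.
        # Hence x or y is forced: collapse those sets to {x, y}.
        pairs = Counter(frozenset(p) for s in sets
                        for p in combinations(sorted(s), 2))
        for pair, cnt in pairs.items():
            if cnt > k:
                new_sets = {s for s in sets if not pair <= s} | {pair}
                if new_sets != sets:
                    sets, changed = new_sets, True
                    break
        if changed:
            continue
        # Element rule: once every pair occurs <= k times, any single
        # other element hits at most k of the sets containing x, so k
        # elements hit at most k^2 of them; if x lies in more than k^2
        # sets, x itself is forced into the hitting set.
        occ = Counter(x for s in sets for x in s)
        for x, cnt in occ.items():
            if cnt > k * k:
                sets = {s for s in sets if x not in s}
                k, changed = k - 1, True
                break
    return sets, k
```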


Algorithmica | 2006

Scalable Parallel Algorithms for FPT Problems

Faisal N. Abu-Khzam; Michael A. Langston; Pushkar Shanbhag; Christopher T. Symons

Algorithmic methods based on the theory of fixed-parameter tractability are combined with powerful computational platforms to launch systematic attacks on combinatorial problems of significance. As a case study, optimal solutions to very large instances of the NP-hard vertex cover problem are computed. To accomplish this, an efficient sequential algorithm and various forms of parallel algorithms are devised, implemented, and compared. The importance of maintaining a balanced decomposition of the search space is shown to be critical to achieving scalability. Target problems need only be amenable to reduction and decomposition. Applications in high-throughput computational biology are also discussed.
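
For context, the sequential core being parallelized is the classic bounded search tree for k-Vertex Cover: branch on the endpoints of any uncovered edge. The two subtrees at each node are fully independent, which is what makes balanced decomposition of the search space possible. A minimal sketch, not the paper's engineered implementation:

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists.
    Classic bounded search tree: some endpoint of the first uncovered
    edge must join the cover, giving at most 2^k leaves. Each branch
    is independent, so subtrees can be handed to separate processors."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for w in (u, v):                      # two independent branches
        rest = [(a, b) for (a, b) in edges if w not in (a, b)]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

print(vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0)], 2))  # -> {0, 2}
```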


Theory of Computing Systems / Mathematical Systems Theory | 2007

Crown Structures for Vertex Cover Kernelization

Faisal N. Abu-Khzam; Michael R. Fellows; Michael A. Langston; W. Henry Suters

Crown structures in a graph are defined and shown to be useful in kernelization algorithms for the classic vertex cover problem. Two vertex cover kernelization methods are discussed. One, based on linear programming, has been in prior use and is known to produce predictable results, although it was not previously associated with crowns. The second, based on crown structures, is newer and much faster, but produces somewhat variable results. These two methods are studied and compared both theoretically and experimentally with each other and with older, more primitive kernelization algorithms. Properties of crowns and methods for identifying them are discussed. Logical connections between linear programming and crown reductions are established. It is shown that the problem of finding an induced crown-free subgraph, and the problem of finding a crown of maximum size in an arbitrary graph, are solvable in polynomial time.
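
The LP-based method referred to above is the Nemhauser-Trotter reduction: solve the LP relaxation of vertex cover, take the 1-valued vertices into the cover, discard the 0-valued ones, and keep the half-valued vertices as the kernel. A minimal sketch; the use of scipy here is this example's assumption, not a tool from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def lp_vc_kernel(n, edges):
    """Nemhauser-Trotter LP kernelization for Vertex Cover.
    Minimize sum(x) subject to x_u + x_v >= 1 per edge, 0 <= x <= 1.
    At a basic optimal solution the relaxation is half-integral
    (x_v in {0, 1/2, 1}); this sketch assumes the solver returns one."""
    A = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A[i, u] = A[i, v] = -1.0          # encodes -(x_u + x_v) <= -1
    res = linprog(np.ones(n), A_ub=A, b_ub=-np.ones(len(edges)),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    x = res.x
    ones   = {v for v in range(n) if x[v] > 0.75}          # into the cover
    zeros  = {v for v in range(n) if x[v] < 0.25}          # excluded
    kernel = {v for v in range(n) if 0.25 <= x[v] <= 0.75} # half vertices
    return ones, zeros, kernel
```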


Conference on High Performance Computing (Supercomputing) | 2005

Genome-Scale Computational Approaches to Memory-Intensive Applications in Systems Biology

Yun Zhang; Faisal N. Abu-Khzam; Nicole Baldwin; Elissa J. Chesler; Michael A. Langston; Nagiza F. Samatova

Graph-theoretical approaches to biological network analysis have proven to be effective for small networks but are computationally infeasible for comprehensive genome-scale systems-level elucidation of these networks. The difficulty lies in the NP-hard nature of many global systems biology problems that, in practice, translates to exponential (or worse) run times for finding exact optimal solutions. Moreover, these problems, especially those of an enumerative flavor, are often memory-intensive and must share very large sets of data effectively across many processors. For example, the enumeration of maximal cliques - a core component in gene expression network analysis, cis-regulatory motif finding, and the study of quantitative trait loci for high-throughput molecular phenotypes - can result in as many as 3^(n/3) maximal cliques for a graph with n vertices. Memory requirements to store those cliques reach terabyte scales even on modest-sized genomes. Emerging hardware architectures with ultra-large globally addressable memory such as the SGI Altix and Cray X1 seem to be well suited for addressing these types of data-intensive problems in systems biology. This paper presents a novel framework that provides exact, parallel and scalable solutions to various graph-theoretical approaches to genome-scale elucidation of biological networks. This framework takes advantage of these large-memory architectures by creating globally addressable bitmap memory indices with potentially high compression rates, fast bitwise-logical operations, and reduced search space. Augmented with recent theoretical advancements based on fixed-parameter tractability, this framework produces computationally feasible performance for genome-scale combinatorial problems of systems biology.
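
The 3^(n/3) figure is the Moon-Moser bound, attained by complete multipartite graphs with parts of size 3. The enumeration core that must be distributed is standard Bron-Kerbosch with pivoting; a minimal sequential sketch, not the paper's parallel, bitmap-indexed implementation:

```python
def maximal_cliques(adj):
    """Enumerate maximal cliques with Bron-Kerbosch plus pivoting.
    adj maps each vertex to its neighbor set. Up to 3^(n/3) cliques
    (Moon-Moser) can be produced, hence the terabyte-scale storage
    the paper targets with globally addressable memory."""
    out = []
    def bk(R, P, X):
        if not P and not X:
            out.append(R)                 # R is maximal
            return
        pivot = max(P | X, key=lambda u: len(adj[u] & P))
        for v in list(P - adj[pivot]):
            bk(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    bk(set(), set(adj), set())
    return out

# Moon-Moser example: the complete multipartite graph with two parts
# of size 3 has exactly 3^(6/3) = 9 maximal cliques.
V = range(6)
part = lambda v: v // 3
adj = {v: {u for u in V if part(u) != part(v)} for v in V}
print(len(maximal_cliques(adj)))  # 9
```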


Workshop on Algorithms and Data Structures | 2007

Kernelization algorithms for d-hitting set problems

Faisal N. Abu-Khzam

A kernelization algorithm for the 3-Hitting-Set problem is presented along with a general kernelization for d-Hitting-Set problems. For 3-Hitting-Set, a quadratic kernel is obtained by exploring properties of yes-instances and employing what is known as crown reduction. Any 3-Hitting-Set instance is reduced to an equivalent instance that contains at most 5k^2 + k elements (or vertices). This kernelization is an improvement over previously known methods that guarantee cubic-size kernels. Our method is also used to obtain a quadratic kernel for the Triangle Vertex Deletion problem. For a constant d ≥ 3, a kernelization of d-Hitting-Set is achieved by a generalization of the 3-Hitting-Set method, and guarantees a kernel whose order does not exceed (2d - 1)k^(d-1) + k.
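
As a quick check that the general bound specializes correctly, at d = 3 the expression (2d - 1)k^(d-1) + k collapses to the quadratic 5k^2 + k bound claimed above:

```python
def hitting_set_kernel_bound(d, k):
    """Kernel-order bound stated in the paper: (2d - 1) * k^(d-1) + k."""
    return (2 * d - 1) * k ** (d - 1) + k

assert hitting_set_kernel_bound(3, 10) == 5 * 10 ** 2 + 10  # d = 3: 5k^2 + k
```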


IWPEC'06: Proceedings of the Second International Conference on Parameterized and Exact Computation | 2006

Kernels: annotated, proper and induced

Faisal N. Abu-Khzam; Henning Fernau

The notion of a problem kernel plays a central role in the design of fixed-parameter algorithms. The FPT literature is rich in kernelization algorithms that exhibit fundamentally different approaches. We highlight these differences and discuss several generalizations and restrictions of the standard notion.


Computing and Combinatorics Conference | 2003

Graph coloring and the immersion order

Faisal N. Abu-Khzam; Michael A. Langston

The relationship between graph coloring and the immersion order is considered. Vertex connectivity, edge connectivity and related issues are explored. These lead to the conjecture that, if G requires at least t colors, then G must have immersed within it K_t, the complete graph on t vertices. Evidence in support of such a proposition is presented. For each fixed value of t, there can be only a finite number of minimal counterexamples. These counterexamples are characterized based on Kempe chains, connectivity, cutsets and degree bounds. It is proved that minimal counterexamples must, if any exist, be both 4-vertex-connected and t-edge-connected.
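
Of the tools listed, Kempe chains are the most self-contained: the chain of v in colors a and b is v's connected component in the subgraph induced by the vertices colored a or b, and swapping the two colors along it yields another proper coloring. A generic sketch of the device, not the paper's argument:

```python
def kempe_chain(adj, coloring, v, a, b):
    """Connected component of v in the subgraph induced by the
    vertices colored a or b. adj maps vertices to neighbor sets;
    coloring maps vertices to colors."""
    assert coloring[v] in (a, b)
    chain, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in chain and coloring[w] in (a, b):
                chain.add(w)
                stack.append(w)
    return chain

def kempe_swap(coloring, chain, a, b):
    """Exchange colors a and b along a Kempe chain; the coloring
    stays proper, the basic move in Kempe-chain arguments."""
    for u in chain:
        coloring[u] = b if coloring[u] == a else a
```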


ACS/IEEE International Conference on Computer Systems and Applications | 2007

The Maximum Common Subgraph Problem: Faster Solutions via Vertex Cover

Faisal N. Abu-Khzam; Nagiza F. Samatova; Mohamad A. Rizk; Michael A. Langston

In the maximum common subgraph (MCS) problem, we are given a pair of graphs and asked to find the largest induced subgraph common to them both. With its plethora of applications, MCS is a familiar and challenging problem. Many algorithms exist that can deliver optimal MCS solutions, but whose asymptotic worst-case run times fail to do better than mere brute force, which is exponential in the order of the smaller graph. In this paper, we present a faster solution to MCS. We transform an essential part of the search process into the task of enumerating maximal independent sets in only a part of only one of the input graphs. This is made possible by exploiting an efficient decomposition of a graph into a minimum vertex cover and the maximum independent set that is its complement. The result is an algorithm whose run time is bounded by a function exponential in the order of the smaller cover rather than in the order of the smaller graph.
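
A minimal sketch of the decomposition idea, not the paper's full MCS algorithm: every independent set splits into an independent subset S of a vertex cover C plus those vertices of the independent remainder I = V \ C with no neighbor in S, so enumeration costs 2^|C| candidates rather than 2^|V|:

```python
from itertools import combinations

def maximal_independent_sets(vertices, adj, cover):
    """Enumerate all maximal independent sets of G given a vertex
    cover C. I = V \ C is independent (every edge meets C), so each
    candidate is an independent S <= C extended by the I-vertices
    with no neighbor in S: 2^|C| candidates, not 2^|V|.
    vertices: set of sortable labels; adj: vertex -> neighbor set."""
    I = vertices - cover
    result = []
    for r in range(len(cover) + 1):
        for S in map(set, combinations(sorted(cover), r)):
            if any(adj[u] & S for u in S):      # S must be independent
                continue
            ext = S | {v for v in I if not (adj[v] & S)}
            # maximal iff every outside vertex has a neighbor inside
            if all(adj[v] & ext for v in vertices - ext):
                result.append(ext)
    return result
```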


Computing and Combinatorics Conference | 2005

A New Approach and Faster Exact Methods for the Maximum Common Subgraph Problem

W. Henry Suters; Faisal N. Abu-Khzam; Yun Zhang; Christopher T. Symons; Nagiza F. Samatova; Michael A. Langston

The Maximum Common Subgraph (MCS) problem appears in many guises and in a wide variety of applications. The usual goal is to take as inputs two graphs, of order m and n, respectively, and find the largest induced subgraph contained in both of them. MCS is frequently solved by reduction to the problem of finding a maximum clique in the order-mn association graph, which is a particular form of product graph built from the inputs. In this paper a new algorithm, termed “clique branching,” is proposed that exploits a special structure inherent in the association graph. This structure contains a large number of naturally ordered cliques that are present in the association graph’s complement. A detailed analysis shows that the proposed algorithm requires O((m+1)^n) time, which is a superior worst-case bound to those known for previously analyzed algorithms in the setting of the MCS problem.
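
The association graph in question is the modular product of the two inputs; a minimal construction sketch (the function name is this example's, not the paper's):

```python
from itertools import combinations, product

def association_graph(adj1, adj2):
    """Modular product of G1 and G2: vertices are pairs (u, a) with
    u in G1 and a in G2; (u, a) ~ (v, b) iff u != v, a != b, and
    u~v holds in G1 exactly when a~b holds in G2. Maximum cliques
    here correspond to maximum common induced subgraphs; clique
    branching exploits the ordered cliques in this graph's complement."""
    nodes = list(product(adj1, adj2))
    adj = {p: set() for p in nodes}
    for (u, a), (v, b) in combinations(nodes, 2):
        if u != v and a != b and ((v in adj1[u]) == (b in adj2[a])):
            adj[(u, a)].add((v, b))
            adj[(v, b)].add((u, a))
    return adj
```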


Information Processing Letters | 2010

An improved kernelization algorithm for r-Set Packing

Faisal N. Abu-Khzam

We present a reduction procedure that takes an arbitrary instance of the r-Set Packing problem and produces an equivalent instance whose number of elements is in O(k^(r-1)), where k is the input parameter. Such parameterized reductions are known as kernelization algorithms, and a reduced instance is called a problem kernel. Our result improves on previously known kernelizations by a factor of k. In particular, the number of elements in a 3-Set Packing kernel is improved from a cubic function of the parameter to a quadratic one.
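
The standard opening move of such kernelizations, sketched generically below, is a greedy maximal packing; either it already certifies a yes-instance, or it confines every remaining set to a small element universe, the point from which element-count bounds such as O(k^(r-1)) are then argued:

```python
def greedy_maximal_packing(sets, k):
    """Greedily build a maximal packing of pairwise-disjoint sets.
    If it reaches k sets, the instance is a yes-instance outright;
    otherwise every input set intersects the <= r*(k-1) elements of
    the packed sets, the usual starting point for kernel bounds."""
    packing, used = [], set()
    for s in sets:
        if used.isdisjoint(s):
            packing.append(set(s))
            used |= set(s)
            if len(packing) >= k:
                return True, packing
    return False, packing
```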

Collaboration


Dive into Faisal N. Abu-Khzam's collaborations.

Top Co-Authors

Cristina Bazgan

Paris Dauphine University


Judith Egan

Charles Darwin University
