Publication


Featured research published by Abdullah N. Arslan.


Bioinformatics | 2001

A new approach to sequence comparison: normalized sequence alignment.

Abdullah N. Arslan; Ömer Eğecioğlu; Pavel A. Pevzner

The Smith-Waterman algorithm for local sequence alignment is one of the most important techniques in computational molecular biology. This ingenious dynamic programming approach was designed to reveal the highly conserved fragments by discarding poorly conserved initial and terminal segments. However, the existing notion of local similarity has a serious flaw: it does not discard poorly conserved intermediate segments. The Smith-Waterman algorithm finds the local alignment with maximal score, but it is unable to find the local alignment with the maximum degree of similarity (e.g., maximal percentage of matches). Moreover, there is still no efficient algorithm that answers the following natural question: do two sequences share a (sufficiently long) fragment with more than 70% similarity? As a result, the local alignment sometimes produces a mosaic of well-conserved fragments artificially connected by poorly conserved or even unrelated fragments. This may lead to problems in the comparison of long genomic sequences and in comparative gene prediction, as recently pointed out by Zhang et al. (Bioinformatics, 15, 1012-1019, 1999). In this paper we propose a new sequence comparison algorithm (normalized local alignment) that reports the regions with the maximum degree of similarity. The algorithm is based on fractional programming and its running time is O(n^2 log n). In practice, normalized local alignment is only 3-5 times slower than the standard Smith-Waterman algorithm.
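
The fractional-programming idea can be sketched briefly: guess a ratio lambda, run a Smith-Waterman pass whose score is reduced by lambda for every alignment column, and update lambda to the ratio achieved by the best alignment (the Dinkelbach iteration), repeating until the ratio stabilizes. The scoring values and the length offset L below are illustrative assumptions, not the paper's parameters.

```python
# A minimal sketch of the Dinkelbach iteration over a parametric
# Smith-Waterman pass. MATCH/MISMATCH/INDEL and L_OFFSET are assumed,
# illustrative values, not the paper's parameters.

MATCH, MISMATCH, INDEL = 2.0, -1.0, -2.0
L_OFFSET = 10.0  # length offset that discourages degenerate short alignments

def parametric_sw(x, y, lam):
    """Best local alignment under the adjusted objective raw - lam * length.
    Returns (raw score, length) of that alignment."""
    n, m = len(x), len(y)
    dp = [[(0.0, 0.0, 0)] * (m + 1) for _ in range(n + 1)]  # (adjusted, raw, length)
    best = (0.0, 0.0, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = MATCH if x[i - 1] == y[j - 1] else MISMATCH
            cands = [(0.0, 0.0, 0)]  # the empty alignment (local restart)
            for (a, r, l), w in ((dp[i-1][j-1], s), (dp[i-1][j], INDEL), (dp[i][j-1], INDEL)):
                cands.append((a + w - lam, r + w, l + 1))
            dp[i][j] = max(cands)
            best = max(best, dp[i][j])
    return best[1], best[2]

def normalized_local_alignment(x, y, eps=1e-9):
    lam = 0.0
    while True:  # Dinkelbach: lam converges monotonically to the optimal ratio
        raw, length = parametric_sw(x, y, lam)
        new_lam = raw / (length + L_OFFSET)
        if abs(new_lam - lam) < eps:
            return new_lam
        lam = new_lam

print(normalized_local_alignment("ACGTACGTGG", "ACGTTCGTAA"))  # ~0.722 here
```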


International Journal of Foundations of Computer Science | 2005

Algorithms for the Constrained Longest Common Subsequence Problems

Abdullah N. Arslan; Ömer Eğecioğlu

Given strings S1, S2, and P, the constrained longest common subsequence problem for S1 and S2 with respect to P is to find a longest common subsequence lcs of S1 and S2 which contains P as a subsequence. We present an algorithm which improves the time complexity of the problem from the previously known O(rn^2 m^2) to O(rnm) where r, n, and m are the lengths of P, S1, and S2, respectively. As a generalization of this, we extend the definition of the problem so that the lcs sought contains a subsequence whose edit distance from P is less than a given parameter d. For the latter problem, we propose an algorithm whose time complexity is O(drnm).
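
A minimal dynamic-programming sketch in the O(rnm) spirit: a third table dimension records how much of P has been embedded in the common subsequence so far. This is an illustrative reconstruction under assumed conventions, not the authors' code.

```python
# Illustrative O(rnm) dynamic program: f[k][i][j] is the length of the
# longest common subsequence of S1[:i] and S2[:j] containing P[:k] as a
# subsequence; -inf marks infeasible states.

NEG = float("-inf")

def constrained_lcs_length(s1, s2, p):
    n, m, r = len(s1), len(s2), len(p)
    f = [[[NEG] * (m + 1) for _ in range(n + 1)] for _ in range(r + 1)]
    for i in range(n + 1):            # k = 0 is the ordinary LCS table
        for j in range(m + 1):
            if i == 0 or j == 0:
                f[0][i][j] = 0
            elif s1[i - 1] == s2[j - 1]:
                f[0][i][j] = f[0][i - 1][j - 1] + 1
            else:
                f[0][i][j] = max(f[0][i - 1][j], f[0][i][j - 1])
    for k in range(1, r + 1):
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                best = max(f[k][i - 1][j], f[k][i][j - 1])
                if s1[i - 1] == s2[j - 1]:
                    # extend an lcs that already contains P[:k] ...
                    best = max(best, f[k][i - 1][j - 1] + 1)
                    # ... or embed P[k] itself at this matched position
                    if s1[i - 1] == p[k - 1]:
                        best = max(best, f[k - 1][i - 1][j - 1] + 1)
                f[k][i][j] = best
    return f[r][n][m]  # -inf if no common subsequence contains P

print(constrained_lcs_length("abcbdab", "bdcaba", "ba"))  # 4, e.g. "bcba"
```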


Knowledge and Information Systems | 2006

Efficient string matching with wildcards and length constraints

Gong Chen; Xindong Wu; Xingquan Zhu; Abdullah N. Arslan; Yu He

This paper defines a challenging pattern matching problem between a pattern P and a text T, with wildcards and length constraints, and designs an efficient algorithm to return each pattern occurrence in an online manner. In this pattern matching problem, the user can specify constraints on the number of wildcards between every two consecutive letters of P and constraints on the length of each matching substring in T. We design a complete algorithm, SAIL, that returns each matching substring of P in T as soon as it appears in T, in O(n + klmg) time with an O(lm) space overhead, where n is the length of T, k is the frequency of P's last letter occurring in T, l is the user-specified maximum length for each matching substring, m is the length of P, and g is the maximum difference between the user-specified maximum and minimum numbers of wildcards allowed between two consecutive letters in P.
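
SAIL itself is intricate; the brute-force sketch below only makes the constraints concrete. It collects every matching span under per-gap wildcard bounds, but it is offline and exponential in the worst case, unlike SAIL's online O(n + klmg) behavior. The state representation here is an assumption for illustration.

```python
# Not SAIL: a brute-force sketch of the problem it solves.
# gaps[i] = (min, max) number of wildcards allowed between P[i] and P[i+1].

def occurrences(text, pattern, gaps):
    """Collect (start, end) spans of text matching pattern under the
    per-gap wildcard bounds. Exponential in the worst case, whereas
    SAIL is online and runs in O(n + klmg) time with O(lm) space."""
    found = set()
    states = set()  # (start, next pattern index, index of last matched char)
    for pos, c in enumerate(text):
        nxt = set()
        for start, k, last in states:
            lo, hi = gaps[k - 1]
            gap = pos - last - 1
            if gap > hi:
                continue  # gap bound exceeded; this partial match is dead
            if c == pattern[k] and gap >= lo:
                if k + 1 == len(pattern):
                    found.add((start, pos))
                else:
                    nxt.add((start, k + 1, pos))
            nxt.add((start, k, last))  # may still extend at a later position
        if c == pattern[0]:
            if len(pattern) == 1:
                found.add((pos, pos))
            else:
                nxt.add((pos, 1, pos))
        states = nxt
    return sorted(found)

print(occurrences("axbyyab", "ab", [(0, 2)]))  # [(0, 2), (5, 6)]
```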


Computers in Biology and Medicine | 2013

PMBC: Pattern mining from biological sequences with wildcard constraints

Xindong Wu; Xingquan Zhu; Yu He; Abdullah N. Arslan

Patterns/subsequences frequently appearing in sequences provide essential knowledge for domain experts, such as molecular biologists, to discover rules or patterns hidden behind the data. Due to the inherently complex nature of biological data, patterns rarely reproduce and repeat themselves exactly, but rather appear in a slightly different form in each of their appearances. A gap constraint (in this paper, also referred to as a wildcard: a character that can be substituted for any character in a predefined alphabet) provides flexibility for users to capture useful patterns even if their appearances vary across the sequences. In order to find patterns, existing tools require users to explicitly specify gap constraints beforehand. In reality, it is often nontrivial or time-consuming for users to provide proper gap constraint values. In addition, a change made to the gap values may give completely different results and require a separate time-consuming re-mining procedure. Therefore, it is desirable to automatically and efficiently find patterns without involving user-specified gap requirements. In this paper, we study the problem of frequent pattern mining without user-specified gap constraints and propose PMBC (Pattern Mining from Biological sequences with wildcard Constraints) to solve the problem. Given a sequence and a support threshold value (i.e., a pattern frequency threshold), PMBC intends to discover all subsequences with support values equal to or greater than the given threshold value. The frequent subsequences then form patterns. Two heuristic methods (one-way vs. two-way scans) are proposed to discover frequent subsequences and estimate their frequency in the sequences. Experimental results on both synthetic and real-world DNA sequences demonstrate the performance of both methods for frequent pattern mining and pattern frequency estimation.
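
The one-way scan idea can be conveyed by a greedy pass that matches the pattern's letters left to right with unbounded wildcard gaps and restarts after each complete occurrence, yielding a quick support estimate. The actual PMBC heuristics are more elaborate; this sketch, with made-up data, only gives the flavor.

```python
# A sketch of the one-way scan flavor: greedily match the pattern's letters
# left to right (unbounded wildcard gaps) and restart after each complete,
# non-overlapping occurrence. Data and pattern are made up.

def one_way_support(sequence, pattern):
    count, k = 0, 0
    for c in sequence:
        if c == pattern[k]:
            k += 1
            if k == len(pattern):  # one complete occurrence found
                count += 1
                k = 0
    return count

print(one_way_support("ACGTACGGTA", "AG"))  # 2: greedy lower-bound estimate
```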


string processing and information retrieval | 1999

An efficient uniform-cost normalized edit distance algorithm

Abdullah N. Arslan; Ömer Eğecioğlu

A common model for computing the similarity of two strings X and Y of lengths m and n, respectively, with m ≥ n, is to transform X into Y through a sequence of three types of edit operations: insertion, deletion, and substitution. The model assumes a given cost function which assigns a non-negative real weight to each edit operation. The amortized weight for a given edit sequence is the ratio of its weight to its length, and the minimum of this ratio over all edit sequences is the normalized edit distance. Existing algorithms for normalized edit distance computation with proven complexity bounds require O(mn^2) time in the worst case. We give an O(mn log n)-time algorithm for the problem when the cost function is uniform, i.e., the weight of each edit operation is constant within the same type, except that substitutions can have different weights depending on whether they are matching or non-matching.
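
For contrast with the paper's O(mn log n) result, here is a sketch of the straightforward baseline it improves on: a third DP dimension enumerates the number of edit operations, and the weight/length ratio is minimized at the end, giving O(mn^2)-style work. The operation costs are illustrative.

```python
# Baseline sketch, not the paper's algorithm: w[i][j][L] is the minimum
# weight converting x[:i] into y[:j] using exactly L edit operations
# (a match counts as a zero-weight substitution). Costs are illustrative.

INS = DEL = SUB = 1.0
MATCH = 0.0
INF = float("inf")

def normalized_edit_distance(x, y):
    m, n = len(x), len(y)
    if m + n == 0:
        return 0.0
    top = m + n  # an edit sequence has at most m + n operations
    w = [[[INF] * (top + 1) for _ in range(n + 1)] for _ in range(m + 1)]
    w[0][0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            for L in range(top):
                cur = w[i][j][L]
                if cur == INF:
                    continue
                if i < m:  # delete x[i]
                    w[i+1][j][L+1] = min(w[i+1][j][L+1], cur + DEL)
                if j < n:  # insert y[j]
                    w[i][j+1][L+1] = min(w[i][j+1][L+1], cur + INS)
                if i < m and j < n:  # substitute / match
                    c = MATCH if x[i] == y[j] else SUB
                    w[i+1][j+1][L+1] = min(w[i+1][j+1][L+1], cur + c)
    return min(w[m][n][L] / L for L in range(1, top + 1) if w[m][n][L] < INF)

# ~0.4286: weight 3 (two substitutions, one insertion) over 7 operations
print(normalized_edit_distance("kitten", "sitting"))
```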


research in computational molecular biology | 2001

A new approach to sequence comparison: normalized sequence alignment

Abdullah N. Arslan; Ömer Eğecioğlu; Pavel A. Pevzner

The Smith-Waterman algorithm for local sequence alignment is one of the most important techniques in computational molecular biology. This ingenious dynamic programming approach was designed to reveal the highly conserved fragments by discarding poorly conserved initial and terminal segments. However, the existing notion of local similarity has a serious flaw: it does not discard poorly conserved intermediate segments. The Smith-Waterman algorithm finds the local alignment with maximal score, but it is unable to find the local alignment with the maximum degree of similarity (e.g., maximal percentage of matches). Moreover, there is still no efficient algorithm that answers the following natural question: do two sequences share a (sufficiently long) fragment with more than 70% similarity? As a result, the local alignment sometimes produces a mosaic of well-conserved fragments artificially connected by poorly conserved or even unrelated fragments. This may lead to problems in the comparison of long genomic sequences and in comparative gene prediction, as recently pointed out by Zhang et al., 1999 [33]. In this paper we propose a new sequence comparison algorithm (normalized local alignment) that reports the regions with the maximum degree of similarity. The algorithm is based on fractional programming and its running time is O(n^2 log n). In practice, normalized local alignment is only 3-5 times slower than the standard Smith-Waterman algorithm.


computational intelligence in bioinformatics and computational biology | 2005

Multiple Sequence Alignment Containing a Sequence of Regular Expressions

Abdullah N. Arslan

A classical algorithm for pairwise sequence alignment is the Smith-Waterman algorithm, which uses dynamic programming. The algorithm computes the maximum score of alignments that use insertions, deletions, and substitutions, with no consideration given to the composition of the alignments. However, biologists favor incorporating their knowledge about common structures or functions into the alignment process. For the alignment of protein sequences, several methods have been suggested for taking into account motifs (restricted regular expressions) from the PROSITE database to guide alignments. One method modifies the Smith-Waterman dynamic programming solution to reward alignments that contain matching motifs. Another method introduces the regular expression constrained sequence alignment problem, in which pairwise alignments are constrained to contain a given regular expression. This latter method constructs a weighted finite automaton from a given regular expression, and presents a dynamic programming solution that simulates copies of this automaton in seeking an alignment with maximum score containing the regular expression. We generalize this approach: 1) we introduce a variation of the problem for multiple sequences, namely the regular expression constrained multiple sequence alignment, and present an algorithm for it; 2) we develop an algorithm for the case of the problem in which the alignments sought are required to contain a given sequence of regular expressions.
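
A heavily simplified sketch of the automaton-in-the-DP idea: pairwise (not multiple) global alignment whose dynamic programming state carries the state of a tiny hand-coded DFA, here for the motif G[AT]C, which must be spelled by a run of consecutive identical match columns. The DFA, scores, and motif semantics are all assumptions for illustration; the paper builds a weighted finite automaton from the given regular expression instead.

```python
# Simplified illustration only: the constraint here is that some run of
# consecutive, identical match columns spells a word of the motif DFA.

MATCH, MISMATCH, GAP = 2, -1, -2
NEG = float("-inf")

# Hand-coded DFA for the motif G[AT]C: 0 -G-> 1 -A/T-> 2 -C-> 3 (accept).
def step(q, c):
    if q == 0 and c == "G": return 1
    if q == 1 and c in "AT": return 2
    if q == 2 and c == "C": return 3
    return 1 if c == "G" else 0  # restart the motif attempt

DONE = 4  # the motif has already been realized somewhere in the alignment

def constrained_align(x, y):
    n, m = len(x), len(y)
    # dp[i][j][s]: best score aligning x[:i] and y[:j] in automaton state s
    dp = [[[NEG] * 5 for _ in range(m + 1)] for _ in range(n + 1)]
    dp[0][0][0] = 0

    def relax(i, j, s, v):
        if v > dp[i][j][s]:
            dp[i][j][s] = v

    for i in range(n + 1):
        for j in range(m + 1):
            for s in (0, 1, 2, DONE):
                v = dp[i][j][s]
                if v == NEG:
                    continue
                reset = DONE if s == DONE else 0  # gaps/mismatches break the run
                if i < n:
                    relax(i + 1, j, reset, v + GAP)
                if j < m:
                    relax(i, j + 1, reset, v + GAP)
                if i < n and j < m:
                    if x[i] == y[j]:
                        ns = DONE if s == DONE else step(s, x[i])
                        if ns == 3:
                            ns = DONE
                        relax(i + 1, j + 1, ns, v + MATCH)
                    else:
                        relax(i + 1, j + 1, reset, v + MISMATCH)
    return dp[n][m][DONE]  # -inf if no alignment can realize the motif

print(constrained_align("AAGTCAA", "TGTCT"))  # finite: both contain GTC
```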


bioinformatics and bioengineering | 2005

A parallel algorithm for the constrained multiple sequence alignment problem

Dan He; Abdullah N. Arslan

We propose a parallel algorithm for the constrained multiple sequence alignment (CMSA) problem, which seeks an optimal multiple alignment constrained to include a given pattern. We organize the dynamic programming computations in layers indexed by the symbols of the given pattern. In each layer we compute, as a potential part of an optimal alignment for the CMSA problem, shortest paths for multiple sources and multiple destinations. These shortest-path problems are independent of one another (which enables parallel execution), and each can be solved using an A* algorithm specialized for the multiple-source, multiple-destination shortest path problem. The final step of our algorithm solves a single-source, single-destination shortest path problem. Our experiments on real sequences show that our algorithm is generally faster than the existing sequential dynamic programming solutions.
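
The key structural claim, that the per-symbol layers are independent, can be shown with a small orchestration sketch: one stand-in shortest-path subproblem per pattern symbol, dispatched to a process pool. Plain Dijkstra on an invented grid stands in for the paper's specialized A*; only the parallel structure is the point here.

```python
# Orchestration sketch: independent per-layer subproblems run in parallel.
# The grid graphs, sources, and unit weights are invented for illustration.

import heapq
from concurrent.futures import ProcessPoolExecutor

def dijkstra_layer(args):
    """Solve one layer: multi-source shortest distances on a small grid
    (plain Dijkstra standing in for the paper's specialized A*), returning
    the best distance from any source to the far corner."""
    size, sources = args
    dist = {s: 0 for s in sources}
    pq = [(0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if d > dist.get((x, y), float("inf")):
            continue  # stale queue entry
        for nx, ny in ((x + 1, y), (x, y + 1)):
            if nx < size and ny < size:
                nd = d + 1  # unit edge weights, purely illustrative
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(pq, (nd, (nx, ny)))
    return dist.get((size - 1, size - 1), float("inf"))

if __name__ == "__main__":
    # one independent layer per symbol of the constraining pattern
    layers = [(50, [(0, k), (k, 0)]) for k in range(4)]
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(dijkstra_layer, layers)))
```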


information reuse and integration | 2007

Mining Frequent Patterns with Wildcards from Biological Sequences

Yu He; Xindong Wu; Xingquan Zhu; Abdullah N. Arslan

Frequent pattern mining from sequences is a crucial step for many domain experts, such as molecular biologists, to discover rules or patterns hidden in their data. In order to find specific patterns, many existing tools require users to specify gap constraints beforehand. In reality, it is often nontrivial for a user to provide such gap constraints. In addition, a change made to the gap values may give completely different results and require a separate time-consuming re-mining procedure. Consequently, it is desirable to develop an algorithm that automatically and efficiently finds patterns without user-specified gap constraints. In this paper, a frequent pattern mining problem without user-specified gap constraints is presented and studied. Given a sequence and a support threshold value, all subsequences whose support is not less than the given threshold value are discovered. These frequent subsequences then form patterns. Two heuristic methods (one-way vs. two-way scans) are proposed to mine frequent subsequences and estimate the maximum support for both artificial and real-world data. Given a specific pattern, the simulated results demonstrate that the one-way scan heuristic performs better, estimating the maximum support with more than ninety percent accuracy.


computing and combinatorics conference | 2002

Dictionary Look-Up within Small Edit Distance

Abdullah N. Arslan; Ömer Eğecioğlu

Let W be a dictionary consisting of n binary strings of length m each, represented as a trie. The usual d-query asks if there exists a string in W within Hamming distance d of a given binary query string q. We present an algorithm to determine if there is a member in W within edit distance d of a given query string q of length m. The method takes time O(dm^(d+1)) in the RAM model, independent of n, and requires O(dm) additional space.
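
The sketch below is not the paper's O(dm^(d+1)) method; it is the standard trie walk that carries an edit distance DP row and prunes once every entry exceeds d, included to make the d-query concrete on a small binary dictionary.

```python
# Standard trie walk with an edit-distance DP row and pruning at distance d.
# Not the paper's algorithm; the tiny dictionary is made up for illustration.

class Trie:
    def __init__(self):
        self.children = {}
        self.terminal = False

    def insert(self, word):
        node = self
        for c in word:
            node = node.children.setdefault(c, Trie())
        node.terminal = True

def lookup_within(trie, query, d):
    """True if some dictionary word is within edit distance d of query."""
    first_row = list(range(len(query) + 1))  # distances from the empty prefix

    def walk(node, row):
        if node.terminal and row[-1] <= d:
            return True
        if min(row) > d:
            return False  # no extension can come back under the bound
        for c, child in node.children.items():
            nxt = [row[0] + 1]
            for j in range(1, len(query) + 1):
                cost = 0 if query[j - 1] == c else 1
                nxt.append(min(nxt[j - 1] + 1,      # insertion
                               row[j] + 1,          # deletion
                               row[j - 1] + cost))  # substitution / match
            if walk(child, nxt):
                return True
        return False

    return walk(trie, first_row)

words = Trie()
for w in ("0110", "1001", "1111"):
    words.insert(w)
print(lookup_within(words, "0100", 1))  # True: "0110" is one edit away
```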

Collaboration


Abdullah N. Arslan's top co-authors and their affiliations.

Xindong Wu
University of Louisiana at Lafayette

Yu He
University of Vermont

Xingquan Zhu
Florida Atlantic University