
Publication


Featured research published by Frank K. H. A. Dehne.


symposium on computational geometry | 1993

Scalable parallel geometric algorithms for coarse grained multicomputers

Frank K. H. A. Dehne; Andreas Fabri; Andrew Rau-Chaplin

Whereas most of the literature assumes that the number of processors p is a function of the problem size n, in scalable algorithms p becomes a parameter of the time complexity. This is a more realistic model of real parallel machines and yields optimal algorithms for the case that n ≥ H(p), where H is a function depending on the architecture of the interconnection network. In this paper we present scalable algorithms for a number of geometric problems, namely lower envelope of line segments, 2D-nearest neighbour, 3D-maxima, 2D-weighted dominance counting, area of the union of rectangles, and 2D-convex hull. The main idea of these algorithms is to decompose the problem into p subproblems of size O(F(n, p) + f(p)), with f(p) ≤ F(n, p), which can be solved independently using optimal sequential algorithms. For each problem we present a spatial decomposition scheme based on some geometric observations. The decomposition schemes have in common that they can be computed by globally sorting the entire data set at most twice. The data redundancy of f(p) duplicates of data elements per processor does not increase the asymptotic time complexity and ranges, for the algorithms presented in this paper, from p to p². The algorithms do not depend on a specific architecture; they are easy to implement and, as experiments show, efficient in practice.
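The decomposition pattern above, one or two global sorts, then p independent sequential subproblems, can be illustrated in simulation. The sketch below (Python; all names are illustrative, and the paper's 3D-maxima problem is reduced to its simpler 2D analogue for brevity) is a toy model of the approach, not the paper's algorithms: one global sort by decreasing x, one O(p)-sized exchange of per-block summaries, then fully independent local scans.

```python
import math

def cgm_2d_maxima(points, p):
    """Toy CGM-style 2D maxima: a point is maximal iff no other point
    has both a larger x and a larger y coordinate.

    One global sort by decreasing x, then p independent local scans;
    the only "communication" is one max-y value per processor block.
    """
    pts = sorted(points, key=lambda q: -q[0])   # the global sort
    m = math.ceil(len(pts) / p)
    chunks = [pts[i * m:(i + 1) * m] for i in range(p)]

    # Each processor reports the max y in its block: O(p) data exchanged.
    block_max_y = [max((y for _, y in c), default=float("-inf")) for c in chunks]

    maxima = []
    for i, chunk in enumerate(chunks):          # processors work independently
        # Best y among all blocks with larger x (blocks before i).
        best = max(block_max_y[:i], default=float("-inf"))
        for x, y in chunk:
            if y > best:                        # no larger-x point dominates in y
                maxima.append((x, y))
            best = max(best, y)
    return maxima

print(sorted(cgm_2d_maxima([(1, 5), (2, 3), (3, 4), (4, 1)], p=2)))
# → [(1, 5), (3, 4), (4, 1)]
```

The structure mirrors the abstract: the per-processor work is a plain sequential scan, and the decomposition is computed entirely by sorting.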


BMC Bioinformatics | 2006

PIPE: a protein-protein interaction prediction engine based on the re-occurring short polypeptide sequences between known interacting protein pairs

Sylvain Pitre; Frank K. H. A. Dehne; Albert Chan; James Cheetham; Alex Duong; Andrew Emili; Marinella Gebbia; Jack Greenblatt; Matthew Jessulat; Nevan J. Krogan; Xuemei Luo; Ashkan Golshani

Background: Identification of protein interaction networks has received considerable attention in the post-genomic era. The currently available biochemical approaches used to detect protein-protein interactions are all time and labour intensive. Consequently there is a growing need for the development of computational tools that are capable of effectively identifying such interactions.

Results: Here we explain the development and implementation of a novel Protein-Protein Interaction Prediction Engine termed PIPE. This tool is capable of predicting protein-protein interactions for any target pair of the yeast Saccharomyces cerevisiae proteins from their primary structure and without the need for any additional information or predictions about the proteins. PIPE showed a sensitivity of 61% for detecting any yeast protein interaction with 89% specificity and an overall accuracy of 75%. This rate of success is comparable to those associated with the most commonly used biochemical techniques. Using PIPE, we identified a novel interaction between YGL227W (vid30) and YMR135C (gid8) yeast proteins. This led us to the identification of a novel yeast complex that we term here the vid30 complex (vid30c). The observed interaction was confirmed by tandem affinity purification (TAP tag), verifying the ability of PIPE to predict novel protein-protein interactions. We then used PIPE analysis to investigate the internal architecture of vid30c. It appeared from PIPE analysis that vid30c may consist of a core and a secondary component. Generation of yeast gene deletion strains combined with TAP tagging analysis indicated that the deletion of a member of the core component interfered with the formation of vid30c, whereas deletion of a member of the secondary component had little effect (if any) on the formation of vid30c. Also, PIPE can be used to analyse yeast proteins for which TAP tagging fails, thereby allowing us to predict protein interactions that are not included in genome-wide yeast TAP tagging projects.

Conclusion: PIPE analysis can predict yeast protein-protein interactions. It can also be used to study the internal architecture of yeast protein complexes. The data also suggest that a finite set of short polypeptide signals may be responsible for the majority of yeast protein-protein interactions.
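The central idea, re-occurring short polypeptide windows between known interacting pairs, can be caricatured in a few lines. The sketch below (Python; the function names, the window length w = 3, and exact-match window comparison are all illustrative assumptions, whereas PIPE itself scores similarity between windows over a large interaction database) simply counts how many window pairs of a target protein pair also occur in some known interacting pair.

```python
def window_pairs(x, y, w):
    """All ordered pairs of length-w windows, one from x and one from y."""
    xs = [x[i:i + w] for i in range(len(x) - w + 1)]
    ys = [y[j:j + w] for j in range(len(y) - w + 1)]
    return [(a, b) for a in xs for b in ys]

def pipe_like_score(a, b, known_pairs, w=3):
    """Toy PIPE-style score: how many (window-of-a, window-of-b) pairs
    re-occur in some known interacting protein pair. Exact matching
    stands in for PIPE's similarity-based window comparison."""
    table = set()
    for x, y in known_pairs:
        for pair in window_pairs(x, y, w):
            table.add(pair)
            table.add(pair[::-1])          # interaction is symmetric
    return sum(pair in table for pair in window_pairs(a, b, w))

# Toy sequences, not real yeast proteins.
print(pipe_like_score("AMKVL", "HSTQX", [("MKVLA", "GHSTQ")]))  # → 4
```

A high count suggests the target pair shares interaction-mediating subsequences with known interacting pairs; thresholding such a score is, in spirit, how a prediction is made.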


International Journal of Computational Geometry and Applications | 1996

SCALABLE PARALLEL COMPUTATIONAL GEOMETRY FOR COARSE GRAINED MULTICOMPUTERS

Frank K. H. A. Dehne; Andreas Fabri; Andrew Rau-Chaplin

We study scalable parallel computational geometry algorithms for the coarse grained multicomputer model: p processors solving a problem on n data items, where each processor has O(n/p) ≫ O(1) local memory and all processors are connected via some arbitrary interconnection network (e.g. mesh, hypercube, fat tree). We present O(Tsequential/p + Ts(n, p)) time scalable parallel algorithms for several computational geometry problems, where Ts(n, p) refers to the time of a global sort operation. Our results are independent of the multicomputer's interconnection network. Their time complexities become optimal when Tsequential/p dominates Ts(n, p) or when Ts(n, p) is optimal. This is the case for several standard architectures, including meshes and hypercubes, and a wide range of ratios n/p that includes many of the currently available machine configurations. Our methods also have some important practical advantages: for interprocessor communication, they use only a small fixed number of calls to a single global routing operation, global sort, and all other programming is in the sequential domain. Furthermore, our algorithms use only a small number of very large messages, which greatly reduces the overhead for the communication protocol between processors. (Note, however, that our time complexities account for the lengths of messages.) Experiments show that our methods are easy to implement and give good timing results.
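Since global sort is the single communication primitive these algorithms rely on, it is worth seeing how an O(1)-round global sort can work. The simulation below (Python; a standard sample-sort scheme with regular sampling, offered as a plausible illustration rather than the exact routine used in the paper) sorts locally, derives splitters from gathered samples, and exchanges whole buckets, i.e. a few very large messages, matching the practical advantage claimed above.

```python
import bisect

def cgm_sample_sort(data, p):
    """Simulated CGM sample sort: a constant number of communication
    rounds, each exchanging a small number of large messages."""
    # Round 0: deal the data out; each processor sorts its share locally.
    local = [sorted(data[i::p]) for i in range(p)]

    # Round 1: every processor contributes up to p regularly spaced
    # samples; from the gathered samples, choose global splitters.
    samples = sorted(s for part in local
                     for s in part[::max(1, len(part) // p)][:p])
    splitters = samples[p - 1::p][:p - 1]

    # Round 2: each processor cuts its sorted run into buckets by
    # splitter and "sends" bucket j to processor j -- one big message
    # per processor pair.
    buckets = [[] for _ in range(p)]
    for part in local:
        cuts = [0] + [bisect.bisect_right(part, s) for s in splitters] + [len(part)]
        for j in range(len(cuts) - 1):
            buckets[j].extend(part[cuts[j]:cuts[j + 1]])

    # Each processor sorts its received bucket; since bucket value
    # ranges are disjoint and ordered, concatenation is globally sorted.
    return [x for b in buckets for x in sorted(b)]

print(cgm_sample_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0], p=3))
# → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

All other work happens inside a processor with ordinary sequential code, which is exactly the programming model the abstract advertises.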


Journal of Computer and System Sciences | 2003

Solving large FPT problems on coarse-grained parallel machines

James Cheetham; Frank K. H. A. Dehne; Andrew Rau-Chaplin; Ulrike Stege; Peter J. Taillon

Fixed-parameter tractability (FPT) techniques have recently been successful in solving NP-complete problem instances of practical importance which were too large to be solved with previous methods. In this paper, we show how to enhance this approach through the addition of parallelism, thereby allowing even larger problem instances to be solved in practice. More precisely, we demonstrate the potential of parallelism when applied to the bounded-tree search phase of FPT algorithms. We apply our methodology to the k-VERTEX COVER problem which has important applications in, for example, the analysis of multiple sequence alignments for computational biochemistry. We have implemented our parallel FPT method for the k-VERTEX COVER problem using C and the MPI communication library, and tested it on a 32-node Beowulf cluster. This is the first experimental examination of parallel FPT techniques. As part of our experiments, we solved larger instances of k-VERTEX COVER than in any previously reported implementations. For example, our code can solve problem instances with k≥400 in less than 1.5 h.
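The bounded search tree that gets parallelized is easy to state: pick any uncovered edge, branch on which endpoint joins the cover, and stop when the budget k is exhausted. In the parallel version, the top levels of this tree are expanded and the resulting independent subtrees are farmed out to processors. The sketch below (Python, with illustrative names; the authors' implementation is in C with MPI) shows only the sequential search itself.

```python
def k_vertex_cover(edges, k):
    """Bounded search tree for k-VERTEX COVER.

    Returns a cover of size <= k, or None if none exists. The tree has
    depth at most k and hence at most 2^k leaves; a parallel FPT solver
    expands the first few levels and distributes the independent
    subtrees across processors.
    """
    if not edges:
        return set()                 # nothing left to cover
    if k == 0:
        return None                  # budget exhausted but edges remain
    u, v = edges[0]                  # any uncovered edge: one endpoint must be chosen
    for w in (u, v):
        remaining = [e for e in edges if w not in e]
        sub = k_vertex_cover(remaining, k - 1)
        if sub is not None:
            return sub | {w}
    return None

star = [(0, i) for i in range(1, 5)]
print(k_vertex_cover(star, 1))                       # → {0}
print(k_vertex_cover([(1, 2), (2, 3), (1, 3)], 1))   # → None (a triangle needs 2)
```

Because the subtrees share no state, the speedup from distributing them is limited mainly by load balance, which is why the experiments in the paper focus on the tree search phase.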


international colloquium on automata languages and programming | 1997

Efficient Parallel Graph Algorithms For Coarse Grained Multicomputers and BSP

Edson Norberto Cáceres; Frank K. H. A. Dehne; Afonso Ferreira; Paola Flocchini; Ingo Rieping; Alessandro Roncato; Nicola Santoro; Siang W. Song

In this paper, we present deterministic parallel algorithms for the coarse grained multicomputer (CGM) and bulk-synchronous parallel computer (BSP) models which solve the following well known graph problems: (1) list ranking, (2) Euler tour construction, (3) computing the connected components and spanning forest, (4) lowest common ancestor preprocessing, (5) tree contraction and expression tree evaluation, (6) computing an ear decomposition or open ear decomposition, (7) 2-edge connectivity and biconnectivity (testing and component computation), and (8) chordal graph recognition (finding a perfect elimination ordering). The algorithms for Problems 1–7 require O(log p) communication rounds and linear sequential work per round. Our results for Problems 1 and 2 hold for arbitrary ratios \(\frac{n}{p}\), i.e. they are fully scalable, and for Problems 3–8 it is assumed that \(\frac{n}{p} \geqslant p^\epsilon\) for some \(\epsilon > 0\), which is true for all commercially available multiprocessors. We view the algorithms presented as an important step towards the final goal of O(1) communication rounds. Note that the number of communication rounds obtained in this paper is independent of n and grows only very slowly with respect to p. Hence, for most practical purposes, the number of communication rounds can be considered as constant. The result for Problem 1 is a considerable improvement over those previously reported. The algorithms for Problems 2–7 are the first practically relevant deterministic parallel algorithms for these problems to be used for commercially available coarse grained parallel machines.
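List ranking (Problem 1) asks, for every node of a linked list, its distance to the end of the list. The classic PRAM formulation, pointer jumping in O(log n) synchronous rounds, is sketched below as a Python simulation; this is the textbook baseline, not the O(log p)-round CGM algorithm of the paper, and it illustrates why the problem is awkward on coarse grained machines: every round touches the entire list.

```python
def list_rank(succ):
    """Pointer-jumping list ranking.

    succ[i] is the successor of node i, or None for the tail.
    Returns rank[i] = number of links from i to the tail. Runs
    O(log n) synchronous rounds in which every node doubles the
    distance its pointer spans.
    """
    n = len(succ)
    rank = [0 if succ[i] is None else 1 for i in range(n)]
    nxt = list(succ)
    while any(x is not None for x in nxt):
        # Synchronous round: read the old arrays, write fresh ones.
        new_rank, new_nxt = list(rank), list(nxt)
        for i in range(n):                  # "in parallel" on a PRAM
            if nxt[i] is not None:
                new_rank[i] = rank[i] + rank[nxt[i]]
                new_nxt[i] = nxt[nxt[i]]
        rank, nxt = new_rank, new_nxt
    return rank

print(list_rank([1, 2, 3, None]))   # list 0 -> 1 -> 2 -> 3, prints [3, 2, 1, 0]
```

A CGM algorithm cannot afford a communication round per jump, which is why reducing the round count to O(log p), independent of n, is the contribution highlighted above.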


acm symposium on parallel algorithms and architectures | 1995

A randomized parallel 3D convex hull algorithm for coarse grained multicomputers

Frank K. H. A. Dehne; Xiaotie Deng; Patrick W. Dymond; Andreas Fabri; Ashfaq A. Khokhar

We present a randomized parallel algorithm for constructing the 3D convex hull on a generic p-processor coarse grained multicomputer with arbitrary interconnection network and n/p local memory per processor, where n ≥ p^(1+ε) (for some arbitrarily small ε > 0). For any given set of n points in 3-space, the algorithm computes the 3D convex hull, with high probability, in O((n log n)/p) local computation time and O(1) communication phases with at most O(n/p) data sent/received by each processor. That is, with high probability, the algorithm computes the 3D convex hull of an arbitrary point set in time O((n log n)/p + Γ(n, p)), where Γ(n, p) denotes the time complexity of one communication phase. In the terminology of the BSP model, our algorithm requires, with high probability, O(1) supersteps and a synchronization period Θ((n log n)/p). In the LogP model, the execution time of our algorithm is asymptotically optimal for several architectures.


Distributed and Parallel Databases | 2002

Parallelizing the Data Cube

Frank K. H. A. Dehne; Todd Eavis; Susanne E. Hambrusch; Andrew Rau-Chaplin

This paper presents a general methodology for the efficient parallelization of existing data cube construction algorithms. We describe two different partitioning strategies, one for top-down and one for bottom-up cube algorithms. Both partitioning strategies assign subcubes to individual processors in such a way that the loads assigned to the processors are balanced. Our methods reduce interprocessor communication overhead by partitioning the load in advance instead of computing each individual group-by in parallel. Our partitioning strategies create a small number of coarse tasks. This allows for sharing of prefixes and sort orders between different group-by computations. Our methods enable code reuse by permitting the use of existing sequential (external memory) data cube algorithms for the subcube computations on each processor. This supports the transfer of optimized sequential data cube code to a parallel setting.

The bottom-up partitioning strategy balances the number of single-attribute external memory sorts made by each processor. The top-down strategy partitions a weighted tree in which weights reflect algorithm-specific cost measures such as estimated group-by sizes. Both partitioning approaches can be implemented on any shared-disk type parallel machine composed of p processors connected via an interconnection fabric and with access to a shared parallel disk array.

We have implemented our parallel top-down data cube construction method in C++ with the MPI message passing library for communication and the LEDA library for the required graph algorithms. We tested our code on an eight-processor cluster, using a variety of different data sets with a range of sizes, dimensions, density, and skew. Comparison tests were performed on a SunFire 6800. The tests show that our partitioning strategies generate a close to optimal load balance between processors. The actual run times observed show an optimal speedup of p.
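The unit of parallel work here is a whole group-by, not a row: a d-dimensional table has 2^d group-bys, and each processor computes its assigned group-bys with existing sequential code. The toy below (Python; round-robin assignment stands in for the paper's cost-weighted partitioning, and all names are illustrative) shows the shape of that scheme.

```python
from itertools import combinations
from collections import defaultdict

def group_by(rows, dims, measure="m"):
    """Sequential aggregation: sum of `measure` per projection on `dims`."""
    agg = defaultdict(int)
    for row in rows:
        agg[tuple(row[d] for d in dims)] += row[measure]
    return dict(agg)

def parallel_cube(rows, dim_names, p):
    """All 2^d group-bys, partitioned among p simulated processors.

    Round-robin assignment of whole group-bys keeps tasks coarse; the
    paper instead balances load by estimated group-by cost and shares
    prefixes/sort orders between related group-bys."""
    tasks = [dims for r in range(len(dim_names) + 1)
             for dims in combinations(dim_names, r)]
    cube = {}
    for worker in range(p):
        for dims in tasks[worker::p]:          # this worker's subcubes
            cube[dims] = group_by(rows, dims)  # existing sequential code reused
    return cube

rows = [{"a": 1, "b": 1, "m": 2},
        {"a": 1, "b": 2, "m": 3},
        {"a": 2, "b": 1, "m": 5}]
cube = parallel_cube(rows, ("a", "b"), p=2)
print(cube[("a",)])   # → {(1,): 5, (2,): 5}
print(cube[()])       # → {(): 10}
```

Because each task is an entire group-by, processors never need to merge partial aggregates, which is the communication saving the abstract describes.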


Genome Medicine | 2009

Bridging the gap between systems biology and medicine

Gilles Clermont; Charles Auffray; Yves Moreau; David M. Rocke; Daniel Dalevi; Devdatt P. Dubhashi; Dana Marshall; Peter Raasch; Frank K. H. A. Dehne; Paolo Provero; Jesper Tegnér; Bruce J. Aronow; Michael A. Langston; Mikael Benson

Systems biology has matured considerably as a discipline over the last decade, yet some of the key challenges separating current research efforts in systems biology and clinically useful results are only now becoming apparent. As these gaps are better defined, the new discipline of systems medicine is emerging as a translational extension of systems biology. How is systems medicine defined? What are relevant ontologies for systems medicine? What are the key theoretic and methodologic challenges facing computational disease modeling? How are inaccurate and incomplete data, and uncertain biologic knowledge best synthesized in useful computational models? Does network analysis provide clinically useful insight? We discuss the outstanding difficulties in translating a rapidly growing body of data into knowledge usable at the bedside. Although core-specific challenges are best met by specialized groups, it appears fundamental that such efforts should be guided by a roadmap for systems medicine drafted by a coalition of scientists from the clinical, experimental, computational, and theoretic domains.


Evolutionary Bioinformatics | 2008

SPR Distance Computation for Unrooted Trees

Glenn Hickey; Frank K. H. A. Dehne; Andrew Rau-Chaplin; Christian Blouin

The subtree prune and regraft distance (dSPR) between phylogenetic trees is important both as a general means of comparing phylogenetic tree topologies and as a measure of lateral gene transfer (LGT). Although there has been extensive study of the computation of dSPR and similar metrics between rooted trees, much less is known about SPR distances for unrooted trees, which often arise in practice when the root is unresolved. We show that unrooted SPR distance computation is NP-hard and verify which techniques from related work can and cannot be applied. We then present an efficient heuristic algorithm for this problem and benchmark it on a variety of synthetic datasets. Our algorithm computes the exact SPR distance between unrooted trees; the heuristic element affects only the algorithm's computation time. Our method is a heuristic version of a fixed-parameter tractability (FPT) approach, and our experiments indicate that the running time behaves similarly to that of FPT algorithms. For real data sets, our algorithm was able to quickly compute dSPR for the majority of trees that were part of a study of LGT in 144 prokaryotic genomes. Our analysis of its performance, especially with respect to searching and reduction rules, is applicable to computing many related distance measures.


computing and combinatorics conference | 2005

An O(2^{O(k)} n^3) FPT algorithm for the undirected feedback vertex set problem

Frank K. H. A. Dehne; Michael R. Fellows; Michael A. Langston; Frances A. Rosamond; Kim Stevens

We describe an algorithm for the Feedback Vertex Set problem on undirected graphs, parameterized by the size k of the feedback vertex set, that runs in time O(c^k n^3) where c = 10.567 and n is the number of vertices in the graph. The best previous algorithms were based on the method of bounded search trees, branching on short cycles. The best previous running time of an FPT algorithm for this problem, due to Raman, Saurabh and Subramanian, has a parameter function of the form 2^{O(k log k / log log k)}. Whether a 2^{O(k)} FPT algorithm for this problem is possible has been previously noted as a significant challenge. Our algorithm is based on the new FPT technique of iterative compression. Our result holds for a more general “annotated” form of the problem, where a subset of the vertices may be marked as not belonging to the feedback set. We also establish “exponential optimality” for our algorithm by proving that no FPT algorithm with a parameter function of the form O(2^{o(k)}) is possible, unless there is an unlikely collapse of parameterized complexity classes, namely FPT = M[1].
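The iterative compression pattern itself is simple to state: add vertices one at a time, carry along a feedback vertex set of size at most k, and whenever it grows to k + 1, compress it back down or conclude that no size-k solution exists for the current subgraph (and hence for the full graph). In the sketch below (Python), the compression step is naive subset enumeration purely for illustration; the paper's contribution is an efficient O(c^k)-style compression routine, which this sketch does not reproduce.

```python
from itertools import combinations

def is_forest(vertices, edges):
    """Union-find acyclicity check on an undirected graph."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False            # this edge closes a cycle
        parent[ru] = rv
    return True

def compress(vertices, edges, k):
    """Find a feedback vertex set of size <= k, or None.
    Naive enumeration for illustration; the FPT algorithm replaces
    this with a routine exploiting the known (k+1)-sized solution."""
    for r in range(k + 1):
        for cand in combinations(sorted(vertices), r):
            rem = set(cand)
            if is_forest([v for v in vertices if v not in rem],
                         [e for e in edges if rem.isdisjoint(e)]):
                return rem
    return None

def fvs_iterative_compression(vertices, edges, k):
    seen, sol = [], set()
    for v in vertices:
        seen.append(v)
        sol = sol | {v}             # old solution plus v hits every cycle
        if len(sol) > k:
            cur = [e for e in edges if e[0] in seen and e[1] in seen]
            sol = compress(seen, cur, k)
            if sol is None:         # a subgraph needs > k, so the graph does too
                return None
    return sol

two_triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
print(fvs_iterative_compression(range(6), two_triangles, 1))  # → None
```

The key invariant is that after each step the current solution is a valid feedback vertex set of the graph induced so far, so the expensive work is confined to the compression calls.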

Collaboration


Dive into Frank K. H. A. Dehne's collaborations.

Top Co-Authors

Albert Chan

Fayetteville State University


Edson Norberto Cáceres

Federal University of Mato Grosso do Sul


Siang W. Song

University of São Paulo
