
Publication


Featured research published by Alan Wagner.


International Conference on Management of Data | 2001

Iceberg-cube computation with PC clusters

Raymond T. Ng; Alan Wagner; Yu Yin

In this paper, we investigate the approach of using low-cost PC clusters to parallelize the computation of iceberg-cube queries. We concentrate on techniques directed towards online querying of large, high-dimensional datasets where it is assumed that the total cube has not been precomputed. The algorithmic space we explore considers trade-offs between parallelism, computation and I/O. Our main contribution is the development and a comprehensive evaluation of various novel, parallel algorithms. Specifically: (1) Algorithm RP is a straightforward parallel version of BUC [BR99]; (2) Algorithm BPP attempts to reduce I/O by outputting results in a more efficient way; (3) Algorithm ASL, which maintains cells in a cuboid in a skiplist, is designed to put the utmost priority on load balancing; and (4) alternatively, Algorithm PT load-balances by using binary partitioning to divide the cube lattice as evenly as possible. We present a thorough performance evaluation of all these algorithms on a variety of parameters, including the dimensionality of the cube, the sparseness of the cube, the selectivity of the constraints, the number of processors, and the size of the dataset. A key finding is that it is not a one-algorithm-fits-all situation. We recommend a “recipe” which uses PT as the default algorithm, but may also deploy ASL under specific circumstances.
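As a point of reference, the iceberg condition itself (keep only group-by cells whose count meets a threshold) can be sketched sequentially. This is an illustrative sketch, not code from the paper; the function name and sample data are invented, and the parallel algorithms RP, BPP, ASL and PT partition this enumeration across cluster nodes rather than running it on one machine:

```python
from itertools import combinations
from collections import Counter

def iceberg_cube(rows, dims, min_count):
    """Enumerate every group-by over subsets of dims, keeping only
    cells whose tuple count meets min_count (the iceberg condition)."""
    result = {}
    for k in range(1, len(dims) + 1):
        for group in combinations(range(len(dims)), k):
            counts = Counter(tuple(r[d] for d in group) for r in rows)
            cells = {cell: c for cell, c in counts.items() if c >= min_count}
            if cells:  # omit group-bys with no surviving cells
                result[tuple(dims[d] for d in group)] = cells
    return result

rows = [("a", "x", 1), ("a", "x", 2), ("a", "y", 1), ("b", "x", 1)]
cube = iceberg_cube(rows, ["A", "B", "C"], min_count=2)
# e.g. the (A, B) group-by keeps only the cell ("a", "x") with count 2
```

The pruning opportunity BUC exploits is visible here: once a cell falls below the threshold, none of its refinements in higher-dimensional group-bys can pass it either.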


SIAM Journal on Computing | 1990

Embedding trees in a hypercube is NP-complete

Alan Wagner; Derek G. Corneil

An important family of graphs is the n-dimensional hypercube, the graph with 2^n nodes labelled 0, 1, ..., 2^n - 1, and an edge joining two nodes whenever their binary representations differ in a single coordinate. The problem of deciding if a given source graph is a partial subgraph of an n-dimensional cube has recently been shown to be NP-complete. In this paper the same problem on a very restricted family of source graphs, trees, is considered. It is shown that the problem of determining for a given tree T and integer k if T is a partial subgraph of a k-dimensional cube is NP-complete.
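The adjacency rule in this definition (two labels are joined exactly when they differ in one bit) is easy to make concrete. A small illustrative sketch, not code from the paper:

```python
def hypercube_edges(n):
    """Edges of the n-dimensional hypercube: nodes 0..2^n - 1 are
    joined whenever their binary labels differ in exactly one bit."""
    return [(u, u ^ (1 << b))        # flip bit b to get the neighbour
            for u in range(2 ** n)
            for b in range(n)
            if u < u ^ (1 << b)]     # emit each edge only once

edges = hypercube_edges(3)
# the 3-cube has n * 2^(n-1) = 12 edges
```

Since every edge is a single bit flip, the hypercube distance between two nodes is just the Hamming distance between their labels, which is what makes embedding questions like the one in this paper natural to state.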


Conference on High Performance Computing (Supercomputing) | 2005

SCTP versus TCP for MPI

Humaira Kamal; Brad Penoff; Alan Wagner

SCTP (Stream Control Transmission Protocol) is a recently standardized transport level protocol with several features that better support the communication requirements of parallel applications; these features are not present in traditional TCP (Transmission Control Protocol). These features make SCTP a good candidate as a transport level protocol for MPI (Message Passing Interface). MPI is a message passing middleware that is widely used to parallelize scientific and compute intensive applications. TCP is often used as the transport protocol for MPI in both local area and wide area networks. Prior to this work, SCTP has not been used for MPI. We compared and evaluated the benefits of using SCTP instead of TCP as the underlying transport protocol for MPI. We re-designed LAM-MPI, a public domain version of MPI, to use SCTP. We describe the advantages and disadvantages of using SCTP, the necessary modifications to the MPI middleware to use SCTP, and the performance of SCTP as compared to the stock implementation that uses TCP.


IEEE Transactions on Parallel and Distributed Systems | 1997

Performance models for the processor farm paradigm

Alan Wagner; Halsur V. Sreekantaswamy; Samuel T. Chanson

In this paper, we describe the design, implementation, and modeling of a runtime kernel to support the processor farm paradigm on multicomputers. We present a general topology-independent framework for obtaining performance models to predict the performance of the start-up, steady-state, and wind-down phases of a processor farm. An algorithm is described, which for any interconnection network determines a tree-structured subnetwork that optimizes farm performance. The analysis technique is applied to the important case of k-ary tree topologies. The models are compared with the measured performance on a variety of topologies using both constant and varied task sizes.
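The processor farm paradigm named in this paper's title is the classic master/worker structure: a master hands independent tasks to whichever worker is idle, which balances varying task sizes dynamically. A minimal sketch of that structure, using a thread pool as a stand-in for the multicomputer nodes; the names below are illustrative, not from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def farm(tasks, worker, n_workers=4):
    """Processor-farm skeleton: dispatch independent tasks to a pool
    of workers and collect results in task order. Start-up is pool
    creation, steady-state is the map, wind-down is pool shutdown."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, tasks))

results = farm(range(8), lambda x: x * x)
```

The paper's performance models capture what this sketch hides: on a real interconnection network, the start-up and wind-down phases and the topology of the task-distribution tree dominate achievable farm throughput.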


Journal of Parallel and Distributed Computing | 1989

Embedding arbitrary binary trees in a hypercube

Alan Wagner

An important issue in the design of algorithms for a hypercube multiprocessor is to minimize communication overhead by matching the computational structure of the algorithm to the underlying physical structure of the machine. One computational structure of particular interest is binary trees. Although in general any tree can be embedded in a sufficiently large hypercube, in practice it is important to minimize the number of dimensions needed by the embedding. The algorithm presented in this paper embeds an N node binary tree into a hypercube with expansion O(log N) and unit dilation.
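The quality measure used for such embeddings can be checked directly: dilation is the longest hypercube (Hamming) distance that any tree edge is stretched to, and unit dilation means every tree edge lands on an actual cube edge. A small illustrative sketch, not code from the paper:

```python
def dilation(tree_edges, labels):
    """Dilation of an embedding: max Hamming distance between the
    hypercube labels assigned to the endpoints of any tree edge."""
    return max(bin(labels[u] ^ labels[v]).count("1") for u, v in tree_edges)

# A 3-node path embedded along Gray-code order 00 -> 01 -> 11:
tree = [(0, 1), (1, 2)]
labels = {0: 0b00, 1: 0b01, 2: 0b11}
# consecutive Gray codes differ in one bit, so dilation is 1 (unit dilation)
```

Mapping node 2 to 0b10 instead would stretch the edge (1, 2) across two bit flips, giving dilation 2; constructions like the one in this paper avoid exactly that.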


IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2010

FG-MPI: Fine-grain MPI for multicore and clusters

Humaira Kamal; Alan Wagner

MPI (Message Passing Interface) has been successfully used in the high performance computing community for years and is the dominant programming model. Current implementations of MPI are coarse-grained, with a single MPI process per processor; however, there is nothing in the MPI specification precluding a finer-grain interpretation of the standard. We have implemented Fine-grain MPI (FG-MPI), a system that allows execution of hundreds or thousands of MPI processes on-chip or communicating between chips inside a cluster.


International Conference on Parallel and Distributed Systems | 2009

MPI-NeTSim: A Network Simulation Module for MPI

Brad Penoff; Alan Wagner; Michael Tüxen; Irene Rüngeler



Journal of Parallel and Distributed Computing | 1993

Embedding All Binary Trees in the Hypercube

Alan Wagner



High Performance Distributed Computing | 2010

Scalability of communicators and groups in MPI

Humaira Kamal; Seyed M. Mirtaheri; Alan Wagner



International Parallel and Distributed Processing Symposium | 2007

A Parallel Workflow for Real-time Correlation and Clustering of High-Frequency Stock Market Data

Camilo Rostoker; Alan Wagner; Holger H. Hoos


Collaboration


Dive into Alan Wagner's collaboration.

Top Co-Authors

Humaira Kamal, University of British Columbia
Camilo Rostoker, University of British Columbia
Mike Tsai, University of British Columbia
David Feldcamp, University of British Columbia
Michael Tüxen, Münster University of Applied Sciences
Chamath Keppitiyagama, University of British Columbia