
Publication


Featured research published by Lenore M. R. Mullin.


Archive | 1991

A Comparison of Array Theory and a Mathematics of Arrays

Michael A. Jenkins; Lenore M. R. Mullin

Array-based programming began with APL. Two mathematical treatments of array computations have evolved from the data concepts of APL. The first, More’s array theory, extends APL concepts to include nested arrays and systematic treatment of second order functions. More recently, Mullin has developed a mathematical treatment of flat arrays that is much closer to the original APL concepts. The two approaches are compared and evaluated.
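Mullin's flat-array treatment centers on an indexing function, often written psi, that selects subarrays by mapping a multi-index into a flat row-major layout. A minimal Python sketch (the names `gamma` and `psi` follow common Mathematics of Arrays expositions; the definitions here are illustrative, not quoted from the paper):

```python
import numpy as np

def gamma(index, shape):
    """Map a full multi-index to a flat row-major offset."""
    offset = 0
    for i, s in zip(index, shape):
        offset = offset * s + i
    return offset

def psi(index, array):
    """Select the subarray of `array` at a (possibly partial) multi-index."""
    flat = array.reshape(-1)
    rest = array.shape[len(index):]          # shape of the selected subarray
    start = gamma(list(index) + [0] * len(rest), array.shape)
    size = int(np.prod(rest)) if rest else 1
    return flat[start:start + size].reshape(rest)

a = np.arange(24).reshape(2, 3, 4)
# a partial index selects a whole subarray, as in APL
assert (psi((1,), a) == a[1]).all()
assert (psi((1, 2), a) == a[1, 2]).all()
```

Everything is computed on the flat representation plus shape information, which is the sense in which the treatment stays close to flat, original-APL arrays.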


ACM Transactions on Programming Languages and Systems | 2006

On minimizing materializations of array-valued temporaries

Daniel J. Rosenkrantz; Lenore M. R. Mullin; Harry B. Hunt

We consider the analysis and optimization of code utilizing operations and functions operating on entire arrays. Models are developed for studying the minimization of the number of materializations of array-valued temporaries in basic blocks, each consisting of a sequence of assignment statements involving array-valued variables. We derive lower bounds on the number of materializations required, and develop several algorithms minimizing the number of materializations, subject to a simple constraint on allowable statement rearrangement. In contrast, we also show that when statement rearrangement is unconstrained, minimizing the number of materializations becomes NP-complete, even for very simple basic blocks.
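The kind of saving at stake can be seen in a toy basic block. The NumPy sketch below only illustrates reusing a common array-valued subexpression; it is not the paper's model or algorithm:

```python
import numpy as np

a = np.arange(6.0)
b = a + 1.0
c = a * 2.0

# Evaluated naively, the array-valued temporary (a + b) is materialized twice:
e_naive = (a + b) + (a + b) * c

# Rewriting the block to reuse one materialization of the common
# subexpression reduces the number of array-valued temporaries:
t = a + b
e_opt = t + t * c

assert np.allclose(e_naive, e_opt)
```

The paper's harder question is how far such counts can be driven down across a whole basic block, with and without freedom to reorder statements.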


Concurrency and Computation: Practice and Experience | 1996

Effective data parallel computation using the Psi calculus

Lenore M. R. Mullin; Michael A. Jenkins

Large-scale scientific computing necessitates finding a way to match the high-level understanding of how a problem can be solved with the details of its computation in a processing environment organized as networks of processors. Effective utilization of parallel architectures can then be achieved by using formal methods to describe both computations and computational organizations within these networks. By returning to the mathematical treatment of a problem as a high-level numerical algorithm, we can express it as an algorithmic formalism that captures the inherent parallelism of the computation. We then give a meta-description of an architecture, followed by the use of transformational techniques to convert the high-level description into a program that utilizes the architecture effectively. The hope is that one formalism can be used to describe both computations and architectures, and that a methodology for automatically transforming computations can be developed. The formalism and methodology presented in the paper are a first step toward the ambitious goals described above. They use a theory of arrays, the Psi calculus, as the formalism, and two levels of conversions: one for simplification and another for data mapping.
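As an informal analogy for the "data mapping" level of conversion, reshaping can assign contiguous segments of an array to processors. This Python sketch is an assumption-laden illustration, not the Psi-calculus notation itself:

```python
import numpy as np

p = 4                  # number of processors in a meta-described network
v = np.arange(16.0)

# Map the flat vector onto p processors: row i is processor i's local segment.
local = v.reshape(p, -1)

# Each processor computes on its own segment; combining the partial
# results gives the same answer as the monolithic computation.
partial = np.array([seg.sum() for seg in local])
assert partial.sum() == v.sum()
```

In the paper, such mappings are derived by transformation from the high-level array expression rather than written by hand.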


Journal of Mathematical Modelling and Algorithms | 2002

Four Easy Ways to a Faster FFT

Lenore M. R. Mullin; Sharon G. Small

The Fast Fourier Transform (FFT) was named one of the top ten algorithms of the 20th century, and continues to be a focus of current research. A problem with currently used FFT packages is that they require large, finely tuned, machine-specific libraries produced by highly skilled software developers, and they therefore fail to perform well across a variety of architectures. Furthermore, many require repeated experiments to 're-program' the code to its optimal performance on a given machine's underlying hardware. Finally, it is difficult to know which radix to use for a particular vector size and machine configuration. We propose the use of monolithic array analysis as a way to remove the constraints imposed on performance by a machine's underlying hardware, by pre-optimizing array access patterns. In doing this we arrive at a single optimized program. We have achieved up to a 99.6% increase in performance, and the ability to run vectors up to 8,388,608 elements larger, on our experimental platforms. Preliminary experiments indicate that different radices perform better depending on a machine's underlying architecture.
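For reference, a textbook radix-2 Cooley-Tukey FFT makes the radix and the array access patterns (the bit-reversal permutation and the strided butterflies) explicit. This is the standard algorithm, not the paper's pre-optimized program:

```python
import cmath

def fft_radix2(x):
    """Iterative radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    assert n & (n - 1) == 0 and n > 0
    x = list(x)
    # Bit-reversal permutation: the non-unit-stride access pattern that
    # array-level analysis would aim to pre-optimize for a given machine.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # Butterfly stages with stride size // 2.
    size = 2
    while size <= n:
        w = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            wk = 1.0 + 0j
            for k in range(size // 2):
                u = x[start + k]
                v = x[start + k + size // 2] * wk
                x[start + k] = u + v
                x[start + k + size // 2] = u - v
                wk *= w
        size *= 2
    return x
```

Higher radices change both the butterfly shape and the memory-access stride, which is why the best radix depends on the machine configuration.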


Languages and Compilers for Parallel Computing | 2000

On Materializations of Array-Valued Temporaries

Daniel J. Rosenkrantz; Lenore M. R. Mullin; Harry B. Hunt

We present results demonstrating the usefulness of monolithic program analysis and optimization prior to scalarization. In particular, models are developed for studying nonmaterialization in basic blocks consisting of a sequence of assignment statements involving array-valued variables. We use these models to analyze the problem of minimizing the number of materializations in a basic block, and to develop an efficient algorithm for minimizing the number of materializations in certain cases.


Network and Parallel Computing | 2009

Search Space Reduction Technique for Distributed Multiple Sequence Alignment

Manal Helal; Lenore M. R. Mullin; John Potter; Vitali Sintchenko

To take advantage of the various High Performance Computer (HPC) architectures for multithreaded and distributed computing, this paper parallelizes the dynamic programming algorithm for Multiple Sequence Alignment (MSA). A novel definition of a hyper-diagonal through a tensor space is used to reduce the search space. Experiments demonstrate that scoring less than 1% of the search space produces the same optimal results as scoring the full search space. The alignment scores are often better than other heuristic methods and are capable of aligning more divergent sequences.
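The idea of scoring only a band around the hyper-diagonal of the DP tensor can be sketched as follows. The distance measure and `radius` parameter here are illustrative choices, not the paper's exact definition:

```python
import itertools

def near_hyperdiagonal(shape, radius):
    """Yield indices of a DP tensor within `radius` (Chebyshev distance)
    of the hyper-diagonal running from the origin to the opposite corner."""
    far = [s - 1 for s in shape]
    for idx in itertools.product(*(range(s) for s in shape)):
        # project onto the diagonal by averaging each axis's fractional position
        t = sum(i / f for i, f in zip(idx, far) if f) / len(shape)
        if max(abs(i - t * f) for i, f in zip(idx, far)) <= radius:
            yield idx

shape = (12, 12, 12)
kept = sum(1 for _ in near_hyperdiagonal(shape, 1.0))
total = 12 ** 3   # the band covers a small fraction of the full search space
```

Even in this toy three-sequence tensor the band is a small fraction of the cells, and the fraction shrinks rapidly as dimensions grow, which is the effect the paper exploits.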


International Symposium on Parallel and Distributed Processing and Applications | 2008

Parallelizing Optimal Multiple Sequence Alignment by Dynamic Programming

Manal Helal; Hossam A. ElGindy; Lenore M. R. Mullin; Bruno A. Gaëta

Optimal multiple sequence alignment by dynamic programming, like many highly dimensional scientific computing problems, has failed to benefit from the improvements in computing performance brought about by multi-processor systems, due to the lack of a suitable scheme to manage partitioning and dependencies. A scheme for parallel implementation of dynamic programming multiple sequence alignment is presented, based on a peer-to-peer design and a multidimensional array indexing method. This design results in up to a 5-fold improvement compared to a previously described master/slave design, and scales favourably with the number of processors used. This study demonstrates an approach for parallelising multi-dimensional dynamic programming and similar algorithms on multi-processor architectures.
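The dependency structure that makes such parallelization possible is the wavefront property: all DP cells on one anti-diagonal are mutually independent. A two-sequence Python sketch of wavefront-ordered scoring (the paper generalizes this to a k-dimensional tensor with peer-to-peer scheduling, which is not reproduced here):

```python
def wavefront_edit_distance(s, t):
    """Edit distance computed wave by wave: every cell with i + j == w
    depends only on earlier waves, so each wave could be scored in parallel."""
    n, m = len(s), len(t)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for w in range(n + m + 1):                       # anti-diagonal index
        for i in range(max(0, w - m), min(n, w) + 1):
            j = w - i
            if i == 0:
                D[i][j] = j                          # delete all of t[:j]
            elif j == 0:
                D[i][j] = i                          # delete all of s[:i]
            else:
                cost = 0 if s[i - 1] == t[j - 1] else 1
                D[i][j] = min(D[i - 1][j] + 1,       # deletion
                              D[i][j - 1] + 1,       # insertion
                              D[i - 1][j - 1] + cost)  # match/substitute
    return D[n][m]

assert wavefront_edit_distance("kitten", "sitting") == 3
```

In k dimensions the waves are the sets of cells whose indices sum to the same value, and the multidimensional array indexing method assigns those cells to peers.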


Archive | 2009

Future Directions in Tensor-Based Computation and Modeling

Evrim Acar; Robert J. Harrison; Frank Olken; Orly Alter; Manal Helal; Larsson Omberg; Brett W. Bader; Anthony Kennedy; Zhaojun Bai; Dongmin Kim; Robert J. Plemmons; Gregory Beylkin; Tamara G. Kolda; Stefan Ragnarsson; Lieven DeLathauwer; Julien Langou; Sri Priya Ponnapalli; Inderjit S. Dhillon; Lek-Heng Lim; J. Ram Ramanujam; Chris Ding; Michael W. Mahoney; James E. Raynolds; Carla D. Moravitz Martin; Phillip Regalia; Petros Drineas; Martin J. Mohlenkamp; Saday Sadayappan; Christos Faloutsos; Jason Morton


arXiv: Mathematical Software | 2008

Conformal Computing: Algebraically connecting the hardware/software boundary using a uniform approach to high-performance computation for software and hardware applications

Lenore M. R. Mullin; James E. Raynolds


arXiv: Software Engineering | 2008

A Transformation-Based Approach for the Design of Parallel/Distributed Scientific Software: the FFT

Harry B. Hunt; Lenore M. R. Mullin; Daniel J. Rosenkrantz; James E. Raynolds

Collaboration


Dive into Lenore M. R. Mullin's collaborations.

Top Co-Authors

James E. Raynolds, State University of New York System
Bruno A. Gaëta, University of New South Wales
Hossam A. ElGindy, University of New South Wales
Evrim Acar, University of Copenhagen
John Potter, University of New South Wales
Brett W. Bader, Sandia National Laboratories