
Publication


Featured research published by Mark T. Jones.


4th International Meshing Roundtable, Albuquerque, NM (United States), 16-17 Oct 1995 | 1995

An efficient parallel algorithm for mesh smoothing

L. Freitag; Paul E. Plassmann; Mark T. Jones

Automatic mesh generation and adaptive refinement methods have proven to be very successful tools for the efficient solution of complex finite element applications. A problem with these methods is that they can produce poorly shaped elements; such elements are undesirable because they introduce numerical difficulties in the solution process. However, the shape of the elements can be improved through the determination of new geometric locations for mesh vertices by using a mesh smoothing algorithm. In this paper the authors present a new parallel algorithm for mesh smoothing that has a fast parallel runtime both in theory and in practice. The authors present an efficient implementation of the algorithm that uses non-smooth optimization techniques to find the new location of each vertex. Finally, they present experimental results obtained on the IBM SP system demonstrating the efficiency of this approach.
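The vertex-relocation idea can be sketched with simple Laplacian smoothing, where each interior vertex moves to the centroid of its neighbors. The paper itself uses nonsmooth optimization to place each vertex, so this is only an illustrative stand-in; the `smooth` helper and the toy mesh are hypothetical.

```python
# Sketch of mesh smoothing: move each interior vertex toward the
# centroid of its neighbors (Laplacian smoothing). The paper uses
# nonsmooth optimization per vertex; this simpler rule illustrates
# the same "relocate vertices to improve element shape" idea.

def smooth(coords, adjacency, interior, iterations=10):
    """coords: {vertex: (x, y)}; adjacency: {vertex: [neighbors]}."""
    coords = dict(coords)
    for _ in range(iterations):
        new = {}
        for v in interior:
            nbrs = adjacency[v]
            new[v] = (sum(coords[u][0] for u in nbrs) / len(nbrs),
                      sum(coords[u][1] for u in nbrs) / len(nbrs))
        coords.update(new)
    return coords

# One badly placed interior vertex surrounded by the four fixed
# corners of a unit square:
coords = {0: (0.9, 0.9), 1: (0, 0), 2: (1, 0), 3: (1, 1), 4: (0, 1)}
adjacency = {0: [1, 2, 3, 4]}
print(smooth(coords, adjacency, interior=[0])[0])  # -> (0.5, 0.5)
```

The boundary vertices stay fixed, which mirrors how smoothing is usually applied only to interior vertices of a mesh.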


SIAM Journal on Matrix Analysis and Applications | 1992

Two-stage and multisplitting methods for the parallel solution of linear systems

Daniel B. Szyld; Mark T. Jones

Two-stage and multisplitting methods for the parallel solution of linear systems are studied. A two-stage multisplitting method is presented that reduces to each of the others in particular cases. Conditions for its convergence are given. In the particular case of a multisplitting method related to block Jacobi, it is shown that it is equivalent to a two-stage method with only one inner iteration per outer iteration. A fixed number of iterations of this method, say, p, is compared with a two-stage method with p inner iterations. The asymptotic rate of convergence of the first method is faster, but, depending on the structure of the matrix and the parallel architecture, it takes more time to converge. This is illustrated with numerical experiments.
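A minimal sketch of a two-stage iteration, assuming a Gauss-Seidel-like outer splitting A = M - N and a Jacobi inner splitting M = F - G; the `two_stage` helper and the small test matrix are illustrative, not the paper's notation or experiments.

```python
import numpy as np

def two_stage(A, b, M, F, outer_iters=50, inner_iters=3):
    """Outer splitting A = M - N; inner splitting M = F - G.
    Each outer step runs a few inner iterations instead of
    solving M y = N x + b exactly."""
    N = M - A
    G = F - M
    x = np.zeros_like(b)
    for _ in range(outer_iters):
        rhs = N @ x + b          # right-hand side of the outer step
        y = x.copy()
        for _ in range(inner_iters):
            y = np.linalg.solve(F, G @ y + rhs)
        x = y
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M = np.tril(A)               # outer: Gauss-Seidel-like splitting
F = np.diag(np.diag(A))      # inner: Jacobi splitting of M
x = two_stage(A, b, M, F)
print(np.allclose(A @ x, b, atol=1e-8))  # True
```

Varying `inner_iters` shows the trade-off the abstract describes: more inner iterations make each outer step closer to the exact splitting method, at higher cost per step.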


SIAM Journal on Scientific Computing | 1997

Parallel Algorithms for Adaptive Mesh Refinement

Mark T. Jones; Paul E. Plassmann

Computational methods based on the use of adaptively constructed nonuniform meshes reduce the amount of computation and storage necessary to perform many scientific calculations. The adaptive construction of such nonuniform meshes is an important part of these methods. In this paper, we present a parallel algorithm for adaptive mesh refinement that is suitable for implementation on distributed-memory parallel computers. Experimental results obtained on the Intel DELTA are presented to demonstrate that for scientific computations involving the finite element method, the algorithm exhibits scalable performance and has a small run time in comparison with other aspects of the scientific computations examined. It is also shown that the algorithm has a fast expected running time under the parallel random access machine (PRAM) computation model.
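The paper's contribution is the distributed-memory parallel algorithm; as background, one common serial refinement rule for triangles, longest-edge bisection, can be sketched as below. The `bisect_longest_edge` helper and the toy triangle are hypothetical.

```python
# Sketch of one refinement step: bisect a triangle across its
# longest edge, a common rule in adaptive refinement that keeps
# element quality bounded. Triangles are vertex-index triples;
# coords maps a vertex index to its (x, y) position.

def bisect_longest_edge(tri, coords, next_id):
    """Split `tri` into two children across its longest edge.
    Returns (child_a, child_b, midpoint_vertex_id)."""
    def dist2(u, v):
        (x1, y1), (x2, y2) = coords[u], coords[v]
        return (x1 - x2) ** 2 + (y1 - y2) ** 2
    a, b, c = tri
    # Pair each edge with its opposite vertex, pick the longest edge.
    edges = [((a, b), c), ((b, c), a), ((c, a), b)]
    (u, v), w = max(edges, key=lambda e: dist2(*e[0]))
    mid = next_id
    coords[mid] = ((coords[u][0] + coords[v][0]) / 2,
                   (coords[u][1] + coords[v][1]) / 2)
    return (u, mid, w), (mid, v, w), mid

coords = {0: (0.0, 0.0), 1: (4.0, 0.0), 2: (0.0, 3.0)}
t1, t2, mid = bisect_longest_edge((0, 1, 2), coords, next_id=3)
print(coords[3], t1, t2)  # (2.0, 1.5) (1, 3, 0) (3, 2, 0)
```

In an adaptive loop, the bisection propagates to neighboring triangles to keep the mesh conforming; coordinating that propagation across processors is where the parallel algorithm comes in.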


Finite Elements in Analysis and Design | 1997

Adaptive refinement of unstructured finite-element meshes

Mark T. Jones; Paul E. Plassmann

The finite-element method used in conjunction with adaptive mesh refinement algorithms can be an efficient tool in many scientific and engineering applications. In this paper we review algorithms for the adaptive refinement of unstructured simplicial meshes (triangulations and tetrahedralizations). We discuss bounds on the quality of the meshes resulting from these refinement algorithms. Unrefinement and refinement along curved surfaces are also discussed. Finally, we give an overview of recent developments in parallel refinement algorithms.


Computing Systems in Engineering | 1994

Computational Results for Parallel Unstructured Mesh Computations

Mark T. Jones; Paul E. Plassmann

The majority of finite element models in structural engineering are composed of unstructured meshes. These unstructured meshes are often very large and require significant computational resources; hence they are excellent candidates for massively parallel computation. Parallel solution of the sparse matrices that arise from such meshes has been studied heavily, and many good algorithms have been developed. Unfortunately, many of the other aspects of parallel unstructured mesh computation have gone largely ignored. The authors present a set of algorithms that allow the entire unstructured mesh computation process to execute in parallel -- including adaptive mesh refinement, equation reordering, mesh partitioning, and sparse linear system solution. They briefly describe these algorithms and state results regarding their running-time and performance. They then give results from the 512-processor Intel DELTA for a large-scale structural analysis problem. These results demonstrate that the new algorithms are scalable and efficient. The algorithms are able to achieve up to 2.2 gigaflops for this unstructured mesh problem.


IEEE International Conference on High Performance Computing, Data, and Analytics | 1994

Parallel algorithms for the adaptive refinement and partitioning of unstructured meshes

Mark T. Jones; Paul E. Plassmann

The efficient solution of many large-scale scientific calculations depends on adaptive mesh strategies. We present new parallel algorithms to solve two significant problems that arise in this context: the generation of the adaptive mesh and the mesh partitioning. The crux of our refinement algorithm is the identification of independent sets of elements that can be refined in parallel. The objective of our partitioning heuristic is to construct partitions with good aspect ratios. We present run-time bounds and computational results obtained on the Intel DELTA for these algorithms. These results demonstrate that the algorithms exhibit scalable performance and have run-times small in comparison with other aspects of the computation.
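The independent-set idea can be sketched with Luby-style random priorities: each element draws a random number, and an element is selected when its number beats those of all its neighbors, so no two selected elements conflict. This is only an illustration of the principle; the paper's identification procedure may differ in detail.

```python
import random

# Sketch: elements whose random priority exceeds that of every
# neighbor form an independent set and can be refined in parallel
# without conflicting with each other.

def independent_set(neighbors, rng):
    """neighbors: {element: set of adjacent elements}."""
    prio = {e: rng.random() for e in neighbors}
    return {e for e in neighbors
            if all(prio[e] > prio[n] for n in neighbors[e])}

# A 5-element path graph: 0-1-2-3-4
nbrs = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
s = independent_set(nbrs, random.Random(0))
# No two chosen elements are adjacent:
print(all(n not in s for e in s for n in nbrs[e]))  # True
```

Repeating this selection on the remaining elements processes the whole mesh in a sequence of conflict-free parallel rounds.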


Journal of Parallel and Distributed Computing | 1996

Parallel Heuristics for Improved, Balanced Graph Colorings

Robert K. Gjertsen; Mark T. Jones; Paul E. Plassmann

The computation of good, balanced graph colorings is an essential part of many algorithms required in scientific and engineering applications. Motivated by an effective sequential heuristic, we introduce a new parallel heuristic, PLF, and show that this heuristic has the same expected runtime under the PRAM computational model as the scalable coloring heuristic introduced by Jones and Plassmann. We present experimental results performed on the Intel DELTA that demonstrate that this new heuristic consistently generates better colorings and requires only slightly more time than the JP heuristic. In the second part of the paper we introduce two new parallel color-balancing heuristics, PDR(k) and PLF(k). We show that these heuristics have the desirable property that they do not increase the number of colors used by an initial coloring during the balancing process. We present experimental results that show that these heuristics are very effective in obtaining balanced colorings and, in addition, exhibit scalable performance.
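The round structure of a Jones-Plassmann-style coloring can be simulated sequentially: every vertex draws a random priority, and in each round the vertices whose priority exceeds that of all their uncolored neighbors take the smallest color not used by an already-colored neighbor. This sketch omits the PLF ordering and the PDR(k)/PLF(k) balancing heuristics; the `jp_color` helper and the toy graph are illustrative.

```python
import random

# Sequential simulation of a Jones-Plassmann-style parallel
# coloring. On a parallel machine, all "ready" vertices of a
# round are colored simultaneously; no two adjacent vertices
# can be ready in the same round, so the coloring stays proper.

def jp_color(adj, rng):
    prio = {v: rng.random() for v in adj}
    color, uncolored = {}, set(adj)
    while uncolored:
        ready = [v for v in uncolored
                 if all(u in color or prio[v] > prio[u] for u in adj[v])]
        for v in ready:
            used = {color[u] for u in adj[v] if u in color}
            color[v] = min(c for c in range(len(adj)) if c not in used)
        uncolored -= set(ready)
    return color

# A 4-cycle:
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
c = jp_color(adj, random.Random(1))
print(all(c[u] != c[v] for u in adj for v in adj[u]))  # True
```

The number of rounds corresponds to the parallel running time the paper analyzes under the PRAM model.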


Archive | 1993

The Efficient Parallel Iterative Solution of Large Sparse Linear Systems

Mark T. Jones; Paul E. Plassmann

The development of efficient, general-purpose software for the iterative solution of sparse linear systems on parallel MIMD computers depends on recent results from a wide variety of research areas. Parallel graph heuristics, convergence analysis, and basic linear algebra implementation issues must all be considered.


SIAM Journal on Matrix Analysis and Applications | 1993

Bunch-Kaufman factorization for real symmetric indefinite banded matrices

Mark T. Jones; Merrell L. Patrick

The Bunch–Kaufman algorithm for factoring symmetric indefinite matrices has been rejected for banded matrices because it destroys the banded structure of the matrix. Herein, it is shown that for a subclass of real symmetric matrices which arise in solving the generalized eigenvalue problem using the Lanczos method, the Bunch–Kaufman algorithm does not result in major destruction of the bandwidth. Space/time complexities of the algorithm are given and used to show that the Bunch–Kaufman algorithm is a significant improvement over banded LU factorization. Timing comparisons are used to show the advantage held by the authors’ implementation of Bunch–Kaufman over the implementation of the multifrontal algorithm for indefinite factorization in MA27 when factoring this subclass of matrices.
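For a concrete picture of the factorization itself, `scipy.linalg.ldl` computes a symmetric indefinite LDL^T factorization with Bunch-Kaufman-style pivoting. The small demonstration below assumes SciPy is available and does not exploit banded storage, unlike the authors' implementation; the example matrix is arbitrary.

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric indefinite tridiagonal (banded) matrix: the diagonal
# changes sign, so a Cholesky factorization does not exist, but an
# LDL^T factorization with 1x1/2x2 pivots on D does.
n = 6
A = (np.diag(np.linspace(-2.0, 2.0, n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

L, D, perm = ldl(A)              # Bunch-Kaufman-style pivoted LDL^T
print(np.allclose(L @ D @ L.T, A))  # True
```

The paper's point is that, for the subclass of matrices arising in the Lanczos generalized eigenvalue context, this style of pivoting does not badly destroy the band, making it preferable to banded LU.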


SIAM Journal on Matrix Analysis and Applications | 1994

Factoring Symmetric Indefinite Matrices on High-Performance Architectures

Mark T. Jones; Merrell L. Patrick

The Bunch-Kaufman algorithm is the method of choice for factoring symmetric indefinite matrices in many applications. However, the Bunch-Kaufman algorithm uses matrix- vector operations and, therefore, may not take full advantage of high-performance architectures with a memory hierarchy. It is possible to modify the Bunch-Kaufman algorithm so that it uses rank-

Collaboration


Dive into Mark T. Jones's collaborations.

Top Co-Authors

Paul E. Plassmann
Pennsylvania State University

Lori A. Freitag
Argonne National Laboratory

Paul E. Plassmann
Argonne National Laboratory