Publication


Featured research published by James Coyle.


Concurrency and Computation: Practice and Experience | 2003

MPI-CHECK: a tool for checking Fortran 90 MPI programs

Glenn R. Luecke; Hua Chen; James Coyle; Jim Hoekstra; Marina Kraeva; Yan Zou

MPI is commonly used to write parallel programs for distributed memory parallel computers. MPI-CHECK is a tool developed to aid in the debugging of MPI programs that are written in free or fixed format Fortran 90 and Fortran 77. MPI-CHECK provides automatic compile-time and run-time checking of MPI programs. MPI-CHECK automatically detects the following problems in the use of MPI routines: (i) mismatch in argument type, kind, rank or number; (ii) messages which exceed the bounds of the source/destination array; (iii) negative message lengths; (iv) illegal MPI calls before MPI_INIT or after MPI_FINALIZE; (v) inconsistencies between the declared type of a message and its associated DATATYPE argument; and (vi) actual arguments which violate the INTENT attribute.
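
MPI-CHECK itself targets Fortran, but the error classes listed above are easy to picture in any MPI binding. The hypothetical C fragment below (not from the paper) contains two of them: a message that exceeds the bounds of its buffer, and a buffer whose declared type is inconsistent with the DATATYPE argument.

    /* Hypothetical fragment showing two of the error classes listed above:
     * (ii) a message exceeding the bounds of the source array, and
     * (v) a declared type inconsistent with the DATATYPE argument. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int buf[10];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* buf holds 10 ints, but 100 elements are sent */
            MPI_Send(buf, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* buffer is int, but the datatype argument says MPI_DOUBLE */
            MPI_Recv(buf, 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }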


Concurrency and Computation: Practice and Experience | 2002

Deadlock detection in MPI programs

Glenn R. Luecke; Yan Zou; James Coyle; Jim Hoekstra; Marina Kraeva

The Message-Passing Interface (MPI) is commonly used to write parallel programs for distributed memory parallel computers. MPI-CHECK is a tool developed to aid in the debugging of MPI programs that are written in free or fixed format Fortran 90 and Fortran 77. This paper presents the methods used in MPI-CHECK 2.0 to detect many situations where actual and potential deadlocks occur when using blocking and non-blocking point-to-point routines as well as when using collective routines.
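
A minimal sketch of the kind of deadlock such a tool has to catch, assuming a job with exactly two ranks (hypothetical code, not from the paper): both processes enter a blocking MPI_Recv before either one sends, so neither receive can ever complete.

    /* Both ranks block in MPI_Recv waiting for a message the other rank
     * has not yet sent, so the program hangs: an actual deadlock. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, val = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int other = 1 - rank;   /* assumes the job is run with exactly 2 ranks */

        MPI_Recv(&val, 1, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&rank, 1, MPI_INT, other, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }

Swapping the order of the send and receive on one of the two ranks removes the deadlock.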


Concurrency and Computation: Practice and Experience | 2006

A survey of systems for detecting serial run-time errors

Glenn R. Luecke; James Coyle; Jim Hoekstra; Marina Kraeva; Ying Li; Olga Taborskaia; Yanmei Wang

This paper evaluates the ability of a variety of commercial and non-commercial software products to detect serial run-time errors in C and C++ programs, to issue meaningful messages, and to give the line in the source code where the error occurred. The commercial products Insure++ and Purify performed the best of all the software products we evaluated. Error messages were usually better and clearer when using Insure++ than when using Purify. Our evaluation shows that the overall run-time error detection capability of the non-commercial products is significantly lower than that of both Purify and Insure++. Of all non-commercial products evaluated, Mpatrol provided the best overall capability to detect run-time errors in C and C++ programs.
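
For context, here is a hypothetical C fragment (not from the paper) with two of the serial run-time errors such tools are expected to report, ideally naming the offending source line:

    #include <stdlib.h>

    int main(void)
    {
        int *a = malloc(10 * sizeof *a);

        a[10] = 42;       /* write one element past the end of the heap block */

        free(a);
        int x = a[0];     /* read through the pointer after it was freed */

        return x;
    }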


IWCC '99. IEEE Computer Society International Workshop on Cluster Computing | 1999

Comparing the communication performance and scalability of a Linux and a NT cluster of PCs, a Cray Origin 2000, an IBM SP and a Cray T3E-600

Glenn R. Luecke; Bruno Raffin; James Coyle

The paper presents scalability and communication performance results for a cluster of PCs running Linux with the GM communication library, a cluster of PCs running Windows NT with the HPVM communication library, a Cray T3E-600, an IBM SP and a Cray Origin 2000. Both PC clusters used a Myrinet network. Six communication tests using MPI routines were run for a variety of message sizes and numbers of processors. The tests were chosen to represent commonly used communication patterns, ranging from low contention (a ping-pong between processors, a right shift, a binary tree broadcast and a synchronization barrier) to high contention (a naive broadcast and an all-to-all). For most of the tests, the T3E provides the best performance and scalability. For an 8 byte message the NT cluster performs about the same as the T3E for most of the tests. For all the tests but one, the T3E, the Origin and the SP outperform the two clusters for the largest message size (10 Kbytes or 1 Mbyte).
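
As a rough illustration of the lowest-contention pattern, the sketch below (hypothetical code, not the authors' test harness) times an MPI ping-pong between ranks 0 and 1 and averages the round-trip time over many repetitions:

    #include <mpi.h>
    #include <stdio.h>

    #define NBYTES 10240   /* e.g. the 10 Kbyte message size used in the paper */
    #define REPS   1000

    int main(int argc, char **argv)
    {
        char buf[NBYTES];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("average round trip: %g seconds\n", (t1 - t0) / REPS);

        MPI_Finalize();
        return 0;
    }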


Parallel Tools Workshop | 2010

The Importance of Run-Time Error Detection

Glenn R. Luecke; James Coyle; James Hoekstra; Marina Kraeva; Ying Xu; Mi-Young Park; Elizabeth Kleiman; Olga Weiss; Andre Wehe; Melissa Yahya

The ability of system software to detect and issue error messages that help programmers quickly fix serial and parallel run-time errors is an important productivity criterion for developing and maintaining application programs. Over ten thousand run-time error tests and a run-time error detection (RTED) evaluation tool have been developed for the automatic evaluation of run-time error detection capabilities for serial errors and for parallel errors in MPI, OpenMP and UPC programs. Evaluation results, tests and the RTED evaluation tool are freely available at http://rted.public.iastate.edu. Many compilers, tools and run-time systems scored poorly on these tests. The authors make recommendations for providing better RTED in the future.


Computer Science - Research and Development | 2013

UPC-CHECK: a scalable tool for detecting run-time errors in Unified Parallel C

James Coyle; Indranil Roy; Marina Kraeva; Glenn R. Luecke

Unified Parallel C (UPC) is a language used to write parallel programs for distributed memory parallel computers. UPC-CHECK (http://hpcgroup.public.iastate.edu/UPC-CHECK/) is a scalable tool developed to automatically detect argument errors in UPC functions and deadlocks in UPC programs at run-time, and to issue high-quality error messages that help programmers quickly fix those errors. The run-time complexity of all detection techniques used is optimal, i.e. O(1), except for deadlocks involving locks, where it is theoretically known to be linear in the number of threads. The tool is easy to use and involves merely replacing the compiler command with upc-check. Error messages issued by UPC-CHECK were evaluated using the UPC RTED test suite for argument errors in UPC functions and deadlocks. Results of these tests show that the error messages issued by UPC-CHECK for these tests are excellent.
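
A minimal UPC sketch (UPC is a parallel extension of C; this example is hypothetical and not from the UPC-CHECK test suite) of one deadlock pattern such a tool reports: one thread skips a barrier that every other thread enters.

    #include <upc.h>
    #include <stdio.h>

    int main(void)
    {
        if (MYTHREAD != 0) {
            upc_barrier;   /* every thread except 0 reaches this barrier */
        }
        /* thread 0 never enters the barrier, so the other threads wait forever */

        printf("thread %d of %d done\n", MYTHREAD, THREADS);
        return 0;
    }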


Proceedings of the Third Conference on Partitioned Global Address Space Programing Models | 2009

Evaluating error detection capabilities of UPC run-time systems

Glenn R. Luecke; James Coyle; James Hoekstra; Marina Kraeva; Ying Xu; Elizabeth Kleiman; Olga Weiss

The ability of system software to detect run-time errors and issue messages that help programmers quickly fix these errors is an important productivity criterion for developing and maintaining application programs. To evaluate this capability for Unified Parallel C (UPC), over two thousand run-time error tests and a run-time error detection (RTED) evaluation tool have been developed. For each error message issued, the RTED evaluation tool assigns a score from 0 to 5 based on the usefulness of the information in the message to help a programmer quickly fix the error. The RTED evaluation tool calculates averages over each error category and then prints the results. All tests and the RTED evaluation tool are freely available at the RTED web site http://rted.public.iastate.edu/UPC. The Cray, Berkeley, HP and GNU UPC compilers have been evaluated and results posted on this same web site.


Software - Practice and Experience | 1991

Evaluation of Fortran vector compilers and preprocessors

Glenn R. Luecke; Wagar Haque; Jim Hoekstra; Howard W. Jespersen; James Coyle

Many scientific codes can achieve significant performance improvement when executed on a computer equipped with a vector processor. Vector constructs in source code should be recognized by a vectorizing compiler or preprocessor. This paper discusses, from a general point of view, how a vectorizing compiler/preprocessor can be evaluated. The areas discussed include data dependence analysis, IF loop analysis, nested loops, loop interchanging, loop collapsing, indirect addressing, use of temporary storage, and order of arithmetic. The ideas presented are based on vectorization of over a million lines of production codes and an extensive test suite developed to evaluate preprocessors under varying degrees of code complexity. Areas for future research are also discussed.
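
The paper concerns Fortran vectorizers, but the central question, whether a loop carries a data dependence, is language independent. The hypothetical C loops below (not from the paper's test suite) contrast the two cases:

    /* Independent iterations: a vectorizing compiler should vectorize this loop. */
    void scale_add(int n, float alpha, const float *x, float *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = alpha * x[i] + y[i];
    }

    /* Loop-carried dependence (a[i] uses a[i-1]): cannot be vectorized naively. */
    void running_sum(int n, float *a)
    {
        for (int i = 1; i < n; i++)
            a[i] = a[i] + a[i - 1];
    }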


Archive | 1999

Comparing the Scalability of the Cray T3E-600 and the Cray Origin 2000 Using SHMEM Routines

Glenn R. Luecke; Bruno Raffin; James Coyle


Archive | 1999

The Performance of the MPI Collective Communication Routines for Large Messages on the Cray T3E-600

Glenn R. Luecke; Bruno Raffin; James Coyle

Collaboration


Dive into James Coyle's collaborations.

Top Co-Authors

Yan Zou
Iowa State University

Ying Li
Iowa State University