Paul D. Coddington
Syracuse University
Publications
Featured research published by Paul D. Coddington.
International Journal of Modern Physics C | 1994
Paul D. Coddington
Monte Carlo simulation is one of the main applications involving the use of random number generators. It is also one of the best methods of testing the randomness properties of such generators, by comparing results of simulations using different generators with each other, or with analytic results. Here we compare the performance of some popular random number generators by high precision Monte Carlo simulation of the 2-d Ising model, for which exact results are known, using the Metropolis, Swendsen-Wang, and Wolff Monte Carlo algorithms. Many widely used generators that perform well in standard statistical tests are shown to fail these Monte Carlo tests.
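As a rough illustration of the kind of test described above (this is a minimal sketch, not the code used in the paper, and the function name and parameters are invented for the example), the following runs Metropolis sweeps of the 2-d Ising model at the exact critical coupling with the generator under test passed in explicitly. A biased generator shows up as a systematic deviation of the measured nearest-neighbor correlation from the exact critical value 1/sqrt(2).

# Minimal sketch, not the paper's code: Metropolis sweeps of the 2-d Ising
# model at the exact critical coupling, with the generator under test
# injected. Compare the returned bond correlation against 1/sqrt(2).
import math
import random

def metropolis_bond_correlation(rng, L=32, sweeps=20000, therm=5000):
    beta = 0.5 * math.log(1.0 + math.sqrt(2.0))   # exact critical beta (J = 1)
    spin = [[1] * L for _ in range(L)]
    corr_sum, samples = 0.0, 0
    for sweep in range(sweeps):
        for i in range(L):
            for j in range(L):
                nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                      + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
                dE = 2.0 * spin[i][j] * nn        # energy change if this spin flips
                if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                    spin[i][j] = -spin[i][j]
        if sweep >= therm:
            bonds = sum(spin[i][j] * (spin[(i + 1) % L][j] + spin[i][(j + 1) % L])
                        for i in range(L) for j in range(L))
            corr_sum += bonds / (2.0 * L * L)     # average over the 2*L*L bonds
            samples += 1
    return corr_sum / samples                     # exact value at T_c: 1/sqrt(2) = 0.7071...

if __name__ == "__main__":
    # Example: test Python's built-in Mersenne Twister on a small lattice.
    print(metropolis_bond_correlation(random.Random(12345), L=16, sweeps=4000, therm=1000))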
Nuclear Physics | 1993
Mark J. Bowick; Paul D. Coddington; Leping Han; Geoffrey Harris; Enzo Marinari
We present the results of a large-scale simulation of dynamically triangulated random surfaces with extrinsic curvature embedded in three-dimensional flat space. We measure a variety of local observables and use a finite-size scaling analysis to characterize as much as possible the regime of crossover from crumpled to smooth surfaces.
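For orientation, a commonly used form of the action for such simulations combines a Gaussian term on the edges of the triangulation with an extrinsic curvature term built from the normals of adjacent triangles (the precise weights in the paper may differ; this is a generic sketch):

S(X) = \sum_{\langle ij \rangle} (X_i - X_j)^2 + \lambda \sum_{\langle \alpha\beta \rangle} \bigl(1 - \hat{n}_\alpha \cdot \hat{n}_\beta \bigr),

where the X_i \in \mathbb{R}^3 are the embedding coordinates of the vertices, the \hat{n}_\alpha are unit normals of neighboring triangles, and \lambda is the extrinsic curvature (bending) coupling.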
International Journal of Modern Physics C | 1996
Paul D. Coddington
Large-scale Monte Carlo simulations require high-quality random number generators to ensure correct results. The contrapositive of this statement is also true — the quality of random number generators can be tested by using them in large-scale Monte Carlo simulations. We have tested many commonly-used random number generators with high precision Monte Carlo simulations of the 2-d Ising model using the Metropolis, Swendsen-Wang, and Wolff algorithms. This work is being extended to the testing of random number generators for parallel computers. The results of these tests are presented, along with recommendations for random number generators for high-performance computers, particularly for lattice Monte Carlo simulations.
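As a hedged illustration of the cluster-algorithm side of these tests (the function and data layout below are invented for the example, not taken from the paper), a single Wolff update with an injectable generator looks roughly like this; repeating it many times and comparing observables with exact 2-d Ising results, or with runs using a different generator, is the type of test described above.

# Illustrative sketch only: one Wolff single-cluster update of the 2-d Ising
# model, with the random number generator under test passed in explicitly.
# `spin` is assumed to be a dict mapping (i, j) lattice sites to +1 or -1.
import math

def wolff_update(spin, L, beta, rng):
    p_add = 1.0 - math.exp(-2.0 * beta)          # bond activation probability (J = 1)
    seed = (rng.randrange(L), rng.randrange(L))  # random seed site
    cluster_spin = spin[seed]
    stack, cluster = [seed], {seed}
    while stack:                                 # grow the cluster depth-first
        i, j = stack.pop()
        for site in (((i + 1) % L, j), ((i - 1) % L, j),
                     (i, (j + 1) % L), (i, (j - 1) % L)):
            if site not in cluster and spin[site] == cluster_spin \
                    and rng.random() < p_add:
                cluster.add(site)
                stack.append(site)
    for site in cluster:                         # flip the whole cluster at once
        spin[site] = -spin[site]
    return len(cluster)

Starting from, say, spin = {(i, j): 1 for i in range(L) for j in range(L)}, repeated calls generate the Markov chain whose averages are then compared with the exact results.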
Physics Letters B | 1993
Konstantinos N. Anagnostopoulos; Mark J. Bowick; Paul D. Coddington; Marco Falcioni; Leping Han; Geoffrey Harris; Enzo Marinari
We present the results of an extension of our previous work on large-scale simulations of dynamically triangulated toroidal random surfaces embedded in R^3 with extrinsic curvature. We find that the extrinsic-curvature specific heat peak ceases to grow on lattices with more than 576 nodes and that the location of the peak λ_c also stabilizes. The evidence for a true crumpling transition is still weak. If we assume it exists we can say that the finite-size scaling exponent α/νd is very close to zero or negative. On the other hand our new data does rule out the observed peak as being a finite-size artifact of the persistence length becoming comparable to the extent of the lattice.
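For readers unfamiliar with the exponent quoted above, the conventional finite-size scaling assumption (the paper's precise conventions may differ) is that the height of the specific heat peak grows with the number of nodes N as

C_{max}(N) \sim N^{\alpha/(\nu d)},

so a value of \alpha/(\nu d) close to zero or negative corresponds to a peak that stops growing with system size, as observed.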
International Journal of Modern Physics C | 1993
John Apostolakis; Paul D. Coddington; Enzo Marinari
Cluster algorithms are non-local Monte Carlo update schemes which can greatly increase the efficiency of computer simulations of spin models of magnets. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms, and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two-dimensional Ising spin model. These algorithms could also be applied to other problems which use connected component labeling, such as percolation and image analysis.
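A minimal sketch of the basic data-parallel labeling idea follows (not the specific SIMD algorithms developed in the paper): every site starts with a unique label and repeatedly takes the minimum label over itself and the neighbors it is bonded to, until nothing changes, so that sites in the same cluster end up sharing one label. The bond arrays are assumed inputs, e.g. activated Swendsen-Wang bonds.

# Illustrative sketch of minimum-label propagation for cluster labeling on an
# L x L periodic lattice (written sequentially; not the paper's SIMD code).
def label_clusters(bond_right, bond_down, L):
    # bond_right[i][j] is True if site (i, j) is connected to (i, j+1);
    # bond_down[i][j]  is True if site (i, j) is connected to (i+1, j).
    label = [[i * L + j for j in range(L)] for i in range(L)]
    changed = True
    while changed:
        changed = False
        for i in range(L):
            for j in range(L):
                best = label[i][j]
                if bond_right[i][j]:
                    best = min(best, label[i][(j + 1) % L])
                if bond_right[i][(j - 1) % L]:
                    best = min(best, label[i][(j - 1) % L])
                if bond_down[i][j]:
                    best = min(best, label[(i + 1) % L][j])
                if bond_down[(i - 1) % L][j]:
                    best = min(best, label[(i - 1) % L][j])
                if best < label[i][j]:
                    label[i][j] = best
                    changed = True
    return label   # sites with equal labels belong to the same cluster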
arXiv: High Energy Physics - Lattice | 1993
Mark J. Bowick; Paul D. Coddington; Leping Han; Geoffrey Harris; Enzo Marinari
We present the results of a set of Monte Carlo simulations of Dynamically Triangulated Random Surfaces embedded in three dimensions with an extrinsic curvature dependent action. We analyze several observables in the crossover regime and discuss whether or not our observations are indicative of the presence of a phase transition.
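One observable conventionally monitored in this crossover regime is the specific heat of the extrinsic curvature term S_E of the action (the normalization below is the usual one in this literature, though the paper's conventions may differ):

C(\lambda) = \frac{\lambda^2}{N} \bigl( \langle S_E^2 \rangle - \langle S_E \rangle^2 \bigr), \qquad S_E = \sum_{\langle \alpha\beta \rangle} \bigl( 1 - \hat{n}_\alpha \cdot \hat{n}_\beta \bigr),

where N is the number of nodes; a genuine phase transition would show up as a peak in C(\lambda) that sharpens with increasing N.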
EPL | 1992
J. Apostolakis; Paul D. Coddington; Enzo Marinari
We introduce a simple multi-scale algorithm for connected component labeling on parallel computers, which we apply to the problem of labeling clusters in spin and percolation models. We give numerical evidence that it is only logarithmically slowed down in the critical limit. We also discuss, in light of the proposed Teraflop computers optimized for lattice gauge theories and other lattice problems, the minimum requirements for computer switchboard architectures for which one can efficiently implement such multi-scale algorithms.
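The multi-scale algorithm itself is not reproduced here. As a generic illustration of how label information can travel over long distances in only logarithmically many steps, the sketch below combines a local "hooking" pass over the bonds with pointer jumping on the resulting label trees; it is a standard parallel labeling idea written sequentially for clarity, not the paper's algorithm.

# Illustrative sketch: connected component labeling by hooking + pointer
# jumping (a generic parallel-style scheme, not the paper's multi-scale one).
def labels_by_hooking_and_jumping(n_sites, edges):
    # edges is a list of (u, v) pairs of connected sites, e.g. occupied
    # percolation bonds; parent[v] ends up as the cluster label of site v.
    parent = list(range(n_sites))
    while True:
        changed = False
        for u, v in edges:                     # hooking: merge the two label trees
            ru, rv = parent[u], parent[v]
            if ru != rv:
                hi, lo = max(ru, rv), min(ru, rv)
                if parent[hi] > lo:
                    parent[hi] = lo
                    changed = True
        for v in range(n_sites):               # pointer jumping: flatten the trees
            while parent[v] != parent[parent[v]]:
                parent[v] = parent[parent[v]]
        if not changed:
            return parent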
Conference on High Performance Computing (Supercomputing) | 1995
Kim Mills; Geoffrey C. Fox; Paul D. Coddington; Barbara Mihalas; Marek Podgorny; Barbara Shelly; Steven Bossert
The Living Textbook creates a unique learning environment enabling teachers and students to use educational resources on multimedia information servers, supercomputers, parallel databases, and network testbeds. We have three innovative educational software applications running in our laboratory and under test in the classroom. Our education-focused goal is to learn how new, learner-driven, explorative models of learning can be supported by these high-bandwidth, interactive applications, and ultimately how they will impact the classroom of the future.
Archive | 2000
Geoffrey C. Fox; Paul D. Coddington
We present an overview of the state of the art and future trends in high performance parallel and distributed computing, and discuss techniques for using such computers in the simulation of complex problems in computational science. The use of high performance parallel computers can help improve our understanding of complex systems, and the converse is also true — we can apply techniques used for the study of complex systems to improve our understanding of parallel computing. We consider parallel computing as the mapping of one complex system — typically a model of the world — into another complex system — the parallel computer. We study static, dynamic, spatial and temporal properties of both the complex systems and the map between them. The result is a better understanding of which computer architectures are good for which problems, and of software structure, automatic partitioning of data, and the performance of parallel machines.
High Performance Distributed Computing | 1993
Paul D. Coddington
The author has implemented a set of computational physics codes on a network of IBM RS/6000 workstations used as a distributed parallel computer. He compares the performance of the codes on this network, using both standard Ethernet connections and a fast prototype switch, and also on the nCUBE/2, a MIMD parallel computer. The algorithms used range from simple, local, and regular to complex, non-local, and irregular. He describes his experiences with the hardware, software, and parallel languages used, and discusses ideas for making distributed parallel computing on workstation networks more easily usable for computational physicists.