Ralph G. Brickner
Los Alamos National Laboratory
Publication
Featured research published by Ralph G. Brickner.
Physical Review D | 1991
Rajan Gupta; Clive F. Baillie; Ralph G. Brickner; Gregory W. Kilcup; Apoorva Patel; Stephen R. Sharpe
We present results for the QCD spectrum and the matrix elements of scalar and axial-vector densities at β = 6/g² = 5.4, 5.5, 5.6. The lattice update was done using the hybrid Monte Carlo algorithm to include two flavors of dynamical Wilson fermions. We have explored quark masses in the range m_s ≤ m_q ≤ 3m_s. The results for the spectrum are similar to quenched simulations, and mass ratios are consistent with phenomenological heavy-quark models. The results for matrix elements of the scalar density show that the contribution of sea quarks is comparable to that of the valence quarks. This has important implications for the pion-nucleon σ term.
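The hybrid Monte Carlo update mentioned above alternates a molecular-dynamics (leapfrog) trajectory with a Metropolis accept/reject step, so integration errors do not bias the ensemble. The following is a minimal sketch of that idea for a toy free scalar field on a periodic 1-D lattice, not the collaboration's lattice QCD code; the action, lattice size, step size, and trajectory length are illustrative assumptions.

    # Minimal hybrid Monte Carlo sketch for a toy free scalar field on a
    # periodic 1-D lattice. Illustrative only, not the lattice QCD code.
    import numpy as np

    rng = np.random.default_rng(0)
    N, mass2 = 32, 0.5            # lattice size and bare mass squared (assumptions)
    tau, nsteps = 1.0, 20         # trajectory length and leapfrog steps (assumptions)
    dt = tau / nsteps

    def action(phi):
        # S = nearest-neighbour gradient term plus mass term
        grad = phi - np.roll(phi, 1)
        return 0.5 * np.sum(grad**2) + 0.5 * mass2 * np.sum(phi**2)

    def force(phi):
        # -dS/dphi for the action above
        lap = np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)
        return lap - mass2 * phi

    def hmc_step(phi):
        p = rng.standard_normal(N)                 # refresh momenta
        h_old = 0.5 * np.sum(p**2) + action(phi)
        x = phi.copy()
        p = p + 0.5 * dt * force(x)                # leapfrog integration
        for _ in range(nsteps - 1):
            x = x + dt * p
            p = p + dt * force(x)
        x = x + dt * p
        p = p + 0.5 * dt * force(x)
        h_new = 0.5 * np.sum(p**2) + action(x)
        # Metropolis accept/reject removes the finite-step-size bias
        if rng.random() < np.exp(h_old - h_new):
            return x, True
        return phi, False

    phi, accepted = np.zeros(N), 0
    for _ in range(100):
        phi, ok = hmc_step(phi)
        accepted += ok
    print(f"accepted {accepted}/100 trajectories")

The same accept/reject structure carries over to QCD; the expensive part there is that the force for dynamical fermions requires repeated solves with the fermion matrix.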
parallel computing | 1992
Robert Hiromoto; B.R. Wienke; Ralph G. Brickner
We present a summary of numerical experiments that explore the effects of asynchronous (chaotic) iteration schemes for solutions of the Boltzmann transport equation. Our experiments are performed on both common-memory and distributed-memory parallel processing systems. Two chaotic schemes and one deterministic scheme are developed directly from a computational algorithm known as discrete ordinates, which uses iterative techniques in solving the linearized Boltzmann particle transport equation. From an analysis based on the performance of these schemes on various parallel architectures, a third chaotic scheme is developed that executes faster in either parallel or sequential mode. The behavior of these methods, both deterministic and chaotic, is examined for the Denelcor HEP, the Encore Multimax, and the Intel iPSC hypercube.
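The contrast between a deterministic (synchronous) iteration and a chaotic (asynchronous) one can be illustrated on a much simpler fixed-point problem than the transport equation. The sketch below is only an analogy: it compares classic Jacobi sweeps, where every unknown is updated from the previous sweep, with a free-steering update that always reads the most recent values in whatever order they arrive, which is the essential feature of the chaotic schemes; the linear system itself is an arbitrary assumption.

    # Synchronous (Jacobi) vs. chaotic-style updates for a diagonally
    # dominant linear system; an analogy only, not the transport solver.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    A = rng.random((n, n)) + 2 * n * np.eye(n)   # strictly diagonally dominant
    b = rng.random(n)
    x_ref = np.linalg.solve(A, b)
    D = np.diag(A)

    def jacobi(num_sweeps):
        # Deterministic scheme: each component is updated from the previous sweep.
        x = np.zeros(n)
        for _ in range(num_sweeps):
            x = (b - A @ x + D * x) / D
        return x

    def chaotic(num_updates):
        # Chaotic scheme: components are updated one at a time, in random order,
        # always using the most recent values (emulating unsynchronized processors).
        x = np.zeros(n)
        for i in rng.integers(0, n, size=num_updates):
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
        return x

    print("jacobi error :", np.linalg.norm(jacobi(30) - x_ref))
    print("chaotic error:", np.linalg.norm(chaotic(30 * n) - x_ref))

On real hardware the point of the chaotic variant is that no processor ever waits at a synchronization barrier; convergence of such iterations relies on contraction conditions like the diagonal dominance used here.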
Scientific Programming | 1994
William George; Ralph G. Brickner; S. Lennart Johnsson
We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes that use regular grids, such as finite difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3-4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
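A polyshift bundles several circular or end-off shifts of the same array into a single call, so the library can overlap the data motion that separate CSHIFT calls would perform one after another. The sketch below imitates only the calling pattern with NumPy's np.roll; the function name and argument layout are my own assumptions, and the real benefit on the CM-2/CM-200 comes from combining the communication, which this toy cannot reproduce.

    # Toy "polyshift": request several circular shifts of the same grid in
    # one call, as a stencil code would fetch its neighbours. Interface is
    # an assumption; np.roll does the actual work.
    import numpy as np

    def pshift(grid, shifts):
        # shifts is a sequence of (axis, offset) pairs, one per neighbour,
        # analogous to asking PSHIFT for several directions at once.
        return [np.roll(grid, offset, axis=axis) for axis, offset in shifts]

    # Example: 5-point stencil on a periodic 2-D grid.
    rng = np.random.default_rng(2)
    u = rng.random((64, 64))
    north, south, west, east = pshift(u, [(0, -1), (0, 1), (1, -1), (1, 1)])
    laplacian = north + south + west + east - 4.0 * u
    print(laplacian.shape)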
Applied Numerical Mathematics | 1993
Burton Wendroff; Tony T. Warnock; Lewis Stiller; Dean Mayer; Ralph G. Brickner
An endgame database for chess encodes optimal lines of play for a specific endgame involving a small number of pieces. The computation of such a database is feasible for as many as six pieces, provided the inherent parallelism in the problem is fully exploited. Two computer architectures which can do this are the SIMD CM-2 and the vector multiprocessor Y-MP. For each machine the computer programs operate on sets of chess positions, each position represented by a single bit. The high-level algorithm is an iterative backtracking procedure. After describing endgame databases and classifying the complexity of endgames, we outline the algorithms and give some details of their implementation in Fortran for parallel and vector architectures. For endgames with five or more pieces it is important to use the symmetries of the chess board to reduce storage requirements, and we indicate briefly how this can be done for the vector architecture. Some timing comparisons are presented.
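The iterative backward induction behind such databases can be shown on an abstract game graph, with a machine word standing in for the machine-wide bit vectors: each position is one bit, and every pass marks positions from which the side to move can reach a position already known to be lost for the opponent. Everything below, including the toy move graph, is an illustrative assumption and not the chess code.

    # Retrograde analysis sketch: bit i of an integer marks position i.
    # A position is WON for the side to move if some move reaches a LOST
    # position; LOST if it has no moves or every move reaches a WON position.
    moves = {            # position -> positions reachable in one move (toy graph)
        0: [1, 2],
        1: [3],
        2: [3, 4],
        3: [],           # terminal: the side to move has lost
        4: [3],
    }

    won, lost = 0, 0
    for p, succ in moves.items():       # terminal positions are lost
        if not succ:
            lost |= 1 << p

    changed = True
    while changed:                      # iterate until no new positions resolve
        changed = False
        for p, succ in moves.items():
            bit = 1 << p
            if (won | lost) & bit:
                continue
            if any(lost & (1 << q) for q in succ):
                won |= bit              # some move reaches a lost position
                changed = True
            elif all(won & (1 << q) for q in succ):
                lost |= bit             # every move reaches a won position
                changed = True

    print(f"won : {won:05b}")           # expect wins at positions 1, 2, 4
    print(f"lost: {lost:05b}")          # expect losses at positions 0, 3

On the CM-2 and Y-MP the same logic is applied to entire bit vectors of positions at once, which is where the parallelism and vectorization come in.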
Nuclear Physics B - Proceedings Supplements | 1991
Rajan Gupta; C. Baillie; Ralph G. Brickner; G. Kilcup; A. Patel; S. Sharpe
Results for the spectrum and the F and D parameters are obtained with precision similar to that in the quenched approximation. Present data for m_q ≥ m_s show measurable effects due to vacuum polarization only in the pion-nucleon Σ term, suggesting that Σ_sea ∼ Σ_val. The lattice update is being done on the Connection Machine, which is very well suited to simulate QCD with 2 flavors of Wilson fermions (with mass close to the strange quark) using HMCA on 16³ × 32 lattices.
Nuclear Physics B - Proceedings Supplements | 1990
Ralph G. Brickner
Our collaboration has implemented Quantum Chromodynamics (QCD) on the massively parallel Connection Machine, in *Lisp. The code uses dynamical Wilson fermions and the Hybrid Monte Carlo Algorithm (HMCA) to update the lattice. We describe our program and give performance measurements for it. With no tuning or optimization, the code runs at approximately 1000 Mflops on a 64K CM-2.
International Journal of High Speed Computing | 1989
Ralph G. Brickner; Clive F. Baillie
We have implemented pure gauge Quantum Chromodynamics (QCD) on the massively parallel Connection Machine, in *Lisp. We describe our program in some detail and give performance measurements for it. With no tuning or optimization, the code runs at approximately 500 to 1000 Mflops on a 64K CM-2, depending on the VP ratio.
Computer Physics Communications | 1991
Ralph G. Brickner; Clive F. Baillie; S. Lennart Johnsson
We report on the status of code development for a simulation of quantum chromodynamics (QCD) with dynamical Wilson fermions on the Connection Machine model CM-2. Our original code, written in *Lisp, gave performance in the near-GFLOPS range. We have rewritten the most time-consuming parts of the code, including the matrix multiply and the communication, in the low-level programming system CMIS. Current versions of the code run at approximately 3.6 GFLOPS for the fermion matrix inversion, and we expect the next version to reach or exceed 5 GFLOPS.
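The matrix multiply singled out above is, at its core, a 3×3 complex (color) matrix acting on a 3-component complex vector at every site and direction. A plain NumPy rendering of that kernel, for orientation only (the production kernel was hand-coded in CMIS, and the toy volume below is arbitrary), looks as follows; the flop count is just the arithmetic of 9 complex multiplies and 6 complex adds.

    # Core kernel of applying the fermion matrix: a 3x3 complex link matrix
    # times a 3-component complex color vector at each site. Orientation
    # only; the production version was hand-coded CMIS, not NumPy.
    import numpy as np

    rng = np.random.default_rng(3)
    n_sites = 4096                     # arbitrary toy volume (assumption)

    links = rng.standard_normal((n_sites, 3, 3)) + 1j * rng.standard_normal((n_sites, 3, 3))
    vecs = rng.standard_normal((n_sites, 3)) + 1j * rng.standard_normal((n_sites, 3))

    # One matrix-vector multiply per site (one of the lattice directions).
    out = np.einsum('sij,sj->si', links, vecs)

    # 9 complex multiplies (6 real flops each) + 6 complex adds (2 each)
    # = 66 real flops per site for this kernel.
    print(out.shape, 66 * n_sites, "real flops in this sweep")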
ieee international conference on high performance computing data and analytics | 1991
Christopher L. Barrett; Frank Bobrowicz; Ralph G. Brickner; Bradley A. Clark; Rajan Gupta; Ann Hayes; Harold E. Trease; Andrew B. White
This paper reports on supercomputing at Los Alamos National Laboratory. Los Alamos has sought to intertwine the fields of computer science and nuclear science, while influencing the design of the computers needed to solve its scientific problems. The complexity and size of the applications prevalent at the Laboratory have dictated a continuing, ever-increasing need for computers one to two orders of magnitude faster than what is currently available. There are currently four CRAY Y-MPs and four X-MPs serving as production computers. The Central Computing Facility (CCF) and the Laboratory Data Communications Center (LDCC), a three-story building completed in 1989, house the supercomputers and associated network servers.
Nuclear Physics B - Proceedings Supplements | 1991
Ralph G. Brickner
Our collaboration has been running Wilson fermion QCD simulations on various Connection Machines for over a year and a half. During this time, we have continually optimized our code for operations found in the fermion matrix inversion. Our current version of the matrix inversion is written almost entirely in CMIS (Connection Machine Instruction Set) and utilizes both high-speed arithmetic and multiwire “news” (nearest-neighbor communications). We present details of how these and other features of our code are implemented on the CM-2.