Peter S. Pacheco
University of San Francisco
Publication
Featured research published by Peter S. Pacheco.
Neurocomputing | 2000
Peter S. Pacheco; Marcelo Camperi; Toshi Uchino
Abstract: We present a software package for the simulation of very large neuronal networks on parallel computers. The package can be run on any system with an implementation of the Message Passing Interface standard. We also present example results for a simple neuronal model in networks of up to a quarter of a million neurons. The full software package, along with usage and installation guidelines, can be found at http://cs.usfca.edu/neurosys.
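As a minimal serial sketch of the kind of "simple neuronal model" such a simulator integrates (this is illustrative only, not code from the NeuroSys package, and the parameter values are assumptions), consider a leaky integrate-and-fire network stepped with forward Euler:

```python
# Minimal serial sketch of a leaky integrate-and-fire network.
# Illustrates the kind of simple neuronal model the abstract refers to;
# this is NOT code from the NeuroSys package, and all parameters are
# illustrative assumptions.
import random

def simulate(n_neurons=100, n_steps=1000, dt=0.1, seed=0):
    rng = random.Random(seed)
    tau, v_rest, v_thresh, v_reset = 10.0, 0.0, 1.0, 0.0
    # random sparse excitatory connectivity
    weights = [[0.05 if rng.random() < 0.1 else 0.0
                for _ in range(n_neurons)] for _ in range(n_neurons)]
    v = [rng.uniform(v_rest, v_thresh) for _ in range(n_neurons)]
    spike_count = 0
    for _ in range(n_steps):
        spiked = [i for i, vi in enumerate(v) if vi >= v_thresh]
        spike_count += len(spiked)
        for i in spiked:
            v[i] = v_reset
        # synaptic input from neurons that spiked this step
        inputs = [sum(weights[j][i] for j in spiked)
                  for i in range(n_neurons)]
        # leaky integration toward rest plus a constant external drive
        v = [vi + dt * ((v_rest - vi) / tau + 0.15 + inputs[i])
             for i, vi in enumerate(v)]
    return spike_count
```

In the package itself, work like this would be distributed over MPI processes, with the synaptic-input exchange replaced by message passing between the processes owning the source and target neurons.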
Lecture Notes in Computer Science | 2003
Gregory D. Benson; Kai Long; Peter S. Pacheco
We experimentally evaluate several methods for implementing parallel computations that interleave a significant number of contiguous or strided writes to a local disk on Linux-based multiprocessor nodes. Using synthetic benchmark programs written with MPI and Pthreads, we have acquired detailed performance data for different application characteristics of programs running on dual-processor nodes. In general, our results show that programs that perform a significant amount of I/O relative to pure computation benefit greatly from the use of threads, while programs that perform relatively little I/O obtain excellent results using only MPI. For a pure MPI approach, we have found that it is usually best to use two writing processes with mmap(). For Pthreads, it is usually best to use two writing threads, write() for contiguous data, and writev() for strided data. Codes that use mmap() tend to benefit from periodic syncs of the data to the disk, while codes that use write() or writev() tend to perform better with few syncs. A straightforward use of ROMIO usually does not perform as well as these direct approaches for writing to the local disk.
Lecture Notes in Computer Science | 2003
Peter S. Pacheco; Patrick Miller; Jin Kim; Taylor Leese; Yuliya Zabiyaka
Object-oriented NeuroSys is a collection of programs for simulating very large networks of biologically accurate neurons on distributed-memory parallel computers. It includes two principal programs: ooNeuroSys, a parallel program for solving the systems of ordinary differential equations arising from the modelling of large networks of interconnected neurons, and Neurondiz, a parallel program for visualizing the results of ooNeuroSys. Both programs are designed to be run on clusters and use the MPI library to obtain parallelism. ooNeuroSys also includes an easy-to-use Python interface, which allows neuroscientists to quickly develop and test complex neuron models. Both ooNeuroSys and Neurondiz are designed for both high performance and relative ease of maintenance.
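The abstract does not document the Python interface itself; as a purely hypothetical sketch of how such an interface might let a neuroscientist express a neuron model as an ODE system (the class and function names here are assumptions for illustration, not the actual ooNeuroSys API):

```python
# Hypothetical sketch of a Python neuron-model interface of the kind the
# abstract describes. All names and signatures are assumptions for
# illustration; they are NOT the actual ooNeuroSys API.

class NeuronModel:
    """A neuron model defined by its ODE right-hand side dv/dt = f(v, t)."""
    def __init__(self, rhs, v0):
        self.rhs = rhs
        self.v = v0

    def step(self, t, dt):
        # Forward Euler step; a production solver would use a
        # higher-order or adaptive method.
        self.v += dt * self.rhs(self.v, t)
        return self.v

def run(model, t_end, dt):
    """Integrate the model from t = 0 to t_end, recording the trajectory."""
    t, trace = 0.0, [model.v]
    while t < t_end:
        trace.append(model.step(t, dt))
        t += dt
    return trace

# Example: membrane potential decaying toward a resting value of -65 mV
# with a 10 ms time constant.
leak = NeuronModel(rhs=lambda v, t: -(v + 65.0) / 10.0, v0=0.0)
trace = run(leak, t_end=100.0, dt=0.1)
```

The appeal of such an interface is that the model definition lives in a few lines of Python while the solver and the MPI-level parallelism stay in compiled code.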
Archive | 1997
Peter S. Pacheco
Archive | 2010
Peter S. Pacheco
An Introduction to Parallel Programming | 2011
Peter S. Pacheco