Gene Oleynik
Fermilab
Publications
Featured research published by Gene Oleynik.
European Neuropsychopharmacology | 1991
R. Pordes; John Anderson; David Berg; Eileen Berman; D. Brown; T. Dorries; Bryan MacKinnon; J. Meadows; C. Moore; Tom Nicinski; Gene Oleynik; D. Petravick; Ron Rechenmacher; Gary Sergey; David Slimmer; J. Streets; M. Vittone; M. Votava; N. Wilcer; Vicky White
We report on the status of the PAN-DA data acquisition system presented at the last Real Time Conference. Since that time, PAN-DA has been successfully used in the fixed target program at Fermilab. We also report on the plans and strategies for development of a new data acquisition system for the next generation of fixed target experiments at Fermilab.
ieee conference on mass storage systems and technologies | 2007
Lana Abadie; Paolo Badino; J.-P. Baud; Ezio Corso; M. Crawford; S. De Witt; Flavia Donno; A. Forti; Ákos Frohner; Patrick Fuhrmann; G. Grosdidier; Junmin Gu; Jens Jensen; B. Koblitz; Sophie Lemaitre; Maarten Litmaath; D. Litvinsev; G. Lo Presti; L. Magnoni; T. Mkrtchan; Alexander Moibenko; Rémi Mollon; Vijaya Natarajan; Gene Oleynik; Timur Perelmutov; D. Petravick; Arie Shoshani; Alex Sim; David Smith; M. Sponza
Storage management is one of the most important enabling technologies for large-scale scientific investigations. Having to deal with multiple heterogeneous storage and file systems is one of the major bottlenecks in managing, replicating, and accessing files in distributed environments. Storage resource managers (SRMs), named after their Web services control protocol, provide the technology needed to manage the distributed data volumes that are growing rapidly as a result of faster and larger computational facilities. SRMs are grid storage services providing interfaces to storage resources, as well as advanced functionality such as dynamic space allocation and file management on shared storage systems. They call on transport services to bring files into their space transparently and provide effective sharing of files. SRMs are based on a common specification that emerged over time and evolved into an international collaboration. This approach of an open specification that can be used by various institutions to adapt to their own storage systems has proven to be a remarkable success: the challenge has been to provide a consistent homogeneous interface to the grid, while allowing sites to have diverse infrastructures. In particular, supporting optional features while preserving interoperability is one of the main challenges we describe in this paper. We also describe using SRM in a large international high energy physics collaboration, called WLCG, to prepare to handle the large volume of data expected when the Large Hadron Collider (LHC) goes online at CERN. This intense collaboration led to refinements and additional functionality in the SRM specification, and the development of multiple interoperating implementations of SRM for various complex multi-component storage systems.
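This abstract describes the SRM approach only at the interface level. As a rough, purely illustrative sketch of the idea of one common interface over diverse site back-ends (the class and method names below are invented, not the actual SRM web-service operations), consider:

```python
# Illustrative sketch only: a common storage-manager interface that different
# site back-ends implement, loosely mirroring the SRM idea of one specification
# over diverse infrastructures. All names are hypothetical.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SpaceToken:
    token: str
    size_bytes: int
    lifetime_s: int


class StorageResourceManager(ABC):
    """Common interface; each site supplies its own implementation."""

    @abstractmethod
    def reserve_space(self, size_bytes: int, lifetime_s: int) -> SpaceToken:
        """Dynamically allocate space on the shared storage system."""

    @abstractmethod
    def prepare_to_get(self, site_url: str) -> str:
        """Stage a file into managed space and return a transfer URL."""

    @abstractmethod
    def release(self, token: SpaceToken) -> None:
        """Release a previously reserved space."""


class TapeBackedSRM(StorageResourceManager):
    """One possible back-end: files staged from tape to a disk cache."""

    def reserve_space(self, size_bytes, lifetime_s):
        return SpaceToken("disk-cache-0001", size_bytes, lifetime_s)

    def prepare_to_get(self, site_url):
        # A real implementation would trigger a tape recall here.
        return site_url.replace("srm://", "gsiftp://")

    def release(self, token):
        pass


if __name__ == "__main__":
    srm = TapeBackedSRM()
    space = srm.reserve_space(size_bytes=10 * 1024**3, lifetime_s=86400)
    print(space, srm.prepare_to_get("srm://example.site/dir/file.root"))
    srm.release(space)
```

The point of the sketch is the separation the abstract emphasizes: clients code against one interface, while each site implements it over tape, disk cache, or parallel file systems.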
Journal of Physics: Conference Series | 2007
Andrew Baranovski; Shishir Bharathi; John Bresnahan; Ann L. Chervenak; Ian T. Foster; Dan Fraser; Timothy Freeman; Dan Gunter; Keith Jackson; Kate Keahey; Carl Kesselman; David E. Konerding; Nick LeRoy; Mike Link; Miron Livny; Neill Miller; Robert Miller; Gene Oleynik; Laura Pearlman; Jennifer M. Schopf; Robert Schuler; Brian Tierney
Petascale science is an end-to-end endeavour, involving not only the creation of massive datasets at supercomputers or experimental facilities, but the subsequent analysis of that data by a user community that may be distributed across many laboratories and universities. The new SciDAC Center for Enabling Distributed Petascale Science (CEDPS) is developing tools to support this end-to-end process. These tools include data placement services for the reliable, high-performance, secure, and policy-driven placement of data within a distributed science environment; tools and techniques for the construction, operation, and provisioning of scalable science services; and tools for the detection and diagnosis of failures in end-to-end data placement and distributed application hosting configurations. In each area, we build on a strong base of existing technology and have made useful progress in the first year of the project. For example, we have recently achieved order-of-magnitude improvements in transfer times (for lots of small files) and implemented asynchronous data staging capabilities; demonstrated dynamic deployment of complex application stacks for the STAR experiment; and designed and deployed end-to-end troubleshooting services. We look forward to working with SciDAC application and technology projects to realize the promise of petascale science.
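The improvement mentioned for transfers of many small files and the asynchronous staging capability both rest on not handling requests one at a time in the client. A toy sketch of asynchronous staging with a request queue and worker threads (local file copies stand in for wide-area transfers; this is not the CEDPS code):

```python
# Toy sketch of asynchronous data staging: transfer requests are queued and
# serviced by background worker threads, so the caller does not block on each
# file. Illustrative only; local copies stand in for wide-area transfers.
import queue
import shutil
import threading


def stage_worker(requests: queue.Queue) -> None:
    """Drain the queue, copying each (source, destination) pair."""
    while True:
        src, dst = requests.get()
        try:
            if src is None:          # sentinel: shut this worker down
                break
            shutil.copy(src, dst)    # stand-in for a real transfer
        except OSError:
            pass                     # a real service would record the failure
        finally:
            requests.task_done()


def stage_async(pairs, n_workers: int = 4) -> None:
    """Queue all (source, destination) pairs and wait for them to finish."""
    requests: queue.Queue = queue.Queue()
    workers = [threading.Thread(target=stage_worker, args=(requests,))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for pair in pairs:
        requests.put(pair)
    for _ in workers:                # one sentinel per worker
        requests.put((None, None))
    requests.join()
    for w in workers:
        w.join()
```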
IEEE Transactions on Nuclear Science | 1989
David Berg; P. Heinicke; Bryan MacKinnon; Tom Nicinski; Gene Oleynik
The Software Components Group's pSOS operating system kernel and pROBE debugger have been extended to support the Fermilab PAN-DA data acquisition system on a variety of Motorola 680xx-based VME and FASTBUS modules. These extensions include: a multitasking, reentrant implementation of Microtec C/Pascal; a serial port driver for terminal I/O and data transfer; a message reporting facility; and enhanced debugging tools. An overview of the system is given, and the run-time-library reentrancy and process context, the serial port driver, the SYS68K Message Reporter System subroutine package, and the enhanced debugging tools are discussed.
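One of the extensions listed above is a message reporting facility. A small sketch of that general pattern, with Python threads standing in for pSOS tasks and all names invented for illustration: messages are tagged with severity and reporting task, queued, and written out by a single consumer so concurrent output is not interleaved.

```python
# Illustrative sketch (hypothetical names): tasks report severity-tagged
# messages to a queue; one consumer drains the queue and writes them in order.
import queue
import threading
import time

REPORTS: queue.Queue = queue.Queue()


def report(severity: str, text: str) -> None:
    """Queue a message tagged with time, severity, and the reporting task."""
    REPORTS.put((time.time(), severity, threading.current_thread().name, text))


def reporter_task() -> None:
    """Single consumer: prints messages until a shutdown marker arrives."""
    while True:
        stamp, severity, task, text = REPORTS.get()
        if severity == "SHUTDOWN":
            break
        print(f"{stamp:.3f} [{severity}] {task}: {text}")


if __name__ == "__main__":
    consumer = threading.Thread(target=reporter_task)
    consumer.start()
    report("INFO", "readout controller initialized")
    report("ERROR", "FASTBUS timeout on segment 3")
    report("SHUTDOWN", "")
    consumer.join()
```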
IEEE Transactions on Nuclear Science | 1989
D. Petravick; David Berg; Eileen Berman; Mark Bernett; Penelope Constanta-Fanourakis; T. Dorries; Margaret Haire; Ken Kaczar; Bryan MacKinnon; C. Moore; Tom Nicinski; Gene Oleynik; R. Pordes; Gary Sergey; Margaret Votava; Vicky White
The VAXONLINE data acquisition package has been extended to include a VME-based data path. The resulting environment, PAN-DA, provides high throughput for logging, filtering, formatting, and selecting events. The authors describe the history and rationale of the system, the VME hardware modules, PAN-DA systems coordination, and system connectivity.
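As a rough illustration of the event path described here (selecting, formatting, and logging events), the following sketch uses an invented event structure and is not the PAN-DA implementation:

```python
# Rough illustration of an event-processing chain: select, format, and log
# events as they arrive from the readout path. Hypothetical event structure.
import struct
from typing import Iterable, Iterator


def select(events: Iterable[dict]) -> Iterator[dict]:
    """Keep only events passing a simple trigger-like cut."""
    for ev in events:
        if ev["energy"] > 10.0:
            yield ev


def format_event(ev: dict) -> bytes:
    """Pack an event into a fixed binary record (event number + energy)."""
    return struct.pack("<Id", ev["number"], ev["energy"])


def log_events(events: Iterable[dict], path: str) -> int:
    """Write selected, formatted events to a file; return the count logged."""
    n = 0
    with open(path, "wb") as out:
        for ev in select(events):
            out.write(format_event(ev))
            n += 1
    return n


if __name__ == "__main__":
    raw = [{"number": i, "energy": e} for i, e in enumerate((3.2, 14.8, 22.1))]
    print(log_events(raw, "events.dat"), "events logged")
```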
Journal of Physics: Conference Series | 2008
Andrew Baranovski; K Beattie; Shishir Bharathi; J Boverhof; John Bresnahan; Ann L. Chervenak; Ian T. Foster; Timothy Freeman; Dan Gunter; Kate Keahey; Carl Kesselman; Rajkumar Kettimuthu; Nick LeRoy; Mike Link; Miron Livny; Ravi K. Madduri; Gene Oleynik; Laurie Anne Pearlman; Robert Schuler; Brian Tierney
The Center for Enabling Distributed Petascale Science is developing services to enable researchers to manage large, distributed datasets. The center's projects focus on three areas: tools for reliable placement of data, issues involving failure detection and failure diagnosis in distributed systems, and scalable services that process requests to access data.
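For the failure-detection area, a minimal sketch of the general idea of probing each hop of a data path and reporting the first unreachable one; the hosts, ports, and function names are placeholders rather than the CEDPS tools:

```python
# Illustrative sketch of end-to-end failure detection: probe each hop of a
# data path in order and report the first one that cannot be reached.
import socket


def probe(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def diagnose(path) -> str:
    """Check each (label, host, port) hop; report the first failure."""
    for label, host, port in path:
        if not probe(host, port):
            return f"failure at {label} ({host}:{port})"
    return "all hops reachable"


if __name__ == "__main__":
    hops = [("source gridftp", "data.example.org", 2811),
            ("destination gridftp", "cache.example.org", 2811)]
    print(diagnose(hops))
```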
ieee conference on mass storage systems and technologies | 2005
Gene Oleynik; Bonnie Alcorn; Wayne Baisley; Jon Bakken; David Berg; Eileen Berman; Chih-Hao Huang; Terry Jones; Robert Kennedy; A. Kulyavtsev; Alexander Moibenko; Timur Perelmutov; D. Petravick; Vladimir Podstavkov; George Szmuksta; Michael Zalokar
Fermilab provides a multi-petabyte scale mass storage system for high energy physics (HEP) experiments and other scientific endeavors. We describe the scalability aspects of the hardware and software architecture that were designed into the mass storage system to permit us to scale to multiple petabytes of storage capacity, manage tens of terabytes per day in data transfers, support hundreds of users, and maintain data integrity. We discuss in detail how we scale the system over time to meet the ever-increasing needs of the scientific community, and relate our experiences with many of the technical and economic issues related to scaling the system. Since the 2003 MSST conference, the experiments at Fermilab have generated more than 1.9 PB of additional data. We present results on how this system has scaled and performed for the Fermilab CDF and D0 Run II experiments as well as other HEP experiments and scientific endeavors.
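One routine ingredient of maintaining data integrity at this scale is checksumming files on transfer and re-verifying them later. A brief sketch, using Adler-32 purely for illustration rather than as a statement of the Fermilab system's actual checksum policy:

```python
# Sketch of integrity verification: compute a checksum over a stored file in
# chunks and compare it with the value recorded when the file entered the
# system. Adler-32 is used here only for brevity of illustration.
import zlib


def file_checksum(path: str, chunk_size: int = 1 << 20) -> int:
    """Adler-32 over the file contents, read in fixed-size chunks."""
    value = zlib.adler32(b"")
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            value = zlib.adler32(chunk, value)
    return value


def verify(path: str, expected: int) -> bool:
    """True if the stored file still matches its recorded checksum."""
    return file_checksum(path) == expected
```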
European Neuropsychopharmacology | 1991
M. Votava; W. Bliss; S. Cutts-Bone; C. Debaun; F. Donno-Raffaelli; R. Herber; K. Leininger; B. Lindgren; J. Nicholls; Gene Oleynik; D. Petravick; R. Pordes; L. Sexton; J. Streets; B. Troemel; L. Udumula; M. Wicks
The need to provide central support and distribution of many software packages across a variety of UNIX platforms at Fermilab has led to the development of a methodology, UPS, for the packaging, maintenance, and distribution of our software. UPS has now been implemented and has been in use for almost a year on four different UNIX platforms. This paper discusses the goals of the software, the implementation of the product, and experiences in its use.
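A highly simplified sketch of the core idea behind such a packaging methodology: products are declared per version and per platform, and "setting up" one instance yields the environment a user needs. The data layout and function names here are invented for illustration and are not UPS's actual formats or commands.

```python
# Invented, minimal analogue of a multi-platform product registry: look up a
# declared (product, version, platform) instance and apply its environment.
import os

# product -> version -> platform -> environment settings (all hypothetical)
DECLARED = {
    "histogramming": {
        "v2_1": {
            "IRIX":  {"HISTO_DIR": "/usr/products/IRIX/histogramming/v2_1"},
            "SunOS": {"HISTO_DIR": "/usr/products/SunOS/histogramming/v2_1"},
        },
    },
}


def setup(product: str, version: str, platform: str) -> dict:
    """Return and apply the environment for one declared product instance."""
    env = DECLARED[product][version][platform]
    os.environ.update(env)
    return env


if __name__ == "__main__":
    print(setup("histogramming", "v2_1", "IRIX"))
```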
IEEE Transactions on Nuclear Science | 1991
David Berg; Eileen Berman; Bryan MacKinnon; Tom Nicinski; Gene Oleynik; D. Petravick; R. Pordes; Gary Sergey; D. Slimmer; J. Streets; W. Kowald
We report on software developed in support of the Fermilab FASTBUS Smart Crate Controller. This software includes a full suite of diagnostics, support for FASTBUS Standard Routines, and extended software to allow communication over the RS-232 and Ethernet ports. The communication software supports remote procedure call execution from a host VAX or Unix system. The software supported on the FSCC forms part of the PAN-DA software system, which supports the functions of front end readout controllers and event builders in multiprocessor, multilevel, distributed data acquisition systems.
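The remote procedure call facility described above can be pictured as a small dispatch loop on the controller side: the host sends a procedure name and arguments, and the controller looks the name up in a table and returns the result. The sketch below is an invented, minimal analogue in Python (the FSCC software itself is embedded code, and its actual protocol is not shown here):

```python
# Minimal, invented analogue of RPC over a socket: one text line carries a
# procedure name and arguments; the handler dispatches it and replies.
import socketserver

PROCEDURES = {
    "read_register": lambda addr: f"value@{addr}=0x1234",
    "ping": lambda: "alive",
}


class RPCHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().decode().strip()
        if not line:
            return
        name, *args = line.split()
        try:
            result = PROCEDURES[name](*args)
        except (KeyError, TypeError) as exc:
            result = f"ERROR {exc}"
        self.wfile.write((str(result) + "\n").encode())


if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 5050), RPCHandler) as server:
        server.handle_request()   # serve a single call, then exit
```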
IEEE Transactions on Nuclear Science | 1996
Gene Oleynik; J. Engelfried; L. Mengel; C. Moore; V. O'dell; R. Pordes; A. Semenchenko; D. Slimmer
DART is the high-speed, Unix-based data acquisition system being developed by Fermilab in collaboration with seven high energy physics experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of data acquisition systems. We discuss the unique and interesting concepts of the run control and some of our experiences in developing it. We also give a brief update on the status of the whole DART system.
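Run control of this kind is naturally expressed as a state machine: each command is only legal in certain states, so out-of-order commands are rejected rather than acted on. A minimal sketch with invented states and commands, not DART's actual command set:

```python
# Invented run-control state machine: (current state, command) -> next state.
TRANSITIONS = {
    ("idle", "configure"): "configured",
    ("configured", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"): "configured",
    ("configured", "reset"): "idle",
}


class RunControl:
    def __init__(self):
        self.state = "idle"

    def command(self, cmd: str) -> str:
        """Apply a command if it is legal in the current state."""
        new_state = TRANSITIONS.get((self.state, cmd))
        if new_state is None:
            raise ValueError(f"'{cmd}' not allowed in state '{self.state}'")
        self.state = new_state
        return self.state


if __name__ == "__main__":
    rc = RunControl()
    for cmd in ("configure", "start", "pause", "resume", "stop"):
        print(cmd, "->", rc.command(cmd))
```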