Jon Bakken
Fermilab
Publications
Featured research published by Jon Bakken.
The Astronomical Journal | 1998
James E. Gunn; Michael A. Carr; C. Rockosi; M. Sekiguchi; K. Berry; Brian R. Elms; E. de Haas; Željko Ivezić; Gillian R. Knapp; Robert H. Lupton; George Pauls; R. Simcoe; R. Hirsch; D. Sanford; Shu I. Wang; D. G. York; Frederick H. Harris; J. Annis; L. Bartozek; William N. Boroski; Jon Bakken; M. Haldeman; Stephen M. Kent; Scott Holm; Donald J. Holmgren; D. Petravick; Angela Prosapio; Ron Rechenmacher; Mamoru Doi; Masataka Fukugita
We have constructed a large-format mosaic CCD camera for the Sloan Digital Sky Survey. The camera consists of two arrays: a photometric array that uses thirty 2048 × 2048 SITe/Tektronix CCDs (24 μm pixels) with an effective imaging area of 720 cm², and an astrometric array that uses twenty-four 400 × 2048 CCDs with the same pixel size, which will allow us to tie bright astrometric standard stars to the objects imaged in the photometric camera. The instrument will be used to carry out photometry essentially simultaneously in five color bands spanning the range accessible to silicon detectors on the ground, in the time-delay-and-integrate (TDI) scanning mode. The photometric detectors are arrayed in the focal plane in six columns of five chips each, such that two scans cover a filled stripe 2.5° wide. This paper presents engineering and technical details of the camera.
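In TDI (drift-scan) mode the effective exposure time is set by how long a source takes to cross one CCD, not by a shutter. A minimal back-of-the-envelope sketch, assuming the survey's published plate scale of roughly 0.396″ per 24 μm pixel and scanning at the sidereal rate (neither figure is stated in this abstract):

```python
# Back-of-the-envelope TDI timing for the photometric array.
# Assumptions (not from the abstract): ~0.396 arcsec/pixel plate scale,
# scanning at the sidereal rate of ~15.04 arcsec/s.
PIXEL_SCALE = 0.396      # arcsec per pixel (assumed)
SIDEREAL_RATE = 15.04    # arcsec/s of sky drift (assumed)
ROWS = 2048              # rows per photometric CCD

row_period = PIXEL_SCALE / SIDEREAL_RATE   # time for the image to drift one row
integration = ROWS * row_period            # time a source spends on one CCD

print(f"TDI row period: {row_period * 1e3:.1f} ms")           # ~26 ms
print(f"effective integration per CCD: {integration:.1f} s")  # ~54 s
```

Each of the five filters in a column sees a source in turn, which is how a single scan yields the essentially simultaneous five-band photometry the abstract describes.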
Archive | 2004
Jon Bakken; I. Fisk; Patrick Fuhrmann; Tigran Mkrtchyan; Timur Perelmutov; D. Petravick; M. Ernst
The LHC needs reliable, high-performance access to vastly distributed storage resources across the network. USCMS has worked with Fermilab-CD and DESY-IT on a storage service that has been deployed at several sites. It provides Grid access to heterogeneous mass storage systems and synchronization between them. It increases resiliency by insulating clients from storage and network failures, and it facilitates file sharing and network traffic shaping. This new storage service is implemented as a Grid Storage Element (SE). It consists of dCache, jointly developed by DESY and Fermilab, as the core storage system, and an implementation of the Storage Resource Manager (SRM), which together allow both local and Grid-based access to the mass storage facilities. It provides advanced capabilities for accessing and distributing collaboration data. USCMS uses this system both as a Disk Resource Manager at the Tier-1 center and at multiple Tier-2 sites, and as a Hierarchical Resource Manager with Enstore as the tape back-end at the Fermilab CMS Tier-1 center. It provides shared, managed disk pools at sites for streaming data between the CERN Tier-0, the Fermilab Tier-1, and U.S. Tier-2 centers. Applications can reserve space for a time period, ensuring space availability when the application runs. Worker nodes without WAN connectivity can trigger file replication from a central repository to the local SE and then access the data using POSIX-like file system semantics via the LAN, as sketched below. Moving the SE functionality off the worker nodes significantly reduces load on, and improves the reliability of, the compute farm elements.
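The worker-node pattern in the last two sentences can be illustrated with a short sketch. All hostnames and paths below are hypothetical, and the sketch assumes the srmcp copy client and an NFS-mounted pnfs namespace on the worker node; it illustrates the access pattern, not the deployment's actual configuration.

```python
# Illustrative sketch of the worker-node pattern described above:
# trigger replication from a central repository to the local SE,
# then read the replica via POSIX-like semantics over the LAN.
# All hostnames and paths are hypothetical.
import subprocess

SOURCE = "srm://central-repo.example.org:8443/pnfs/example.org/cms/data/run42.root"
LOCAL_SE = "srm://se.tier2.example.org:8443/pnfs/tier2.example.org/cms/data/run42.root"

# srmcp (the SRM copy client) performs third-party replication between SEs.
subprocess.run(["srmcp", SOURCE, LOCAL_SE], check=True)

# With the local SE's pnfs namespace NFS-mounted, the worker node reads
# the replica with ordinary file-system calls, no WAN connectivity needed.
with open("/pnfs/tier2.example.org/cms/data/run42.root", "rb") as f:
    header = f.read(1024)
print(f"read {len(header)} bytes from the local replica")
```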
ieee conference on mass storage systems and technologies | 2003
Jon Bakken; Eileen Berman; Chih-Hao Huang; Alexander Moibenko; D. Petravick; Michael Zalokar
Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte-scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP servers, primarily for external data transfers. This infrastructure provides data throughput sufficient for transferring data from experimental data acquisition systems, and it also allows access to the data in a Grid framework, as illustrated below.
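Each of the three components named above has its own standard copy client. A hedged sketch of how a user might exercise each layer follows; the hostnames and paths are invented, while encp (Enstore), dccp (dCache), and globus-url-copy (GridFTP) are the usual clients for these systems.

```python
# Sketch of the three access layers described above. Paths and hosts
# are hypothetical; the clients (encp, dccp, globus-url-copy) are the
# standard copy tools for Enstore, dCache, and GridFTP respectively.
import subprocess

# 1. Enstore: stage a file from tape via its pnfs namespace entry.
subprocess.run(["encp", "/pnfs/example.org/exp/raw/run42.dat",
                "/scratch/run42.dat"], check=True)

# 2. dCache: read through the disk cache using a dcap door.
subprocess.run(["dccp",
                "dcap://door.example.org:22125/pnfs/example.org/exp/raw/run42.dat",
                "/scratch/run42-cached.dat"], check=True)

# 3. GridFTP: wide-area transfer for external (Grid) clients.
subprocess.run(["globus-url-copy",
                "gsiftp://gridftp.example.org/pnfs/example.org/exp/raw/run42.dat",
                "file:///scratch/run42-grid.dat"], check=True)
```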
ieee conference on mass storage systems and technologies | 2005
Gene Oleynik; Bonnie Alcorn; Wayne Baisley; Jon Bakken; David Berg; Eileen Berman; Chih-Hao Huang; Terry Jones; Robert Kennedy; A. Kulyavtsev; Alexander Moibenko; Timur Perelmutov; D. Petravick; Vladimir Podstavkov; George Szmuksta; Michael Zalokar
Fermilab provides a multi-petabyte scale mass storage system for high energy physics (HEP) experiments and other scientific endeavors. We describe the scalability aspects of the hardware and software architecture that were designed into the mass storage system to permit us to scale to multiple petabytes of storage capacity, manage tens of terabytes per day in data transfers, support hundreds of users, and maintain data integrity. We discuss in detail how we scale the system over time to meet the ever-increasing needs of the scientific community, and relate our experiences with many of the technical and economic issues related to scaling the system. Since the 2003 MSST conference, the experiments at Fermilab have generated more than 1.9 PB of additional data. We present results on how this system has scaled and performed for the Fermilab CDF and D0 Run II experiments as well as other HEP experiments and scientific endeavors.
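A quick arithmetic check of the quoted figures: 1.9 PB accumulated over the roughly two years since the 2003 conference averages out to a few terabytes per day of ingest, comfortably below the tens of terabytes per day the system moves in total transfers (which include reads as well as writes). Assuming a two-year interval:

```python
# Average ingest rate implied by the abstract's figures,
# assuming ~2 years between the 2003 and 2005 MSST conferences.
PB_ADDED = 1.9
DAYS = 2 * 365
print(f"average ingest: {PB_ADDED * 1000 / DAYS:.1f} TB/day")  # ~2.6 TB/day
```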
ieee nuclear science symposium | 2006
Abhishek Singh Rana; F. Würthwein; Timur Perelmutov; Robert Kennedy; Jon Bakken; Ted Hesselroth; I. Fisk; Patrick Fuhrmann; M. Ernst; Markus Lorch; Dane Skow
In this publication we introduce gPLAZMA (grid-aware PLuggable Authorization MAnagement) for dCache/SRM. Our work is motivated by the need for fine-grained security (Role-Based Access Control, or RBAC) in storage systems on global data grids, and it uses the VOMS extended X.509 certificate specification, based on RFC 3281, to define extra attributes (FQANs). Our implementation, gPLAZMA in dCache, introduces storage authorization callouts for SRM and GridFTP. It allows different authorization mechanisms to be used simultaneously, fine-tuned with per-mechanism switches and priorities. Of the four mechanisms currently supported, one is an integration with the RBAC services of the Open Science Grid (OSG) USCMS/USATLAS Privilege Project; the others are built in as a lightweight suite of services (the gPLAZMALite authorization services suite), comprising the legacy dcache.kpwd file as well as the popular grid-mapfile, augmented with a gPLAZMALite-specific RBAC mechanism, as sketched below. Based on our current work, we also outline a list of future tasks. This work was undertaken as a collaboration among the PPDG Common project, the OSG Privilege project, and the dCache/SRM groups at DESY, FNAL, and UCSD.
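The built-in gPLAZMALite path combines the classic grid-mapfile (subject DN → local account) with role information carried in VOMS FQANs. A minimal sketch of that mapping decision follows; the mapfile and FQAN formats are the standard ones, but the identities, role table, and priority policy here are invented, and the real gPLAZMA callout interface is considerably richer.

```python
# Illustrative sketch of the DN + FQAN -> local account decision that a
# grid-mapfile augmented with RBAC performs. The subject-DN and FQAN
# syntax is standard; the entries and policy are invented.

GRID_MAPFILE = {
    # "subject DN": default local account
    "/DC=org/DC=example/OU=People/CN=Jane Doe 12345": "uscms01",
}

ROLE_MAP = {
    # VOMS FQAN -> privileged local account (hypothetical policy)
    "/cms/uscms/Role=production": "cmsprod",
}

def authorize(dn, fqans):
    """Return the local account for this credential, or None to deny."""
    # An FQAN-granted role takes priority over the plain mapfile entry.
    for fqan in fqans:
        if fqan in ROLE_MAP:
            return ROLE_MAP[fqan]
    return GRID_MAPFILE.get(dn)

print(authorize("/DC=org/DC=example/OU=People/CN=Jane Doe 12345",
                ["/cms/uscms/Role=production"]))   # -> cmsprod
```

Giving FQAN-derived roles priority over the plain mapfile entry lets the same user credential map to a production account when it carries a production role, which is exactly the role-based behavior a grid-mapfile alone cannot express.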
ieee npss real time conference | 1999
J. Annis; Jon Bakken; Donald J. Holmgren; D. Petravick; Ron Rechenmacher
The Sloan Digital Sky Survey will systematically map one-quarter of the sky, producing detailed images in five color bands and determining the positions and absolute brightnesses of more than 100 million celestial objects. It will also measure the redshifts of a million selected galaxies and of 100,000 quasars, yielding a three-dimensional map of the Universe through a volume one hundred times larger than that explored to date. The SDSS collaboration is currently in the process of commissioning the 2.5-meter survey telescope. We describe the data acquisition system used to record the survey data. This system consists of twelve single-board computers and their associated interfaces to the camera and spectrograph CCD electronics, to tape drives, and to online video displays, distributed among several VME crates. A central UNIX computer connected to the VME crates via a vertical bus adapter coordinates the system and provides the interface to telescope operations. We briefly discuss results from the observing runs to date and plans for the archiving and distribution of data.
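The sustained rate this system must absorb follows from the camera geometry and the TDI row period. A rough estimate, assuming 16-bit pixels and the ~26 ms row period implied by scanning at the sidereal rate (neither figure appears in this abstract):

```python
# Rough aggregate output rate of the photometric camera in TDI mode,
# assuming 16-bit pixels and a ~26 ms TDI row period (both assumptions).
CCDS = 30                # photometric CCDs
COLS = 2048              # pixels per row per CCD
BYTES_PER_PIXEL = 2      # assumed 16-bit digitization
ROW_PERIOD = 0.0263      # seconds per row at the sidereal scan rate (assumed)

rate = CCDS * COLS * BYTES_PER_PIXEL / ROW_PERIOD / 1e6
print(f"photometric camera output: ~{rate:.1f} MB/s")   # ~4.7 MB/s
```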
The Astronomical Journal | 1999
Xiaohui Fan; Michael A. Strauss; Donald P. Schneider; James E. Gunn; Robert H. Lupton; Brian Yanny; Scott F. Anderson; John Anderson; James Annis; Neta A. Bahcall; Jon Bakken; Steven Bastian; Eileen Berman; William N. Boroski; Charlie Briegel; John W. Briggs; J. Brinkmann; Michael A. Carr; Patrick L. Colestock; A. J. Connolly; James H. Crocker; István Csabai; Paul C. Czarapata; John Eric Davis; Mamoru Doi; Brian R. Elms; Michael L. Evans; Glenn R. Federwitz; Joshua A. Frieman; Masataka Fukugita
Archive | 2010
Jon Bakken; Artur Barczyk; Alan Blatecky; Amber Boehnlein; Rich Carlson; Sergei Chekanov; Steve Cotter; Les Cottrell; Glen Crawford; Matt Crawford; Eli Dart; Vince Dattoria; M. Ernst; I. Fisk; Robert Gardner; Bill Johnston; Steve Kent; Stephan Lammel; Stewart C. Loken; Joe Metzger; Richard Mount; Thomas Ndousse-Fetter; Harvey Newman; Jennifer M. Schopf; Yukiko Sekine; Alan Stone; Brian Tierney; Craig E. Tull; Jason Zurawski