John Hover
Brookhaven National Laboratory
Publications
Featured research published by John Hover.
The Journal of Neuroscience | 2007
Stefanie J. Otto; Sean R. McCorkle; John Hover; Cecilia Conaco; Jong Jin Han; Soren Impey; Gregory S. Yochum; John J. Dunn; Richard H. Goodman; Gail Mandel
The repressor element 1 (RE1) silencing transcription factor (REST) helps preserve the identity of nervous tissue by silencing neuronal genes in non-neural tissues. Moreover, in an epithelial model of tumorigenesis, loss of REST function is associated with loss of adhesion, suggesting the aberrant expression of REST-controlled genes encoding this property. To date, no adhesion molecules under REST control have been identified. Here, we used serial analysis of chromatin occupancy to perform genome-wide identification of REST-occupied target sequences (RE1 sites) in a kidney cell line. We discovered novel REST-binding motifs and found that the number of RE1 sites far exceeded previous estimates. A large family of targets encoding adhesion proteins was identified, as were genes encoding signature proteins of neuroendocrine tumors. Unexpectedly, genes considered exclusively non-neuronal also contained an RE1 motif and were expressed in neurons. This supports the model that REST binding is a critical determinant of neuronal phenotype.
Grid Computing | 2009
G. Garzoglio; Ian D. Alderman; Mine Altunay; Rachana Ananthakrishnan; Joe Bester; Keith Chadwick; Vincenzo Ciaschini; Yuri Demchenko; Andrea Ferraro; Alberto Forti; D.L. Groep; Ted Hesselroth; John Hover; Oscar Koeroo; Chad La Joie; Tanya Levshina; Zach Miller; Jay Packard; Håkon Sagehaug; Valery Sergeev; I. Sfiligoi; N Sharma; Frank Siebenlist; Valerio Venturi; John Weigand
In order to ensure interoperability between the middleware and authorization infrastructures used in the Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) projects, an Authorization Interoperability activity was initiated in 2006. The interoperability goal was met in two phases: first, agreeing on a common authorization query interface and protocol, with an associated profile that ensures standardized use of attributes and obligations; and second, implementing, testing, and deploying middleware on OSG and EGEE that supports the interoperability protocol and profile. The activity has involved people from OSG, EGEE, the Globus Toolkit project, and the Condor project. This paper presents a summary of the agreed-upon protocol and profile and of the software components involved.
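To make the idea of a common authorization query concrete, the following is a minimal illustrative sketch, not the project's actual client library: a site service asks a policy server whether a grid identity may use a resource, carrying the kind of VO attributes and obligations the profile standardizes. All class names, the policy layout, and the "execute-now" action string are hypothetical.

from dataclasses import dataclass, field


@dataclass
class AuthorizationQuery:
    subject_dn: str      # user's X.509 distinguished name
    fqan: str            # VOMS attribute: VO / group / role
    resource: str        # e.g. the Compute Element being accessed
    action: str          # requested operation, e.g. "execute-now"


@dataclass
class AuthorizationDecision:
    permit: bool
    # Obligations tell the resource how to honor the decision,
    # e.g. which local UID/GID to map the grid identity onto.
    obligations: dict = field(default_factory=dict)


def authorize(query: AuthorizationQuery, policy: dict) -> AuthorizationDecision:
    """Evaluate the query against a toy site policy keyed by FQAN."""
    mapping = policy.get(query.fqan)
    if mapping is None:
        return AuthorizationDecision(permit=False)
    return AuthorizationDecision(permit=True, obligations={"uidgid": mapping})


if __name__ == "__main__":
    policy = {"/atlas/Role=production": ("usatlas1", "usatlas")}
    q = AuthorizationQuery(
        subject_dn="/DC=org/DC=example/CN=Jane Grid",
        fqan="/atlas/Role=production",
        resource="ce01.example.org",
        action="execute-now",
    )
    print(authorize(q, policy))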
Journal of Physics: Conference Series | 2012
J Caballero; John Hover; P Love; G A Stewart
The ATLAS experiment at the CERN LHC is one of the largest users of grid computing infrastructure, which is a central part of the experiment's computing operations. Considerable efforts have been made to use grid technology in the most efficient and effective way, including the use of a pilot-job-based workload management framework. In this model the experiment submits 'pilot' jobs to sites without payload. When these jobs begin to run they contact a central service to pick up a real payload to execute. The first generation of pilot factories were usually specific to a single Virtual Organization (VO), and were bound to the particular architecture of that VO's distributed processing. A second generation provides factories which are more flexible, not tied to any particular VO, and provide new and improved features such as monitoring, logging, and profiling. In this paper we describe this key part of the ATLAS pilot architecture, a second-generation pilot factory, AutoPyFactory. AutoPyFactory has a modular design and is highly configurable. It is able to send different types of pilots to sites and exploit different submission mechanisms and queue characteristics. It is tightly integrated with the PanDA job submission framework, coupling pilot flow to the amount of work the site has to run. It gathers information from many sources in order to correctly configure itself for a site, and its decision logic can easily be updated. Integrated into AutoPyFactory is a flexible system for delivering both generic and specific job wrappers which can perform many useful actions before starting to run end-user scientific applications, e.g., validation of the middleware, node profiling and diagnostics, and monitoring. AutoPyFactory also has a robust monitoring system that has been invaluable in establishing a reliable pilot factory service for ATLAS.
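The central idea of coupling pilot flow to the amount of waiting work can be sketched in a few lines. This is an illustration only, under the assumption of a simple proportional policy; the query and submission functions below are hypothetical stand-ins, not AutoPyFactory's real plug-in API.

import time


def activated_jobs(queue: str) -> int:
    """Stand-in for a query to the workload management system (e.g. PanDA)."""
    return {"BNL_PROD": 250, "BNL_ANALY": 40}.get(queue, 0)


def submit_pilots(queue: str, n: int) -> None:
    """Stand-in for a grid/batch submission mechanism (e.g. Condor-G)."""
    print(f"submitting {n} pilots to {queue}")


def factory_cycle(queues, max_per_cycle=100, ratio=0.5):
    """One scheduling pass: send pilots in proportion to waiting work."""
    for q in queues:
        waiting = activated_jobs(q)
        n = min(max_per_cycle, int(waiting * ratio))
        if n > 0:
            submit_pilots(q, n)


if __name__ == "__main__":
    for _ in range(3):        # a few cycles for illustration
        factory_cycle(["BNL_PROD", "BNL_ANALY"])
        time.sleep(1)         # real factories poll on a longer interval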
International Conference on Acoustics, Speech, and Signal Processing | 2009
Mónica F. Bugallo; H. Takai; M. Marx; David Bynum; John Hover
This paper reports on the latest efforts of the MARIACHI program at Stony Brook University, a unique endeavor that detects and studies ultra-high-energy cosmic rays. This is done using a novel detection technique based on radar-like technology and traditional scintillator ground detectors. Using the phenomena of cosmic rays and meteors as vehicles to motivate research and educational activities, innovative hands-on modules in physics, engineering, and cyberinfrastructure based on a learning-by-doing philosophy are offered to high school teachers and students. Participants at all levels are engaged in research projects, seminars, and workshops, where they learn to use the tools needed in MARIACHI by means of mobile technology.
International Conference on Acoustics, Speech, and Signal Processing | 2008
Mónica F. Bugallo; H. Takai; Michael Marx; David Bynum; John Hover
MARIACHI is a unique endeavor that integrates research at the frontier of our knowledge of the universe, with a broad program of training, education, advancement, and mentoring. Its scientific goal is to detect ultra-high-energy cosmic rays whose origin may provide insight into the evolution of the universe. The detection technique is novel and is based on radar-like technology (where signal processing plays a crucial role) and traditional scintillator ground detectors. The wide educational program flows from the research concept and involves students at all levels (high-school, undergraduate and graduate) working with a multidisciplinary team of scientists, engineers and educators.
Journal of Physics: Conference Series | 2010
G. Garzoglio; Ian D. Alderman; Mine Altunay; Rachana Ananthakrishnan; Joe Bester; Keith Chadwick; Vincenzo Ciaschini; Yuri Demchenko; Andrea Ferraro; Alberto Forti; D.L. Groep; Ted Hesselroth; John Hover; Oscar Koeroo; C La Joie; Tanya Levshina; Zachary Miller; Jay Packard; Håkon Sagehaug; I. Sfiligoi; N Sharma; S Timm; Frank Siebenlist; Valerio Venturi; J Weigand
The Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) have a common security model, based on Public Key Infrastructure. Grid resources grant access to users based on their membership in a Virtual Organization (VO) rather than on personal identity. Users push VO membership information to resources in the form of identity attributes, thus declaring that resources will be consumed on behalf of a specific group inside the organizational structure of the VO. Resources contact an access-policy repository, centralized at each site, to grant the appropriate privileges for that VO group. Before the work in this paper, despite the commonality of the model, OSG and EGEE used different protocols for the communication between resources and the policy repositories. Hence, middleware developed for one Grid could not naturally be deployed on the other Grid, since the authorization module of the middleware would have to be enhanced to support the other Grid's communication protocol. In addition, maintenance and support for different authorization call-out protocols represents a duplication of effort for our relatively small community. To address these issues, OSG and EGEE initiated a joint project on authorization interoperability. The project defined a common communication protocol and attribute identity profile for authorization call-out and provided implementation and integration with major Grid middleware. The activity had resonance with middleware development communities, such as the Globus Toolkit and Condor projects, which decided to join the collaboration and contribute requirements and software. In this paper, we discuss the main elements of the profile, its implementation, and its deployment in EGEE and OSG. We focus in particular on the operations of the authorization infrastructures of both Grids.
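The identity attributes pushed by users are expressed as VOMS FQANs ("fully qualified attribute names"), which encode the VO, group path, and role. The sketch below only illustrates that attribute format by parsing an example FQAN; it is not code from the interoperability project, and the example values are illustrative.

def parse_fqan(fqan: str):
    """Split e.g. '/atlas/usatlas/Role=production' into (vo, group, role)."""
    parts = [p for p in fqan.strip("/").split("/") if p]
    role = None
    groups = []
    for p in parts:
        if p.startswith("Role="):
            role = p.split("=", 1)[1]
        else:
            groups.append(p)
    vo = groups[0] if groups else None
    group = "/" + "/".join(groups) if groups else None
    return vo, group, role


if __name__ == "__main__":
    print(parse_fqan("/atlas/usatlas/Role=production"))
    # -> ('atlas', '/atlas/usatlas', 'production')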
Journal of Physics: Conference Series | 2014
P. Nilsson; F Barreiro Megino; J Caballero Bejar; K. De; John Hover; Peter Love; T. Maeno; R Medrano Llamas; R Walker; Torre Wenaus
The Production and Distributed Analysis system (PanDA) has been in use in the ATLAS Experiment since 2005. It uses a sophisticated pilot system to execute submitted jobs on the worker nodes. While originally designed for ATLAS, the PanDA Pilot has recently been refactored to facilitate use outside of ATLAS. Experiments are now handled as plug-ins such that a new PanDA Pilot user only has to implement a set of prototyped methods in the plug-in classes, and provide a script that configures and runs the experiment-specific payload. We will give an overview of the Next Generation PanDA Pilot system and will present major features and recent improvements including live user payload debugging, data access via the Federated XRootD system, stage-out to alternative storage elements, support for the new ATLAS DDM system (Rucio), and an improved integration with glExec, as well as a description of the experiment-specific plug-in classes. The performance of the pilot system in processing LHC data on the OSG, LCG and Nordugrid infrastructures used by ATLAS will also be presented. We will describe plans for future development on the time scale of the next few years.
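The plug-in arrangement described above, in which an experiment implements a set of prototyped methods plus a payload script, can be illustrated with a short sketch. The class and method names here are hypothetical and chosen for clarity; they are not the actual PanDA Pilot interface.

from abc import ABC, abstractmethod
import subprocess


class ExperimentPlugin(ABC):
    """Prototyped methods an experiment-specific plug-in must provide."""

    @abstractmethod
    def setup_command(self, job: dict) -> str:
        """Return the shell command that configures the experiment environment."""

    @abstractmethod
    def payload_command(self, job: dict) -> str:
        """Return the shell command that runs the experiment payload."""

    def run(self, job: dict) -> int:
        # Generic pilot-side logic: set up, then execute the payload.
        cmd = f"{self.setup_command(job)} && {self.payload_command(job)}"
        return subprocess.call(cmd, shell=True)


class DemoExperiment(ExperimentPlugin):
    """Toy plug-in standing in for an experiment-specific implementation."""

    def setup_command(self, job: dict) -> str:
        return "echo setting up release " + job["release"]

    def payload_command(self, job: dict) -> str:
        return job["transformation"] + " " + job["params"]


if __name__ == "__main__":
    job = {"release": "19.2.0", "transformation": "echo", "params": "running payload"}
    DemoExperiment().run(job)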
Journal of Physics: Conference Series | 2014
B Bockelman; J Caballero Bejar; J De Stefano; John Hover; R Quick; Scott Teige
The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is therefore necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, and versioning, and lacks a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for distributing data and software is desirable. The architecture of the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once, in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.
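The install-once, distribute-everywhere idea with server-side quotas, validation, versioning, and activity records can be sketched as follows. This is only an illustration of the described architecture under simplifying assumptions, not the actual OASIS implementation or its underlying file distribution service.

import hashlib
import time


class CentralRepository:
    """Toy model of a central publication point that all client sites mirror."""

    def __init__(self, quota_bytes: int):
        self.quota_bytes = quota_bytes
        self.used = 0
        self.versions = []        # (version, checksum, timestamp)
        self.activity_log = []    # (vo, version, size) records

    def publish(self, vo: str, payload: bytes) -> int:
        """Validate, version, and record one software publication."""
        if self.used + len(payload) > self.quota_bytes:
            raise RuntimeError(f"quota exceeded for {vo}")
        checksum = hashlib.sha256(payload).hexdigest()
        version = len(self.versions) + 1
        self.versions.append((version, checksum, time.time()))
        self.used += len(payload)
        self.activity_log.append((vo, version, len(payload)))
        return version

    def latest(self):
        """The snapshot every client site would see after distribution."""
        return self.versions[-1] if self.versions else None


if __name__ == "__main__":
    repo = CentralRepository(quota_bytes=10_000_000)
    v = repo.publish("osg.example", b"analysis-toolkit-1.0")
    print("published version", v, "->", repo.latest())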
Journal of Physics: Conference Series | 2017
Ryan Taylor; Cristovao Jose Domingues Cordeiro; D. Giordano; John Hover; Tomas Kouba; Peter Love; Andrew McNab; J. Schovancova; Randall Sobie
Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.
Journal of Physics: Conference Series | 2017
B Bockelman; J Caballero Bejar; John Hover
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface external Grid users with local batch systems. These new Compute Elements allow for better handling of job requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world lacks diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join these technologies, allowing users to interact with computing sites through one of the standard Compute Elements, HTCondor-CE, while running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system transparently re-routes end-user jobs into dynamically launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch-system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This allows cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
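The routing decision described above, in which jobs whose platform requirements the static batch nodes cannot satisfy are directed to dynamically launched VM worker nodes, can be sketched as below. The cloud client, platform names, and job fields are hypothetical stand-ins, not the real HTCondor-CE or OpenStack APIs.

STATIC_PLATFORMS = {"SL6-x86_64", "CentOS7-x86_64"}


class FakeCloud:
    """Stand-in for an OpenStack-like service that boots worker-node VMs."""

    def boot_worker(self, image: str) -> str:
        vm_id = f"vm-{image}"
        print(f"booting {vm_id} from image {image}")
        return vm_id


def route_job(job: dict, cloud) -> str:
    """Return where the job should run: the static batch system or a new VM."""
    platform = job["required_platform"]
    if platform in STATIC_PLATFORMS:
        return "static-batch"
    # Requirement unmet locally: launch a VM with the requested platform
    # (or a user-supplied custom image) and route the job there.
    image = job.get("custom_image", platform)
    return cloud.boot_worker(image)


if __name__ == "__main__":
    cloud = FakeCloud()
    print(route_job({"required_platform": "SL6-x86_64"}, cloud))
    print(route_job({"required_platform": "Ubuntu22-x86_64"}, cloud))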