Publication


Featured research published by L. Lueking.


Physical Review Letters | 1994

Enhanced leading production of D± and D*± in 250 GeV π±-nucleon interactions

G.A. Alves; D. R. Green; C. Darling; Roger L. Dixon; M.E. Streetman; M. Souza; D. Passmore; A. Napier; Z. Wu; T. Bernard; A. C. dos Reis; J. Astorga; L. Lueking; Robert Jedicke; W. J. Spalding; A. Rafatian; S. Amato; J. C. Anjos; D. J. Summers; S. Kwan; A. Wallace; Stephen B. Bracker; J. A. Appel; S. F. Takach; A. Santoro; R. H. Milburn; L. Cremaldi; D. Errede; J. M. De Miranda; P. E. Karchin

A leading charm meson is one with longitudinal momentum fraction x_F > 0 whose light quark (or antiquark) is of the same type as one of the quarks in the beam particle. We report on the production asymmetry A = [σ(leading) − σ(nonleading)] / [σ(leading) + σ(nonleading)] as a function of x_F. The data consist of 1500 fully reconstructed D± and D*± decays in Fermilab experiment E769. We find a significant asymmetry; such an asymmetry in the production of charm quarks is not expected in perturbative quantum chromodynamics.
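
To make the asymmetry definition above concrete, here is a minimal sketch that evaluates A in bins of x_F; the event counts are hypothetical placeholders, not E769 data.

```python
# Minimal sketch of the production asymmetry defined above,
# A = [sigma(leading) - sigma(nonleading)] / [sigma(leading) + sigma(nonleading)],
# evaluated per x_F bin from event counts. The counts below are
# hypothetical placeholders, not E769 data.

def asymmetry(n_leading: int, n_nonleading: int) -> float:
    """Return A for one x_F bin; returns 0.0 if the bin is empty."""
    total = n_leading + n_nonleading
    return (n_leading - n_nonleading) / total if total else 0.0

# Hypothetical counts per x_F bin: (leading, nonleading)
bins = {(0.0, 0.2): (420, 390), (0.2, 0.4): (160, 110), (0.4, 0.6): (45, 20)}
for (lo, hi), (n_lead, n_non) in bins.items():
    print(f"x_F in [{lo}, {hi}): A = {asymmetry(n_lead, n_non):+.2f}")
```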


Prepared for International Conference on Computing in High Energy and Nuclear Physics (CHEP 07), Victoria, BC, Canada, 2-7 Sep 2007 | 2008

The CMS dataset bookkeeping service

Anzar Afaq; Andrew Joseph Dolgert; Y. Guo; C. D. Jones; S. Kosyakov; V. E. Kuznetsov; L. Lueking; D. Riley; Vijay Sekhri

The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
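
As a rough illustration of programmatic access to a catalog service of this kind, a hypothetical HTTP client sketch is shown below; the endpoint path and query parameters are assumptions for illustration, not the actual DBS API.

```python
# Hypothetical sketch of a thin HTTP client for a DBS-like catalog service.
# The endpoint path and query parameters are illustrative assumptions,
# not the real CMS DBS interface.
import json
import urllib.parse
import urllib.request

def list_datasets(base_url: str, pattern: str):
    """Query a catalog service over HTTP and return the decoded JSON reply."""
    query = urllib.parse.urlencode({"dataset": pattern})
    with urllib.request.urlopen(f"{base_url}/datasets?{query}") as resp:
        return json.loads(resp.read().decode())

# Example call against a placeholder server URL:
# datasets = list_datasets("https://example.invalid/dbs", "/MinBias/*/RECO")
```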


High Performance Distributed Computing | 2001

Distributed data access and resource management in the D0 SAM system

Igor Terekhov; R. Pordes; Victoria White; L. Lueking; L. Carpenter; H. Schellman; J. Trumbo; Siniša Veseli; Matt Vranicar; S. White

SAM (Sequential Access through Meta-data) is the data access and job management system for the D0 high energy physics experiment at Fermilab. The SAM system is being developed and used to handle the Petabyte-scale experiment data, accessed by hundreds of D0 collaborators scattered around the world. In this paper, we present solutions to some of the distributed data processing problems from the perspective of real experience dealing with mission-critical data. We concentrate on the distributed disk caching, resource management and job control. The system has elements of Grid computing and has features applicable to data-intensive computing in general.
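
To picture the distributed disk caching discussed above, a minimal sketch of a size-bounded file cache with least-recently-used eviction follows; the policy and data structures are illustrative assumptions, not SAM internals.

```python
# Minimal sketch of a size-bounded file cache with least-recently-used
# eviction, to picture the kind of disk caching discussed above. This is
# an illustrative assumption, not the actual SAM cache implementation.
from collections import OrderedDict

class FileCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = OrderedDict()  # filename -> size in bytes

    def access(self, name: str, size: int) -> bool:
        """Return True on a cache hit; otherwise stage the file, evicting LRU entries."""
        if name in self.files:
            self.files.move_to_end(name)   # mark as most recently used
            return True
        while self.files and self.used + size > self.capacity:
            _, evicted_size = self.files.popitem(last=False)
            self.used -= evicted_size
        self.files[name] = size
        self.used += size
        return False

cache = FileCache(capacity_bytes=10**9)
cache.access("raw/run1234.dat", 250_000)   # miss: file is staged in
cache.access("raw/run1234.dat", 250_000)   # hit: served from local disk
```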


Journal of Physics: Conference Series | 2010

Greatly improved cache update times for conditions data with Frontier/Squid

Dave Dykstra; L. Lueking

The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome including database and Squid bugs.
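
The If-Modified-Since revalidation the scheme relies on can be sketched in a few lines: the client resends the Last-Modified value it saw previously, and the server answers 304 Not Modified if nothing changed. The URL and timestamp below are placeholders; this is not Frontier code.

```python
# Sketch of HTTP If-Modified-Since revalidation as used by the caching
# scheme described above. The URL and timestamp are placeholders.
import urllib.request
from urllib.error import HTTPError

def revalidate(url: str, last_modified: str) -> str:
    req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
    try:
        with urllib.request.urlopen(req) as resp:
            return f"updated payload, {len(resp.read())} bytes"
    except HTTPError as err:
        if err.code == 304:
            return "not modified: cached copy is still valid"
        raise

# Example (placeholder URL):
# print(revalidate("https://example.invalid/conditions", "Sat, 01 May 2010 12:00:00 GMT"))
```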


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2003

The SAM-GRID project: architecture and plan

A. Baranovski; G. Garzoglio; H. Koutaniemi; L. Lueking; S. Patil; R. Pordes; A. Rana; Igor Terekhov; S. Veseli; J. Yu; R. Walker; V. White

SAM is a robust distributed file-based data management and access service, fully integrated with the D0 experiment at Fermilab and under evaluation at the CDF experiment. The goal of the SAM-Grid project is to fully enable distributed computing for the experiments. The architecture of the project is composed of three primary functional blocks: job handling, data handling, and monitoring and information services. Job handling and monitoring/information services are built on top of standard grid technologies (Condor-G/Globus Toolkit), which are integrated with the data handling system provided by SAM. The plan provides users with incrementally increasing levels of capability over the next two years.


Journal of Grid Computing | 2010

Distributed analysis in CMS

A. Fanfani; Anzar Afaq; Jose Afonso Sanches; Julia Andreeva; Giuseppe Bagliesi; L. A. T. Bauerdick; Stefano Belforte; Patricia Bittencourt Sampaio; K. Bloom; Barry Blumenfeld; D. Bonacorsi; C. Brew; Marco Calloni; Daniele Cesini; Mattia Cinquilli; G. Codispoti; Jorgen D’Hondt; Liang Dong; Danilo N. Dongiovanni; Giacinto Donvito; David Dykstra; Erik Edelmann; R. Egeland; P. Elmer; Giulio Eulisse; D. Evans; Federica Fanzago; F. M. Farina; Derek Feichtinger; I. Fisk

The CMS experiment expects to manage several petabytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, supporting a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations in preparation for CMS distributed analysis are presented, followed by the user experience in current analysis activities.


Grid Computing | 2001

The D0 Experiment Data Grid - SAM

L. Lueking; L. Loebel-Carpenter; Wyatt Merritt; C. Moore; R. Pordes; Igor Terekhov; Siniša Veseli; Matt Vranicar; S. White; Victoria White

SAM (Sequential Access through Meta-data) is a data grid and data cataloging system developed for the D0 high energy physics (HEP) experiment at Fermilab. Since March 2001, D0 has been acquiring data in real time from the detector and will archive up to 1/2 petabyte a year of simulated, raw detector and processed physics data. SAM catalogs the event and calibration data, provides distributed file delivery and caching services, and manages the processing and analysis jobs for the hundreds of D0 collaborators around the world. The D0 applications are data-intensive, and the physics analysis programs execute on the order of 1-1000 CPU seconds per 250 KByte of data. SAM manages the transfer of data between the archival storage systems through the globally distributed disk caches and delivers the data files to the users' batch and interactive jobs. Additionally, SAM handles the user job requests and execution scheduling, and manages the use of the available compute, storage and network resources to implement experiment resource allocation policies. D0 has been using early versions of the SAM system for two years for the management of the simulation and test data. The system is in production use with round-the-clock support. D0 is a participant in the Particle Physics Data Grid (PPDG) project. Aspects of the ongoing SAM developments are in collaboration with the computer science groups and other experiments on PPDG. The D0 emphasis is to develop the more sophisticated global grid job, resource management, authentication and information services needed to fully meet the needs of the experiment during the next 6 years of data taking and analysis.


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 1991

Use of a transition radiation detector in a beam of high energy hadrons

D. Errede; M. Sheaff; H. Fenker; L. Lueking; P. Mantsch; Robert Jedicke

A transition radiation detector (TRD) comprising 24 identical radiator plus xenon proportional chamber assemblies was built at Fermilab and operated successfully to separate pions from protons and kaons in a 250 GeV incident hadron beam (γ_π ∼ 1800) at rates up to, and sometimes even exceeding, 2 MHz. The detector was capable of identifying pions with efficiencies exceeding 90% and with ≤ 3% contamination due to the nearly equally copious protons and kaons in the beam.


Journal of Physics: Conference Series | 2010

The CMS DBS query language

V. E. Kuznetsov; D. Riley; Anzar Afaq; Vijay Sekhri; Y. Guo; L. Lueking

The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this, we have added a generalized query system in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use, without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We describe the design of the query system, provide details of the language components, and give an overview of how this component fits into the overall data discovery system architecture.
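
The join-discovery idea described above, finding a chain of joinable tables in a graph representation of the schema, can be sketched as follows; the toy schema is an assumption for illustration, not the real DBS schema.

```python
# Toy sketch of join discovery: tables are graph nodes, foreign-key
# relations are edges, and a join path between the tables a query touches
# is found with breadth-first search. The schema is a made-up example,
# not the real DBS schema.
from collections import deque

SCHEMA_EDGES = {                       # hypothetical foreign-key links
    "dataset": ["block", "tier"],
    "block": ["dataset", "file"],
    "tier": ["dataset"],
    "file": ["block", "run"],
    "run": ["file"],
}

def join_path(start: str, goal: str):
    """Return the shortest chain of tables to join, or None if unconnected."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in SCHEMA_EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(join_path("dataset", "run"))     # ['dataset', 'block', 'file', 'run']
```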


Advanced Computing and Analysis Techniques in Physics Research: VII International Workshop (ACAT 2000) | 2002

SAM for D0—a fully distributed data access system

Igor Terekhov; Victoria White; L. Lueking; L. Carpenter; H. Schellman; J. Trumbo; Siniša Veseli; Matt Vranicar

The SAM (Sequential Access through Meta-data) system is being built as a distributed cache management and data access layer for the D0 experiment at Fermilab. The innovation of the project is the fully distributed architecture of the system which is designed to be deployable worldwide. It uses a central database for the meta-data and a hierarchy of CORBA servers for the actual data movement. SAM provides distributed disk caching, data routing and replication and therefore has components attractive to the Grid.

Collaboration


Dive into L. Lueking's collaboration.

Top Co-Authors

D. Errede
University of Wisconsin-Madison

L. Cremaldi
University of Mississippi

A. Rafatian
University of Mississippi