Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where K. Biery is active.

Publication


Featured research published by K. Biery.


Physical Review Letters | 1993

Measurement of bottom quark production in 1.8 TeV pp̄ collisions using semileptonic decay muons

F. Abe; M. Albrow; D. Amidei; C. Anway-Wiese; G. Apollinari; M. Atac; P. Auchincloss; P. Azzi; N. Bacchetta; A. Baden; W. Badgett; M. W. Bailey; A. Bamberger; de Barbaro P; A. Barbaro-Galtieri; V. E. Barnes; B. A. Barnett; G. Bauer; T. Baumann; F. Bedeschi; S. Behrends; S. Belforte; G. Bellettini; J. Bellinger; D. Benjamin; J. Benlloch; J. Bensinger; A. Beretvas; J. P. Berge; S. Bertolucci

We present a measurement of the b-quark cross section in 1.8 TeV pp̄ collisions recorded with the Collider Detector at Fermilab using muonic b-quark decays. In the central rapidity region (|y^b| < 1.0), the cross section is 295 ± 21 ± 75 nb (59 ± 14 ± 15 nb) for p_T^b > 21 GeV/c (29 GeV/c). Comparisons are made to previous measurements and next-to-leading-order QCD calculations.


Journal of Physics: Conference Series | 2010

The CMS data acquisition system software

Gerry Bauer; U Behrens; K. Biery; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh; Luciano Orsini; V Patras; Christoph Paus

The CMS data acquisition system is made of two major subsystems: event building and event filter. This paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies on industry-standard networks and processing equipment. Adopting a single software infrastructure in all subsystems of the experiment nevertheless imposes a number of different requirements, among the most important of which are high efficiency and configuration flexibility. The XDAQ software infrastructure has matured over an eight-year development and testing period and has been shown to cope well with the requirements of the CMS experiment.
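As a rough illustration of the two subsystems named above, the sketch below (in Python, purely hypothetical; the real system is built on the C++ XDAQ framework) assembles event fragments from several sources and passes complete events to a filter stage. All class and field names are invented for the example.

```python
# Hypothetical sketch of the two DAQ subsystems named in the abstract:
# event building (assembling fragments from many sources into complete
# events) and event filtering (selecting events for storage). Names are
# illustrative; this is not the XDAQ API.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Fragment:
    event_id: int     # identifies which collision the data belongs to
    source_id: int    # which readout unit produced it
    payload: bytes


@dataclass
class EventBuilder:
    n_sources: int                          # fragments expected per event
    pending: dict = field(default_factory=dict)

    def add_fragment(self, frag: Fragment):
        """Collect fragments; return the full event once all sources reported."""
        frags = self.pending.setdefault(frag.event_id, {})
        frags[frag.source_id] = frag.payload
        if len(frags) == self.n_sources:
            return self.pending.pop(frag.event_id)   # complete event
        return None


def event_filter(event: dict, accept: Callable[[dict], bool]):
    """Event-filter stage: keep only events passing the selection."""
    return event if accept(event) else None


# Toy usage: two sources, accept events whose total payload exceeds 4 bytes.
builder = EventBuilder(n_sources=2)
for frag in [Fragment(1, 0, b"ab"), Fragment(1, 1, b"cde")]:
    event = builder.add_fragment(frag)
    if event is not None:
        kept = event_filter(event, lambda e: sum(len(p) for p in e.values()) > 4)
        print("accepted" if kept else "rejected")
```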


Journal of Instrumentation | 2009

Commissioning of the CMS High Level Trigger

L Agostino; Gerry Bauer; Barbara Beccati; Ulf Behrens; J Berryhil; K. Biery; T. Bose; Angela Brett; James G Branson; E. Cano; H.W.K. Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; B. Dahmes; Christian Deldicque; E Dusinberre; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D. Hatton; J Laurens; C. Loizides; F. Meijers; E. Meschi; A. Meyer; R. Mommsen; R. Moser

The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates of up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic-ray data-taking campaigns are discussed in detail. We conclude with a description of the HLT operations with the first circulating LHC beams before the incident that occurred on 19 September 2008.
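The two-level structure can be illustrated with a small, hypothetical sketch: a cheap Level-1-style cut applied first, followed by more detailed HLT-style selection paths on the surviving events. The thresholds, event fields and paths below are invented, not actual CMS trigger criteria.

```python
# Hypothetical illustration of a two-level trigger: a fast Level-1 decision
# on a coarse quantity, followed by a software High Level Trigger running
# several selection paths on the survivors. All values are invented.
from typing import Iterable


def level1_accept(event: dict) -> bool:
    """Cheap hardware-like decision: a single coarse threshold."""
    return event["coarse_et"] > 20.0          # GeV, illustrative value


def hlt_accept(event: dict) -> bool:
    """Software HLT: run several (here trivial) selection paths."""
    paths = [
        lambda e: e["muon_pt"] > 10.0,         # single-muon path (illustrative)
        lambda e: e["jet_et"] > 50.0,          # single-jet path (illustrative)
    ]
    return any(path(event) for path in paths)  # accept if any path fires


def run_trigger(events: Iterable[dict]) -> list[dict]:
    """Apply Level-1 first so the expensive HLT only sees accepted events."""
    return [e for e in events if level1_accept(e) and hlt_accept(e)]


events = [
    {"coarse_et": 35.0, "muon_pt": 12.0, "jet_et": 20.0},  # passes L1 and HLT
    {"coarse_et": 15.0, "muon_pt": 40.0, "jet_et": 90.0},  # rejected at L1
]
print(len(run_trigger(events)))  # -> 1
```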


Journal of Physics: Conference Series | 2010

Monitoring the CMS data acquisition system

Gerry Bauer; U Behrens; K. Biery; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh; Luciano Orsini; V Patras; Christoph Paus

The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composite values made available by the software components and hardware devices. A key aspect is that deviations from a specified behaviour are detected in a timely manner, which is a prerequisite for taking corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of web-service-based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture that splits data collection into regions of collectors. An implementation following this scheme is deployed as the monitoring infrastructure of the CMS experiment at the Large Hadron Collider. All services in this distributed data acquisition system provide standard web service interfaces via XML, SOAP and HTTP [15,22]. Continuing on this path, we adopted WS-* standards and implemented a monitoring system layered on top of the W3C standards stack. We designed a load-balanced publisher/subscriber system with the ability to include high-speed protocols [10,12] for efficient data transmission [11,13,14], serving data in multiple formats.
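A minimal sketch of the hierarchical collection idea follows: services publish metric updates to a regional collector, and a central monitor only queries the regions. This is an illustration of the decentralised layout described above, not the actual WS-*-based implementation; all names and data shapes are assumptions.

```python
# Hypothetical sketch of hierarchical monitoring collection: services publish
# values to a regional collector; a central monitor subscribes only to the
# regional collectors. Names and data shapes are invented.
class RegionalCollector:
    def __init__(self, name: str):
        self.name = name
        self.latest = {}                       # service name -> last reported metrics

    def publish(self, service: str, metrics: dict):
        self.latest[service] = metrics

    def snapshot(self) -> dict:
        """Aggregate the region, e.g. count services reporting an error state."""
        errors = sum(1 for m in self.latest.values() if m.get("state") == "error")
        return {"region": self.name, "services": len(self.latest), "errors": errors}


class CentralMonitor:
    def __init__(self, regions):
        self.regions = regions                 # never talks to individual services

    def collect(self) -> list:
        return [r.snapshot() for r in self.regions]


# Toy usage: two regions with a few services each.
daq, hlt = RegionalCollector("daq"), RegionalCollector("hlt")
daq.publish("ru-builder-01", {"state": "ok", "rate_hz": 99500})
daq.publish("ru-builder-02", {"state": "error", "rate_hz": 0})
hlt.publish("filter-unit-01", {"state": "ok", "rate_hz": 310})
print(CentralMonitor([daq, hlt]).collect())
```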


Journal of Physics: Conference Series | 2010

The CMS online cluster: IT for a large data acquisition and control cluster

Gerry Bauer; B Beccati; U Behrens; K. Biery; Angela Brett; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; C Loizides; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh

The CMS online cluster consists of more than 2000 computers running about 10000 application instances. These applications implement the control of the experiment, the event building, the high level trigger, the online database, and the control of the buffering and transferring of data to the Central Data Recording at CERN. In this paper, the IT solutions employed to fulfil the requirements of such a large cluster are reviewed. Details are given on the chosen network structure, the configuration management system, the monitoring infrastructure, and the implementation of high availability for the services and infrastructure.


Journal of Physics: Conference Series | 2010

The CMS event builder and storage system

Gerry Bauer; B Beccati; U Behrens; K. Biery; Angela Brett; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; M. Klute; J-F Laurens; C Loizides; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R K Mommsen; R Moser; V O'Dell

The CMS event builder assembles events accepted by the first-level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregate throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of two stages: an initial pre-assembly that reduces the number of fragments by one order of magnitude, and a final assembly by several independent readout-builder (RU-builder) slices. The RU-builder is based on three separate services: the buffering of event fragments during assembly, the event assembly itself, and the data flow manager. A further component is responsible for handling events accepted by the high-level trigger: the storage manager (SM) temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the SM to online monitoring clients. We discuss the operational experience from the first months of reading out cosmic-ray data with the complete CMS detector.
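The quoted figures imply some useful back-of-envelope numbers: 100 GB/s at 100 kHz corresponds to roughly 1 MB per event on average, i.e. about 2 kB per fragment from each of the ~500 sources, and about 50 super-fragments of ~20 kB each after the order-of-magnitude pre-assembly. The short calculation below just reproduces this arithmetic; the averages are illustrative, not official design values.

```python
# Back-of-envelope numbers implied by the abstract: 100 GB/s aggregate
# throughput at a 100 kHz input rate from ~500 sources, with a pre-assembly
# stage that reduces the fragment count by one order of magnitude.
# Rough averages for illustration only.
throughput_bytes_per_s = 100e9      # 100 GB/s
event_rate_hz = 100e3               # 100 kHz
n_sources = 500
reduction_factor = 10               # "one order of magnitude" pre-assembly

avg_event_size = throughput_bytes_per_s / event_rate_hz        # ~1 MB/event
avg_fragment_size = avg_event_size / n_sources                 # ~2 kB/fragment
n_super_fragments = n_sources / reduction_factor               # ~50 per event
avg_super_fragment_size = avg_event_size / n_super_fragments   # ~20 kB each

print(f"{avg_event_size/1e6:.1f} MB/event, "
      f"{avg_fragment_size/1e3:.1f} kB/fragment, "
      f"{n_super_fragments:.0f} super-fragments of "
      f"{avg_super_fragment_size/1e3:.0f} kB")
```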


IEEE Transactions on Nuclear Science | 2012

First Operational Experience With a High-Energy Physics Run Control System Based on Web Technologies

Gerry Bauer; Barbara Beccati; Ulf Behrens; K. Biery; James G Branson; Sebastian Bukowiec; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa Perez; Christian Deldicque; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Derek Hatton; Yi Ling Hwong; Constantin Loizides; Frank Ma; L. Masetti; F. Meijers; E. Meschi; Andreas Meyer; Remigius K. Mommsen; R. Moser; V. O'Dell; A. Oh; Luciano Orsini

Run control systems of modern high-energy particle physics experiments have requirements similar to those of today's Internet applications. The Compact Muon Solenoid (CMS) collaboration at CERN's Large Hadron Collider (LHC) therefore decided to build the run control system for its detector on web technologies. The system is composed of Java web applications distributed over a set of Apache Tomcat servlet containers that connect to a database back-end. Users interact with the system through a web browser. This paper reports on the successful scaling of the system from a small test setup to the production data acquisition system, which comprises around 10,000 applications running on a cluster of about 1600 hosts. We report on operational aspects during the first phase of operation with colliding beams, including performance, stability, integration with the CMS Detector Control System, and tools to guide the operator.


Physical Review Letters | 1995

Observation of Top Quark Production in p̄p Collisions with the Collider Detector at Fermilab

F. Abe; H. Akimoto; A. Akopian; M. Albrow; Amendolia; D. Amidei; J. Antos; C. Anway-Wiese; S. Aota; G. Apollinari; T. Asakawa; W. Ashmanskas; M. Atac; P. Auchincloss; F. Azfar; P. Azzi-Bacchetta; N. Bacchetta; W. Badgett; S. Bagdasarov; M. W. Bailey; J. Bao; de Barbaro P; A. Barbaro-Galtieri; V. E. Barnes; B. A. Barnett; P. Bartalini; G. Bauer; T. Baumann; F. Bedeschi; S. Behrends

We establish the existence of the top quark using a 67 pb⁻¹ data sample of p̄p collisions at √s = 1.8 TeV collected with the Collider Detector at Fermilab (CDF). Employing techniques similar to those we previously published, we observe a signal consistent with tt̄ decay to WWbb̄, but inconsistent with the background prediction by 4.8σ. Additional evidence for the top quark is provided by a peak in the reconstructed mass distribution. We measure the top quark mass to be 176 ± 8(stat) ± 10(syst) GeV/c², and the tt̄ production cross section to be 6.8 +3.6 −2.4 pb.


Journal of Physics: Conference Series | 2011

An analysis of the control hierarchy modelling of the CMS detector control system

Yi Ling Hwong; Tim A. C. Willemse; Vincent J. J. Kusters; Gerry Bauer; Barbara Beccati; Ulf Behrens; K. Biery; Olivier Bouffet; James G Branson; Sebastian Bukowiec; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; A. Holzner; Derek Hatton; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; R. Moser; V. O'Dell; Luciano Orsini

The supervisory level of the Detector Control System (DCS) of the CMS experiment is implemented using Finite State Machines (FSMs), which model the behaviour and control the operation of all the sub-detectors and support services. The FSM tree of the whole CMS experiment consists of more than 30,000 nodes. An analysis of a system of such size is a complex task, but it is a crucial step towards improving the overall performance of the FSM system. This paper presents the analysis of the CMS FSM system using the micro Common Representation Language 2 (mCRL2) methodology. Individual mCRL2 models are obtained for the FSM systems of the CMS sub-detectors using the ASF+SDF automated translation tool. Different mCRL2 operations are applied to the mCRL2 models. An mCRL2 simulation tool is used to examine the system more closely, and visualization based on the exploration of the state space is enabled with an mCRL2 tool. Requirements such as command and state propagation are expressed using modal mu-calculus and checked using a model-checking algorithm. For checking local requirements such as freedom from endless loops, the Bounded Model Checking technique is applied. This paper discusses these analysis techniques and presents the results of their application to the CMS FSM system.
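As a toy illustration of the "command and state propagation" requirement, the sketch below models a miniature FSM tree in Python and checks by brute force that a command issued at the root drives every node into the commanded state. This only mimics the intent of the modal mu-calculus properties checked with mCRL2; the tree, node names and transition rule are invented for the example.

```python
# Toy illustration of command/state propagation on a small FSM tree: a
# command sent to the root should eventually drive every node to the
# commanded state. This stands in for the mCRL2-based verification in the
# paper; the tree and its transition rule are invented.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    state: str = "OFF"
    children: list = field(default_factory=list)

    def command(self, target_state: str):
        """Propagate a command down the tree, then update this node's state."""
        for child in self.children:
            child.command(target_state)
        self.state = target_state

    def all_in(self, state: str) -> bool:
        return self.state == state and all(c.all_in(state) for c in self.children)


def count(node: Node) -> int:
    return 1 + sum(count(c) for c in node.children)


# A miniature control tree: one supervisor with two sub-detector branches.
root = Node("CMS", children=[
    Node("Tracker", children=[Node("TrackerPS")]),
    Node("ECAL", children=[Node("ECALPS")]),
])

root.command("ON")
assert root.all_in("ON"), "command propagation failed"
print("command propagated to all", count(root), "nodes")
```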


20th International Conference on Computing in High Energy and Nuclear Physics, CHEP 2013 | 2014

The NOνA data acquisition system

Jaroslav Zálešák; K. Biery; Gerald Guglielmo; A. Habig; R. Illingworth; S. M. S. Kasahara; Rick Kwarciany; Qiming Lu; Gennadiy Lukhanin; S. Magill; Mark Mathis; H. Meyer; Adam Moren; Leon Mualem; Mathew Muether; A. Norman; J. Paley; D. Perevalov; Luciano Piccoli; Ronald Rechenmacher; P. Shanahan; Louise Suter; Abigail Waldron

The NOνA experiment is a long-baseline neutrino experiment designed to make measurements to determine the neutrino mass hierarchy, neutrino mixing parameters and CP violation in the neutrino sector. In order to make these measurements the NOνA collaboration has designed a highly distributed, synchronized, continuous digitization and readout system that is able to acquire and correlate data from the Fermilab accelerator complex (NuMI), the NOνA near detector at the Fermilab site and the NOνA far detector which is located 810 km away at Ash River, MN. This system has unique properties that let it fully exploit the physics capabilities of the NOνA detector. The design of the NOνA DAQ system and its capabilities are discussed in this paper.
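As a hypothetical illustration of the time-based correlation such a readout enables, the sketch below tags continuously digitized data blocks with timestamps in a common timebase and selects the blocks from either detector that overlap a beam-spill window. The window size, data layout and values are invented for the example.

```python
# Hypothetical sketch of correlating continuously digitized data blocks from
# the near and far detectors with a NuMI beam spill via a time window.
# Window size and data layout are invented for illustration.
from dataclasses import dataclass


@dataclass
class DataBlock:
    detector: str     # "near" or "far"
    t_start: float    # seconds, in a common synchronized timebase
    t_end: float


def blocks_for_spill(blocks, spill_time: float, window: float = 0.5e-3):
    """Return blocks overlapping [spill_time, spill_time + window]."""
    lo, hi = spill_time, spill_time + window
    return [b for b in blocks if b.t_end >= lo and b.t_start <= hi]


blocks = [
    DataBlock("near", 1000.0000, 1000.0050),
    DataBlock("far",  1000.0002, 1000.0052),
    DataBlock("far",  1005.0000, 1005.0050),   # unrelated to this spill
]
print(len(blocks_for_spill(blocks, spill_time=1000.0001)))  # -> 2
```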

Collaboration


Dive into K. Biery's collaborations.

Top Co-Authors

T. Baumann

Michigan State University

G. Bauer

Massachusetts Institute of Technology
