Thomas LeCompte
Argonne National Laboratory
Publications
Featured research published by Thomas LeCompte.
Physical Review D | 2011
Thomas LeCompte; Stephen P. Martin
Many theoretical and experimental results on the reach of the Large Hadron Collider are based on the mSUGRA-inspired scenario with universal soft supersymmetry-breaking parameters at the apparent gauge coupling unification scale. We study signals for supersymmetric models in which the sparticle mass range is compressed compared to mSUGRA, using cuts like those employed by ATLAS for 2010 data. The acceptance and the cross section times acceptance are found for several model lines that employ a compression parameter to interpolate smoothly between the mSUGRA case and the extreme case of degenerate gaugino masses at the weak scale. For models with moderate compression, the reach is not much worse than in the mSUGRA case, and can even be substantially better. For very compressed mass spectra, the acceptances are drastically reduced, especially when a more stringent effective-mass cut is chosen.
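As a rough illustration of how a single compression parameter can interpolate between an mSUGRA-like spectrum and degenerate gaugino masses, the sketch below linearly mixes the two limits; the 1:2:6 ratio, the linear form, and the function name are assumptions for illustration only, not the paper's exact model-line definition.

def gaugino_masses(c, m_gluino=500.0):
    # Illustrative only: interpolate weak-scale gaugino masses between an
    # mSUGRA-like pattern (c = 0) and full degeneracy (c = 1).
    msugra = {"M1": m_gluino / 6.0, "M2": m_gluino / 3.0, "M3": m_gluino}
    degenerate = {name: m_gluino for name in msugra}   # extreme compression
    return {name: (1.0 - c) * msugra[name] + c * degenerate[name] for name in msugra}

# c = 0 reproduces the mSUGRA-like hierarchy; c = 1 gives degenerate gauginos.
print(gaugino_masses(0.0), gaugino_masses(1.0))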
Physical Review D | 2012
Thomas LeCompte; Stephen P. Martin
We study the reach of the Large Hadron Collider with 1 fb⁻¹ of data at √s=7 TeV for several classes of supersymmetric models with compressed mass spectra, using jets and missing transverse energy cuts like those employed by ATLAS for summer 2011 data. In the limit of extreme compression, the best limits come from signal regions that do not require more than 2 or 3 jets and that remove backgrounds by requiring more missing energy rather than a higher effective mass.
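For orientation, the effective mass in such searches is typically the scalar sum of the leading-jet transverse momenta and the missing transverse energy; the sketch below applies a generic selection of that type. The thresholds and jet multiplicities are placeholders, not the ATLAS signal-region values.

def effective_mass(jet_pts, met, n_jets=3):
    # Scalar sum of the leading n_jets jet pT values plus missing ET (GeV).
    return sum(sorted(jet_pts, reverse=True)[:n_jets]) + met

def passes_selection(jet_pts, met, met_cut=300.0, meff_cut=500.0, n_jets=2):
    # Generic compressed-spectrum-style selection: few jets, a hard missing-ET
    # requirement, and a modest effective-mass cut (placeholder thresholds).
    leading = sorted(jet_pts, reverse=True)
    return (len(leading) >= n_jets
            and met > met_cut
            and effective_mass(jet_pts, met, n_jets) > meff_cut)

# Example: three jets of 120, 80 and 40 GeV with 350 GeV of missing transverse energy.
print(passes_selection([120.0, 80.0, 40.0], 350.0))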
Journal of Physics: Conference Series | 2015
J. T. Childers; Thomas D. Uram; Thomas LeCompte; Michael E. Papka; Doug Benjamin
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth-fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. Through analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. With these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
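The workflow pattern described here (many independent MPI ranks, per-rank random seeds, and generation coupled to unweighting within a single job so weighted events never touch the filesystem) can be sketched as follows; the toy generator, seed scheme, and event counts are illustrative stand-ins, not the actual Alpgen port.

from mpi4py import MPI          # assumes an MPI environment with mpi4py available
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank generates its own statistically independent stream of weighted
# events (a stand-in for the Alpgen generation phase) and unweights them in
# the same job, avoiding intermediate weighted-event files.
rng = random.Random(12345 + rank)            # simple per-rank seed

def generate_weighted_event(rng):
    # Toy stand-in for matrix-element generation: returns (event, weight).
    x = rng.random()
    return x, 3.0 * x * x                    # weight in [0, 3]

w_max = 3.0                                  # assumed known maximum weight
kept = []
for _ in range(100000):
    event, weight = generate_weighted_event(rng)
    if rng.random() < weight / w_max:        # hit-or-miss unweighting
        kept.append(event)

total = comm.reduce(len(kept), op=MPI.SUM, root=0)
if rank == 0:
    print(f"{total} unweighted events from {size} ranks")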
Computer Physics Communications | 2017
J. T. Childers; Thomas D. Uram; Thomas LeCompte; Michael E. Papka; Doug Benjamin
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Journal of Physics: Conference Series | 2015
Thomas D. Uram; J. T. Childers; Thomas LeCompte; Michael E. Papka; Doug Benjamin
HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the 10-petaFLOPS supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small-scale integration phase and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system are discussed.
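A plugin layer of the kind described for Balsam typically exposes a small common interface that each scheduler backend implements; the class and method names below are hypothetical illustrations of that pattern, not the actual Balsam API.

from abc import ABC, abstractmethod

class SchedulerPlugin(ABC):
    # Hypothetical common interface; each backend wraps one scheduler's commands.

    @abstractmethod
    def submit(self, script_path: str) -> str:
        """Submit a job script and return the scheduler's job id."""

    @abstractmethod
    def status(self, job_id: str) -> str:
        """Return a normalized state such as 'queued', 'running', or 'done'."""

class CobaltPlugin(SchedulerPlugin):
    def submit(self, script_path: str) -> str:
        # Would invoke the Cobalt submission command here and parse the job id.
        raise NotImplementedError

    def status(self, job_id: str) -> str:
        # Would query Cobalt here and map its state to the normalized names.
        raise NotImplementedError

A workflow manager such as ARGO could then hold one such plugin per target resource and dispatch the serial and parallel stages of a job to the appropriate one.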
Proceedings of the Fourth International Workshop on HPC User Support Tools | 2017
J. Taylor Childers; Thomas D. Uram; Doug Benjamin; Thomas LeCompte; Michael E. Papka
Large experimental collaborations, such as those at the Large Hadron Collider at CERN, have developed large job management systems running hundreds of thousands of jobs across worldwide computing grids. HPC facilities are becoming more important to these data-intensive workflows, and integrating them into the experiment job management systems is non-trivial due to increased security requirements and heterogeneous computing environments. This article describes a common edge service developed and deployed on DOE supercomputers for both small users and large collaborations. The edge service provides a uniform interaction across many different supercomputers. Example users are described along with the related performance.
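One way to picture the uniform interaction such an edge service offers is a site-independent job description that the service translates into each facility's local submission mechanics; the field names and values below are hypothetical illustrations, not the service's actual interface.

from dataclasses import dataclass, field

@dataclass
class JobSpec:
    # Hypothetical site-independent job description an edge service might accept.
    executable: str
    args: list = field(default_factory=list)
    input_files: list = field(default_factory=list)
    nodes: int = 1
    ranks_per_node: int = 1
    wall_minutes: int = 60
    site: str = "argonne"    # the service maps this to site-specific submission

job = JobSpec(executable="generate_events", input_files=["input.dat"],
              nodes=128, ranks_per_node=16, wall_minutes=120)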
Journal of Physics: Conference Series | 2014
Thomas D. Uram; Thomas LeCompte; Doug Benjamin
A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and offer a considerably less rich operating environment than is in common use in HEP, but they also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential of making use of leadership-class computing, including the status of integration with the ATLAS production system, are discussed.
Physical Review Letters | 2007
A. Abulencia; Darin Acosta; J. Adelman; T. Affolder; T. Akimoto; Robert Blair; K. L. Byrum; S. E. Kuhlmann; Thomas LeCompte; Lawrence Nodulman; Judith G. Proudfoot; M. Tanaka; R. G. Wagner; A. B. Wicklund
Physics Letters B | 2006
A. Abulencia; Darin Acosta; J. Adelman; T. Affolder; T. Akimoto; Robert Blair; K. L. Byrum; S. E. Kuhlmann; Thomas LeCompte; Lawrence Nodulman; Judith G. Proudfoot; M. Tanaka; R. G. Wagner; A. B. Wicklund
Physical Review Letters | 2006
A. Abulencia; Darin Acosta; J. Adelman; T. Affolder; T. Akimoto; Robert Blair; K. L. Byrum; S. E. Kuhlmann; Thomas LeCompte; Lawrence Nodulman; Judith G. Proudfoot; M. Tanaka; R. G. Wagner; A. B. Wicklund