Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Graeme Stewart is active.

Publications


Featured research published by Graeme Stewart.


Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2012

Charged particle tracking with the Timepix ASIC

Kazuyoshi Carvalho Akiba; M. Artuso; Ryan Badman; A. Borgia; Richard Bates; Florian Bayer; Martin van Beuzekom; J. Buytaert; Enric Cabruja; M. Campbell; P. Collins; Michael Crossley; R. Dumps; L. Eklund; D. Esperante; C. Fleta; A. Gallas; M. Gandelman; J. Garofoli; M. Gersabeck; V. V. Gligorov; H. Gordon; E.H.M. Heijne; V. Heijne; D. Hynds; M. John; A. Leflat; Lourdes Ferre Llin; X. Llopart; M. Lozano

A prototype particle tracking telescope was constructed using Timepix and Medipix ASIC hybrid pixel assemblies as the six sensing planes. Each telescope plane consisted of one 1.4 cm² assembly, providing a 256 × 256 array of 55 μm square pixels. The telescope achieved a pointing resolution of 2.4 μm at the position of the device under test. During a beam test in 2009 the telescope was used to evaluate in detail the performance of two Timepix hybrid pixel assemblies: a standard planar 300 μm thick sensor, and a 285 μm thick double-sided 3D sensor. This paper describes a charge calibration study of the pixel devices, which allows the true charge to be extracted, and reports on measurements of the charge collection characteristics and Landau distributions. The planar sensor achieved a best resolution of 4.0 ± 0.1 μm for angled tracks, and resolutions of between 4.4 and 11 μm for perpendicular tracks, depending on the applied bias voltage. The double-sided 3D sensor, which has significantly less charge sharing, was found to have an optimal resolution of 9.0 ± 0.1 μm for angled tracks, and a resolution of 16.0 ± 0.2 μm for perpendicular tracks. Based on these studies it is concluded that the Timepix ASIC shows excellent performance when used as a device for charged particle tracking.
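
The perpendicular-track numbers can be sanity-checked against the standard binary-readout limit, pitch/√12, which applies when essentially all of the charge lands in a single pixel. A quick back-of-the-envelope check (plain Python, not from the paper):

    import math

    pitch_um = 55.0                          # Timepix pixel pitch
    binary_limit = pitch_um / math.sqrt(12)  # resolution with no charge sharing
    print(f"binary limit: {binary_limit:.1f} um")  # -> 15.9 um

    # The 3D sensor, with little charge sharing, measured 16.0 +/- 0.2 um for
    # perpendicular tracks, essentially this limit, while the planar sensor's
    # charge sharing allows interpolation below it (4.4-11 um).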


Journal of Physics: Conference Series | 2015

Development of a Next Generation Concurrent Framework for the ATLAS Experiment

P. Calafiura; Walter Lampl; C. Leggett; D. Malon; Graeme Stewart; Ben Wynne

The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from the early 2000s, and the framework and the physics code have been written to a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only through taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. With current memory consumption for 64-bit ATLAS reconstruction in a high luminosity environment approaching 4 GB, it will become impossible to fully occupy all cores in a machine without exhausting the available memory. However, since maximizing performance per watt will be a key metric, a mechanism must be found to use all cores as efficiently as possible. In this paper we report on our progress with a practical demonstration of the use of multithreading in the ATLAS reconstruction software, using the GaudiHive framework. We have expanded support to the Calorimeter, Inner Detector, and Tracking code, discussing what changes were necessary, both to the framework and to the tools and algorithms used, in order to allow the serially designed ATLAS code to run. We report on the performance gains, on the general lessons learned about the code patterns that had been employed in the software, and on which patterns were identified as particularly problematic for multi-threading. We also present our findings on implementing a hybrid multi-threaded / multi-process framework, to take advantage of the strengths of each type of concurrency while avoiding some of their corresponding limitations.
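
As a rough illustration of the data-flow scheduling idea behind GaudiHive, the sketch below declares each algorithm's inputs and outputs and launches any algorithm whose inputs are already available, so independent chains run in parallel. This is a minimal Python analogy with invented algorithm and data names, not the Gaudi interfaces:

    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    # Each "algorithm" declares what it reads and writes; the scheduler may run
    # any algorithm whose declared inputs already exist in the per-event store.
    algorithms = {
        "ClusterMaker": (set(),                    {"clusters"}),
        "CaloReco":     (set(),                    {"calo_cells"}),
        "TrackFinder":  ({"clusters"},             {"tracks"}),
        "JetBuilder":   ({"tracks", "calo_cells"}, {"jets"}),
    }

    def execute(name, outputs):
        # stand-in for the real reconstruction work of one algorithm
        return name, outputs

    def schedule_event():
        store, pending, running = set(), dict(algorithms), set()
        with ThreadPoolExecutor(max_workers=4) as pool:
            while pending or running:
                # launch every algorithm whose inputs are satisfied
                for name in [n for n, (ins, _) in pending.items() if ins <= store]:
                    _, outs = pending.pop(name)
                    running.add(pool.submit(execute, name, outs))
                done, running = wait(running, return_when=FIRST_COMPLETED)
                for future in done:
                    name, outs = future.result()
                    store |= outs  # publish outputs, unblocking consumers
                    print(f"finished {name}")

    schedule_event()

Here ClusterMaker and CaloReco start immediately and run concurrently, while JetBuilder waits until both tracks and calo_cells have been published.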


Acta Crystallographica Section E: Structure Reports Online | 2009

3-Fluorobenzoic acid–4-acetylpyridine (1/1) at 100 K

Gavin A. Craig; Lynne H. Thomas; Martin Adam; Angela Ballantyne; Andrew G. Cairns; Stephen C. Cairns; Gary Copeland; Clifford Harris; Eve McCalmont; Robert McTaggart; Alan R. G. Martin; Sarah Palmer; Jenna Quail; Harriet Saxby; Duncan J. Sneddon; Graeme Stewart; Neil C. Thomson; Alex Whyte; Chick C. Wilson; Andrew Parkin

In the title compound, C7H5FO2·C7H7NO, a moderate-strength hydrogen bond is formed between the carboxyl group of one molecule and the pyridine N atom of the other. The benzoic acid molecule is observed to be disordered over two positions with the second orientation only 4% occupied. This disorder is also reflected in the presence of diffuse scattering in the diffraction pattern.


Proceedings of 38th International Conference on High Energy Physics — PoS(ICHEP2016) | 2017

Managing Asynchronous Data in ATLAS's Concurrent Framework

John Baines; V. Tsulaia; P. Calafiura; J. Cranshaw; Peter van Gemmeren; D. Malon; T. Bold; C. Leggett; Benjamin Wynne; A. Dotti; Scott Snyder; Graeme Stewart; S. Farrell

In order to make effective use of emerging hardware, where the amount of memory available to any CPU is rapidly decreasing as the core count continues to rise, ATLAS has begun a migration to a concurrent, multi-threaded software framework known as AthenaMT. Significant progress has been made in implementing AthenaMT: we can currently run realistic Geant4 simulations on massively concurrent machines. The migration of realistic prototypes of reconstruction workflows is more difficult, given the large amount of legacy code and the complexity and challenges of reconstruction software. These workflows, however, are the ones that will benefit most from the memory reduction features of a multi-threaded framework. One of the challenges that we report on in this paper is the re-design and implementation of several key asynchronous technologies whose behaviour in a concurrent environment differs radically from that in a serial one, namely the management of Conditions data and the Detector Description, and the handling of asynchronous notifications (such as FileOpen). Since asynchronous data, such as Conditions or detector alignments, has a lifetime different from that of event data, it cannot be kept in the Event Store. However, multiple instances of the data need to be simultaneously accessible, so that concurrent events that are, for example, processing conditions data from different validity intervals can execute efficiently, with low memory overhead and without multi-threading conflicts.
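
The Conditions problem can be pictured with a small sketch: payloads are stored per interval of validity (IoV), and each in-flight event looks up the instance valid for its own timestamp instead of overwriting a single shared slot. A minimal Python illustration with invented names, not the AthenaMT interfaces:

    import threading

    class ConditionsContainer:
        """Holds several payloads at once, each valid for a [start, end) range,
        so events from different validity intervals can be in flight together."""

        def __init__(self):
            self._iovs = []               # list of (start, end, payload)
            self._lock = threading.Lock()

        def add(self, start, end, payload):
            with self._lock:
                self._iovs.append((start, end, payload))

        def get(self, timestamp):
            with self._lock:
                for start, end, payload in self._iovs:
                    if start <= timestamp < end:
                        return payload
            raise KeyError(f"no conditions valid at {timestamp}")

    alignments = ConditionsContainer()
    alignments.add(0, 1000, {"pixel_shift_um": 0.0})
    alignments.add(1000, 2000, {"pixel_shift_um": 1.3})
    # Two concurrent events in different intervals each see the right payload:
    assert alignments.get(500) != alignments.get(1500)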


Journal of Physics: Conference Series | 2017

ATLAS software stack on ARM64

Joshua Wyatt Smith; Graeme Stewart; Arnulf Quadt; Rolf Seuster

This paper reports on the port of the ATLAS software stack onto new prototype ARM64 servers. This included building the "external" packages that the ATLAS software relies on. Patches were needed to introduce this new architecture into the build, as well as patches to correct platform-specific code that caused failures on non-x86 architectures. These patches were applied in such a way that porting to further platforms should require little or no additional adjustment. A few additional modifications were needed to account for the different operating system, Ubuntu instead of Scientific Linux 6 / CentOS 7. Selected results from the validation of the physics outputs on these ARM 64-bit servers are shown. CPU, memory and I/O intensive benchmarks using the ATLAS-specific environment and infrastructure have been performed, with a particular emphasis on performance versus energy consumption.
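
A typical failure of the kind described is code that silently assumes an x86 host. As a purely illustrative sketch (not one of the actual ATLAS patches), a portable guard looks like this:

    import platform

    machine = platform.machine()

    # Dispatch on the reported architecture instead of assuming x86.
    # On ARM64 servers Linux reports "aarch64"; on 64-bit x86, "x86_64".
    if machine == "x86_64":
        timer_impl = "x86 TSC-based timer"      # hypothetical x86-only fast path
    elif machine == "aarch64":
        timer_impl = "generic monotonic timer"  # portable fallback
    else:
        raise RuntimeError(f"untested architecture: {machine}")

    print(f"{machine}: using {timer_impl}")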


Journal of Physics: Conference Series | 2017

AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

C. Leggett; V. Tsulaia; P. Calafiura; John Baines; Peter van Gemmeren; D. Malon; T. Bold; Benjamin Wynne; Scott Snyder; Graeme Stewart; S. Farrell; E. Ritsch

ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single-threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run 2. After concluding a rigorous requirements phase, in which many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread-unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread-safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event- and time-dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, and the thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.
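
The three Algorithm flavours named above can be caricatured as follows. This is a Python stand-in for what are really Gaudi C++ interfaces, and the class names are invented:

    import threading

    class LegacyAlgorithm:
        """Thread-unsafe singleton: mutable members force serialized execution."""
        _lock = threading.Lock()

        def execute(self, event):
            with self._lock:               # only one event at a time may enter
                self.scratch = event["hits"]
                return len(self.scratch)

    class ClonedAlgorithm(threading.local):
        """One clone per thread: each thread sees its own mutable state."""

        def execute(self, event):
            self.scratch = event["hits"]   # thread-local, so no conflict
            return len(self.scratch)

    class ReentrantAlgorithm:
        """Fully re-entrant: all state flows through arguments and locals."""

        def execute(self, event):
            return len(event["hits"])      # no mutable members at all

    event = {"hits": [1, 2, 3]}
    for algo in (LegacyAlgorithm(), ClonedAlgorithm(), ReentrantAlgorithm()):
        print(type(algo).__name__, algo.execute(event))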


Journal of Physics: Conference Series | 2015

A study of dynamic data placement for ATLAS distributed data management

T. A. Beermann; Graeme Stewart; Peter Maettig

This contribution presents a study on the applicability and usefulness of dynamic data placement methods for data-intensive systems, such as ATLAS distributed data management (DDM). In this system the jobs are sent to the data, therefore having a good distribution of data is significant. Ways of forecasting workload patterns are examined, which are then used to redistribute data to achieve a better overall utilisation of computing resources and to reduce the waiting time for jobs before they can run on the grid. This method is based on a tracer infrastructure that is able to monitor and store historical data accesses and which is used to create popularity reports. These reports provide detailed summaries about data accesses in the past, including information about the accessed files, the involved users and the sites. From this past data it is possible to make near-term forecasts of data popularity. This study evaluates simple prediction methods as well as more complex methods such as neural networks. Based on the outcome of the predictions, a redistribution algorithm deletes unused replicas and adds new replicas for potentially popular datasets. Finally, a grid simulator is used to examine the effects of the redistribution. The simulator replays workload on different data distributions while measuring the job waiting time and site usage. The study examines how the average waiting time is affected by the amount of data that is moved, how it differs for the various forecasting methods and how that compares to the optimal data distribution.
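
A toy version of the pipeline: forecast per-dataset accesses from the tracer history with a moving average (the paper also evaluates neural networks), then rebalance replicas toward predicted-popular datasets. Dataset names, thresholds and replica limits below are invented:

    # Toy redistribution: moving-average popularity forecast per dataset,
    # then add replicas for hot datasets and drop replicas of cold ones.
    history = {                       # accesses per week, from tracer reports
        "data12_8TeV.AOD": [120, 140, 180],
        "mc12_valid.HITS": [40, 10, 2],
    }
    replicas = {"data12_8TeV.AOD": 1, "mc12_valid.HITS": 3}

    def forecast(accesses, window=3):
        recent = accesses[-window:]
        return sum(recent) / len(recent)   # simple moving average

    for dataset, accesses in history.items():
        predicted = forecast(accesses)
        if predicted > 100 and replicas[dataset] < 5:
            replicas[dataset] += 1        # pre-place a replica at another site
        elif predicted < 20 and replicas[dataset] > 1:
            replicas[dataset] -= 1        # free space held by an unused copy

    print(replicas)  # {'data12_8TeV.AOD': 2, 'mc12_valid.HITS': 2}

A grid simulator replaying past workload against the new distribution, as in the study, is what turns such a heuristic into a measurable claim about job waiting times.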


Journal of Instrumentation | 2011

Comparison of a CCD and an APS for soft X-ray diffraction

Graeme Stewart; R. L. Bates; Andrew Blue; A. Clark; S.S Dhesi; D. Maneuski; Julien Marchal; P. Steadman; N. Tartoni; R. Turchetta

We compare a new CMOS Active Pixel Sensor (APS) to a Princeton Instruments PIXIS-XO: 2048B Charge Coupled Device (CCD) with soft X-rays in a synchrotron beam line at the Diamond Light Source (DLS). Although CCDs are well established in the field of scientific imaging, APS are an innovative technology that offers advantages over CCDs, including faster readout, higher operational temperature, in-pixel electronics for advanced image processing and reduced manufacturing cost. The APS employed was the Vanilla sensor designed by the MI3 collaboration and funded by an RCUK Basic Technology grant. This sensor has 520 × 520 square pixels, each 25 μm on a side, and can operate at a full-frame readout rate of up to 20 Hz. The sensor had been back-thinned to the epitaxial layer, and this was the first time that a back-thinned APS had been demonstrated at a DLS beam line. In the synchrotron experiment, soft X-rays with an energy of approximately 708 eV were used to produce a diffraction pattern from a permalloy sample. The pattern was imaged at a range of integration times with both sensors. The CCD had to be operated at a temperature of -55°C, whereas the Vanilla was operated over a temperature range from 20°C to -10°C. We show that the APS detector can operate with frame rates up to two hundred times faster than the CCD without excessive degradation of image quality. The signal to noise of the APS is shown to be the same as that of the CCD at identical integration times, and the response is shown to be linear, with no charge blooming effects. The experiment has allowed a direct comparison of back-thinned APS and CCDs in a real soft X-ray synchrotron experiment.


arXiv: Distributed, Parallel, and Cluster Computing | 2010

ScotGrid: Providing an effective distributed Tier-2 in the LHC era

Samuel Cadellin Skipsey; David Ambrose-Griffith; Greig Cowan; M. Kenyon; Orlando Richards; Phil Roffe; Graeme Stewart

ScotGrid is a distributed Tier-2 centre in the UK with sites in Durham, Edinburgh and Glasgow, currently providing more than 4 MSI2K and 500 TB to the LHC VOs. Scaling up to this level of provision has brought many challenges to the Tier-2, and we show in this paper how we have adopted new methods of organising the centres to meet them. We describe how we have coped with different operational models at the sites, especially concerning deviations from the usual model in the UK. We show how ScotGrid has successfully provided an infrastructure for ATLAS and LHCb Monte Carlo production, and discuss the improvements for user analysis work that we have investigated. Finally, although these Tier-2 resources are pledged to the whole VO, we have established close links with our local user communities, as this is the best way to ensure that the Tier-2 functions effectively as part of the LHC grid computing framework. In conclusion, we find that effective communication is the most important component of a well-functioning distributed Tier-2.


International Conference on Large-Scale Scientific Computing | 2009

Enabling cutting-edge semiconductor simulation through grid technology

Asen Asenov; Dave Reid; Campbell Millar; S. Roy; Gareth Roy; Richard O. Sinnott; Gordon Stewart; Graeme Stewart

The progressive scaling of CMOS drives the success of the global semiconductor industry. Detailed knowledge of transistor behaviour is necessary to overcome the many fundamental challenges faced by chip and systems designers. Grid technology has enabled the constantly increasing statistical variability introduced by the discreteness of charge and matter to be examined in unprecedented detail. Over 200,000 transistors subject to random discrete dopant variability have been simulated, the results of which provide detailed insight into the underlying physical processes. This paper outlines recent scientific results of the nanoCMOS project and describes the way in which the scientific goals have been reflected in the grid-based e-infrastructure.
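
The "discreteness of charge and matter" point is easy to illustrate: a nanoscale channel contains so few dopant atoms that Poisson fluctuations in their number alone spread the threshold voltage across nominally identical devices. A toy Monte Carlo sketch with invented numbers (the real nanoCMOS runs solve full 3D device physics per transistor, for ensembles of over 200,000 devices):

    import math, random, statistics

    MEAN_DOPANTS = 100    # average dopant count in the channel (toy value)
    MV_PER_DOPANT = 1.5   # assumed threshold shift per dopant, in mV (toy value)

    def poisson(mean):
        """Knuth's method: count uniforms until their product drops below e^-mean."""
        limit, k, product = math.exp(-mean), 0, random.random()
        while product > limit:
            k += 1
            product *= random.random()
        return k

    # "Simulate" an ensemble of nominally identical transistors.
    shifts_mv = [(poisson(MEAN_DOPANTS) - MEAN_DOPANTS) * MV_PER_DOPANT
                 for _ in range(20_000)]
    print(f"sigma(Vt) ~ {statistics.stdev(shifts_mv):.1f} mV")
    # Expect ~ sqrt(100) * 1.5 = 15 mV: discreteness alone already gives a spread.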

Collaboration


Dive into Graeme Stewart's collaborations.

Top Co-Authors

Dave Reid
University of Glasgow

John Baines
Rutherford Appleton Laboratory

M. Kenyon
University of Glasgow

S. Roy
University of Glasgow

C. Leggett
Lawrence Berkeley National Laboratory