Anthony Stell
University of Glasgow
Publications
Featured research published by Anthony Stell.
The Journal of Clinical Endocrinology and Metabolism | 2015
Felix Beuschlein; Jens Weigel; Wolfgang Saeger; Matthias Kroiss; Vanessa Wild; Fulvia Daffara; Rosella Libé; Arianna Ardito; Abir Al Ghuzlan; Marcus Quinkler; Andrea Oßwald; Cristina L. Ronchi; Ronald R. de Krijger; Richard A. Feelders; Jens Waldmann; Holger S. Willenberg; Timo Deutschbein; Anthony Stell; Martin Reincke; Mauro Papotti; Eric Baudin; Frédérique Tissier; Harm R. Haak; Paola Loli; Massimo Terzolo; Bruno Allolio; Hans Müller; Martin Fassnacht
BACKGROUND Recurrence of adrenocortical carcinoma (ACC) even after complete (R0) resection occurs frequently. OBJECTIVE The aim of this study was to identify markers with prognostic value for patients in this clinical setting. DESIGN, SETTING, AND PARTICIPANTS From the German ACC registry, 319 patients with the European Network for the Study of Adrenal Tumors stage I-III were identified. As an independent validation cohort, 250 patients from three European countries were included. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Clinical, histological, and immunohistochemical markers were correlated with recurrence-free survival (RFS) and overall survival (OS). RESULTS Although univariable analysis within the German cohort suggested several factors with potential prognostic power, upon multivariable adjustment only a few, including age, tumor size, venous tumor thrombus (VTT), and the proliferation marker Ki67, retained significance. Among these, Ki67 provided the single best prognostic value for RFS (hazard ratio [HR] for recurrence, 1.042 per 1% increase; P < .0001) and OS (HR for death, 1.051; P < .0001), which was confirmed in the validation cohort. Accordingly, clinical outcome differed significantly between patients with Ki67 <10%, 10-19%, and ≥20% (for the German cohort: median RFS, 53.2 vs 31.6 vs 9.4 mo; median OS, 180.5 vs 113.5 vs 42.0 mo). Using the combined cohort, prognostic scores including tumor size, VTT, and Ki67 were established. Although these scores discriminated slightly better between subgroups, there was no clinically meaningful advantage in comparison with Ki67 alone. CONCLUSION This largest study on prognostic markers in localized ACC identified Ki67 as the single most important factor predicting recurrence in patients following R0 resection. Thus, evaluation of Ki67 indices should be introduced as standard grading in all pathology reports of patients with ACC.
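The per-1% hazard ratios reported above compound multiplicatively over larger Ki67 differences, as in any Cox proportional-hazards model. A minimal sketch of that arithmetic (the function name is illustrative; the 1.042 figure is the study's reported HR for recurrence):

```python
def compound_hr(hr_per_unit: float, delta: float) -> float:
    """Compound a per-unit Cox hazard ratio over a larger covariate change.

    An HR of 1.042 per 1% Ki67 increase implies an HR of
    hr_per_unit ** delta for a delta-point increase.
    """
    return hr_per_unit ** delta

# A 10-point rise in Ki67 (e.g. from 10% to 20%) under HR 1.042 per 1%:
hr_10 = compound_hr(1.042, 10)
print(round(hr_10, 2))  # about 1.5: roughly 50% higher recurrence hazard
```

This is why the 10% and 20% Ki67 cut-offs separate the outcome groups so sharply.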
high performance computing systems and applications | 2005
Anthony Stell; Richard O. Sinnott; John P. Watt
The widespread use of grid technology and distributed compute power, with all its inherent benefits, will only be established if the use of that technology can be guaranteed to be efficient and secure. The predominant method currently used to enforce security is public key infrastructure (PKI) to support authentication, together with access control lists (ACLs) to support authorisation. These systems alone do not provide the fine-grained control over the restriction of user rights that is necessary in a dynamic grid environment. This paper compares the implementation and experiences of using the current standard for grid authorisation with Globus - the Grid Security Infrastructure (GSI) - against the role-based access control (RBAC) authorisation infrastructure PERMIS. The suitability of these security infrastructures for integration with existing grid technology is presented based upon experiences within the JISC-funded DyVOSE project.
cluster computing and the grid | 2008
Richard O. Sinnott; David W. Chadwick; T. Doherty; David B. Martin; Anthony Stell; Gordon Stewart; Linying Su; John P. Watt
Grids allow for collaborative e-Research to be undertaken, often across institutional and national boundaries. Typically this is through the establishment of virtual organizations (VOs), where policies on access and usage of resources across partner sites are defined and subsequently enforced. For many VOs, these agreements have been lightweight and erred on the side of flexibility, with minimal constraints on the kinds of jobs a user is allowed to run or the amount of resources that can be consumed. For many new domains such as e-Health, such flexibility is simply not tenable. Instead, precise definitions of what jobs can be run, and what data can be accessed by whom, need to be defined and enforced by sites. The role-based access control (RBAC) model provides a well-researched paradigm for controlling access to large-scale dynamic VOs. However, the standard RBAC model assumes a single domain with centralised role management. When RBAC is applied to VOs, it does not specify how or where roles should be defined or made known to the distributed resource sites (which are always assumed to be autonomous in making access control decisions). Two main possibilities exist, based on either a centralized or decentralized approach to VO role management. We present the advantages and disadvantages of the centralized and decentralized role models and describe how we have implemented them in a range of security-focused e-Research domains at the National e-Science Centre (NeSC) at the University of Glasgow.
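The site autonomy described above can be sketched in a few lines: wherever the VO roles are issued (centralised role server or home institution), only the resource site's own role-to-permission policy grants access. All role and action names below are illustrative, not taken from PERMIS:

```python
from dataclasses import dataclass, field

@dataclass
class ResourceSitePolicy:
    """A site-local RBAC policy: which VO roles may perform which actions."""
    permissions: dict = field(default_factory=dict)  # role -> set of actions

    def permit(self, role: str, action: str) -> None:
        self.permissions.setdefault(role, set()).add(action)

    def authorise(self, user_roles: set, action: str) -> bool:
        # The site stays autonomous: access is granted only if its own
        # policy maps one of the user's roles to the requested action,
        # regardless of where those roles were assigned.
        return any(action in self.permissions.get(r, set()) for r in user_roles)

site = ResourceSitePolicy()
site.permit("vo:clinical-researcher", "query-dataset")

print(site.authorise({"vo:clinical-researcher"}, "query-dataset"))  # True
print(site.authorise({"vo:student"}, "query-dataset"))              # False
```

The centralized/decentralized question in the paper concerns only where `user_roles` originates; the site-side decision logic is the same in both models.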
grid computing | 2005
Richard O. Sinnott; Anthony Stell; David W. Chadwick; O. Otenko
The widespread acceptance and uptake of Grid technology can only be achieved if it can be ensured that the security mechanisms needed to support Grid based collaborations are at least as strong as local security mechanisms. The predominant way in which security is currently addressed in the Grid community is through Public Key Infrastructures (PKI) to support authentication. Whilst PKIs address user identity issues, authentication does not provide fine grained control over what users are allowed to do on remote resources (authorisation). The Grid community have put forward numerous software proposals for authorisation infrastructures such as AKENTI [1], CAS [2], CARDEA [3], GSI [4], PERMIS [5,6,7] and VOMS [8,9]. It is clear that for the foreseeable future a collection of solutions will be the norm. To address this, the Global Grid Forum (GGF) have proposed a generic SAML based authorisation API which in principle should allow for fine grained control for authorised access to any Grid service. Experiences in applying and stress testing this API from a variety of different application domains are essential to give insight into the practical aspects of large scale usage of authorisation infrastructures. This paper presents experiences from the DTI funded BRIDGES project [10] and the JISC funded DyVOSE project [11] in using this API with Globus version 3.3 [12] and the PERMIS authorisation infrastructure.
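The generic authorisation callout that the GGF's SAML-based API enables can be pictured as a service delegating its access decision to an external policy decision point (PDP). The sketch below shows only that shape; the function names and distinguished name are invented for illustration and are not the GGF API:

```python
def authorisation_decision(subject_dn: str, resource: str, action: str, pdp) -> bool:
    """Ask a policy decision point whether subject_dn may perform
    action on resource. The PDP behind this callout could be PERMIS,
    VOMS-backed, or any other authorisation infrastructure."""
    return pdp(subject_dn, resource, action)

# A stand-in PDP permitting one specific operation:
def toy_pdp(subject_dn: str, resource: str, action: str) -> bool:
    allowed = {("CN=Anthony Stell", "compute-service", "submit-job")}
    return (subject_dn, resource, action) in allowed

print(authorisation_decision("CN=Anthony Stell", "compute-service",
                             "submit-job", toy_pdp))  # True
```

The point of the API is exactly this decoupling: the grid service is written once against the callout, and the authorisation infrastructure behind it can be swapped.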
availability, reliability and security | 2006
Richard O. Sinnott; Micha Bayer; Anthony Stell; Jos Koetsier
The BRIDGES project was funded by the UK Department of Trade and Industry (DTI) to address the needs of cardiovascular research scientists investigating the genetic causes of hypertension as part of the Wellcome Trust funded (£4.34M) cardiovascular functional genomics (CFG) project. Security was at the heart of the BRIDGES project and an advanced data and compute grid infrastructure incorporating the latest grid authorisation technologies was developed and delivered to the scientists. We outline these grid infrastructures and describe the perceived security requirements at the project start, including data classifications, and how these evolved throughout the lifetime of the project. The uptake and adoption of the project results are also presented, along with the challenges that must be overcome to support the secure exchange of life science data sets. We also present how we will use the BRIDGES experiences in future projects at the National e-Science Centre.
Acta Neurochirurgica | 2010
Ian Piper; Iain Chambers; Giuseppe Citerio; Per Enblad; Barbara Gregson; Tim Howells; Karl L. Kiening; Julia Mattern; Pelle Nilsson; Arminas Ragauskas; Juan Sahuquillo; Rob Donald; Richard O. Sinnott; Anthony Stell
Background The BrainIT group works collaboratively on developing standards for collection and analyses of data from brain-injured patients and to facilitate a more efficient infrastructure for assessing new health care technology, with the primary objective of improving patient care. European Community (EC) funding supported meetings over a year to discuss and define a core dataset to be collected from patients with traumatic brain injury using IT-based methods. We now present the results of a subsequent EC-funded study with the aim of testing the feasibility of collecting this core dataset across a number of European sites and discuss the future direction of this research network. Methods Over a 3-year period, data collection client- and web-server-based tools were developed and core data (grouped into nine categories) were collected from 200 head-injured patients by local nursing staff in 22 European neuro-intensive care centres. Data were uploaded through the BrainIT website and random samples of received data were selected automatically by computer for validation by data validation staff against primary sources held in each local centre. Validated data were compared with originally transmitted data and percentage error rates calculated by data category. Feasibility was assessed in terms of the proportion of missing data, accuracy of data collected and limitations reported by users of the IT methods. Findings Thirteen percent of data files required cleaning. Thirty "one-off" demographic and clinical data elements had significant amounts of missing data (>15%). Validation staff conducted 19,461 comparisons between uploaded database data and local data sources, and error rates were commonly less than or equal to 6%, the exception being the surgery data class, where an unacceptably high error rate of 34% was found. Nearly 10,000 therapies were successfully recorded with start-times, but approximately a third had inaccurate or missing end-times, which limits the analysis of duration of therapy. Over 40,000 events and procedures were recorded, but events with long durations (such as transfers) were more likely to have end-times missed. Conclusions The BrainIT core dataset is a rich dataset for hypothesis generation and post hoc analyses, provided that studies avoid known limitations in the dataset. Limitations in the current IT-based data collection tools have been identified and have been addressed. In order for multi-centre data collection projects to be viable, the resource-intensive validation procedures will require a more automated process, and this may include direct electronic access to hospital-based clinical data sources both for validation purposes and for minimising the duplication of data entry. This type of infrastructure may foster and facilitate the remote monitoring of patient management and protocol adherence in future trials of patient management and monitoring.
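The per-category error-rate calculation described above (uploaded values checked against primary sources, mismatch percentage reported per data category) reduces to a simple tally. A minimal sketch, with category names and sample values invented for the example:

```python
from collections import defaultdict

def error_rates(comparisons):
    """comparisons: iterable of (category, uploaded_value, source_value).
    Returns the percentage of mismatching comparisons per category."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for category, uploaded, source in comparisons:
        totals[category] += 1
        if uploaded != source:
            errors[category] += 1
    return {c: 100.0 * errors[c] / totals[c] for c in totals}

sample = [
    ("demographics", "1948-05-02", "1948-05-02"),
    ("demographics", "male", "male"),
    ("surgery", "craniotomy", "craniectomy"),  # transcription mismatch
    ("surgery", "2004-11-03", "2004-11-03"),
]
print(error_rates(sample))  # {'demographics': 0.0, 'surgery': 50.0}
```

At the study's scale (19,461 comparisons), the bottleneck is not this arithmetic but the manual lookup of the primary source values, which is why the conclusion calls for direct electronic access to hospital data sources.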
Philosophical Transactions of the Royal Society A | 2009
Anthony Stell; Richard O. Sinnott; Jipu Jiang; Rob Donald; Iain Chambers; Giuseppe Citerio; Per Enblad; Barbara Gregson; Tim Howells; Karl L. Kiening; Pelle Nilsson; Arminas Ragauskas; Juan Sahuquillo; Ian Piper
The ability to predict adverse hypotensive events, where a patient's arterial blood pressure drops to abnormally low (and dangerous) levels, would be of major benefit to the fields of primary and secondary health care, and especially to the traumatic brain injury domain. A wealth of data exists in health care systems providing information on the major health indicators of patients in hospitals (blood pressure, temperature, heart rate, etc.). It is believed that if enough of these data could be drawn together and analysed in a systematic way, then a system could be built that will trigger an alarm predicting the onset of a hypotensive event over a useful time scale, e.g. half an hour in advance. In such circumstances, avoidance measures can be taken to prevent such events arising. This is the basis for the Avert-IT project (http://www.avert-it.org), a collaborative EU-funded project involving the construction of a hypotension alarm system exploiting Bayesian neural networks, using techniques of data federation to bring together the relevant information for study and system development.
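The alarm logic can be pictured as a predictive probability (here, averaged over sampled weight perturbations as a crude stand-in for a Bayesian neural network's posterior) compared against a chosen threshold. This is a toy sketch only, not the Avert-IT model; the feature names, weights, and threshold are all invented:

```python
import math
import random

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predictive_probability(features: dict, n_samples: int = 200, seed: int = 0) -> float:
    """Approximate a posterior predictive probability of a hypotensive
    event in the next half hour by averaging over sampled weights
    (random perturbations of a fixed mean vector, purely illustrative)."""
    rng = random.Random(seed)
    mean_w = {"map_trend": -0.8, "heart_rate": 0.3, "icp": 0.5}
    probs = []
    for _ in range(n_samples):
        z = sum((w + rng.gauss(0, 0.1)) * features[k] for k, w in mean_w.items())
        probs.append(sigmoid(z))
    return sum(probs) / n_samples

def alarm(features: dict, threshold: float = 0.7) -> bool:
    """Fire when the predictive probability exceeds the alarm threshold."""
    return predictive_probability(features) >= threshold

# Falling mean arterial pressure trend with raised intracranial pressure:
print(alarm({"map_trend": -2.0, "heart_rate": 1.0, "icp": 2.0}))  # True
```

Averaging over weight samples rather than using a single point estimate is the Bayesian ingredient: the alarm threshold then applies to a probability that reflects model uncertainty, not just a raw network output.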
Annals of Clinical Biochemistry | 2014
Graeme Eisenhofer; Sebastian Brown; Mirko Peitzsch; Daniela Pelzel; Peter Lattke; Stephan Glöckner; Anthony Stell; Aleksander Prejbisz; Martin Fassnacht; Felix Beuschlein; Andrzej Januszewicz; Gabriele Siegert; Heinz Reichmann
Background Medication-related interferences with measurements of catecholamines and their metabolites represent important causes of false-positive results during diagnosis of phaeochromocytomas and paragangliomas (PPGLs). Such interferences are less troublesome with measurements by liquid chromatography with tandem mass-spectrometry (LC-MS/MS) than by other methods, but can still present problems for some drugs. Levodopa, the precursor for dopamine used in the treatment of Parkinson’s disease, represents one potentially interfering medication. Methods Plasma and urine samples, obtained from 20 Parkinsonian patients receiving levodopa, were analysed for concentrations of catecholamines and their O-methylated metabolites by LC-MS/MS. Results were compared with those from a group of 120 age-matched subjects and 18 patients with PPGLs. Results Plasma and urinary free and deconjugated (free + conjugated) methoxytyramine, as well as urinary dopamine, showed 22- to 148-fold higher (P < 0.0001) concentrations in patients receiving levodopa than in the reference group. In contrast, plasma normetanephrine, urinary noradrenaline and urinary free and deconjugated normetanephrine concentrations were unaffected. Plasma free metanephrine, urinary adrenaline and urinary free and deconjugated metanephrine all showed higher (P < 0.05) concentrations in Parkinsonian patients than the reference group, but this was only a problem for adrenaline. Similar to normetanephrine, plasma and urinary metanephrine remained below the 97.5 percentiles of the reference group in almost all Parkinsonian patients. Conclusions These data establish that although levodopa treatment confounds identification of PPGLs that produce dopamine, the therapy is not a problem for use of LC-MS/MS measurements of plasma and urinary normetanephrine and metanephrine to diagnose more commonly encountered PPGLs that produce noradrenaline or adrenaline.
Health Informatics Journal | 2008
Richard O. Sinnott; Anthony Stell; Oluwafemi O. Ajayi
A computational infrastructure to underpin complex clinical trials and medical population studies is highly desirable. This should allow access to a range of distributed clinical data sets; support the efficient processing and analysis of the data obtained; have security at its heart; and ensure that authorized individuals are able to see privileged data and no more. Each clinical trial has its own requirements on data sets and how they are used; hence a reusable and flexible framework offers many advantages. The MRC funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) is a collaborative project involving several UK universities specifically to explore this space. This article presents the experiences of developing the Scottish component of this nationwide infrastructure, by the National e-Science Centre (NeSC) based at the University of Glasgow, and the issues inherent in accessing and using the clinical data sets in a flexible, dynamic and secure manner.
cluster computing and the grid | 2005
Richard O. Sinnott; Anthony Stell; John P. Watt
The development of teaching materials for future software engineers is critical to the long-term success of the grid. At present, however, there is considerable turmoil in the grid community, both within the standards and in the technology base underpinning those standards. In this context, it is especially challenging to develop teaching materials that have a lifetime beyond the next wave of grid middleware and standards. In addition, the current way in which grid security is supported and delivered has two key problems. Firstly, in the case of the UK e-Science community, scalability issues arise from a central certificate authority. Secondly, the current security mechanisms used by the grid community are not fine-grained enough. In this paper we outline how these issues are being addressed through the development of a grid computing module supported by an advanced authorisation infrastructure at the University of Glasgow.