Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ariel Alonso is active.

Publication


Featured research published by Ariel Alonso.


Computational Statistics & Data Analysis | 2008

A family of tests to detect misspecifications in the random-effects structure of generalized linear mixed models

Ariel Alonso; Saskia Litière; Geert Molenberghs

Estimation in generalized linear mixed models for non-Gaussian longitudinal data is often based on maximum likelihood theory, which assumes that the underlying probability model is correctly specified. It is known that the results obtained from these models are not always robust against misspecification of the random-effects structure, so diagnostic tools for detecting such misspecification are of the utmost importance. Three diagnostic tests, based on the eigenvalues of the variance-covariance matrices of the fixed-effects parameter estimates, are proposed in the present work. The power and type I error rate of these tests are studied via simulation, and very acceptable performance was observed in many cases, especially for misspecifications that can have a large impact on the maximum likelihood estimators.
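
The abstract names the ingredients of the tests but not their form. As a loose illustration of the kind of quantity involved, the sketch below, in R (the language used for the software accompanying several papers in this list), contrasts the eigenvalues of the fixed-effects variance-covariance matrix under two candidate random-effects structures; the data set, model choices, and comparison are illustrative and are not the authors' test statistics.

    ## Illustrative sketch only: compare the eigenvalues of the fixed-effects
    ## variance-covariance matrix under two candidate random-effects structures.
    ## The formal tests in the paper build on related eigenvalue-based contrasts,
    ## but this is not their implementation.
    library(lme4)

    data(cbpp, package = "lme4")
    cbpp$obs <- factor(seq_len(nrow(cbpp)))   # observation-level random effect

    fit1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                  data = cbpp, family = binomial)
    fit2 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd) + (1 | obs),
                  data = cbpp, family = binomial)

    eig1 <- eigen(as.matrix(vcov(fit1)), only.values = TRUE)$values
    eig2 <- eigen(as.matrix(vcov(fit2)), only.values = TRUE)$values
    round(cbind(random_intercept = eig1, plus_obs_level = eig2), 4)
    ## Marked differences between the two spectra suggest that the assumed
    ## random-effects structure deserves scrutiny.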


Statistical Methods in Medical Research | 2008

Evaluating time to cancer recurrence as a surrogate marker for survival from an information theory perspective

Ariel Alonso; Geert Molenberghs

The last two decades have seen substantial development in the area of surrogate marker validation. One prominent approach places the evaluation in a meta-analytic framework, leading to definitions in terms of trial- and individual-level association. A drawback of this methodology is that different settings have led to different measures at the individual level. Using information theory, Alonso et al. proposed a unified framework, leading to a new definition of surrogacy that offers interpretational advantages and is applicable in a wide range of situations. In this work, we illustrate how this information-theoretic approach can be used to evaluate surrogacy when both endpoints are of a time-to-event type. Two meta-analyses, in early and advanced colon cancer, respectively, are then used to evaluate the performance of time to cancer recurrence as a surrogate for overall survival.
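
For orientation, the individual-level measure in this information-theoretic framework is usually presented along the following lines (a hedged recollection of the Alonso and Molenberghs proposal; notation simplified and worth checking against the source):

    R_h^2 = 1 - e^{-2\,I(T,S)}, \qquad \widehat{R}_h^2 \approx \mathrm{LRF} = 1 - \exp\!\left(-\frac{G^2}{n}\right),

where I(T,S) is the mutual information between the true endpoint T and the surrogate S, G^2 is the likelihood-ratio statistic comparing models for T with and without S, and n is the number of subjects in the trial at hand.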


Clinical Trials | 2007

Information-theory based surrogate marker evaluation from several randomized clinical trials with continuous true and binary surrogate endpoints

Assam Pryseley; Abel Tilahun; Ariel Alonso; Geert Molenberghs

Background: Surrogate endpoints potentially reduce the duration of a study and/or increase the amount of information available from it, thereby diminishing patient burden and cost. They may also increase the effectiveness and reliability of research, through their beneficial impact on noncompliance and missingness. Purpose: In this article, we review the meta-analytic approach of Buyse et al. (2000) and its extension to mixed continuous and binary endpoints by Molenberghs, Geys, and Buyse (2001). Methods: An information-theoretic alternative, based on Alonso and Molenberghs (2007a), is proposed. The method is evaluated using simulations and an application to data from an ophthalmologic trial, with lines of vision lost at 6 months as candidate surrogate endpoint for lines of vision lost at 12 months. The method is implemented as an R function. Results: The information-theoretic approach is based on solid theory, is easy to apply, and enjoys elegant properties. While it appears to be somewhat biased downwards, this is due to the fact that it operates on explicitly observed outcomes, without the need for unobserved latent scales; this is a desirable property. Limitations: While easy to use and implement, the theoretical foundation of the information-theoretic approach is more mathematical. It produces some bias for small to moderate trial/center sizes, and hence is recommended primarily for sufficiently large trials. Conclusions: Since the meta-analytic framework can be computationally extremely expensive, the information-theoretic approach of Alonso and Molenberghs (2007a) is a viable alternative. For the ophthalmologic case study, the conclusion is that lines of vision lost at 6 months show some, but not overwhelming, promise as a surrogate endpoint. Clinical Trials 2007; 4: 587-597. http://ctj.sagepub.com
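
A minimal numerical illustration of the likelihood-reduction-factor idea in this paper's setting (continuous true endpoint, binary surrogate), on simulated single-trial data; the variable names and the simulation are invented, and this is not the R function distributed with the article.

    ## Hedged sketch: continuous true endpoint T_true, binary surrogate S, one
    ## trial. The LRF-type quantity below follows the formula recalled earlier.
    set.seed(1)
    n <- 400
    Z <- rbinom(n, 1, 0.5)                      # treatment indicator
    S <- rbinom(n, 1, plogis(-0.5 + 1.0 * Z))   # binary surrogate
    T_true <- 0.5 * Z + 1.2 * S + rnorm(n)      # continuous true endpoint

    fit0 <- lm(T_true ~ Z)        # model for T without the surrogate
    fit1 <- lm(T_true ~ Z + S)    # model for T including the surrogate

    G2  <- 2 * (logLik(fit1) - logLik(fit0))    # likelihood-ratio statistic
    LRF <- 1 - exp(-as.numeric(G2) / n)         # individual-level association
    LRF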


Statistical Methods in Medical Research | 2010

A unified framework for the evaluation of surrogate endpoints in mental-health clinical trials

Geert Molenberghs; Tomasz Burzykowski; Ariel Alonso; Pryseley Assam; Abel Tilahun; Marc Buyse

For a number of reasons, surrogate endpoints are considered instead of the so-called true endpoint in clinical studies, especially when they can be measured earlier and/or with less burden for patient and experimenter. Surrogate endpoints may also occur more frequently than their standard counterparts. For these reasons, it is not surprising that the use of surrogate endpoints in clinical practice is increasing. Building on the seminal work of Prentice [1] and Freedman et al. [2], Buyse et al. [3] framed the evaluation exercise within a meta-analytic setting, in an effort to overcome difficulties that necessarily surround evaluation efforts based on a single trial. In this article, we review the meta-analytic approach for continuous outcomes, discuss extensions to non-normal and longitudinal settings, as well as proposals to unify the somewhat disparate collection of validation measures currently on the market. Implications for design, and for predicting the effect of treatment in a new trial based on the surrogate, are discussed. A case study in schizophrenia is analysed.
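
As a rough companion to the review above, the sketch below runs a simplified two-stage version of the meta-analytic evaluation on simulated data: per-trial treatment effects on the surrogate and on the true endpoint are estimated, and the trial-level association is the R^2 of a regression of one set of effects on the other. The full proposal relies on hierarchical (mixed) models; everything here, including the data, is illustrative.

    ## Hedged two-stage sketch of trial-level surrogacy on simulated data.
    set.seed(2)
    n_trials <- 20; n_per <- 100
    sim <- do.call(rbind, lapply(seq_len(n_trials), function(i) {
      a <- rnorm(1, 1, 0.5); b <- a + rnorm(1, 0, 0.3)   # correlated trial-level effects
      Z <- rbinom(n_per, 1, 0.5)
      S <- a * Z + rnorm(n_per)
      T_true <- b * Z + 0.8 * S + rnorm(n_per)
      data.frame(trial = i, Z = Z, S = S, T_true = T_true)
    }))

    ## Stage 1: per-trial treatment effects on surrogate and true endpoint.
    eff <- do.call(rbind, lapply(split(sim, sim$trial), function(d) {
      c(alpha = unname(coef(lm(S ~ Z, data = d))["Z"]),
        beta  = unname(coef(lm(T_true ~ Z, data = d))["Z"]))
    }))
    eff <- as.data.frame(eff)

    ## Stage 2: trial-level R^2 from regressing effects on T on effects on S.
    summary(lm(beta ~ alpha, data = eff))$r.squared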


Expert Review of Pharmacoeconomics & Outcomes Research | 2008

Surrogate end points: hopes and perils

Ariel Alonso; Geert Molenberghs

In recent years, the rising cost of drug development has increased the demand for efficiency in the selection of suitable drug candidates; surrogate end points have emerged to improve this process, in the hope that they can help reduce the duration and cost of clinical trials. Additionally, they can help resolve ethical issues when measuring the clinical end point involves risky or uncomfortable medical procedures. However, the very mention of surrogate end points has always been controversial, owing in part to unfortunate historical events. As a consequence, there is growing consensus that only validated surrogates should be used. Here, we discuss some of the validation strategies that have recently been proposed and consider the future of surrogate end points in clinical research.


Biometrics | 2010

A Unified Approach to Multi-item Reliability

Ariel Alonso; Annouschka Laenen; Geert Molenberghs; Helena Geys; Tony Vangeneugden

The reliability of multi-item scales has received a great deal of attention in the psychometric literature, where a myriad of measures, such as Cronbach's α or the Spearman-Brown formula, have been proposed. Most of these measures, however, are based on very restrictive models that apply only to unidimensional instruments. In this article, we introduce two measures to quantify the reliability of multi-item scales based on a more general model. We show that they capture two different aspects of the reliability problem and satisfy a minimal set of intuitive properties. The relevance and complementary value of the measures are studied, and earlier approaches are placed in a broader theoretical framework. Finally, we apply them to investigate the reliability of the Positive and Negative Syndrome Scale, a rating scale for assessing the severity of schizophrenia.


Journal of Psychiatric Research | 2009

Using longitudinal data from a clinical trial in depression to assess the reliability of its outcome scales

Annouschka Laenen; Ariel Alonso; Geert Molenberghs; Tony Vangeneugden; Craig H. Mallinckrodt

Longitudinal designs permeate clinical trials in psychiatry. In the same field, rating scales are frequently used to evaluate the status of patients and the efficacy of new therapeutic procedures. It is therefore of the utmost importance to study the psychometric properties of these instruments within a longitudinal framework. In the area of depression, the Hamilton depression rating scale (HAMD) is regularly used for antidepressant treatment evaluation. However, the use of the HAMD has not been exempt from criticism, which has led to the development of new scales expected to be more sensitive to change, such as the Montgomery-Asberg depression rating scale (MADRS). The reliability of these scales has generally been studied using classical methods for reliability estimation, developed for specifically designed reliability studies. Unfortunately, the settings customarily considered in such reliability studies are usually far from the practical conditions under which these scales are applied in clinical trials and practice. In the present paper, we assess the reliability of these instruments in a more realistic scenario, using longitudinal data from clinical studies. Newly developed methodology, based on an extended concept of reliability, allows longitudinal data to be used for reliability estimation. This approach not only helps avoid bias by offering better control of disturbing factors, but also produces more precise estimates by taking advantage of the large sample sizes available in clinical trials. Further, it offers practical guidelines for the optimal use of a rating scale in order to achieve a particular level of reliability. The merits of this approach are illustrated by applying it to two clinical trials in depression to assess the reliability of three outcome scales: the HAMD, the MADRS, and the Hamilton anxiety rating scale (HAMA).
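
To make the idea of estimating reliability from trial data concrete, the sketch below fits a linear mixed model to simulated longitudinal scale scores and reports an ICC-type single-measurement reliability. The paper's extended reliability measures are more general; the data, model, and quantity shown are illustrative assumptions only.

    ## Hedged sketch: ICC-type reliability from a linear mixed model fitted to
    ## simulated longitudinal scale scores (invented data and names).
    library(lme4)

    set.seed(3)
    n_pat <- 200; n_visit <- 5
    d <- expand.grid(patient = factor(1:n_pat), visit = 1:n_visit)
    true_level <- rnorm(n_pat, 20, 5)[d$patient]             # stable patient level
    d$score <- true_level - 1.5 * d$visit + rnorm(nrow(d), 0, 4)

    fit <- lmer(score ~ visit + (1 | patient), data = d)
    vc  <- as.data.frame(VarCorr(fit))
    v_patient  <- vc$vcov[vc$grp == "patient"]
    v_residual <- vc$vcov[vc$grp == "Residual"]
    v_patient / (v_patient + v_residual)   # single-measurement reliability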


Computational Statistics & Data Analysis | 2007

Flexible surrogate marker evaluation from several randomized clinical trials with continuous endpoints, using R and SAS

Abel Tilahun; Assam Pryseley; Ariel Alonso; Geert Molenberghs

The evaluation of surrogate endpoints is generally considered to have been first studied by Prentice, who presented a definition of a surrogate as well as a set of criteria. These criteria were later supplemented with the so-called proportion explained, after some drawbacks in Prentice's approach had been noted. Subsequently, the evaluation exercise was framed within a meta-analytic setting, thereby overcoming difficulties that necessarily surround evaluation efforts based on a single trial. The meta-analytic approach for continuous outcomes is briefly reviewed. Advantages and problems are highlighted by means of two case studies, one in schizophrenia and one in ophthalmology, and a simulation study. One of the critical issues for the broad adoption of methodology like the one presented here is the availability of flexible implementations in standard statistical software. Generically applicable SAS macros and R functions are developed and made available to the reader.
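
For completeness, here is a hedged sketch of the individual-level counterpart to the trial-level R^2 sketched earlier: in the continuous-outcome framework it is commonly taken as the squared correlation between surrogate and true endpoint after adjusting for trial and treatment. The snippet reuses the simulated data frame sim from the earlier sketch and is not the paper's SAS macros or R functions.

    ## Hedged sketch of the individual-level association: squared correlation
    ## between surrogate and true endpoint after adjusting for trial and
    ## treatment (reuses `sim` from the trial-level sketch above).
    res_S <- resid(lm(S ~ factor(trial) * Z, data = sim))
    res_T <- resid(lm(T_true ~ factor(trial) * Z, data = sim))
    cor(res_S, res_T)^2    # individual-level R^2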


Quality of Life Research | 2010

The Functional Living Index-Cancer: estimating its reliability based on clinical trial data

Annouschka Laenen; Ariel Alonso

Purpose: The Functional Living Index-Cancer (FLIC) was developed to measure quality of life in cancer trials as an adjunct to the usual clinical outcomes. The scale is considered conceptually good, since it covers a broad range of relevant aspects of quality of life, but the main criticism has been that its reliability has never been properly investigated. In this paper, we investigate the reliability of the FLIC. Methods: We apply a new methodology based on linear mixed models that allows reliability to be estimated from real clinical data. The reliability of the FLIC is estimated using data from a longitudinal study in breast cancer. With this new approach, we avoid the need for the additional data collection on which classical reliability studies are based. Results: The average reliability of the FLIC over the repeated measurements is satisfactory, even though the initial measurement in the study showed a somewhat lower value. Taking into account the longitudinal character of the measurements, we show that highly reliable information can be obtained with a relatively small number of measurements per patient. Conclusion: The FLIC provides reliable quality-of-life measurements in patients with breast cancer. Additional studies would be welcome to validate these results in other populations.


Journal of Biopharmaceutical Statistics | 2008

Information Theory–Based Surrogate Marker Evaluation from Several Randomized Clinical Trials with Binary Endpoints, Using SAS

Abel Tilahun; Assam Pryseley; Ariel Alonso; Geert Molenberghs

One of the paradigms for surrogate marker evaluation in clinical trials is based on employing data from several clinical trials: the meta-analytic approach. It was originally developed for continuous outcomes by means of the linear mixed model, but other situations are of interest. One such situation is when both outcomes are binary. Although joint models have been proposed for this setting, they are cumbersome, in the sense of being computationally complex and of producing validation measures that, unlike in the Gaussian case, are not of an R² type (Burzykowski et al., 2005). A way to put these problems to rest is to employ information theory, already applied in the continuous case (Alonso and Molenberghs, 2007). In this paper, the information-theoretic approach is applied to the case of binary surrogate and true endpoints. Its use is illustrated with a case study in acute migraine, and its performance, relative to existing methods, is assessed by means of a simulation study. Because the usefulness of a method critically depends, among other things, on the availability of software, a SAS implementation accompanies the methodological work.
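
A small hedged sketch of the same likelihood-reduction idea when both endpoints are binary, using logistic-regression deviances on simulated single-trial data; it mirrors the LRF formula recalled earlier and is not the SAS implementation accompanying the paper.

    ## Hedged sketch: binary surrogate and binary true endpoint, one trial.
    ## All names and values are invented for illustration.
    set.seed(4)
    n <- 500
    Z <- rbinom(n, 1, 0.5)                                    # treatment
    S <- rbinom(n, 1, plogis(-0.3 + 0.8 * Z))                 # binary surrogate
    T_bin <- rbinom(n, 1, plogis(-0.5 + 0.6 * Z + 1.5 * S))   # binary true endpoint

    fit0 <- glm(T_bin ~ Z, family = binomial)      # without the surrogate
    fit1 <- glm(T_bin ~ Z + S, family = binomial)  # with the surrogate

    G2 <- fit0$deviance - fit1$deviance            # likelihood-ratio statistic
    1 - exp(-G2 / n)                               # LRF-type individual-level measure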

Collaboration


Dive into Ariel Alonso's collaborations.

Top Co-Authors


Geert Molenberghs

Katholieke Universiteit Leuven
