
Publications


Featured research published by Fulgencio Marín-Martínez.


Psychological Methods | 2006

Assessing heterogeneity in meta-analysis: Q statistic or I² index?

Tania B. Huedo-Medina; Julio Sánchez-Meca; Fulgencio Marín-Martínez; Juan Botella

In meta-analysis, the usual way of assessing whether a set of single studies is homogeneous is by means of the Q test. However, the Q test only informs meta-analysts about the presence versus the absence of heterogeneity; it does not report on the extent of such heterogeneity. Recently, the I² index has been proposed to quantify the degree of heterogeneity in a meta-analysis. In this article, the performance of the Q test and of the confidence interval around the I² index is compared by means of a Monte Carlo simulation. The results show the utility of the I² index as a complement to the Q test, although it suffers from the same lack of power when the number of studies is small.
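
As a concrete illustration of the two statistics compared above, here is a minimal Python sketch (not taken from the article; function names and example values are illustrative) that computes the Q statistic and the I² point estimate from a set of effect sizes with known sampling variances; the confidence interval around I² studied in the article is not reproduced here.

```python
import numpy as np

def q_and_i2(effects, variances):
    """Cochran's Q statistic and the I^2 heterogeneity index."""
    d = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                # inverse-variance (fixed-effect) weights
    d_bar = np.sum(w * d) / np.sum(w)          # weighted mean effect size
    q = np.sum(w * (d - d_bar) ** 2)           # Q ~ chi^2 with k-1 df under homogeneity
    k = d.size
    # I^2: percentage of total variation attributable to heterogeneity, truncated at 0
    i2 = 0.0 if q == 0 else max(0.0, (q - (k - 1)) / q) * 100.0
    return q, i2

# Illustrative data: five standardized mean differences and their sampling variances.
q, i2 = q_and_i2([0.20, 0.35, 0.10, 0.55, 0.40], [0.04, 0.05, 0.03, 0.06, 0.04])
print(f"Q = {q:.2f}, I2 = {i2:.1f}%")
```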


Psychological Methods | 2003

Effect-Size Indices for Dichotomized Outcomes in Meta-Analysis.

Julio Sánchez-Meca; Fulgencio Marín-Martínez; Salvador Chacón-Moscoso

It is very common to find meta-analyses in which some of the studies compare 2 groups on continuous dependent variables and others compare groups on dichotomized variables. Integrating all of them in a meta-analysis requires an effect-size index in the same metric that can be applied to both types of outcomes. In this article, the performance in terms of bias and sampling variance of 7 different effect-size indices for estimating the population standardized mean difference from a 2 × 2 table is examined by Monte Carlo simulation, assuming normal and nonnormal distributions. The results show good performance for 2 indices, one based on the probit transformation and the other based on the logistic distribution.
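
As a rough, hedged sketch of the two kinds of indices that performed well, the snippet below converts an illustrative 2 × 2 table into a d-type effect size via the probit transformation and via a logistic-distribution rescaling of the log odds ratio; the seven indices examined in the article may use different scaling constants, so this is an assumption, not a reproduction of them.

```python
import numpy as np
from scipy.stats import norm

def probit_d(successes_t, n_t, successes_c, n_c):
    """Probit-based d: difference of inverse-normal-transformed success
    proportions, assuming a normal latent outcome in each group."""
    p_t, p_c = successes_t / n_t, successes_c / n_c
    return norm.ppf(p_t) - norm.ppf(p_c)

def logistic_d(successes_t, n_t, successes_c, n_c):
    """Logistic-based d: log odds ratio rescaled by the logistic standard
    deviation, pi / sqrt(3)."""
    odds_t = successes_t / (n_t - successes_t)
    odds_c = successes_c / (n_c - successes_c)
    return np.log(odds_t / odds_c) * np.sqrt(3) / np.pi

# Illustrative table: 30/50 improved under treatment, 18/50 under control.
print(probit_d(30, 50, 18, 50), logistic_d(30, 50, 18, 50))
```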


Clinical Psychology Review | 2008

Psychological treatment of obsessive–compulsive disorder: A meta-analysis

Ana I. Rosa-Alcázar; Julio Sánchez-Meca; Antonia Gómez-Conesa; Fulgencio Marín-Martínez

The benefits of cognitive-behavioral treatment for obsessive-compulsive disorder (OCD) have been evidenced by several meta-analyses. However, the differential effectiveness of behavioral and cognitive approaches has shown inconclusive results. In this paper, a meta-analysis of the effectiveness of psychological treatment for OCD is presented, applying random- and mixed-effects models. The literature search enabled us to identify 19 studies published between 1980 and 2006 that fulfilled our selection criteria, giving a total of 24 independent comparisons between a treated and a control group. The effect-size index was the standardized mean difference at posttest. The effect estimates for exposure with response prevention (ERP) alone (d+ = 1.127), cognitive restructuring (CR) alone (d+ = 1.090), and ERP plus CR (d+ = 0.998) were very similar, although the estimate for CR alone was based on only three comparisons. Therapist-guided exposure was better than therapist-assisted self-exposure, and exposure in vivo combined with exposure in imagination was better than exposure in vivo alone. The relationships of subject, methodological, and extrinsic variables with effect size were also examined, and an analysis of publication bias was carried out. Finally, the implications of the results for clinical practice and for future research in this field are discussed.


Clinical Psychology Review | 2010

Psychological treatment of panic disorder with or without agoraphobia: a meta-analysis

Julio Sánchez-Meca; Ana I. Rosa-Alcázar; Fulgencio Marín-Martínez; Antonia Gómez-Conesa

Although the efficacy of psychological treatment for panic disorder (PD) with or without agoraphobia has been the subject of a great deal of research, the specific contribution of techniques such as exposure, cognitive therapy, relaxation training, and breathing retraining has not yet been clearly established. This paper presents a meta-analysis applying random- and mixed-effects models to a total of 65 comparisons between a treated and a control group, obtained from 42 studies published between 1980 and 2006. The results showed that, after controlling for the methodological quality of the studies and the type of control group, the combination of exposure, relaxation training, and breathing retraining gives the most consistent evidence for treating PD. Other factors that improve the effectiveness of treatment are the inclusion of homework during the intervention and a follow-up program after it has finished. Furthermore, treatment is more effective when patients have no comorbid disorders and when they have been suffering from the illness for a shorter time. Publication bias and several methodological factors were ruled out as threats to the validity of our results. Finally, the implications of the results for clinical practice and for future research are discussed.


Behavior Research Methods | 2013

Three-level meta-analysis of dependent effect sizes

Wim Van Den Noortgate; José Antonio López-López; Fulgencio Marín-Martínez; Julio Sánchez-Meca

Although dependence among effect sizes is ubiquitous, commonly used meta-analytic methods assume independent effect sizes. Because multilevel extensions of meta-analytic models are still not well known, we describe and illustrate three-level extensions of a mixed-effects meta-analytic model that account for various sources of dependence within and across studies. We also present a three-level model for the common case where, within studies, multiple effect sizes are calculated on the same sample. Although this approach is relatively simple and does not require imputing values for the unknown sampling covariances, it has hardly been used, and its performance has not been empirically investigated. We therefore set up a simulation study, showing that in this situation too, a three-level approach yields valid results: estimates of the treatment effects and the corresponding standard errors are unbiased.
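
Using notation assumed here (not copied from the article), the kind of three-level model described above can be written compactly in LaTeX: effect size j in study k is the sum of a fixed average effect, a study-level deviation, an effect-size-within-study deviation, and sampling error.

```latex
d_{jk} = \beta_0 + v_k + u_{jk} + e_{jk}, \quad
v_k \sim N(0, \sigma_v^2), \quad
u_{jk} \sim N(0, \sigma_u^2), \quad
e_{jk} \sim N(0, \sigma_{jk}^2)
% sigma_{jk}^2 is the (estimated) sampling variance, treated as known;
% sigma_u^2 captures within-study and sigma_v^2 between-study heterogeneity.
```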


Psychological Methods | 2008

Confidence Intervals for the Overall Effect Size in Random-Effects Meta-Analysis

Julio Sánchez-Meca; Fulgencio Marín-Martínez

One of the main objectives in meta-analysis is to estimate the overall effect size by calculating a confidence interval (CI). The usual procedure consists of assuming a standard normal distribution and a sampling variance defined as the inverse of the sum of the estimated weights of the effect sizes. However, this procedure does not take into account the uncertainty arising from the fact that the heterogeneity variance (τ²) and the within-study variances have to be estimated, which leads to CIs that are too narrow, so that the actual coverage probability is smaller than the nominal confidence level. In this article, the performance of three alternatives to the standard CI procedure is examined under a random-effects model, using eight different τ² estimators to estimate the weights: the t-distribution CI, the weighted variance CI (with an improved variance), and the recently proposed quantile approximation method. The results of a Monte Carlo simulation showed that the weighted variance CI outperformed the other methods regardless of the τ² estimator, the value of τ², the number of studies, and the sample size.
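
A minimal sketch of two of the procedures discussed above, assuming the DerSimonian–Laird estimator for τ² (one of several estimators the article compares) and the basic t-based weighted variance CI; the article's "improved variance" refinement and the quantile approximation method are not reproduced, and names and example values are illustrative.

```python
import numpy as np
from scipy import stats

def random_effects_cis(d, v, alpha=0.05):
    """Standard (normal) CI and weighted-variance (t-based) CI for the overall
    effect under a random-effects model with a DerSimonian-Laird tau^2."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    k = d.size
    # DerSimonian-Laird estimate of the between-studies variance tau^2
    w_fe = 1.0 / v
    d_fe = np.sum(w_fe * d) / np.sum(w_fe)
    q = np.sum(w_fe * (d - d_fe) ** 2)
    c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights and pooled effect size
    w = 1.0 / (v + tau2)
    d_bar = np.sum(w * d) / np.sum(w)
    # Standard CI: normal quantile, variance = 1 / sum of the weights
    se_std = np.sqrt(1.0 / np.sum(w))
    z = stats.norm.ppf(1 - alpha / 2)
    ci_std = (d_bar - z * se_std, d_bar + z * se_std)
    # Weighted variance CI: weighted residual variance and a t quantile with k-1 df
    var_w = np.sum(w * (d - d_bar) ** 2) / ((k - 1) * np.sum(w))
    t = stats.t.ppf(1 - alpha / 2, k - 1)
    ci_wv = (d_bar - t * np.sqrt(var_w), d_bar + t * np.sqrt(var_w))
    return d_bar, ci_std, ci_wv

# Illustrative data set of six studies.
print(random_effects_cis([0.1, 0.4, 0.3, 0.6, 0.2, 0.5],
                         [0.05, 0.04, 0.06, 0.05, 0.03, 0.04]))
```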


Educational and Psychological Measurement | 2010

Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

Fulgencio Marín-Martínez; Julio Sánchez-Meca

Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated and are therefore affected by sampling error. When assuming a random-effects model, there are two alternative procedures for averaging independent effect sizes: Hunter and Schmidt's estimator, which weights by sample size as an approximation to the optimal weights, and Hedges and Vevea's estimator, which weights by an estimate of the inverse variance of each effect size. In this article, the bias and mean squared error of the two estimators were assessed via Monte Carlo simulation of meta-analyses with the standardized mean difference as the effect-size index. Hedges and Vevea's estimator, although slightly biased, achieved the best performance in terms of mean squared error. As the differences between the values of the two estimators can be of practical relevance, Hedges and Vevea's estimator should be selected rather than Hunter and Schmidt's when the effect-size index is the standardized mean difference.
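
The two estimators contrasted above reduce to two different weighting schemes; a minimal sketch is given below (variable names are illustrative, and τ² is assumed to come from an external estimator such as the DerSimonian–Laird one shown earlier).

```python
import numpy as np

def hunter_schmidt_mean(d, n_total):
    """Hunter & Schmidt-type estimator: weight each effect size by the total
    sample size of its study, as an approximation to the optimal weights."""
    d, n = np.asarray(d, float), np.asarray(n_total, float)
    return np.sum(n * d) / np.sum(n)

def hedges_vevea_mean(d, v, tau2):
    """Hedges & Vevea-type estimator: weight by the estimated inverse variance,
    i.e. 1 / (within-study sampling variance + between-studies variance)."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / (v + tau2)
    return np.sum(w * d) / np.sum(w)

# Illustrative data: four standardized mean differences.
d = [0.30, 0.45, 0.20, 0.60]
print(hunter_schmidt_mean(d, n_total=[40, 60, 120, 30]))
print(hedges_vevea_mean(d, v=[0.10, 0.07, 0.03, 0.14], tau2=0.02))
```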


Behavior Research Methods | 2015

Meta-analysis of multiple outcomes: a multilevel approach

Wim Van Den Noortgate; José Antonio López-López; Fulgencio Marín-Martínez; Julio Sánchez-Meca

In meta-analysis, dependent effect sizes are very common. An example is a study in which the effect of an intervention is evaluated on multiple outcome variables for the same sample of participants. In this paper, we evaluate a three-level meta-analytic model to account for this kind of dependence, extending the simulation results of Van den Noortgate, López-López, Marín-Martínez, and Sánchez-Meca (Behavior Research Methods, 45, 576–594, 2013) by allowing for variation in the number of effect sizes per study, in the between-study variance, in the correlations between pairs of outcomes, and in the sample sizes of the studies. At the same time, we explore the performance of the approach when the outcomes used in a study can be regarded as a random sample from a population of outcomes. We conclude that although this approach is relatively simple and does not require prior estimates of the sampling covariances between effect sizes, it gives appropriate mean effect size estimates, standard error estimates, and confidence interval coverage proportions in a variety of realistic situations.
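
To make the data structure concrete, here is a small simulation sketch in the spirit of the setting described above, but not the authors' design: each study measures several outcomes on the same sample, so the effect sizes share a study-level true effect and their sampling errors are correlated. All parameter values and the variance approximation for d are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2015)

def simulate_study(beta0=0.4, tau2=0.10, n_outcomes=4, rho=0.6, n_per_group=25):
    """One simulated study contributing several dependent effect sizes."""
    theta = beta0 + rng.normal(0.0, np.sqrt(tau2))                # true study effect
    var_d = 2.0 / n_per_group + theta ** 2 / (4.0 * n_per_group)  # approx. Var(d), two groups of n
    cov = np.full((n_outcomes, n_outcomes), rho * var_d)          # shared sample -> correlated errors
    np.fill_diagonal(cov, var_d)
    d = rng.multivariate_normal(np.full(n_outcomes, theta), cov)  # observed effect sizes
    return d, np.full(n_outcomes, var_d)

# A data set of 30 studies with 4 dependent effect sizes each, ready to be
# analyzed with a three-level (sampling / outcome-within-study / study) model.
dataset = [simulate_study() for _ in range(30)]
```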


Quality & Quantity | 1997

Homogeneity tests in meta-analysis: a Monte Carlo comparison of statistical power and Type I error

Julio Sánchez-Meca; Fulgencio Marín-Martínez

The statistical power and Type I error rates of several homogeneity tests usually applied in meta-analysis are compared using Monte Carlo simulation: (1) the chi-square test applied to standardized mean differences, correlation coefficients, and Fisher's r-to-Z transformations, and (2) the S&H-75 (and 90 percent) procedure applied to standardized mean differences and correlation coefficients. The chi-square tests correctly adjusted the Type I error rates to the nominal significance level, whereas the S&H procedures showed higher rates and, consequently, greater statistical power. In all conditions the statistical power was very low, particularly when the meta-analysis included few studies, the studies had small sample sizes, and the differences between the parametric effect sizes were small. Finally, criteria for selecting among homogeneity tests are discussed.
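
The chi-square test referred to here is the Q statistic sketched after the 2006 Psychological Methods abstract above; for the S&H procedure, the following is only a simplified, unweighted sketch of the 75% decision rule (the published procedure weights by sample size, which is not reproduced here).

```python
import numpy as np

def sh_rule(effects, variances, threshold=0.75):
    """Simplified Schmidt & Hunter-style rule: treat the effect sizes as
    homogeneous when sampling error accounts for at least `threshold`
    (75% or 90%) of their observed variance."""
    d = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    observed_var = np.var(d)        # variance of the observed effect sizes
    sampling_var = np.mean(v)       # variance expected from sampling error alone
    return sampling_var / observed_var >= threshold

# Illustrative call with the same toy data used for the Q statistic sketch.
print(sh_rule([0.20, 0.35, 0.10, 0.55, 0.40], [0.04, 0.05, 0.03, 0.06, 0.04]))
```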


Spanish Journal of Psychology | 1999

Averaging dependent effect sizes in meta-analysis: A cautionary note about procedures

Fulgencio Marín-Martínez; Julio Sánchez-Meca

When a primary study includes several indicators of the same construct, the usual strategy for meta-analytically integrating the multiple effect sizes is to average them within the study. In this paper, the numerical and conceptual differences among three procedures for averaging dependent effect sizes are shown. The procedures are the simple arithmetic mean, the Hedges and Olkin (1985) procedure, and the Rosenthal and Rubin (1986) procedure. Whereas the simple arithmetic mean ignores the dependence among effect sizes, the procedures of Hedges and Olkin and of Rosenthal and Rubin both take the correlational structure of the effect sizes into account, although in different ways. Rosenthal and Rubin's procedure provides the effect size for a single composite variable made up of the multiple effect sizes, whereas Hedges and Olkin's procedure presents an effect size estimate of the standard variable. The three procedures were applied to 54 conditions in which the magnitude and homogeneity of both the effect sizes and the correlation matrix among them were manipulated. Rosenthal and Rubin's procedure yielded the highest estimates, followed by the simple mean and then the Hedges and Olkin procedure, which yielded the lowest. These differences are not trivial in a meta-analysis, and the aims of the meta-analysis should guide the selection among the procedures.
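
As a hedged illustration of the contrast drawn above, the sketch below computes the simple arithmetic mean and a Rosenthal–Rubin-style composite effect size under the assumption of equally intercorrelated outcomes; the Hedges and Olkin procedure, which requires the full covariance matrix of the effect sizes, is deliberately omitted.

```python
import numpy as np

def simple_mean(d_values):
    """Arithmetic mean of the dependent effect sizes, ignoring their correlation."""
    return float(np.mean(d_values))

def composite_effect_size(d_values, mean_correlation):
    """Rosenthal & Rubin-style composite: effect size of the sum of m equally
    weighted outcomes with average intercorrelation r,
    d_composite = sum(d) / sqrt(m + m * (m - 1) * r)."""
    d = np.asarray(d_values, dtype=float)
    m = d.size
    return float(np.sum(d) / np.sqrt(m + m * (m - 1) * mean_correlation))

# Illustrative values: three dependent effect sizes, average intercorrelation .50.
print(simple_mean([0.40, 0.55, 0.30]))                   # 0.417
print(composite_effect_size([0.40, 0.55, 0.30], 0.50))   # about 0.51, larger than the mean
```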

Collaboration


Dive into Fulgencio Marín-Martínez's collaborations.

Top Co-Authors

Wim Van Den Noortgate
Katholieke Universiteit Leuven

Juan Botella
Autonomous University of Madrid

Rosa María Núñez-Núñez
Universidad Miguel Hernández de Elche