
Publication


Featured research published by Salvador Chacón-Moscoso.


Psychological Methods | 2003

Effect-Size Indices for Dichotomized Outcomes in Meta-Analysis.

Julio Sánchez-Meca; Fulgencio Marín-Martínez; Salvador Chacón-Moscoso

It is very common to find meta-analyses in which some of the studies compare 2 groups on continuous dependent variables and others compare groups on dichotomized variables. Integrating all of them in a meta-analysis requires an effect-size index in the same metric that can be applied to both types of outcomes. In this article, the performance in terms of bias and sampling variance of 7 different effect-size indices for estimating the population standardized mean difference from a 2 x 2 table is examined by Monte Carlo simulation, assuming normal and nonnormal distributions. The results show good performance for 2 indices, one based on the probit transformation and the other based on the logistic distribution.
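As a hedged illustration only (not the authors' Monte Carlo code), the sketch below computes the two index families the abstract singles out: a probit-based estimate and a logistic-distribution-based estimate of the population standardized mean difference, both derived from the cell counts of a 2 x 2 group-by-outcome table.

```python
# A minimal sketch (not the authors' simulation code) of two ways to estimate
# a standardized mean difference d from a 2 x 2 table, matching the two index
# families highlighted in the abstract: a probit-based index and a
# logistic-distribution-based index.
import math
from scipy.stats import norm

def d_from_2x2(success_t, n_t, success_c, n_c):
    """Return (d_probit, d_logistic) from counts of 'successes' per group."""
    p_t, p_c = success_t / n_t, success_c / n_c
    # Probit index: difference of inverse-normal-transformed proportions.
    d_probit = norm.ppf(p_t) - norm.ppf(p_c)
    # Logistic index: log odds ratio rescaled by sqrt(3)/pi.
    log_or = math.log((p_t / (1 - p_t)) / (p_c / (1 - p_c)))
    d_logistic = log_or * math.sqrt(3) / math.pi
    return d_probit, d_logistic

# Example: 30/50 improved in the treatment group vs. 18/50 in the control group.
print(d_from_2x2(30, 50, 18, 50))
```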


Evaluation | 2005

Evidence-based Decision Making: Enhancing Systematic Reviews of Program Evaluation Results in Europe

William R. Shadish; Salvador Chacón-Moscoso; Julio Sánchez-Meca

Over the last 25 years, meta-analysis has been widely used to study the effects of practical treatment interventions in fields ranging from psychology and education to medicine and public health. The present article first describes the impact of meta-analysis. The article then presents a description of some preliminary results of a study of the main design characteristics of published interventions in Europe. In order to foster both better experimental designs and more systematic reviews in the European context, and to promote collaboration between different countries and research groups, the authors then describe and discuss the Campbell Collaboration (C2), a new international organization aimed at fostering public policies and practices based on systematic reviews of high-quality evidence. Though there is some overlap between the work of the Campbell Collaboration and its sibling organization the Cochrane Collaboration, Campbell focuses more on systematic reviews in the social, behavioral and educational areas. Reviewers in those areas encounter difficulties over the use of non-randomized designs, difficulties in effect-size computation, and the need for new computer software to better serve the needs of meta-analytic reviews in the social sciences.


Psicothema | 2015

Guidelines for reporting evaluations based on observational methodology

Mariona Portell; María Teresa Anguera; Salvador Chacón-Moscoso; Susana Sanduvete-Chaves

BACKGROUND Observational methodology is one of the most suitable research designs for evaluating fidelity of implementation, especially in complex interventions. However, the conduct and reporting of observational studies is hampered by the absence of specific guidelines, such as those that exist for other evaluation designs. This lack of specific guidance poses a threat to the quality and transparency of these studies and also constitutes a considerable publication hurdle. The aim of this study thus was to draw up a set of proposed guidelines for reporting evaluations based on observational methodology. METHOD The guidelines were developed by triangulating three sources of information: observational studies performed in different fields by experts in observational methodology, reporting guidelines for general studies and studies with similar designs to observational studies, and proposals from experts in observational methodology at scientific meetings. RESULTS We produced a list of guidelines grouped into three domains: intervention and expected outcomes, methods, and results. CONCLUSIONS The result is a useful, carefully crafted set of simple guidelines for conducting and reporting observational studies in the field of program evaluation.


International Journal of Clinical and Health Psychology | 2015

Factor structure of the Spanish version of the Life Orientation Test-Revised (LOT-R): Testing several models

Francisco J. Cano-García; Susana Sanduvete-Chaves; Salvador Chacón-Moscoso; Luis Rodríguez-Franco; Jesús García-Martínez; María Ángeles Antuña-Bellerín; José Antonio Pérez-Gil

Studies of the dimensionality of the Life Orientation Test-Revised (LOT-R), considered as the gold standard in the measurement of dispositional optimism, yield controversial results due to the various factorial solutions found. Consequently, the factorial structure of the test has not yet been fully established. The aim of this study is to determine the factorial structure of the LOT-R by comparing seven previous models and their empirical evidence. The test was administered to 906 Spanish participants, ages 18 to 61 (mean age: 23; 56% males). Confirmatory factor analyses were conducted using polychoric correlations. Considering the theoretical background and the best model fit indices (RMSEA=.038; CFI=.98), we conclude that the test presents a factorial structure of a second-order factor (life orientation) composed of two factors (optimism and pessimism). Thus, we recommend using a single global score that could be referred to as life orientation but which ultimately represents the level of dispositional optimism.


Frontiers in Psychology | 2018

Indirect Observation in Everyday Contexts: Concepts and Methodological Guidelines within a Mixed Methods Framework

M. Teresa Anguera; Mariona Portell; Salvador Chacón-Moscoso; Susana Sanduvete-Chaves

Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts.
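The abstract's key point, transforming qualitative records into matrices of codes rather than frequencies, can be illustrated with a toy example. The segments, dimensions, and category labels below are hypothetical and stand in for an ad hoc observation instrument; they are not taken from the paper.

```python
# A minimal sketch (hypothetical dimensions and categories) of the kind of
# code matrix described in the abstract: each text segment becomes one row,
# and each column holds the code assigned for one dimension of an ad hoc
# observation instrument. The matrix stores codes, not frequency counts.
segments = [
    {"speaker": "mother", "topic": "meals",  "tone": "neutral"},
    {"speaker": "child",  "topic": "school", "tone": "positive"},
    {"speaker": "mother", "topic": "school", "tone": "negative"},
]
dimensions = ["speaker", "topic", "tone"]

code_matrix = [[seg[dim] for dim in dimensions] for seg in segments]
for row in code_matrix:
    print(row)  # rows preserve the order of segments for pattern detection
```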


Frontiers in Psychology | 2016

Analyzing Two-Phase Single-Case Data with Non-overlap and Mean Difference Indices: Illustration, Software Tools, and Alternatives

Rumen Manolov; José Luis Losada; Salvador Chacón-Moscoso; Susana Sanduvete-Chaves

Two-phase single-case designs, consisting of a baseline evaluation followed by an intervention, represent the most clinically straightforward option for combining professional practice and research. However, unless they are part of a multiple-baseline schedule, such designs do not make it possible to demonstrate a causal relation between the intervention and the behavior. Although the statistical options reviewed here cannot overcome this methodological limitation, we aim to make practitioners and applied researchers aware of the appropriate options available for extracting maximum information from the data. In the current paper, we suggest that the evaluation of behavioral change should include visual and quantitative analyses, complementing the substantive criteria regarding the practical importance of the behavioral change. Specifically, we emphasize the need to use structured criteria for visual analysis, such as the ones summarized in the What Works Clearinghouse Standards, especially if such criteria are complemented by visual aids, as illustrated here. For quantitative analysis, we focus on the non-overlap of all pairs and the slope and level change procedure, as they offer straightforward information and have shown reasonable performance. An illustration is provided of the use of these three pieces of information: visual, quantitative, and substantive. To make visual and quantitative analysis feasible, open-source software is referenced and demonstrated. In order to provide practitioners and applied researchers with a more complete guide, several analytical alternatives are discussed, pointing out the situations (aims, data patterns) for which they are potentially useful.
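For one of the two quantitative procedures the abstract focuses on, the non-overlap of all pairs (NAP), the computation is simple enough to sketch directly. The data below are hypothetical and assume that higher values indicate improvement; this is an illustrative sketch, not the software referenced in the paper.

```python
# A minimal sketch of the non-overlap of all pairs (NAP) index for a
# two-phase (AB) single-case design: the proportion of all
# baseline-intervention pairs in which the intervention observation exceeds
# the baseline one, with ties counted as half.
def nap(baseline, intervention):
    pairs = [(a, b) for a in baseline for b in intervention]
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return score / len(pairs)

# Example data: 5 baseline and 7 intervention sessions (higher = better).
print(nap([2, 3, 3, 4, 2], [4, 5, 6, 5, 7, 6, 8]))
```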


Frontiers in Psychology | 2016

The development of a checklist to enhance methodological quality in intervention programs

Salvador Chacón-Moscoso; Susana Sanduvete-Chaves; Milagrosa Sánchez-Martín

The methodological quality of primary studies is an important issue when performing meta-analyses or systematic reviews. Nevertheless, there are no clear criteria for how methodological quality should be analyzed. Controversies emerge when considering the various theoretical and empirical definitions, especially in relation to three interrelated problems: the lack of representativeness, utility, and feasibility. In this article, we (a) systematize and summarize the available literature about methodological quality in primary studies; (b) propose a specific, parsimonious, 12-item checklist to empirically define the methodological quality of primary studies, based on a content validity study; and (c) present an inter-coder reliability study for the resulting 12 items. This paper provides a precise and rigorous description of the development of this checklist, highlighting the clearly specified criteria for the inclusion of items and the substantial inter-coder agreement on the different items. Rather than simply proposing another checklist, however, it argues that the list constitutes an assessment tool with respect to the representativeness, utility, and feasibility of the most frequent methodological quality items in the literature, one that provides practitioners and researchers with clear criteria for choosing items adequate to their needs. We propose individual methodological features as indicators of quality, arguing that these need to be taken into account when designing, implementing, or evaluating an intervention program. This enhances the methodological quality of intervention programs and fosters cumulative knowledge based on meta-analyses of these interventions. Future development of the checklist is discussed.
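The abstract reports substantial inter-coder agreement, wording that matches the Landis and Koch benchmarks commonly used with Cohen's kappa. Whether kappa is the exact statistic used in the paper is an assumption here; the sketch below only illustrates how such an agreement check is typically run, on hypothetical codings.

```python
# A minimal sketch (hypothetical codings, not the authors' data) of an
# inter-coder reliability check with Cohen's kappa; values of 0.61-0.80 are
# labeled "substantial" under the Landis & Koch benchmarks.
from sklearn.metrics import cohen_kappa_score

# Two coders independently rate whether each of 12 items is present (1) or absent (0).
coder_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
coder_b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")
```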


Frontiers in Psychology | 2017

Evaluation of a psychological intervention for patients with chronic pain in primary care

Francisco J. Cano-García; María del Carmen González-Ortega; Susana Sanduvete-Chaves; Salvador Chacón-Moscoso; Roberto Moreno-Borrego

According to evidence from recent decades, multicomponent programs of psychological intervention in people with chronic pain have reached the highest levels of efficacy. However, there are still many questions left to answer since efficacy has mainly been shown among upper-middle class patients in English-speaking countries and in controlled studies, with expert professionals guiding the intervention and with a limited number of domains of painful experience evaluated. For this study, a program of multicomponent psychological intervention was implemented: (a) based on techniques with empirical evidence, but developed in Spain; (b) at a public primary care center; (c) among patients with limited financial resources and lower education; (d) by a novice psychologist; and (e) evaluating all domains of painful experience using the instruments recommended by the Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT). The aim of this study was to evaluate this program. We selected a consecutive sample of 40 patients treated for chronic non-cancer pain at a primary care center in Utrera (Seville, Spain), adults who were not in any employment dispute, not suffering from psychopathology, and not receiving psychological treatment. The patients participated in 10 psychological intervention sessions, one per week, in groups of 13–14 people, which addressed psychoeducation for pain; breathing and relaxation; attention management; cognitive restructuring; problem-solving; emotional management; social skills; life values and goal setting; time organization and behavioral activation; physical exercise promotion; postural and sleep hygiene; and relapse prevention. In addition to the initial assessment, measures were taken after the intervention and at a 6-month follow-up. We assessed the program throughout the process: before, during and after the implementation. Results were analyzed statistically (significance and effect size) and from a clinical perspective (clinical significance according to IMMPACT standards). According to this analysis, the intervention was successful, although improvement tended to decline at follow-up, and the detailed design gave the program assessment a high degree of standardization and specification. Finally, suggestions for improvement are presented for upcoming applications of the program.


Frontiers in Psychology | 2016

A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis

Fco. P. Holgado-Tello; Salvador Chacón-Moscoso; Susana Sanduvete-Chaves; José Antonio Pérez-Gil

The Campbellian tradition provides a conceptual framework for assessing threats to validity. In parallel, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power were simulated in two different models: (1) a multistate model in the control condition (pre-test); and (2) a single-trait-multistate model in the control condition (post-test), adding a new mediating latent exogenous (independent) variable that represents a threat to validity. The results show, empirically, how the differences between the two models could be partially or totally attributed to these threats. SEM therefore provides a useful tool for analyzing the influence of potential threats to validity.


Frontiers in Psychology | 2018

Preliminary Checklist for Reporting Observational Studies in Sports Areas: Content Validity

Salvador Chacón-Moscoso; Susana Sanduvete-Chaves; M. Teresa Anguera; José Luis Losada; Mariona Portell; José A. Lozano-Lozano

Observational studies are based on systematic observation, understood as the organized recording and quantification of behavior in its natural context. Applied to the specific area of sports, observational studies present advantages over studies based on other designs, such as flexibility in adapting to different contexts, the possibility of using non-standardized instruments, and a high degree of development in specific software and data analysis. Although the importance and usefulness of sports-related observational studies have been widely shown, there is no checklist for reporting these studies. Consequently, authors do not have a guide to follow in order to include all of the important elements of an observational study in sports areas, and reviewers do not have a reference tool for assessing this type of work. To resolve these issues, this article aims to develop a checklist to measure the quality of sports-related observational studies based on a content validity study. The participants were 22 judges with at least 3 years of experience in observational studies, sports areas, and methodology. They evaluated a list of 60 items systematically selected and classified into 12 dimensions. They were asked to score each item on 5-point Likert scales measuring four aspects: representativeness, relevance, utility, and feasibility. The judges also had an open-format section for comments. The Osterlind index was calculated for each item on each of the four aspects. Items were considered appropriate when they obtained a score of at least 0.5 on all four assessed aspects. After applying these inclusion criteria and considering all of the open-format comments, the resulting checklist consisted of 54 items grouped into the same initial 12 dimensions. Finally, we highlight the strengths of this work. We also present its main limitation: the need to apply the resulting checklist to obtain data and, thus, strengthen the quality indicators of its psychometric properties. As relevant next steps, we encourage expert readers to use the checklist and provide feedback, and we plan to apply it to different sports areas.
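The item-retention rule in the abstract (keep an item only if its index reaches 0.5 on all four assessed aspects) can be sketched as follows. Rescaling 1-5 Likert ratings to the [-1, 1] range is one common convention for Osterlind-style congruence indices and is an assumption here, not a detail taken from the paper.

```python
# A minimal sketch (not the authors' code) of the retention rule described in
# the abstract, under the assumption that each 1-5 judge rating is rescaled to
# [-1, 1] via (rating - 3) / 2 and averaged across judges per item and aspect.
ASPECTS = ("representativeness", "relevance", "utility", "feasibility")

def aspect_indices(ratings_by_aspect):
    """ratings_by_aspect: dict mapping aspect -> list of 1-5 judge ratings for one item."""
    return {a: sum((r - 3) / 2 for r in rs) / len(rs)
            for a, rs in ratings_by_aspect.items()}

def keep_item(ratings_by_aspect, threshold=0.5):
    """Keep an item only if its index reaches the threshold on all four aspects."""
    idx = aspect_indices(ratings_by_aspect)
    return all(idx[a] >= threshold for a in ASPECTS)

# Example: hypothetical ratings for one item (the study used 22 judges;
# a shorter list is shown here for brevity).
item = {a: [5, 4, 4, 5, 3] for a in ASPECTS}
print(aspect_indices(item), keep_item(item))
```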

Collaboration


Dive into Salvador Chacón-Moscoso's collaborations.

Top Co-Authors

Francisco Pablo Holgado-Tello (National University of Distance Education)
Enrique Vila-Abad (National University of Distance Education)
Isabel Barbero-García (National University of Distance Education)
Mariona Portell (Autonomous University of Barcelona)