Publication


Featured research published by Brian Habing.


Addiction | 2011

Recent status scores for version 6 of the Addiction Severity Index (ASI-6).

John S. Cacciola; Arthur I. Alterman; Brian Habing; A. Thomas McLellan

AIMS To describe the derivation of recent status scores (RSSs) for version 6 of the Addiction Severity Index (ASI-6). DESIGN 118 ASI-6 recent status items were subjected to nonparametric item response theory (NIRT) analyses followed by confirmatory factor analysis (CFA). Generalizability and concurrent validity of the derived scores were determined. SETTING AND PARTICIPANTS A total of 607 recent admissions to a variety of substance abuse treatment programs constituted the derivation sample; a subset (n = 252) comprised the validity sample. MEASUREMENTS The ASI-6 interview and a validity battery of primarily self-report questionnaires that included at least one measure corresponding to each of the seven ASI domains were administered. FINDINGS Nine summary scales describing recent status that achieved or approached both high scalability and reliability were derived; one scale for each of six areas (medical, employment/finances, alcohol, drug, legal, psychiatric) and three scales for the family/social area. Intercorrelations among the RSSs also supported the multi-dimensionality of the ASI-6. Concurrent validity analyses yielded strong evidence supporting the validity of six of the RSSs (medical, alcohol, drug, employment, family/social problems, psychiatric). Evidence was weaker for the legal, family/social support and child problems RSSs. Generalizability analyses of the scales to males versus females and whites versus blacks supported the comparability of the findings, with slight exceptions. CONCLUSIONS The psychometric analyses to derive Addiction Severity Index version 6 recent status scores support the multi-dimensionality of the Addiction Severity Index version 6 (i.e. the relative independence of different life functioning areas), consistent with research on earlier editions of the instrument. In general, the Addiction Severity Index version 6 scales demonstrate acceptable scalability, reliability and concurrent validity. While questions remain about the generalizability of some scales to population subgroups, the overall findings coupled with updated and more extensive content in the Addiction Severity Index version 6 support its use in clinical practice and research.
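
The "scalability" referred to here is, in Mokken-style NIRT analyses, typically measured by Loevinger's H coefficient: the ratio of the summed observed inter-item covariances to the maximum those covariances could reach given the item margins. Below is a minimal sketch for dichotomous 0/1 items; the ASI-6 items are mixed-format, so the simulated data and the benchmark cutoffs are illustrative only.

```python
import numpy as np

def loevinger_h(X):
    """Total-scale Loevinger H for an (n_examinees, n_items) matrix of
    dichotomous 0/1 responses: sum of observed inter-item covariances
    divided by the sum of their maxima given the item margins."""
    p = X.mean(axis=0)                     # proportion endorsing each item
    cov = np.cov(X, rowvar=False)
    num = den = 0.0
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            num += cov[i, j]
            den += min(p[i], p[j]) - p[i] * p[j]   # max covariance for these margins
    return num / den

# Illustrative check against the usual Mokken benchmarks:
# H >= 0.5 strong scale, 0.4 <= H < 0.5 medium, 0.3 <= H < 0.4 weak.
rng = np.random.default_rng(0)
theta = rng.normal(size=600)
b = np.linspace(-1, 1, 9)                  # nine hypothetical item difficulties
X = (rng.random((600, 9)) < 1 / (1 + np.exp(b - theta[:, None]))).astype(int)
print(f"H = {loevinger_h(X):.2f}")
```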


Applied Psychological Measurement | 2007

Performance of DIMTEST- and NOHARM-Based Statistics for Testing Unidimensionality.

Holmes Finch; Brian Habing

This Monte Carlo study compares the ability of the parametric bootstrap version of DIMTEST with three goodness-of-fit tests calculated from a fitted NOHARM model to detect violations of the assumption of unidimensionality in testing data. The effectiveness of the procedures was evaluated for different numbers of items, numbers of examinees, correlations between underlying ability dimensions, skewness of underlying ability distributions, and the presence or absence of a guessing parameter. In the absence of guessing, DIMTEST and the NOHARM-based statistics had similar power, with the χ2 statistic having a very low Type I error rate. In the presence of guessing, however, two of the NOHARM-based statistics had unacceptably high Type I error rates, while the third performed similarly to DIMTEST. Given this inflated error rate, the study compares the empirical powers after adjusting for the discrepancy in Type I error rates.
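
Neither DIMTEST nor NOHARM is reproduced here, but the skeleton of such a Monte Carlo comparison is straightforward: simulate data under the unidimensional null and under a two-dimensional alternative, apply a dimensionality test to each replicate, and report the rejection rates as Type I error and power. In the sketch below, a Horn-style parallel analysis on the second eigenvalue stands in for the actual statistics, and all simulation settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, items, rho):
    """Compensatory two-dimensional 2PL data; the ability dimensions are
    correlated rho, and rho = 1 collapses the model to one dimension."""
    z = rng.normal(size=(n, 2))
    theta = np.column_stack([z[:, 0],
                             rho * z[:, 0] + np.sqrt(1 - rho ** 2) * z[:, 1]])
    a = np.zeros((items, 2))
    a[: items // 2, 0] = a[items // 2:, 1] = 1.0    # simple structure loadings
    b = rng.uniform(-1.5, 1.5, items)
    p = 1 / (1 + np.exp(b - theta @ a.T))
    return (rng.random((n, items)) < p).astype(int)

def second_eig(X):
    return np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[-2]

def reject(X, perms=20):
    """Stand-in for DIMTEST / the NOHARM statistics: reject if the second
    eigenvalue exceeds the 95th percentile of column-permuted nulls."""
    null = [second_eig(rng.permuted(X, axis=0)) for _ in range(perms)]
    return second_eig(X) > np.percentile(null, 95)

def rejection_rate(rho, reps=100):
    return np.mean([reject(simulate(600, 30, rho)) for _ in range(reps)])

print("Type I error (rho = 1):  ", rejection_rate(1.0))   # unidimensional null
print("Power        (rho = 0.5):", rejection_rate(0.5))   # 2-D alternative
```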


Psychological Assessment | 2007

Addiction Severity Index Recent and Lifetime summary indexes based on nonparametric item response theory methods.

Arthur I. Alterman; John S. Cacciola; Brian Habing; Kevin G. Lynch

Baseline Addiction Severity Index (5th ed.; ASI-5) data of 2,142 substance abuse patients were analyzed with two nonparametric item response theory (NIRT) methods: Mokken scaling and conditional covariance techniques. Nine reliable and dimensionally homogeneous Recent Problem indexes emerged in the ASI-5's seven areas, including two each in the Employment/Support and Family/Social Relationships areas. Lifetime Problem indexes were derived for five of the areas (Medical, Drug, Alcohol, Legal, and Psychiatric) but not for the Employment/Support and Family/Social Relationships areas. Correlational analyses conducted on a subsample of 586 patients revealed the indexes for the seven areas to be largely independent. At least moderate correlations were obtained between the Recent and Lifetime indexes within each area where both existed. Concurrent validity analyses conducted on this same subsample found meaningful relationships, except for the Employment/Support area. NIRT-based methods were able to add to findings produced previously by classical psychometric methods and appear to offer promise for the psychometric analysis of complex, mixed-format instruments such as the ASI-5.
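
The "conditional covariance techniques" mentioned above examine the covariance of an item pair after conditioning on the rest score (the total over all other items); under a unidimensional, locally independent model this quantity should be near zero. A minimal sketch, using hypothetical dichotomous data rather than ASI-5 items:

```python
import numpy as np

def conditional_covariance(X, i, j):
    """Rest-score-weighted average of cov(X_i, X_j | rest score), where the
    rest score excludes items i and j.  Values near zero are what a
    unidimensional, locally independent model predicts."""
    rest = X.sum(axis=1) - X[:, i] - X[:, j]
    total = n = 0
    for s in np.unique(rest):
        grp = X[rest == s]
        if len(grp) > 1:
            total += len(grp) * np.cov(grp[:, i], grp[:, j])[0, 1]
            n += len(grp)
    return total / n

# Hypothetical use: flag pairs whose conditional covariance is far from 0.
rng = np.random.default_rng(2)
theta = rng.normal(size=800)
p = 1 / (1 + np.exp(np.linspace(-1, 1, 10) - theta[:, None]))
X = (rng.random(p.shape) < p).astype(int)
print(conditional_covariance(X, 0, 1))   # close to zero for this model
```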


Applied Psychological Measurement | 2008

Conditional Covariance-Based Subtest Selection for DIMTEST

Amy G. Froelich; Brian Habing

DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam lacks simple structure, the ability and difficulty parameter distributions differ greatly, or the underlying model is noncompensatory. A new method of selecting the assessment subtest for DIMTEST, based on the conditional covariance dimensionality programs DETECT and HCA/CCPROX, is presented. Simulation studies show that using DIMTEST with this new selection method has either similar or significantly higher power to detect multidimensionality than using linear factor analysis for subtest selection, while maintaining Type I error rates around the nominal level.
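
The selection step can be imitated with generic tools: compute all pairwise rest-score conditional covariances, turn them into a dissimilarity, cluster the items hierarchically, and take the smaller cluster as the candidate assessment (AT) subtest. This is only a rough stand-in for DETECT and HCA/CCPROX, whose actual algorithms differ; every setting below is hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cond_cov(X, i, j):
    """Average covariance of items i and j given the rest score."""
    rest = X.sum(axis=1) - X[:, i] - X[:, j]
    groups = [g for s in np.unique(rest) if len(g := X[rest == s]) > 1]
    w = np.array([len(g) for g in groups], dtype=float)
    c = np.array([np.cov(g[:, i], g[:, j])[0, 1] for g in groups])
    return (w * c).sum() / w.sum()

def pick_assessment_subtest(X, n_clusters=2):
    """Cluster items so pairs with high conditional covariance land
    together; return the smaller cluster as the candidate AT subtest."""
    k = X.shape[1]
    C = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            C[i, j] = C[j, i] = cond_cov(X, i, j)
    D = C.max() - C                      # similarity -> dissimilarity
    np.fill_diagonal(D, 0.0)
    labels = fcluster(linkage(squareform(D), method="average"),
                      n_clusters, criterion="maxclust")
    sizes = {c: (labels == c).sum() for c in set(labels)}
    return np.flatnonzero(labels == min(sizes, key=sizes.get))
```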


Applied Psychological Measurement | 2001

Nonparametric Regression and the Parametric Bootstrap for Local Dependence Assessment.

Brian Habing

Ideas underlying nonparametric regression and the parametric bootstrap are discussed. An overview is provided of their application to item response theory and, in particular, local dependence assessment. The resulting nonparametric item response theory parametric bootstrap can remove the need to specify a particular parametric form for the item response functions and correct for the statistical bias caused by conditioning on observed test scores. The method is applied to the problem of assessing local dependence that varies with examinee trait levels. This is done by using pointwise testing bands to examine the item pair conditional covariance at each examinee trait level. The pointwise bands are used to diagnose speededness in a testing situation in which unanswered items are scored as incorrect.
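
Under simplifying assumptions, the two ingredients look like this: Nadaraya-Watson smoothing of an item pair's conditional covariance along a trait proxy, with pointwise bands obtained by a parametric bootstrap from the fitted, locally independent model. The Rasch generating model, bandwidth, and grid below are hypothetical stand-ins, not the article's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
b = np.linspace(-1.5, 1.5, 20)        # "fitted" Rasch item difficulties

def simulate(n=1000):
    theta = rng.normal(size=n)
    p = 1 / (1 + np.exp(b - theta[:, None]))
    return (rng.random((n, len(b))) < p).astype(int)

def nw(x, y, grid, h=1.5):
    """Nadaraya-Watson kernel regression of y on x at the grid points."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def cc_curve(X, i, j, grid):
    """Smoothed conditional covariance of items i, j along the rest score."""
    s = X.sum(axis=1) - X[:, i] - X[:, j]
    return (nw(s, X[:, i] * X[:, j], grid)
            - nw(s, X[:, i], grid) * nw(s, X[:, j], grid))

X = simulate()
grid = np.linspace(2, 16, 15)
observed = cc_curve(X, 0, 1, grid)

# Parametric bootstrap under the fitted (locally independent) model gives
# pointwise null bands; excursions flag trait levels with possible LID.
reps = np.array([cc_curve(simulate(), 0, 1, grid) for _ in range(200)])
lo, hi = np.percentile(reps, [2.5, 97.5], axis=0)
print(np.flatnonzero((observed < lo) | (observed > hi)))  # flagged grid points
```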


Applied Psychological Measurement | 2014

Parameter Estimation of the Reduced RUM Using the EM Algorithm

Yuling Feng; Brian Habing; Alan Huebner

Diagnostic classification models (DCMs) are psychometric models that have drawn wide attention because of their promise of providing detailed information on students' mastery of specific attributes. Model estimation is essential for any further implementation of these models, and estimation methods are often developed within a general framework, such as the generalized diagnostic model (GDM) of von Davier, the log-linear diagnostic classification model (LDCM), and the generalized deterministic input, noisy-and-gate (G-DINA) model. Using a maximum likelihood estimation algorithm, this article addresses the estimation of a complex noncompensatory DCM, the reduced reparameterized unified model (rRUM), whose estimation under the general frameworks can be lengthy due to the complexity of the model. The proposed estimation method is demonstrated on simulated data as well as a real data set and is shown to provide accurate item parameter estimates for the rRUM.
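
For context, the rRUM gives an examinee with attribute pattern alpha a correct-response probability on item j of pi*_j times the product over attributes of r*_jk raised to the power q_jk(1 - alpha_k), so each missing required attribute multiplies the baseline pi*_j by a penalty r*_jk. An EM treatment alternates an E-step over the 2^K attribute profiles with an M-step that re-estimates the item parameters. The sketch below shows only the E-step, is not the authors' implementation, and uses hypothetical parameter values.

```python
import itertools
import numpy as np

def rrum_probs(pi_star, r_star, Q, profiles):
    """P(X_j = 1 | alpha) = pi*_j * prod_k r*_{jk} ** (q_{jk} (1 - alpha_k))
    for every (profile, item) pair; shapes (C, K), (J, K), (J,) -> (C, J)."""
    missing = Q[None, :, :] * (1 - profiles[:, None, :])      # (C, J, K)
    return pi_star[None, :] * np.prod(r_star[None, :, :] ** missing, axis=2)

def e_step(X, pi_star, r_star, Q, prior):
    """Posterior over attribute profiles for each examinee.  The M-step
    (not shown) maximizes the expected complete-data log-likelihood in
    pi*, r*, and the profile prior."""
    K = Q.shape[1]
    profiles = np.array(list(itertools.product([0, 1], repeat=K)))
    P = rrum_probs(pi_star, r_star, Q, profiles)              # (C, J)
    ll = X @ np.log(P).T + (1 - X) @ np.log(1 - P).T          # (N, C)
    ll += np.log(prior)[None, :]
    ll -= ll.max(axis=1, keepdims=True)                       # stabilize exp
    post = np.exp(ll)
    return post / post.sum(axis=1, keepdims=True), profiles

# Hypothetical toy setup: 4 items, 2 attributes.
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0]])
pi_star = np.full(4, 0.9)
r_star = np.where(Q == 1, 0.4, 1.0)          # penalty only where required
X = np.array([[1, 0, 0, 1], [1, 1, 1, 1]])
post, profiles = e_step(X, pi_star, r_star, Q, np.full(4, 0.25))
print(profiles[post.argmax(axis=1)])          # modal attribute profiles
```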


Psychometrika | 2003

On the need for negative local item dependence

Brian Habing; Louis Roussos

While negative local item dependence (LID) has been discussed in numerous articles, its occurrence and effects often go unrecognized. This is due in part to confusion over which unidimensional latent trait is being used when evaluating the LID of multidimensional testing data. This article addresses this confusion by conditioning on an appropriately chosen latent variable. It then provides a proof that negative LID must occur when unidimensional ability estimates (such as the number-right score) are obtained from data that follow a very general class of multidimensional item response theory models. The importance of specifying which unidimensional latent trait is used, and the effect of that choice on the sign of the LIDs, are shown to have implications for a variety of foundational theoretical arguments, for the simulation of LID data sets, and for the use of testlet scoring to remove LID.
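
In the conditional covariance framing used throughout this literature, the LID of an item pair is a conditional covariance, and a short counting argument already shows why conditioning on the number-right score must drive some of these covariances negative. The notation below is generic, not the article's:

```latex
% LID of items i and j at the level of the conditioning variable:
\mathrm{LID}_{ij} = \operatorname{Cov}(X_i, X_j \mid \cdot)
                  = E[X_i X_j \mid \cdot] - E[X_i \mid \cdot]\,E[X_j \mid \cdot]

% Illustrative special case: condition on the number-right score
% T = \sum_k X_k.  Since T is constant given T,
0 = \operatorname{Var}\Bigl(\sum_k X_k \,\Big|\, T\Bigr)
  = \sum_k \operatorname{Var}(X_k \mid T)
    + \sum_{i \neq j} \operatorname{Cov}(X_i, X_j \mid T)

% so the pairwise conditional covariances sum to minus the summed item
% variances, and some of them must be negative whenever any item still
% varies given T.
```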


Applied Psychological Measurement | 2005

A Q3 Statistic for Unfolding Item Response Theory Models: Assessment of Unidimensionality With Two Factors and Simple Structure

Brian Habing; Holmes Finch; James S. Roberts

Although there are many methods available for dimensionality assessment for items with monotone item response functions, few methods are available for unfolding item response theory models. In this study, a modification of Yen's Q3 statistic is proposed for the case of these nonmonotone item response models. Through a simulation study, the method demonstrates some promise for use as a test of the hypothesis of unidimensionality and local independence. A positive bias seems to occur in some cases, however. The new statistic appears to have properties that would also make it useful in the construction of dissimilarity measures for use in clustering and multidimensional scaling algorithms. A real data analysis is also provided to demonstrate how the proposed methods could be used in practice.
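
Yen's Q3 is the correlation, across examinees, of the residuals from the fitted item response functions; the proposed modification applies the same idea with a nonmonotone (unfolding) IRF supplying the expected scores. The sketch below uses a toy single-peaked IRF standing in for a model such as the GGUM; all values are hypothetical.

```python
import numpy as np

def q3_matrix(X, P):
    """Pairwise correlations of residuals X - P over examinees, where P
    holds each examinee's model-implied response probabilities.  Values
    far from the near-zero baseline suggest local dependence."""
    return np.corrcoef(X - P, rowvar=False)

rng = np.random.default_rng(4)
theta = rng.normal(size=800)
delta = np.linspace(-1.5, 1.5, 12)                       # item locations
P = 0.9 * np.exp(-0.5 * (theta[:, None] - delta) ** 2)   # toy unfolding IRF
X = (rng.random(P.shape) < P).astype(int)

Q3 = q3_matrix(X, P)
# Classic Q3 runs slightly negative (roughly -1/(J-1)) under local
# independence when theta is estimated rather than known; the modified
# statistic targets the analogous baseline for unfolding models.
print(Q3[0, 1])
```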


Journal of Religion & Health | 2016

Fatalism Revisited: Further Psychometric Testing Across Two Studies

Sue P. Heiney; Mary M. Gullatte; Pearman D. Hayne; Barbara D. Powe; Brian Habing

Cancer fatalism may impact outcomes, particularly for African American (AA) women with breast cancer (BrCa). We examined the psychometrics of the modified Powe Fatalism Inventory in a sample of AA women with BrCa from two studies. Only the predetermination and God's will items satisfied the conditions to be classified as a strong subscale. Our analysis identified five items with strong psychometric properties for measuring fatalism among AA women with BrCa. However, these items do not include all the defining attributes of fatalism. A strong measure of fatalism strengthens our understanding of how this concept influences AA patient outcomes.


Journal of the American Medical Informatics Association | 2018

Exploring app features with outcomes in mHealth studies involving chronic respiratory diseases, diabetes, and hypertension: a targeted exploration of the literature

Sara Donevant; Robin Dawson Estrada; Joan M. Culley; Brian Habing; Swann Arp Adams

Objectives Limited data are available on the correlation of mHealth features and statistically significant outcomes. We sought to identify and analyze: types and categories of features; frequency and number of features; and the relationship of statistically significant outcomes to the type, frequency, and number of features. Materials and Methods This search included primary articles focused on app-based interventions in managing chronic respiratory diseases, diabetes, and hypertension. The initial search yielded 3622 studies, with 70 studies meeting the inclusion criteria. We used thematic analysis to identify 9 features within the studies. Results Employing existing terminology, we classified the 9 features as passive or interactive. Passive features included: 1) one-way communication; 2) mobile diary; 3) Bluetooth technology; and 4) reminders. Interactive features included: 1) interactive prompts; 2) upload of biometric measurements; 3) action treatment plan/personalized health goals; 4) 2-way communication; and 5) clinical decision support system. Discussion Each feature was included in only one-third of the studies, with a mean of 2.6 mHealth features per study. Studies with statistically significant outcomes used a higher combination of passive and interactive features (69%). In contrast, studies without statistically significant outcomes more often relied exclusively on passive features (46%). Inclusion of behavior change features (ie, plan/goals and mobile diary) was correlated with a higher incidence of statistically significant outcomes (100%, 77%). Conclusion This exploration is the first step in identifying how types and categories of features impact outcomes. While the findings are inconclusive due to a lack of homogeneity, they provide a foundation for future feature analysis.

Collaboration


Dive into Brian Habing's collaborations.

Top Co-Authors

John S. Cacciola (University of Pennsylvania)

Kevin G. Lynch (University of Pennsylvania)

Megan Ivey (University of Pennsylvania)

Roland C. Deutsch (University of North Carolina at Greensboro)