
Publications


Featured research published by Cindy M. Walker.


Medical Decision Making | 2012

The numeracy understanding in medicine instrument: a measure of health numeracy developed using item response theory.

Marilyn M. Schapira; Cindy M. Walker; Kevin J. Cappaert; Pamela Ganschow; Kathlyn E. Fletcher; Emily L. McGinley; Sam Del Pozo; Carrie Schauer; Sergey Tarima; Elizabeth A. Jacobs

Background: Health numeracy can be defined as the ability to understand and apply information conveyed with numbers, tables and graphs, probabilities, and statistics to effectively communicate with health care providers, take care of one’s health, and participate in medical decisions. Objective: To develop the Numeracy Understanding in Medicine Instrument (NUMi) using item response theory scaling methods. Design: A 20-item test was formed drawing from an item bank of numeracy questions. Items were calibrated using responses from 1000 participants and a 2-parameter item response theory model. Construct validity was assessed by comparing scores on the NUMi to established measures of print and numeric health literacy, mathematic achievement, and cognitive aptitude. Participants: Community and clinical populations in the Milwaukee and Chicago metropolitan areas. Results: Twenty-nine percent of the 1000 respondents were Hispanic, 24% were non-Hispanic white, and 42% were non-Hispanic black. Forty-one percent had no more than a high school education. The mean score on the NUMi was 13.2 (s = 4.6) with a Cronbach α of 0.86. Difficulty and discrimination item response theory parameters of the 20 items ranged from −1.70 to 1.45 and 0.39 to 1.98, respectively. Performance on the NUMi was strongly correlated with the Wide Range Achievement Test–Arithmetic (0.73, P < 0.001), the Lipkus Expanded Numeracy Scale (0.69, P < 0.001), the Medical Data Interpretation Test (0.75, P < 0.001), and the Wonderlic Cognitive Ability Test (0.82, P < 0.001). Performance was moderately correlated to the Short Test of Functional Health Literacy (0.43, P < 0.001). Limitations: The NUMi was found to be most discriminating among respondents with a lower-than-average level of health numeracy. Conclusions: The NUMi can be applied in research and clinical settings as a robust measure of the health numeracy construct.
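The 2-parameter item response theory model used to calibrate the NUMi gives the probability of a correct response as a logistic function of the gap between a person's ability and the item's difficulty, scaled by the item's discrimination. A minimal numpy sketch (the item parameters below are illustrative values spanning the ranges reported above, not the actual NUMi calibration):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response
    given ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

thetas = np.linspace(-3.0, 3.0, 7)
easy = p_correct(thetas, a=1.0, b=-1.70)   # easy, modestly discriminating item
hard = p_correct(thetas, a=1.98, b=1.45)   # hard, highly discriminating item
```

At theta equal to the difficulty b the probability is exactly 0.5, and a larger discrimination a makes the curve steeper around that point.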


Applied Measurement in Education | 2008

Using a Multidimensional Differential Item Functioning Framework to Determine if Reading Ability Affects Student Performance in Mathematics

Cindy M. Walker; Bo Zhang; John R. Surber

Many teachers and curriculum specialists claim that the reading demand of many mathematics items is so great that students do not perform well on mathematics tests even though they have a good understanding of mathematics. The purpose of this research was to test this claim empirically. This was accomplished by considering examinees who differed in reading ability within the context of a multidimensional DIF framework. Results indicated that students' performance on some mathematics items was influenced by their level of reading ability, such that examinees with lower proficiency classifications in reading were less likely to obtain correct answers to these items. This finding suggests that incorrect proficiency classifications may have occurred for some examinees. However, it is argued that rather than eliminating these mathematics items from the test, which would seem to decrease the construct validity of the test, attempts should be made to control for the confounding effect of reading that is measured by some of the mathematics items.


Patient Education and Counseling | 2009

Evaluating existing measures of health numeracy using item response theory.

Marilyn M. Schapira; Cindy M. Walker; Sonya K. Sedivy

OBJECTIVE To evaluate existing measures of health numeracy using item response theory (IRT). METHODS A cross-sectional study was conducted. Participants completed assessments of health numeracy measures including the Lipkus expanded health numeracy scale (Lipkus) and the Medical Data Interpretation Test (MDIT). The Lipkus and MDIT were scaled with IRT utilizing the two-parameter logistic model. RESULTS Three hundred fifty-nine (359) participants were surveyed. Classical test theory parameters and IRT scaling parameters of the numeracy measures found most items to be at least moderately discriminating. Modified versions of the Lipkus and MDIT were scaled after eliminating items with low discrimination, high difficulty parameters, and poor model fit. The modified versions demonstrated a good range of discrimination and difficulty as indicated by the test information functions. CONCLUSION An IRT analysis of the Lipkus and MDIT indicates that both health numeracy scales discriminate well across a range of ability. PRACTICE IMPLICATIONS Health numeracy skills are needed in order for patients to successfully participate in their medical care. The accurate assessment of health numeracy may help health care providers to tailor patient education interventions to the patient's level of understanding and ability. Item response theory scaling methods can be used to evaluate the discrimination and difficulty of individual items as well as the overall assessment.


Educational and Psychological Measurement | 2006

Statistical Versus Substantive Dimensionality: The Effect of Distributional Differences on Dimensionality Assessment Using DIMTEST

Cindy M. Walker; Razia Azen; Thomas Schmitt

It is believed by some that most tests are multidimensional, meaning that they measure more than one underlying construct. The primary objective of this study is to illustrate how variations in the secondary ability distribution affect the statistical detection of dimensionality and to demonstrate the difference between substantive and statistical dimensionality. Given dichotomous data simulated to be multidimensional, this study shows how varying the ability distributions affects the results obtained from DIMTEST, a nonparametric statistical procedure based on the theory of essential unidimensionality. Results indicate that the power of DIMTEST decreased as the mean of the secondary ability distribution approached the extremes and/or as the standard deviation of the secondary ability distribution approached zero. This has important implications for both researchers and practitioners because although a test may measure additional dimensions from a substantive viewpoint, these dimensions may not be detected statistically.
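The data generation the study describes can be sketched as follows: dichotomous responses from a compensatory two-dimensional logistic model, with the mean and standard deviation of the secondary ability distribution as the manipulated factors. The model form, parameter ranges, and loadings here are assumptions for illustration, not the study's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_2d(n_persons, n_items, mu2, sd2, a1=1.0, a2=0.8):
    """Dichotomous responses from a compensatory two-dimensional
    logistic model; mu2 and sd2 control the secondary ability
    distribution, the factor manipulated in the study."""
    theta1 = rng.normal(0.0, 1.0, n_persons)    # primary ability
    theta2 = rng.normal(mu2, sd2, n_persons)    # secondary ability
    b = rng.uniform(-1.5, 1.5, n_items)         # item difficulties
    logits = a1 * theta1[:, None] + a2 * theta2[:, None] - b[None, :]
    p = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random((n_persons, n_items)) < p).astype(int)

X = simulate_2d(500, 20, mu2=0.0, sd2=1.0)
```

Note that as sd2 approaches zero the second ability becomes a constant and contributes nothing to the covariance among items, so the data become statistically indistinguishable from unidimensional data even though two constructs are substantively involved, which mirrors the power loss reported above.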


Applied Measurement in Education | 2008

Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

Daniel A. Sass; Thomas A. Schmitt; Cindy M. Walker

Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal conditions using four latent trait estimation procedures and also evaluated whether the test composition, in terms of item difficulty level, reduces estimation error. Most importantly, both true and estimated item parameters were examined to disentangle the effects of latent trait estimation error from item parameter estimation error. Results revealed that non-normal latent trait distributions produced a considerably larger degree of latent trait estimation error than normal data. Estimated item parameters tended to have comparable precision to true item parameters, thus suggesting that increased latent trait estimation error results from latent trait estimation rather than item parameter estimation.
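One latent trait estimation procedure commonly compared in studies like this is expected a posteriori (EAP) estimation, computed by numerical quadrature over a prior ability distribution. A minimal 2PL sketch assuming a standard normal prior (a deliberate mismatch between this prior and a non-normal true distribution is exactly the kind of condition the study examines; the parameters are illustrative):

```python
import numpy as np

def eap_estimate(responses, a, b, n_quad=61):
    """EAP ability estimate under a 2PL model with a N(0,1) prior.
    responses: 0/1 vector; a, b: item discrimination/difficulty vectors."""
    q = np.linspace(-4.0, 4.0, n_quad)          # quadrature nodes
    prior = np.exp(-0.5 * q**2)                 # N(0,1) up to a constant
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (q[:, None] - b[None, :])))
    like = np.prod(np.where(responses[None, :] == 1, p, 1.0 - p), axis=1)
    post = prior * like                         # unnormalized posterior
    return np.sum(q * post) / np.sum(post)      # posterior mean

a, b = np.ones(10), np.zeros(10)
est_hi = eap_estimate(np.ones(10, dtype=int), a, b)   # all items correct
est_lo = eap_estimate(np.zeros(10, dtype=int), a, b)  # all items incorrect
```

Because the posterior mean is pulled toward the prior, a misspecified normal prior shrinks estimates toward the wrong region when the true trait distribution is skewed, one source of the estimation error the study documents.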


Journal of Health Communication | 2014

Development and Validation of the Numeracy Understanding in Medicine Instrument Short Form

Marilyn M. Schapira; Cindy M. Walker; Tamara Miller; Kathlyn E. Fletcher; Pamela Ganschow; Elizabeth A. Jacobs; Diana Imbert; Maria O'Connell; Joan M. Neuner

Health numeracy can be defined as the ability to understand and use numeric information and quantitative concepts in the context of health. The authors previously reported the development of the Numeracy Understanding in Medicine Instrument (NUMi), a 20-item test developed using item response theory. The authors now report the development and validation of a short form of the NUMi. Item statistics were used to identify a subset of 8 items representing a range of difficulty and content areas. Internal reliability was evaluated with Cronbach's alpha. Divergent and convergent validity were assessed by comparing scores on the S-NUMi with existing measures of education, print and numeric health literacy, mathematic achievement, cognitive reasoning, and the original NUMi. The 8-item scale had adequate reliability (α = .72) and was strongly correlated with the 20-item NUMi (r = .92). S-NUMi scores were strongly correlated with the Lipkus Expanded Health Numeracy Scale (r = .62), the Wide Range Achievement Test-Mathematics (r = .72), and the Wonderlic Cognitive Ability Test (r = .76). Moderate correlations were found with education level (r = .58) and print literacy as measured by the Test of Functional Health Literacy in Adults (r = .49). Results show that the short form of the NUMi is a reliable and valid measure of health numeracy that is feasible for use in clinical and research settings.
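The internal reliability figure reported above is Cronbach's α, which can be computed directly from an examinee-by-item score matrix. A minimal numpy sketch (the tiny 4x2 score matrix is invented for illustration, not study data):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (examinees x items) 0/1 score matrix:
    (k / (k - 1)) * (1 - sum of item variances / total-score variance)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # per-item variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

scores = np.array([[1, 1], [0, 0], [1, 0], [1, 1]])
alpha = cronbach_alpha(scores)
```

Alpha rises when items covary strongly relative to their individual variances, which is why dropping weakly discriminating items, as done in forming the S-NUMi, can leave reliability adequate despite a much shorter test.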


Applied Psychological Measurement | 2008

Impact of Missing Data on Person-Model Fit and Person Trait Estimation

Bo Zhang; Cindy M. Walker

The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated: pairwise deletion, coding missing responses as incorrect, hotdeck imputation, and model-based imputation. Person traits were estimated using the two-parameter item response model. Overall, missing data increased the difficulty of assessing person-model fit for both model-fitting and model-misfitting persons. The higher the proportion of missing data, the larger the number of persons incorrectly diagnosed. Among the four techniques, pairwise deletion performed best in recovering person-model fit and person trait level. Treating missing responses as incorrect caused examinees with missing data to misfit the measurement model, thus invalidating their person trait estimates.
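The contrast between two of the four treatments can be sketched directly: coding a missing response as incorrect biases a person's proportion-correct downward, while pairwise (available-case) scoring uses only the items actually answered. Toy data, not from the study:

```python
import numpy as np

# One examinee's responses; np.nan marks an omitted item
X = np.array([1.0, 0.0, 1.0, np.nan, 1.0, np.nan])

# Pairwise deletion: score only the items actually answered
pairwise = np.nanmean(X)                             # 3 correct of 4 answered

# Incorrect coding: recode every missing response as a wrong answer
incorrect_coded = np.nan_to_num(X, nan=0.0).mean()   # 3 correct of 6 "answered"
```

The incorrect-coding score is pulled toward zero for anyone with omissions, which is consistent with the finding above that this treatment makes such examinees misfit the measurement model.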


Journal of General Internal Medicine | 2011

The Meaning of Numbers in Health: Exploring Health Numeracy in a Mexican-American Population

Marilyn M. Schapira; Kathlyn E. Fletcher; Pamela Ganschow; Cindy M. Walker; Bruce Tyler; Sam Del Pozo; Carrie Schauer; Elizabeth A. Jacobs

Background: Health numeracy can be defined as the ability to use numeric information in the context of health. The interpretation and application of numbers in health may vary across cultural groups. Objective: To explore the construct of health numeracy among persons who identify as Mexican American. Design: Qualitative focus group study. Groups were stratified by preferred language and level of education. Audio-recordings were transcribed, and the Spanish-language groups (n = 3) were translated into English. An analysis was conducted using principles of grounded theory. Participants: A purposeful sample of participants from clinical and community sites in the Milwaukee and Chicago metropolitan areas. Main Measures: A theoretical framework of health numeracy was developed based upon categories and major themes that emerged from the analysis. Key Results: Six focus groups were conducted with 50 participants. Initial agreement in coding was 59–67%, with 100% reached after reconciliation by the coding team. Three major themes emerged: 1) numeracy skills are applied to a broad range of communication and decision-making tasks in health, 2) affective and cognitive responses to numeric information influence the use of numbers in the health setting, and 3) there exists a strong desire to understand the meaning behind numbers used in health. The findings informed a theoretical framework of health numeracy. Conclusions: Numbers are important across a range of skills and applications in health in a sample of an urban Mexican-American population. This study expands previous work that strives to understand the application of numeric skills to medical decision making and health behaviors.


Educational and Psychological Measurement | 2012

Establishing Effect Size Guidelines for Interpreting the Results of Differential Bundle Functioning Analyses Using SIBTEST

Cindy M. Walker; Bo Zhang; Kathleen Banks; Kevin J. Cappaert

The purpose of this simulation study was to establish general effect size guidelines for interpreting the results of differential bundle functioning (DBF) analyses using simultaneous item bias test (SIBTEST). Three factors were manipulated: number of items in a bundle, test length, and magnitude of uniform differential item functioning (DIF) against the focal group in each item in a bundle. A secondary purpose was to validate the current effect size guidelines for interpreting the results of single-item DIF analyses using SIBTEST. The results of this study clearly demonstrate that ability estimation bias can only be attributed to DIF or DBF when a large number of items in a bundle are functioning differentially against focal examinees in a small way or a small number of items are functioning differentially against focal examinees in a large way. In either of these situations, the presence of DIF or DBF should be a cause for concern because it would lead one to erroneously believe that distinct groups differ in ability when in fact they do not.
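SIBTEST's effect size is built on a weighted difference in expected score on the studied item (or bundle) between reference and focal examinees matched on the remaining items. A deliberately simplified, uncorrected sketch of that core quantity (SIBTEST's regression correction is omitted, focal-group weighting is assumed, and the data are toy values):

```python
import numpy as np

def beta_uni(item_ref, item_foc, match_ref, match_foc):
    """Uncorrected SIBTEST-style DIF effect estimate: the difference in
    mean studied-item score between groups at each matching-score level,
    weighted here by the focal group's count at that level."""
    levels = np.union1d(np.unique(match_ref), np.unique(match_foc))
    beta, weight = 0.0, 0.0
    for s in levels:
        r = item_ref[match_ref == s]
        f = item_foc[match_foc == s]
        if len(r) and len(f):                 # level observed in both groups
            w = len(f)
            beta += w * (r.mean() - f.mean())
            weight += w
    return beta / weight

# No DIF: identical conditional performance in both groups
no_dif = beta_uni(np.array([0, 1, 0, 1]), np.array([0, 1, 0, 1]),
                  np.array([0, 0, 1, 1]), np.array([0, 0, 1, 1]))

# Uniform DIF against the focal group at the lower matching level
dif = beta_uni(np.array([0, 1, 0, 1]), np.array([0, 0, 0, 1]),
               np.array([0, 0, 1, 1]), np.array([0, 0, 1, 1]))
```

A positive value indicates the item favors the reference group after matching; bundle versions sum this quantity over the studied items, which is why many small per-item effects can accumulate into the meaningful bundle-level bias described above.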


Applied Measurement in Education | 2001

An Examination of Conditioning Variables Used in Computer Adaptive Testing for DIF Analyses.

Cindy M. Walker; S. Natasha Beretvas; Terry A. Ackerman

With the use of computerized adaptive testing becoming increasingly popular, techniques are being developed that allow one to test for differential item functioning (DIF) when not all examinees take the same items, or even the same number of items. Roussos (1996) developed a program known as CATSIB that allows one to identify items that exhibit DIF using an examinee's estimate of ability as the conditioning variable, rather than total test score. CATSIB employs what is commonly referred to as a regression correction to control for estimation bias and the inflated Type I error rates that occur when the reference and focal groups differ in their observed score distributions. Simulation studies were conducted that compared power and Type I error rates across two conditions: using an examinee's ability estimate as the conditioning variable (CATSIB), either with or without the regression correction. In addition, power and Type I error rates were examined when using total test score as the conditioning variable (SIBTEST). Type I error rates were examined both for DIF-free items when other items within the test displayed DIF (local Type I error) and when no DIF was present in the data (global Type I error). For each of these cases, three different types of DIF were explored: uniform, ordinal, and disordinal.

Collaboration


Top co-authors of Cindy M. Walker and their affiliations:

Elizabeth A. Jacobs (University of Wisconsin-Madison)
Kathlyn E. Fletcher (Medical College of Wisconsin)
Pamela Ganschow (Rush University Medical Center)
S. Natasha Beretvas (University of Texas at Austin)
Bo Zhang (University of Wisconsin–Milwaukee)
Joan M. Neuner (Medical College of Wisconsin)
Carrie Schauer (Medical College of Wisconsin)
Daniel A. Sass (University of Texas at San Antonio)
Diana Imbert (University of Pennsylvania)