
Publication


Featured research published by Anthony R. Artino.


Journal of Graduate Medical Education | 2013

Analyzing and Interpreting Data From Likert-Type Scales

Gail M. Sullivan; Anthony R. Artino

Likert-type scales are frequently used in medical education and medical education research. Common uses include end-of-rotation trainee feedback, faculty evaluations of trainees, and assessment of performance after an educational intervention. A sizable percentage of the educational research manuscripts submitted to the Journal of Graduate Medical Education employ a Likert scale for part or all of the outcome assessments. Thus, understanding the interpretation and analysis of data derived from Likert scales is imperative for those working in medical education and education research. The goal of this article is to provide readers who do not have extensive statistics background with the basics needed to understand these concepts. Developed in 1932 by Rensis Likert1 to measure attitudes, the typical Likert scale is a 5- or 7-point ordinal scale used by respondents to rate the degree to which they agree or disagree with a statement (table). In an ordinal scale, responses can be rated or ranked, but the distance between responses is not measurable. Thus, the differences between “always,” “often,” and “sometimes” on a frequency response Likert scale are not necessarily equal. In other words, one cannot assume that the difference between responses is equidistant even though the numbers assigned to those responses are. This is in contrast to interval data, in which the difference between responses can be calculated and the numbers do refer to a measurable “something.” An example of interval data would be numbers of procedures done per resident: a score of 3 means the resident has conducted 3 procedures. Interestingly, with computer technology, survey designers can create continuous measure scales that do provide interval responses as an alternative to a Likert scale. The various continuous measures for pain are well-known examples of this (figure 1).
[Figure 1: Continuous Measure Example. Table: Typical Likert Scales.]
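To illustrate the ordinal-versus-interval distinction the article describes, here is a minimal sketch (not from the article; the response values are invented) showing rank-based summaries for Likert-type responses and a mean for interval data such as procedure counts:

```python
# Minimal sketch, assuming hypothetical data; illustrates why ordinal Likert
# responses are summarized with rank-based statistics while interval data can
# be summarized with a mean.
import statistics

# Hypothetical responses on a 1-5 agreement scale (ordinal data)
likert_responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

# For ordinal data, the median and mode are appropriate summaries
print("Median agreement:", statistics.median(likert_responses))
print("Modal agreement:", statistics.mode(likert_responses))

# Hypothetical procedure counts per resident (interval data)
procedure_counts = [3, 7, 5, 2, 6, 4]

# For interval data, the distances between values are meaningful, so the mean is too
print("Mean procedures per resident:", statistics.mean(procedure_counts))
```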


Medical Teacher | 2011

Situativity theory: A perspective on how participants and the environment can interact: AMEE Guide no. 52

Steven J. Durning; Anthony R. Artino

Situativity theory refers to theoretical frameworks which argue that knowledge, thinking, and learning are situated (or located) in experience. The importance of context to these theories is paramount, including the unique contribution of the environment to knowledge, thinking, and learning; indeed, they argue that knowledge, thinking, and learning cannot be separated from (they are dependent upon) context. Situativity theory includes situated cognition, situated learning, ecological psychology, and distributed cognition. In this Guide, we first outline key tenets of situativity theory and then compare situativity theory to information processing theory; we suspect that the reader may be quite familiar with the latter, which has prevailed in medical education research. Contrasting situativity theory with information processing theory also serves to highlight some unique potential contributions of situativity theory to work in medical education. Further, we discuss each of these situativity theories and then relate the theories to the clinical context. Examples and illustrations for each of the theories are used throughout. We will conclude with some potential considerations for future exploration. Some implications of situativity theory include: a new way of approaching knowledge and how experience and the environment impact knowledge, thinking, and learning; recognizing that the situativity framework can be a useful tool to “diagnose” the teaching or clinical event; the notion that increasing individual responsibility and participation in a community (i.e., increasing “belonging”) is essential to learning; understanding that the teaching and clinical environment can be complex (i.e., non-linear and multi-level); recognizing that explicit attention to how participants in a group interact with each other (not only with the teacher) and how the associated learning artifacts, such as computers, can meaningfully impact learning.


Medical Education | 2010

Second-year medical students' motivational beliefs, emotions, and achievement.

Anthony R. Artino; Jeffery S La Rochelle; Steven J. Durning

Medical Education 2010: 44: 1203–1212


Medical Education | 2011

Context and clinical reasoning: understanding the perspective of the expert's voice.

Steven J. Durning; Anthony R. Artino; Louis N. Pangaro; Cees van der Vleuten; Lambert Schuwirth

Medical Education 2011: 45: 927–938


Academic Medicine | 2010

Perspective: Redefining Context in the Clinical Encounter: Implications for Research and Training in Medical Education

Steven J. Durning; Anthony R. Artino; Louis N. Pangaro; Cees van der Vleuten; Lambert Schuwirth

Physician training and practice occur in complex environments. These complex environments, or contexts, raise important challenges and opportunities for research and training in medical education. The authors explore how studies from fields outside medicine can assist medical educators with their approach to the notion of context in the clinical encounter. First, they discuss the use of the term context in the clinical encounter as it relates to medical education. They then detail the meaning and use of the term in diverse fields outside medicine, such as mathematics, physics, and psychology, all of which suggest a nonlinear approach to the notion of context. Next, the authors highlight two inclusive theories, situated cognition and ecological psychology, that propose factors that relate to context and that suggest some potential next steps for research and practice. By redefining context as it relates to the clinical encounter (by linking it to theory and research from several diverse fields), the authors hope to move the field forward by providing guidance for the theory, research, and practice of medical education.


Journal of Graduate Medical Education | 2013

What Do Our Respondents Think We're Asking? Using Cognitive Interviewing to Improve Medical Education Surveys.

Gordon B. Willis; Anthony R. Artino

Consider the last time you answered a questionnaire. Did it contain questions that were vague or hard to understand? If yes, did you answer these questions anyway, unsure if your interpretation aligned with what the survey developer was thinking? By the time you finished the survey, you were probably annoyed by the unclear nature of the task you had just completed. If any of this sounds familiar, you are not alone, as these types of communication failures are commonplace in questionnaires.1–3 And if you consider how often questionnaires are used in medical education for evaluation and educational research, it is clear that the problems described above have important implications for the field. Fortunately, confusing survey questions can be avoided when survey developers use established survey design procedures. In 2 recent Journal of Graduate Medical Education editorials,4,5 the authors encouraged graduate medical education (GME) educators and researchers to use more systematic and rigorous survey design processes. Specifically, the authors proposed a 6-step decision process for questionnaire designers to use. In this article, we expand on that effort by considering the fifth of the 6 decision steps, specifically, the following question: “Will my respondents interpret my items in the manner that I intended?” To address this question, we describe in detail a critical, yet largely unfamiliar, step in the survey design process: cognitive interviewing. Questionnaires are regularly used to investigate topics in medical education research, and it may seem a straightforward process to script standardized survey questions. However, a large body of evidence demonstrates that items the researchers thought to be perfectly clear are often subject to significant misinterpretation, or otherwise fail to measure what was intended.1,2 For instance, abstract terms like “health professional” tend to conjure up a wide range of interpretations that may depart markedly from those the questionnaire designer had in mind. In this example, survey respondents may choose to include or exclude marriage counselors, yoga instructors, dental hygienists, medical office receptionists, and so on, in their own conceptions of “health professional.” At the same time, terms that are precise but technical in nature can produce unintended interpretations; for example, a survey question about “receiving a dental sealant” could be misinterpreted by a survey respondent as “getting a filling.”2 The method we describe here, termed “cognitive interviewing” or “cognitive testing,” is an evidence-based, qualitative method specifically designed to investigate whether a survey question—whether attitudinal, behavioral, or factual in nature—fulfills its intended purpose (Box). The method relies on interviews with individuals who are specifically recruited. These individuals are presented with survey questions in much the same way as survey respondents will be administered the final draft of the questionnaire. Cognitive interviews are conducted before data collection (pretesting), during data collection, or even after the survey has been administered, as a quality assurance procedure. During the 1980s, cognitive interviewing grew out of the field of experimental psychology; common definitions of cognitive interviewing reflect those origins and emphasis.
For example, Willis6 states, “Cognitive interviewing is a psychologically oriented method for empirically studying the way in which individuals mentally process and respond to survey questionnaires.” For its theoretical underpinning, cognitive interviewing has traditionally relied upon the 4-stage cognitive model introduced by Tourangeau.7 This model describes the survey response process as involving (1) comprehension, (2) retrieval of information, (3) judgment or estimation, and (4) selection of a response to the question. For example, mental processing of the question “In the past year, how many times have you participated in a formal educational program?” presumably requires a respondent to comprehend and interpret critical terms and phrases (eg, “in the past year” and “formal educational program”); to recall the correct answer; to decide to report an accurate number (rather than, for example, providing a higher value); and then to produce an answer that matches the survey requirements (eg, reporting “5 times” rather than “frequently”). Most often, comprehension problems dominate. For example, it may be found that the term “formal educational program” is variably interpreted. In other words, respondents may be unsure which activities to count and, furthermore, may not know what type of participation is being asked about (eg, participation as a student, teacher, or administrator). More recently, cognitive interviewing has to some extent been reconceptualized as a sociological/anthropological endeavor, in that it emphasizes not only the individualistic mental processing of survey items but also the background social context that may influence how well questions meaningfully capture the life of the respondent.8 Especially as surveys increasingly reflect a range of environments and cultures that may differ widely, this viewpoint has become increasingly popular. From this perspective, it is worth considering that the nature of medical education may vary across countries and medical systems, such that the definition of a term as seemingly simple as “graduate medical education” might itself lack uniformity.


Academic Medicine | 2012

Achievement Goal Structures and Self-Regulated Learning: Relationships and Changes in Medical School

Anthony R. Artino; Ting Dong; Kent J. DeZee; William R. Gilliland; Donna M. Waechter; David F. Cruess; Steven J. Durning

Purpose: Practicing physicians have a societal obligation to maintain their competence. Unfortunately, the self-regulated learning skills likely required for lifelong learning are not explicitly addressed in most medical schools. The authors examined how medical students’ perceptions of the learning environment relate to their self-regulated learning behaviors. They also explored how students’ perceptions and behaviors correlate with performance and change across medical school.
Method: The authors collected survey data from 304 students at different phases of medical school training. The survey items assessed students’ perceptions of the learning environment, as well as their metacognition, procrastination, and avoidance-of-help-seeking behaviors. The authors operationalized achievement as cumulative medical school grade point average (GPA) and, for third- and fourth-year students, collected clerkship outcomes.
Results: Students’ perceptions of the learning environment were associated with their metacognition, procrastination, and help-avoidance behaviors. These behaviors were also related to academic outcomes. Specifically, avoidance of help seeking was negatively correlated with cumulative medical school GPA (r = −0.23, P < .01) as well as exam (r = −0.22, P < .05) and clinical performance (r = −0.34, P < .01) in the internal medicine clerkship; these help-avoidance behaviors were also positively correlated with students’ presentation at a grade adjudication committee (r = 0.20, P < .05). Additionally, students’ perceptions of the learning environment varied as a function of their phase of training.
Conclusions: Medical students’ perceptions of the learning environment are related, in predictable ways, to their use of self-regulated learning behaviors; these perceptions seem to change across medical school.
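For readers unfamiliar with the correlations reported above, the sketch below shows how such a Pearson correlation might be computed. The data are invented for illustration only; they are not the study's data, and the analysis shown is not necessarily the authors' procedure.

```python
# Hypothetical sketch: Pearson correlation between avoidance-of-help-seeking
# scores and cumulative GPA, using invented values (not study data).
from scipy.stats import pearsonr

help_avoidance = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 1.5, 3.8]  # survey scale scores
cumulative_gpa = [3.6, 3.1, 3.8, 2.9, 3.3, 3.2, 3.9, 3.0]  # medical school GPA

r, p = pearsonr(help_avoidance, cumulative_gpa)
print(f"r = {r:.2f}, P = {p:.3f}")  # a negative r mirrors the direction reported above
```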


Journal of Advanced Academics | 2009

Beyond Grades in Online Learning: Adaptive Profiles of Academic Self-Regulation Among Naval Academy Undergraduates

Anthony R. Artino; Jason M. Stephens

Educational psychologists have long known that students who are motivated to learn tend to experience greater academic success than their unmotivated counterparts. Using a social cognitive view of self-regulated learning as a theoretical framework, this study explored how motivational beliefs and negative achievement emotions are differentially configured among students in a self-paced online course. Additionally, this study examined how these different motivation-emotion configurations relate to various measures of academic success. Naval Academy undergraduates completed a survey that assessed their motivational beliefs (self-efficacy and task value); negative achievement emotions (boredom and frustration); and a collection of outcomes that included their use of self-regulated learning strategies (elaboration and metacognition), course satisfaction, continuing motivation, and final course grade. Students differed vastly in their configurations of course-related motivations and emotions. Moreover, students with more adaptive profiles (i.e., high motivational beliefs/low negative achievement emotions) exhibited higher mean scores on all five outcomes than their less-adaptive counterparts. Taken together, these findings suggest that online educators and instructional designers should take steps to account for motivational and emotional differences among students and attempt to create curricula and adopt instructional practices that promote self-efficacy and task value beliefs and mitigate feelings of boredom and frustration.


Journal of Graduate Medical Education | 2012

You Can't Fix by Analysis What You've Spoiled by Design: Developing Survey Instruments and Collecting Validity Evidence

Gretchen Rickards; Charles D. Magee; Anthony R. Artino

Surveys are frequently used in graduate medical education (GME). Examples include resident satisfaction surveys, resident work-hour questionnaires, trainee self-assessments, and end-of-rotation evaluations. Survey instruments are also widely used in GME research. A review of the last 7 issues of JGME indicates that of the 64 articles categorized as Original Research, 50 (77%) included surveys as part of the study design. Despite the many uses of surveys in GME, the medical education literature provides limited guidance on survey design,1 and many surveys fail to use a rigorous methodology or best practices in survey design.2 As a result, the reliability and validity of many medical education surveys are uncertain. When surveys are not well designed, the data obtained from them may not be reproducible and may fail to capture the essence of the attitude, opinion, or behavior the survey developer is attempting to measure. Factors affecting reliability and validity in surveys include, but are not limited to, poor question wording, confusing question layout, and inadequate response options. Ultimately, these problems negatively impact the reliability and validity of survey data, making it difficult to draw useful conclusions.3,4 With these problems in mind, the aim of the present editorial is to outline a systematic process for developing survey instruments and collecting reliability and validity evidence for surveys used in GME and GME research. The term survey is quite broad and could include questions used in a phone interview, the set of items used in a focus group, and the items on a self-administered patient survey. In this editorial, we limit our discussion to self-administered surveys, which are also sometimes referred to as questionnaires. The goal of any good questionnaire should be to develop a set of items that every respondent will interpret the same way, respond to accurately, and be willing and motivated to answer. The 6 questions below, although not intended to address all aspects of survey design, are meant to help guide the novice survey developer through the survey design process. Addressing each of these questions systematically will optimize the quality of GME surveys and improve the chances of collecting survey data with evidence of reliability and validity. A graphic depiction of the process described below is presented in the figure.
[Figure: A Systematic Approach to Survey Design for Graduate Medical Education Research.]
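As one example of the kind of reliability evidence such a process might produce (the specific statistic and data below are illustrative assumptions, not something prescribed by the editorial), the following sketch computes Cronbach's alpha for a small set of hypothetical survey items:

```python
# Minimal sketch, assuming invented item responses: internal-consistency
# reliability (Cronbach's alpha) for a 4-item survey scale.
import numpy as np

# Rows = respondents, columns = items on a 1-5 scale (hypothetical data)
items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```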


Journal of Continuing Education in The Health Professions | 2010

Aging and cognitive performance: Challenges and implications for physicians practicing in the 21st century

Steven J. Durning; Anthony R. Artino; Eric S. Holmboe; Thomas J. Beckman; Cees van der Vleuten; Lambert Schuwirth

The demands of physician practice are growing. Some specialties face critical shortages and a significant percentage of physicians are aging. To improve health care it is paramount to understand and address challenges, including cognitive issues, facing aging physicians. In this article, we outline several issues related to cognitive performance and potential implications associated with aging. We discuss important findings from other fields and draw parallels to the practice of medicine. In particular, we discuss the possible effects of aging through the lens of situated cognition theory, and we outline the potential impact of aging on expertise, information processing, neurobiology, intelligence, and self-regulated learning. We believe that work done in related fields can provide a better understanding of physician aging and cognition, and thus can inform more effective approaches to continuous professional development and lifelong learning in medicine. We conclude with implications for the health care system and areas of future research.

Collaboration


Dive into Anthony R. Artino's collaborations.

Top Co-Authors

Steven J. Durning, Uniformed Services University of the Health Sciences
Ting Dong, Uniformed Services University of the Health Sciences
William R. Gilliland, Uniformed Services University of the Health Sciences
David F. Cruess, Uniformed Services University of the Health Sciences
Donna M. Waechter, Uniformed Services University of the Health Sciences
Kent J. DeZee, Uniformed Services University of the Health Sciences
Lauren A. Maggio, Uniformed Services University of the Health Sciences