Publication


Featured research published by Sean R. Tunis.


Annals of Internal Medicine | 1994

Internists' Attitudes about Clinical Practice Guidelines

Sean R. Tunis; Robert Hayward; Mark C. Wilson; Haya R. Rubin; Eric B Bass; Mary Johnston; Earl P. Steinberg

Documentation of unexplained geographic variations in medical practices [1] and of the inappropriate use of interventions [2] or their use before effectiveness has been established [3] has led to the rapid proliferation of clinical practice guidelines. These "systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" [4] are an attempt to discourage ineffective medical practices, encourage effective practices, and improve health outcomes [5-10]. Despite increasing enthusiasm for guidelines, evidence exists that guidelines often do not affect clinical practices or health outcomes [11-17]. One possible obstacle to effective guideline implementation is physician concern about the intent and validity of these documents [18-22]. Many physicians first encounter guidelines in the context of peer review, utilization management, and quality control programs, experiences that they may not perceive positively. Such outside scrutiny is considered by some to be a challenge to autonomous clinical decision making. Little systematic study of physicians' attitudes toward guidelines, however, has been done [23, 24]. We did a national survey of a random sample of American College of Physicians (ACP) members to assess ACP members' familiarity with, confidence in, and attitudes about guidelines issued by ACP and other organizations, and members' perceptions of the effect of ACP and other guidelines on their practices.

Methods

Questionnaire

We designed a practice guidelines questionnaire that sought information about: 1) demographic and professional characteristics of responders, including their year of graduation from medical school, board certification, academic affiliations, hours per week devoted to patient care, practice type, practice setting, and principal method of clinical reimbursement; 2) responders' ratings of their familiarity with and confidence in practice guidelines issued by ACP, various medical specialty societies, and other major health care organizations; 3) responders' attitudes regarding guidelines and their effects on medical care; 4) any change in responders' clinical practice during the last year as a result of guidelines; and 5) responders' ratings of the importance of practice guidelines and other sources of information for clinical decision making. The questionnaire was approved by the ACP Clinical Efficacy Assessment Subcommittee (CEAS) after it was pilot-tested with 80 primary care physicians affiliated with the Johns Hopkins Health Plan [25] and 95 volunteers at the 1991 ACP Annual Meeting. Familiarity with, confidence in, and attitudes about guidelines were assessed using 5-point ordinal scales, with anchors appropriate to the judgment requested (for example, 5 = very familiar to 1 = not familiar; 5 = great confidence to 1 = no confidence). Estimated effects of guidelines on various aspects of medical practice were scored as likely to decrease, likely to have no effect, or likely to increase.

Sample

We drew a stratified random sample of 400 Associates, 1000 general internists, and 1200 internist specialists from ACP membership records. Members with specialty certification were oversampled because their response rate to previous ACP surveys was lower than that of other members. (Johnson White L. Personal communication.) The total sample of 2600 physicians represented 3.5% of the ACP membership and included internists from every state. The sample size for the survey was based on assumptions of a 65% response rate and a 2-point standard deviation on the 5-point ordinal scales that made up most of the items in our survey. Under these assumptions, a sample size of 2600 gave us greater than 90% power to detect a 1-point difference in comparisons involving the smallest subspecialty groups (for which we expected about 100 responders).
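As a rough check on the sample-size reasoning above, the following minimal sketch computes the power of a two-sided, two-sample comparison of means under a normal approximation. The alpha of 0.05 and the equal-variance z-test are assumptions for illustration; the abstract does not state how the planning calculation was actually done.

```python
from scipy.stats import norm

def two_sample_power(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test (normal approximation)."""
    se = sd * (2.0 / n_per_group) ** 0.5       # standard error of the difference in group means
    z_crit = norm.ppf(1 - alpha / 2)           # two-sided critical value
    return norm.cdf(abs(delta) / se - z_crit)  # probability of exceeding the critical value

# Planning assumptions quoted above: detect a 1-point difference, 2-point SD,
# roughly 100 responders in the smallest subspecialty subgroup.
print(round(two_sample_power(delta=1.0, sd=2.0, n_per_group=100), 2))  # ≈ 0.94
```

Under these assumptions, a subgroup of about 100 responders is indeed consistent with the greater-than-90% power quoted above.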
Survey Procedure

The survey was mailed in December 1991, accompanied by a letter from the executive vice-president of ACP encouraging participation. Follow-up mailings were sent to nonresponders 1 and 2 months later, and data collection was terminated 6 weeks after the third mailing. Each physician was assigned a number that was placed on the cover sheet of the mailed questionnaire. The same number was used to access information in an ACP membership database about the physician's year of graduation, specialty certification, practice location, and membership status. Cover sheets were removed from returned questionnaires after the identification number was recorded. In this way, we were able to identify nonresponders, compare demographic characteristics of responders and nonresponders, and preserve the anonymity of physicians during data abstraction and analysis.

Analysis

Data from returned questionnaires were double-entered and audited. Self-reported year of graduation and subspecialty certification were compared with information obtained from ACP databases, with κ statistics of 0.98 and 0.53, respectively. The low κ for subspecialty certification resulted from responders reporting a qualification that was not in ACP files: many physicians join ACP before completing their specialty training, and ACP has not previously updated its files for specialty status more often than once every 3 years. We therefore used the self-reported professional characteristics from our survey in the analysis. To compensate for oversampling of subspecialists and undersampling of Associate ACP members, the ratio of the proportion of Associates, generalists, and specialists in the final sample to their true proportion in the ACP membership was used to generate adjustment weights that were applied to all analyses. Questions answered by fewer than 90% of responders were not considered in the analysis. Frequency distributions of responses to questionnaire items were examined before statistical tests were selected and applied. Physician characteristics were coded as dichotomous variables, except for year of graduation, which was recoded into four categories. We then tested the statistical significance of associations between physician characteristics and dichotomous question responses (for example, impact compared with no impact on clinical practice) using the chi-square test. For items with 5-point response scales, we tested the statistical significance of differences between ratings for different physician subgroups using analysis of variance. Stepwise logistic and multiple linear regression models were used to explore relations between sets of physician characteristics and responses to individual questions. Given the multiple comparisons, differences in response distributions between physician subgroups were considered statistically significant at the P < 0.005 level.
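A small sketch of the kind of post-stratification weighting described above. The stratum sample counts come from the Sample section; the ACP membership shares are hypothetical placeholders, and the convention shown (population share divided by sample share, so oversampled strata are weighted down) is the standard one assumed here, not a detail given in the abstract.

```python
# Sample counts from the stratified draw described above.
sample_counts = {"associate": 400, "generalist": 1000, "specialist": 1200}

# Hypothetical shares of each stratum in the full ACP membership (illustrative only).
population_share = {"associate": 0.25, "generalist": 0.45, "specialist": 0.30}

n_sample = sum(sample_counts.values())

# weight = stratum share in the population / stratum share in the sample
weights = {
    stratum: population_share[stratum] / (sample_counts[stratum] / n_sample)
    for stratum in sample_counts
}

for stratum, w in weights.items():
    print(f"{stratum}: weight {w:.2f}")
```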
We developed an overall measure of physician attitudes toward guidelines by summing the ordinal scale ratings for four positive views of guidelines (strength of agreement with statements that guidelines generally are good educational tools, unbiased syntheses of expert opinion, a convenient source of advice, and intended to improve quality of care) and the inverse of the ordinal ratings for four negative views of guidelines (oversimplified or "cookbook" medicine, a challenge to physician autonomy, too rigid to apply to individual patients, and intended to cut costs). Possible scores ranged from 8 to 40, with 40 representing the most positive attitude toward guidelines. The internal consistency coefficient (Cronbach's α) for this scale was 0.76.

Results

Description of Responders and Nonresponders

Of the 2600 physicians in our original sample, 35% returned questionnaires after the first mailing, 18% after the second mailing, and 8% after the third mailing. Eighty-seven physicians from the original sample were determined to be ineligible because of death (n = 5), retirement (n = 48), survey damage (n = 1), or no forwarding address (n = 33). Thus, we received completed questionnaires from 1513 (60%) of 2513 eligible physicians. Characteristics of those who did and did not respond to the survey were compared using data available from the ACP membership file. Responders and nonresponders were similar in terms of their year of graduation from medical school, geographic location of practice, and prevalence of generalist and subspecialty certification in internal medicine (Table 1). In addition, the characteristics of responders were similar to those of the target population of College members after adjustment for intentional oversampling in some strata (Table 2).

Table 1. Characteristics of Members of the American College of Physicians: Survey Responders and Nonresponders

Table 2. Professional Characteristics of Survey Responders*

Familiarity with Guidelines

The percentage of responders reporting that they were familiar (4 or 5 on a scale of 1 = not at all familiar to 5 = very familiar) with the content of selected guidelines varied from 11% for the ACP exercise stress test guidelines to 59% for the National Cholesterol Education Program (NCEP) guidelines (Table 3). Ratings of familiarity with guidelines that have actually been published were significantly greater than ratings for a fictitious ACP guideline about computed tomography of the head (mean score, 1.8 of 5; P < 0.001 for comparison with the mean score for any of the other guidelines), which was included in the survey to provide a measure of the degree to which familiarity scores might be inflated by a general desire to appear knowledgeable. The responders who reported familiarity with this nonexistent guideline (7% of the total) provide a baseline for interpreting the familiarity reported for the other guidelines.

Table 3. Responders' Familiarity with Selected Clinical Practice Guidelines, by Responder Specialty

Subspecialists were more likely to report familiarity with guidelines pertaining to their own subspecialty than with those pertaining to other subspecialties or to general medical practice. For example, board-certified
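The eight-item attitude score described in the Methods above sums four positively worded items and the inverse of four negatively worded items, giving a possible range of 8 to 40, and its reliability is summarized with Cronbach's α. The sketch below shows one way to compute both on made-up responses; the data are illustrative, and the reverse-scoring convention (6 minus the rating on a 1-to-5 scale) is an assumption, since the abstract says only "inverse."

```python
import numpy as np

# Illustrative 5-point responses from a few hypothetical physicians (not survey data).
# Columns 0-3: positively worded items; columns 4-7: negatively worded items.
responses = np.array([
    [4, 5, 4, 5, 2, 1, 2, 1],
    [3, 3, 4, 4, 3, 2, 3, 2],
    [2, 3, 2, 3, 4, 4, 5, 4],
    [5, 4, 5, 5, 1, 2, 1, 2],
])

# Reverse-score the negative items (x -> 6 - x on a 1-5 scale), then sum all eight items.
items = responses.astype(float)
items[:, 4:] = 6 - items[:, 4:]
composite = items.sum(axis=1)  # possible range 8-40; higher = more positive attitude

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum() / composite.var(ddof=1))

print("composite scores:", composite)
print("Cronbach's alpha:", round(alpha, 2))
```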


Annals of Internal Medicine | 1993

More Informative Abstracts of Articles Describing Clinical Practice Guidelines

Robert Hayward; Mark C. Wilson; Sean R. Tunis; Eric B Bass; Haya R. Rubin; R. Brian Haynes

Patients hope that their doctors will correctly identify health problems, clearly articulate sensible options for managing each problem, wisely interpret the best available evidence about the probable outcomes associated with each option, sensitively solicit patient preferences for each outcome, and thereby implement practices that are likely to achieve the results patients care about. Groups of patients hope that doctors will work to prevent, detect, and treat health problems in a way that maximizes the public good achieved with available health care resources. To meet these dual expectations, doctors face forbidding tasks of clinical decision making. Improved methods for preparing, appraising, and abstracting overviews of the available evidence about a clinical topic can bring a broader range of high-quality information within the busy practitioner's grasp [1-4]. However, even well-designed overviews often do not synthesize information about the benefits, harms, and costs of medical practices in a way that facilitates clinical decision making. Clinical practice guidelines have been defined as "systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" [5]. Guidelines that are based on sound scientific evidence and a trustworthy process for judging the value of alternative practices could help physicians deal with the information overload they face. Existing practice guidelines, however, vary widely in their quality, and clinicians should carefully consider the validity and applicability of each guideline they read.

We propose a structured format for abstracts of articles describing practice guidelines that is based on emerging principles of guideline development and evaluation [6-13]. The proposal is designed to help readers obtain the key information needed to assess the applicability, importance, and validity of any guideline. The proposed structure for more informative abstracts has been reviewed and pilot-tested by guideline developers, evaluators, and implementors, and by groups of practicing clinicians. Early versions of the proposal were sent to consultants at the National Library of Medicine, the Institute of Medicine, the American College of Physicians (ACP), and the Agency for Health Care Policy and Research. Subsequent versions were pilot-tested on various published guidelines and were assessed by selected health care organizations for applicability to guidelines under development. An improved proposal was sent to 82 persons active in guideline development, evaluation, and implementation. The Internal Medicine Center to Advance Research and Education circulated the proposal to volunteer practitioners, and the authors solicited feedback from physician and health administrator participants at three national workshops about practice guidelines. The proposal that follows represents an integration of the collective wisdom of our external reviewers. (A glossary of terms used in articles on clinical practice guidelines is supplied in the Appendix.)

Proposed Structure and Content for Abstracts of Articles Describing Clinical Practice Guidelines

The proposed structure for abstracts of papers describing clinical practice guidelines is summarized in Table 1.
Key elements of the guideline development process are captured with statements about the principal objective and target of the guideline, the main practice options and outcomes that were considered, the nature of the evidence and values that were analyzed, the main benefits and harms expected from guideline implementation, the key recommendations, and a statement of whether the recommendations have been tested and by whom they have been endorsed. If details about guideline development methods or supporting evidence are published separately, the abstract should note the existence of those supporting documents and summarize their most relevant parts. To be included in their entirety in the online bibliographic databases of the National Library of Medicine, structured abstracts of clinical practice guidelines should not exceed 4096 characters (approximately 500 words); individual journals may have different requirements.

Table 1. Proposed Format for Structured Abstracts of Clinical Practice Guidelines

Objective

The abstract should state the objective of the guideline by identifying the targeted health problem, the targeted patients and physicians, and the main reason or reasons for developing new recommendations concerning this problem for this population. For guidelines primarily concerned with the management of health conditions, the stage of illness and the intent to prevent, diagnose, treat, or palliate the disorder should be specified. For guidelines primarily concerned with health procedures, the intervention and its role in patient management should be defined. Targeted providers, settings, and patients should be specified so that readers can tell who should do what, where it should be done, and to whom. A statement of objective may also specify why a new guideline is needed and how it should be used. Practice guidelines may be written to clarify or resolve clinical controversies, to communicate important new findings from clinical studies, or to promote more effective, efficient, or consistent physician practices at less cost or risk to patients. Guidelines may be used concurrently to help physicians and patients make choices (for example, clinical algorithms), retrospectively to compare physician choices with pre-established criteria (for example, utilization review), or prospectively to set limits on physician choices (for example, preprocedure approval).

Options

Clinical practice guidelines pertain to clinical decisions. The nature of those decisions should be made explicit by disclosing the main management options considered. The abstract should list the principal alternative preventive, diagnostic, or therapeutic strategies available to targeted clinicians and patients.

Outcomes

The abstract should also specify which health outcomes, such as death, morbidity, quality of life, economic costs, or changes in the process of health care, were considered.

Evidence

To estimate the probable effect of a health intervention on an outcome, relevant data must be defined, gathered, and appraised. Readers need to know what kind of evidence was considered and what methods were used to select and combine evidence from different sources. Potential sources of evidence about clinical outcomes include the results of clinical studies (published or unpublished), expert opinion, information contained in public or private health databases, and direct input from patients or providers.
Potential sources of evidence about economic outcomes include the results of cost-effectiveness studies, various fee schedules, and information contained in public or private health claims databases. Potential methods of gathering evidence include computerized searches for articles that satisfy predetermined criteria, reviews of the bibliographies of relevant articles, and surveys of relevant persons or organizations [1]. The abstract should state whether explicit methods were used to combine results from scientific studies (for example, meta-analysis) or the opinions of experts (for example, the Delphi technique) [14, 15]. The abstract may also indicate whether specific criteria were used to gauge the quality of information from different sources [16, 17]. Scientific evidence regarding the likelihood of certain outcomes is often missing or conflicting, so it is desirable for the strength of evidence supporting a guideline to be gauged in some way [16]. If gaps in the evidence were bridged formally (for example, through decision-analysis models) or informally (for example, the best guess of clinical experts), major assumptions or areas of uncertainty should be acknowledged [11]. Because of the extraordinary time required to properly assemble and review evidence and then achieve consensus about practice recommendations, the abstract of a practice guideline should specify the publication dates of the most recent evidence considered. If a sensitivity analysis was performed, any major vulnerability of the guideline to weaknesses in the evidence should be acknowledged. These considerations may be summarized by qualifying a guideline as provisional, by specifying dates for expiration or review, by listing studies in progress that could affect the guideline, or by identifying key research priorities.

Values

The declaration of a recommended practice presumes an implicit or explicit process for judging the relative desirability of the health, economic, and process outcomes associated with alternative practices. These are matters of opinion and value. For example, in developing guidelines for breast cancer screening, it is necessary to weigh the value of preventing a death from breast cancer against the value of avoiding unnecessary harm or anxiety in women who do not have breast cancer. A high value placed on maximizing detection of asymptomatic malignancy may motivate an organization to recommend teaching breast self-examination to young women [18]. A relatively greater value placed on avoiding false-positive results may motivate others not to promote such programs [19]. Consequently, the principal sources of such values and the method by which the judgments were made should be reported in guideline abstracts. At a minimum, the abstract should identify the major groups that participated in the process of assigning values to outcomes (for example, generalist or specialist physicians, other health care providers, insurers, representatives of health organizations, and special-interest groups) and whether patient preferences were represented. The abstract may also indicate the method used to synthesize opinions from multiple
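The elements named in this excerpt (objective, options, outcomes, evidence, values, recommendations, validation) lend themselves to a simple data structure. The sketch below is an illustrative rendering, not the authors' Table 1: the field names are paraphrased from the excerpt, and the character check mirrors the 4096-character National Library of Medicine limit quoted above.

```python
from dataclasses import dataclass, fields

MAX_ABSTRACT_CHARS = 4096  # National Library of Medicine ceiling cited in the proposal

@dataclass
class GuidelineAbstract:
    """Illustrative container for the structured-abstract elements named in the excerpt."""
    objective: str        # targeted health problem, patients, providers, and purpose
    options: str          # principal management alternatives considered
    outcomes: str         # health, economic, and process outcomes considered
    evidence: str         # sources, gathering methods, and how evidence was combined
    values: str           # whose values were used and how judgments were made
    recommendations: str  # key recommendations and expected benefits/harms
    validation: str       # whether recommendations were tested and who endorsed them

    def total_length(self) -> int:
        return sum(len(getattr(self, f.name)) for f in fields(self))

    def within_nlm_limit(self) -> bool:
        # Rough check against the indexing limit; journals may impose different rules.
        return self.total_length() <= MAX_ABSTRACT_CHARS
```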


Journal of General Internal Medicine | 1996

Practice guidelines: What are internists looking for?

Robert Hayward; Mark C. Wilson; Sean R. Tunis; Gordon H. Guyatt; Karen Ann Moore; Eric B Bass

To determine features of the presentation of clinical practice guidelines that may enhance their use by internists, we conducted a cross-sectional survey to which 1,513 (60%) of 2,513 eligible internists responded. Endorsements by respected colleagues and by major organizations were identified as very important by 72% and 69% of respondents, respectively. Respondents preferred short pamphlets and manuals summarizing a number of guidelines, and felt that concise recommendations (86%), a synopsis of the supporting evidence (85%), and quantification of benefit (77%) were important features of guideline presentation. We conclude that guideline developers should gain the endorsement of major organizations and present key aspects in brief, easily assimilated formats.


Health Policy | 1994

Health care technology in the United States

Sean R. Tunis; Hellen Gelband

The US health care system reflects the free market of the US economy: there is no fixed budget and no limit on expenditures in the loosely structured matrix of largely private-sector health industry components. Mainly because of the inaccessibility of adequate health care for a large segment of the population, and because the enormous cost of care threatens financial ruin for many more people, the first major reform of the system was debated in Congress for most of 1994, although, in the end, no legislation was passed. One focus of the debate on spending has been the problem of excessive use of expensive medical technology and the need for some control, which, by and large, is lacking in the existing system. Health care technology assessment itself is a thriving industry in the United States, used by government, insurers, medical societies, hospitals, and other groups for their own purposes. At the national policy level, few opportunities exist for technology assessment to affect the health care industry, so most effort is directed at trying to affect medical practice at the level of the individual hospital and practitioner. The discernible effect of technology assessment has been minimal.


JAMA | 1994

Users' Guides to the Medical Literature: III. How to Use an Article About a Diagnostic Test A. Are the Results of the Study Valid?

Roman Jaeschke; Gordon H. Guyatt; David L. Sackett; Eric Bass; Patrick Brill-Edwards; George P. Browman; Deborah J. Cook; Michael Farkouh; Hertzel C. Gerstein; Brian Haynes; Robert Hayward; Anne Holbrook; Elizabeth F. Juniper; Hui Lee; Mitchell Levine; Virginia A. Moyer; Jim Nishikawa; Andrew D. Oxman; Ameen Patel; John Philbrick; W. Scott Richardson; Stephane Sauve; Jack Sinclair; K. S. Trout; Peter Tugwell; Sean R. Tunis; Stephen D. Walter; Mark Wilson


JAMA | 1994

Users' Guides to the Medical Literature: II. How to Use an Article About Therapy or Prevention B. What Were the Results and Will They Help Me in Caring for My Patients?

Gordon H. Guyatt; David L. Sackett; Deborah J. Cook; Eric Bass; Patrick Brill-Edwards; George P. Browman; Deborah Cook; Michael Farkouh; Hertzel C. Gerstein; Brian Haynes; Robert Hayward; Anne Holbrook; Roman Jaeschke; Elizabeth F. Juniper; Andreas Laupacis; Hui Lee; Mitchell Levine; Virginia A. Moyer; Jim Nishikawa; Andrew D. Oxman; Ameen Patel; John Philbrick; W. Scott Richardson; Stephane Sauve; Jack Sinclair; K. S. Trout; Peter Tugwell; Sean R. Tunis; Stephen D. Walter; John Williams


JAMA | 1994

Users' Guides to the Medical Literature: VI. How to Use an Overview

Andrew D. Oxman; Deborah J. Cook; Gordon H. Guyatt; Eric Bass; Patrick Brill-Edwards; George P. Browman; Michael Farkouh; Hertzel C. Gerstein; Ted Haines; Brian Haynes; Robert Hayward; Anne Holbrook; Roman Jaeschke; Elizabeth F. Juniper; Andreas Laupacis; Hui Lee; Mitchell Levine; Virginia A. Moyer; David Naylor; Jim Nishikawa; Ameen Patel; John Philbrick; W. Scott Richardson; Stephane Sauve; David L. Sackett; Jack Sinclair; Brian Strom; K. S. Trout; Sean R. Tunis; Stephen D. Walter


JAMA | 1995

Users' Guides to the Medical Literature: VIII. How to Use Clinical Practice Guidelines A. Are the Recommendations Valid?

Robert Hayward; Mark C. Wilson; Sean R. Tunis; Eric Bass; Gordon H. Guyatt


Health Affairs | 2006

Coverage options for promising technologies: Medicare's 'coverage with evidence development'.

Sean R. Tunis; Steven D. Pearson


JAMA | 1995

Users' Guides to the Medical Literature: VIII. How to Use Clinical Practice Guidelines B. What Are the Recommendations and Will They Help You in Caring for Your Patients?

Mark Wilson; Robert Hayward; Sean R. Tunis; Eric Bass; Gordon H. Guyatt
