
Publication


Featured research published by Peter H. Schwartz.


Philosophy of Science | 2007

Defining dysfunction: Natural selection, design, and drawing a line

Peter H. Schwartz

Accounts of the concepts of function and dysfunction have not adequately explained what factors determine the line between low‐normal function and dysfunction. I call the challenge of doing so the line‐drawing problem. Previous approaches emphasize facts involving the action of natural selection (Wakefield 1992a, 1999a, 1999b) or the statistical distribution of levels of functioning in the current population (Boorse 1977, 1997). I point out limitations of these two approaches and present a solution to the line‐drawing problem that builds on the second one.


Journal of General Internal Medicine | 2008

The ethics of information: absolute risk reduction and patient understanding of screening.

Peter H. Schwartz; Eric M. Meslin

Some experts have argued that patients should routinely be told the specific magnitude and absolute probability of potential risks and benefits of screening tests. This position is motivated by the idea that framing risk information in ways that are less precise violates the ethical principle of respect for autonomy and its application in informed consent or shared decision-making. In this Perspective, we consider a number of problems with this view that have not been adequately addressed. The most important challenges stem from the danger that patients will misunderstand the information or have irrational responses to it. Any initiative in this area should take such factors into account and should consider carefully how to apply the ethical principles of respect for autonomy and beneficence.
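To make the quantities at issue concrete, here is a minimal sketch in Python, using hypothetical screening figures that are not taken from the paper, of how absolute risk reduction compares with relative risk reduction and the number needed to screen:

    # Hypothetical figures for illustration only; not data from Schwartz & Meslin.
    baseline_risk = 5 / 1000      # 10-year disease mortality without screening
    screened_risk = 4 / 1000      # 10-year disease mortality with screening

    arr = baseline_risk - screened_risk   # absolute risk reduction: 0.1%
    rrr = arr / baseline_risk             # relative risk reduction: 20%
    nns = 1 / arr                         # number needed to screen: about 1,000

    print(f"ARR = {arr:.3%}, RRR = {rrr:.0%}, NNS = {nns:.0f}")

The gap between the 20% relative figure and the 0.1% absolute figure is the kind of framing difference whose mandatory disclosure the paper questions.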


Hastings Center Report | 2011

Questioning the Quantitative Imperative: Decision Aids, Prevention, and the Ethics of Disclosure

Peter H. Schwartz

Patients should not always receive hard data about the risks and benefits of a medical intervention. That information should always be available to patients who expressly ask for it, but it should be part of standard disclosure only sometimes, and only for some patients. And even then, we need to think about how to offer it.


Perspectives in Biology and Medicine | 2008

RISK AND DISEASE

Peter H. Schwartz

The way that diseases such as high blood pressure (hypertension), high cholesterol, and diabetes are defined is closely tied to ideas about modifiable risk. In particular, the threshold for diagnosing each of these conditions is set at the level where future risk of disease can be reduced by lowering the relevant parameter (of blood pressure, low-density lipoprotein, or blood glucose, respectively). In this article, I make the case that these criteria, and those for diagnosing and treating other “risk-based diseases,” reflect an unfortunate trend towards reclassifying risk as disease. I closely examine stage 1 hypertension and high cholesterol and argue that many patients diagnosed with these “diseases” do not actually have a pathological condition. In addition, though, I argue that the fact that they are risk factors, rather than diseases, does not diminish the importance of treating them, since there is good evidence that such treatment can reduce morbidity and mortality. For both philosophical and ethical reasons, however, the conditions should not be labeled as pathological. The tendency to reclassify risk factors as diseases is an important trend to examine and critique.


Journal of the American Geriatrics Society | 2013

Caregiver perspectives on cancer screening for persons with dementia: "why put them through it?".

Alexia M. Torke; Peter H. Schwartz; Laura R. Holtz; Kianna Montz; Greg A. Sachs

To describe the perspectives of family caregivers toward stopping cancer screening tests for their relatives with dementia and identify opportunities to reduce harmful or unnecessary screening.


Clinical and Translational Science | 2015

Building a Central Repository for Research Ethics Consultation Data: A Proposal for a Standard Data Collection Tool.

Mildred K. Cho; Holly A. Taylor; Jennifer B. McCormick; Nick Anderson; David Barnard; Mary B. Boyle; Alexander Morgan Capron; Elizabeth Dorfman; Kathryn Havard; Carson Reider; John Z. Sadler; Peter H. Schwartz; Richard R. Sharp; Marion Danis; Benjamin S. Wilfond

Clinical research ethics consultation services have been established across academic health centers over the past decade. This paper presents the results of collaboration within the CTSA consortium to develop a standard approach to the collection of research ethics consultation information to serve as a foundation for quality improvement, education, and research efforts. This approach includes categorizing and documenting descriptive information about the requestor, the research project, the ethical question, and the consult process, as well as describing the basic structure for a consult note. This paper also explores challenges in determining how to share some of this information between collaborating institutions related to concerns about confidentiality, data quality, and informatics. While there is much still to be learned to improve the process of clinical research ethics consultation, these tools can advance these efforts, which, in turn, can facilitate the ethical conduct of research.
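As a rough sketch of what a standard data collection record covering those categories might look like, here is a minimal Python record type; the field names and structure are assumptions for illustration, not the consortium's actual instrument:

    from dataclasses import dataclass

    # Field names are hypothetical; the paper defines the categories, not this schema.
    @dataclass
    class EthicsConsultRecord:
        requestor: str          # who asked for the consultation (e.g., investigator, IRB)
        research_project: str   # the study from which the question arose
        ethical_question: str   # the issue brought to the consult service
        consult_process: str    # how the consult was handled (reviewers, meetings, timeline)
        consult_note: str       # basic structure for the written recommendation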


Journal of General Internal Medicine | 2015

How Bioethics Principles Can Aid Design of Electronic Health Records to Accommodate Patient Granular Control.

Eric M. Meslin; Peter H. Schwartz

Ethics should guide the design of electronic health records (EHR), and recognized principles of bioethics can play an important role. This approach was recently adopted by a team of informaticists who are designing and testing a system where patients exert granular control over who views their personal health information. While this method of building ethics in from the start of the design process has significant benefits, questions remain about how useful the application of bioethics principles can be in this process, especially when principles conflict. For instance, while the ethical principle of respect for autonomy supports a robust system of granular control, the principles of beneficence and nonmaleficence counsel restraint due to the danger of patients being harmed by restrictions on provider access to data. Conflict between principles has long been recognized by ethicists and has even motivated attacks on approaches that state and apply principles. In this paper, we show how using ethical principles can help in the design of EHRs by first explaining how ethical principles can and should be used generally, and then by discussing how attention to details in specific cases can show that the tension between principles is not as bad as it initially appeared. We conclude by suggesting ways in which the application of these (and other) principles can add value to the ongoing discussion of patient involvement in their health care. This is a new approach to linking principles to informatics design that we expect will stimulate further interest.
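To fix ideas about what "granular control" can mean in an EHR, here is a minimal sketch of per-category, per-role access rules; the categories, roles, and default-deny behavior are assumptions for illustration, not the system described in the paper:

    # Hypothetical per-category access rules set by the patient; illustration only.
    granular_consent = {
        "mental_health_notes": {"primary_care": True, "emergency_dept": False},
        "hiv_status":          {"primary_care": True, "emergency_dept": True},
        "medication_list":     {"primary_care": True, "emergency_dept": True},
    }

    def provider_may_view(category: str, provider_role: str) -> bool:
        # Default-deny: unlisted categories or roles are hidden from the provider.
        return granular_consent.get(category, {}).get(provider_role, False)

A default-deny rule like this makes the tension concrete: it maximizes patient control (autonomy) while risking that a provider is blocked from information needed for safe care (beneficence and nonmaleficence).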


Philosophy of Science | 2014

Small tumors as risk factors not disease

Peter H. Schwartz

I argue that ductal carcinoma in situ (DCIS), the tumor most commonly diagnosed by breast mammography, cannot be confidently classified as cancer, that is, as pathological. This is because there may not be dysfunction present in DCIS—as I argue based on its high prevalence and the small amount of risk it conveys—and thus DCIS may not count as a disease by dysfunction-requiring approaches, such as Boorse’s biostatistical theory and Wakefield’s harmful dysfunction account. Patients should decide about treatment for DCIS based on the risks it poses and the risks and benefits of treatment, not on its disease status.


Theoretical Medicine and Bioethics | 2009

Disclosure and rationality: Comparative risk information and decision-making about prevention

Peter H. Schwartz

With the growing focus on prevention in medicine, studies of how to describe risk have become increasingly important. Recently, some researchers have argued against giving patients “comparative risk information,” such as data about whether their baseline risk of developing a particular disease is above or below average. The concern is that giving patients this information will interfere with their consideration of more relevant data, such as the specific chance of getting the disease (the “personal risk”), the risk reduction the treatment provides, and any possible side effects. I explore this view and the theories of rationality that ground it, and I argue instead that comparative risk information can play a positive role in decision-making. The criticism of disclosing this sort of information to patients, I conclude, rests on a mistakenly narrow account of the goals of prevention and the nature of rational choice in medicine.
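As a hedged illustration of the distinction at issue, with hypothetical numbers rather than figures from the paper, comparative risk information reports only how a patient stands relative to the average, while personal risk gives the probability itself:

    # Hypothetical numbers for illustration only.
    personal_risk = 0.08    # patient's own 10-year chance of developing the disease
    average_risk = 0.05     # average 10-year risk in the population

    comparative = "above average" if personal_risk > average_risk else "at or below average"
    print(f"Personal risk: {personal_risk:.0%}; comparative risk: {comparative}")
    # The worry Schwartz examines is that hearing "above average" may crowd out
    # attention to the 8% figure and to the treatment's absolute risk reduction.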


American Journal of Bioethics | 2015

Placebos, Full Disclosure, and Trust: The Risks and Benefits of Disclosing Risks and Benefits.

Peter H. Schwartz

Consider the following patient: a 40-year-old man who has had back pain that radiates down his left leg, on and off for two months. He performs his normal activities and does not have any “red flag” symptoms like fever or weakness. He’s using two commonly prescribed pain medications (ibuprofen and acetaminophen) as needed, and they help somewhat. The pain is slightly better than when it started but not much. He is frustrated and wants to feel better. What can the doctor do for him? First, she can reassure him that the duration of his symptoms is not uncommon for sciatica, a pinched nerve in the back, and there is no reason to believe that something more dangerous is going on. Second, she can advise against invasive steps, such as surgery, which research shows to be useless and potentially dangerous. Third, she can offer a medication that might help, such as a muscle relaxer, tricyclic antidepressant (TCA), or anticonvulsant. Each of these can reduce sciatic pain, though only in a minority of patients and usually by only a moderate amount. Each medication has risks, most commonly symptoms such as drowsiness, dry mouth, constipation, or dizziness, which resolve when the medication is stopped. These medications also have rare severe side effects, such as allergic reactions that could be life threatening.

Let’s say that the doctor is considering prescribing amitriptyline, a TCA. Should she utilize the techniques that Alfano suggests to improve the patient’s chance of benefit and reduce the risk of side effects? As Alfano (2015) describes, research shows that mentioning a side effect, such as dry mouth, can increase the chance of its occurring, due to the “expectation-confirmation” mechanism. Recognizing this, the doctor could use the authorized concealment approach that Alfano describes, which builds on an earlier suggestion by Miller and Colloca (2011).

Is this approach ethical? A critic might complain that authorized concealment blocks informed consent by eliminating discussion of an important issue. In fact, demonstration of the expectation-confirmation mechanism simply proves what doctors have long suspected and used to justify nondisclosure, as discussed and seminally critiqued by Jay Katz (1984). One can defend authorized concealment by arguing that the side effect has in fact been disclosed, just vaguely, and the doctor and patient are discussing what sort of discussion to have. The process that Alfano describes is a far cry from the complete lack of disclosure that Katz and others have fought against, where doctors don’t even mention the possibility of side effects or of alternative treatments for the patient to consider. In authorized concealment, the patient decides for himself whether hearing about the risk is worth the cost of increasing its chance of occurring. In addition, Alfano’s defense of authorized concealment is strictly circumscribed, to apply only to symptomatic side effects, not to more dangerous or irreversible ones. Amitriptyline has such potential side effects, including blood abnormalities and heart attacks. Even if describing these could increase the chance of their occurring, Alfano’s position does not justify authorized concealment of them, since they are presumably too important to the patient’s making an informed decision about whether to take the medication. Katz (1984) would certainly approve of this limit to authorized concealment, but Carl Schneider (1998) might not.
Imagine that some blood abnormality occurs in 1 in 1000 patients taking amitriptyline and causes severe problems for one month when it occurs. Schneider (1998) reviews research showing that many patients may have trouble understanding this danger or the probability of its occurring and may react irrationally to information about it. Patients may thus rationally defer to the doctor to decide whether the benefit of the medication is worth the risk. Note that from this perspective a physician may also ethically conceal purely symptomatic side effects as well, without asking for authorization. Deciding whether to utilize authorized concealment, or other techniques, depends in part on questions about the magnitude of the effect. If the chance of dry mouth is 20% in an uninformed patient, and this goes up to 23% if that side effect is mentioned, then authorized concealment may simply not be worth it. Even asking the patient for authorized concealment may confuse or concern individuals, causing anxiety or even raising the chance of developing other symptoms. As Alfano says, such questions could be empirically studied. But there are also more global impacts that may be hard to empirically measure: just bringing up the possibility of authorized concealment or deception may reduce the patient’s trust in the truthfulness and completeness of what his physician says. One patient asked to authorize concealment may feel valued and respected, but another who is asked may simply not know what to think, or may start to question other times the doctor seemed less than completely forthcoming.

Another technique that Alfano describes is “priming” to increase the chance that the medication will work, taking advantage of the attentional-somatic feedback loop. The doctor may increase the chance of amitriptyline’s reducing the patient’s pain by failing to mention that it works in only a minority of patients, and by projecting a strong conviction that it will work for this patient. Priming raises more ethical questions than authorized concealment since it does not involve the patient’s agreement. In fact, the doctor cannot ask the patient to agree: If the doctor tells the patient that knowing the true probability of the medication working may reduce the chance, then many patients will figure out that the probability must not be very good. Perhaps a discussion could have happened earlier, before this specific problem arose, where the doctor asked the patient for consent to utilize “fake optimism” in the future. The main problem with this approach is that the patient is being asked to make a choice about situations where the medical problem, prognosis, potential therapies, magnitude of benefit and risk, etc., are all undefined. The doctor could ask the patient to trust her to decide when to utilize fake optimism, but this asks for a lot. A patient who agreed would be putting a good deal of trust in the doctor’s judgment, both about the situation and about what the patient would want to know. Schneider (1998) argues that this is exactly the sort of trust that we put in our doctors, and he argues that we should. And, again, if Schneider is right, then it looks like we might also trust our doctors to decide when to conceal symptomatic side effects, without asking for our authorization. Now the concerns of Katz (1984) and others come to the fore, since we have traveled so far from shared evaluation of risks and benefits.
We are back to medicine’s old habit of doctors deciding when it is good for the patient to consider risks and benefits. The fact that amitriptyline has a low chance of reducing pain may convince a patient that it’s not worth the risks it carries, both the common, symptomatic ones and the unlikely but serious ones. This suggests that amitriptyline’s low efficacy would be material to a reasonable patient, and, according to the Reasonable Person Standard (Beauchamp & Childress, 2009, pp. 122–23), the low efficacy should be disclosed. On the other hand, one might argue that a fully informed and reflective reasonable person, recognizing the dangers of knowing this piece of information, would feel that the information is not that important. As is often the case, the devil is in the details of applying the standard. Once again, the magnitude of the effect may be particularly relevant. If the effect is small, leaving out a key fact about the medication’s probability of working may simply not be worth it. Or, to put it another way, a doctor who justifies her reticence based on the priming effect may be giving an inadequate defense for this failure of communication. The doctor could truthfully say to the patient that the medication works in only a minority of people, but could also put this in the most positive light, saying that it may work and that she has seen it work before. In this way, the doctor is being encouraging without being deceptive.

Early in the paper, Alfano convincingly argues that “placebo” is not a natural kind term. His point is somewhat unsurprising, since the concepts of medicine and psychology are rarely (if ever) natural kinds. His example of congestive heart failure is apt, since that category encompasses a range of conditions with extremely variable physiology, causes, treatments, and prognosis. We group conditions together as “congestive heart failure” due to their common characteristics and practical management, but their differences are also crucially important. Similarly, although the expectation-confirmation mechanism and attentional-somatic feedback loops differ, utilizing either to improve outcomes raises similar ethical issues. Placebo may be an ethical category as much as a biological and psychological one.

Collaboration


Dive into Peter H. Schwartz's collaboration.

Top Co-Authors: Alexander Morgan Capron (University of Southern California)