Barron H. Lerner
Columbia University
Publications
Featured research published by Barron H. Lerner.
American Journal of Public Health | 1993
Barron H. Lerner
New York City began America's first campaign to control tuberculosis in 1893, and the disease declined until the 1970s. Throughout the 20th century, New York relied on three control strategies: screening, supervised therapy, and detention of noncompliant persons. Officials consistently identified the persistent foci of tuberculosis to be minorities and the poor, and they concentrated efforts among these populations. Recently, however, in the setting of rising human immunodeficiency virus infection and homelessness, tuberculosis--including multidrug-resistant strains--has returned to New York with a vengeance. Tuberculosis control in the city has been limited by two problems that hamper many public health programs: (1) antituberculosis measures, while appropriately targeting the poor, have been inconsistently funded and poorly coordinated; and (2) efforts have emphasized detection and treatment of individual cases rather than improvement of underlying social conditions. Renewed efforts by New York and other cities must address these limitations.
Nature Reviews Cancer | 2002
Barron H. Lerner
Breast cancer activism has become a fixture in the United States, where fundraising events are ubiquitous and government financing of research into the disease has skyrocketed. Activists in other countries are now reporting similar accomplishments. Here, predominantly using the United States as a case study, I analyse the recent successes of breast cancer activism. I also raise a series of questions about the future goals of activism.
Annals of Internal Medicine | 1995
Barron H. Lerner; David J. Rothman
It is no exaggeration to declare that the greatest blot on the record of medicine in the 20th century is the role played by German physicians in the Nazi era. At the postwar trial at Nuremberg, the court found 15 German physicians guilty of war crimes and sentenced 7 of them to death [1]. After the trial, the German medical establishment carefully cultivated the theory that the violations that had occurred were the acts of this handful of physicians working in a few notorious concentration camps [2]. Until the mid-1960s, most commentators accepted this version of the events. Not the profession of medicine, but only a few Nazi henchmen, more madmen than men of science, were implicated in the Holocaust. Indeed, the trial of the Nazi physicians at Nuremberg, the verdict, and even the Nuremberg Code did not receive sustained attention between 1945 and 1965. Events in Nazi Germany seemed altogether irrelevant to physicians in the United States. We now know better. The profession, not just a handful of physicians, was implicated in the gross offenses that occurred under Nazi rule. Beginning in the 1980s, many historians, German and American, have shown how pervasive the corruption was and the full extent to which Nazism permeated German medicine [3-7]. Fully 45% of German physicians belonged to the Nazi party, a percentage higher than that for any other profession [8]. Dissenters were scarce. German physicians began to elevate service to the state above medical ethics well before the Final Solution was implemented in 1942, and even before the Nazi party took power in 1933. In the opening years of the 20th century, German physicians promoted policies of racial hygiene and eugenics in their eagerness to limit the reproduction of persons believed to have hereditary deficiencies. Between 1933 and 1939, they sterilized an estimated 400 000 Germans with mental disturbances [9]. German psychiatrists designed and implemented the notorious T-4 program, in which so-called euthanasia was done on handicapped or retarded children and adults [10, 11]. In effect, the goal of producing a pure Aryan race took precedence over such fundamental ethical principles as the integrity of the body and commitment to the well-being of the individual patient. During the war years, Nazi propaganda, with the active cooperation of physicians, depicted Jews as a metaphor for disease, thereby legitimating the horrors of the Final Solution [12]. Physicians also put to death psychiatric patients to free up hospital beds for military purposes [13], and they were critical to the operation of concentration camps, doing grisly experiments on prisoners and deciding who was fit enough to work and who should be exterminated. As Bruno Muller-Hill has concluded [14], Germans saw Auschwitz as a shrine to science and technology, and it was the medical profession that promoted this perversion. The article by Ernst in this issue [15] adds to our understanding of the role of the medical profession and Nazism in two important ways. First, we learn that the corruption of physicians was not unique to Germany. Ernst illuminates the events that occurred in the Vienna Faculty of Medicine, beginning with the Nazi takeover in March 1938. As in Germany, we find the expulsion of Jewish faculty members, the filling of vacated posts with former colleagues willing to swear loyalty to Hitler, and the subsequent participation of these persons in euthanasia programs and human experimentation.
Second, Ernst's story reminds us how critical self-interest was in shaping the events that occurred. Like their German counterparts [16], Austrian physicians were not so much ideologues as petty opportunists. Getting rid of Jewish faculty promised not only more paying patients but also academic advancement. The shame of the profession was not only that it succumbed to noxious ideas but that it could not withstand simple and straightforward greed and ambition. The prominent role of Jews in medicine in Vienna made the events in the Faculty particularly dramatic. Seventy-eight percent of the Faculty was forced out, a percentage larger than that in any other European university faculty. Ernst notes that the remaining professors turned a blind eye, were afraid, or converted to Nazism [15]. These events left the Faculty, once home to numerous Nobel Laureates, depleted of outstanding researchers and teachers, a situation that persisted for years after the war. But they created vacancies for those who otherwise would not have qualified for positions. The conspiracy of silence characteristic of the German medical profession in the immediate postwar period [7] was no less powerful in Austria. Viennese Faculty members who had indisputably committed atrocities, such as exposing Dachau inmates to freezing seawater, were allowed to resume their careers; anatomical specimens from the corpses of executed persons remained (and still remain) in use. That Ernst's account is the first to outline this history shows how thoroughly memory was repressed. Why should we revisit these events years later? Not because we anticipate another Holocaust but because the medical profession must be ever alert to challenges to the integrity of its ethics, particularly when they emanate from state authority. The record of the U.S. medical profession is, on the whole, commendable, but important lapses have occurred in the United States as well as in other countries. Between 1907 and 1941, U.S. physicians sterilized nearly 40 000 persons, almost all of them mentally disabled and incapable of giving consent [17, 18]. Routine reports of Nazi sterilization and euthanasia programs in the Journal of the American Medical Association generated little response [19]. During the hot war of 1940 to 1945, U.S. investigators often ignored consent in their clinical experiments to discover antidotes for dysentery or malaria [20]. Later, physicians during the cold war ignored consent when researching the effects of radiation on the body [21]. Indeed, between 1945 and 1965, physicians in their war against disease often allowed the needs of research to trump individual well-being [22]. Today, U.S. medicine confronts new ethical challenges. These include the stance of the profession toward state-sanctioned capital punishment (should the physician be anywhere near the execution chamber?) and toward legislative approval for physician-assisted suicide or aid-in-dying (can an ethic of "do no harm" be reconciled with such an act?) [23, 24]. At the same time, as U.S. medicine undergoes its most profound reorganization to date, physician self-interest is directly pitted against the well-being of patients (what medical services may primary care gatekeepers limit or withhold to receive financial rewards?) [25]. Medicine's role in the Holocaust is now being explored with commendable vigor. The decision by Ernst to tell a story heretofore only whispered about is altogether praiseworthy.
It is incumbent on the medical profession to read these findings closely, not only to get the record right, but to remind itself of the perils that follow when its ethics take second place to external demands. The Hippocratic Oath has lost none of its relevance: "I will use my power to help the sick to the best of my ability and judgment; I will abstain from harming or wronging any man by it."
American Journal of Public Health | 1999
Barron H. Lerner
Women who test positive for a genetic breast cancer marker may have more than a 50% chance of developing the disease. Although past screening technologies have sought to identify actual breast cancers, as opposed to predisposition, the history of screening may help predict the societal response to genetic testing. For decades, educational messages have encouraged women to find breast cancers as early as possible. Such messages have fostered the popular assumption that immediately discovered and treated breast cancers are necessarily more curable. Research, however, has shown that screening improves the prognosis of some--but not all--breast cancers, and also that it may lead to unnecessary interventions. The dichotomy between the advertised value of early detection and its actual utility has caused particular controversy in the United States, where the cultural climate emphasizes the importance of obtaining all possible medical information and acting on it. Early detection has probably helped to lower overall breast cancer mortality. But it has proven hard to praise aggressive screening without exaggerating its merits. Women considering genetic breast cancer testing should weigh the benefits and limitations of early knowledge.
Annals of Internal Medicine | 1998
Barron H. Lerner
The recent consensus meetings on screening mammography for women 40 to 49 years of age generated great controversy. Critics of the consensus statements, particularly the January 1997 decision by a National Institutes of Health (NIH) panel not to recommend routine screening, used language that was often vitriolic and accusatory [1-3]. In attempting to explain why these efforts at consensus generated such antagonism, various commentators have convincingly argued that these debates were not really about the scientific value of mammography. Indeed, it has been claimed that there is broad agreement on what the data show [4]. Instead, as Fletcher [5], Ernster [6], and others [7, 8] have asserted, public acrimony reflected the entrance of political, economic, legal, and interest group concerns into the screening process [9]. A related issue has received less attention: how the language of advocacy may itself polarize scientific discussion, leaving physicians and patients without adequate guideposts for applying early detection of breast cancer to clinical practice. I argue that the rhetoric of a decades-old war against breast cancer framed the recent arguments about breast cancer screening. Although efforts to control other cancers and other diseases have also used military metaphors, breast cancer provides one of the most vivid examples of how metaphoric language enters scientific debate and, in turn, may influence such debate. This paper will examine two historical controversies: 1) debates over whether early detection of cancers improved survival and 2) debates over biologically indeterminate precancers that early detection often revealed. By revisiting the role of military metaphors, history can help explain why these debates have been so divisive and why the value of early detection has, at times, been oversold. This analysis has direct implications for genetic testing, where the next battle is already being fought.
The War on Breast Cancer
More than 20 years ago, Susan Sontag pointed out that disease metaphors were not simply words but rather that they often acquired a striking literalness and authority [10]. In the case of cancer, the use of war metaphors implied that the disease was an actual enemy to be vanquished on a medical battlefield. Such military language has been particularly prominent in breast cancer. In 1936, activists formed the Women's Field Army, whose war cry was for trench warfare with a vengeance against a ruthless killer [11, 12]. Although the Women's Field Army focused on all cancers in women, breast cancer generated increasing interest and controversy in the years after World War II. Not only did breast cancer kill more women than other cancers, but its location in an external organ made efforts at tracking down the enemy seem promising and urgent [13]. Moreover, given the association of women's breasts with intimacy and sexuality, detection and removal of breast cancer caused particular fear and anxiety. As one writer stated, there was a pathological national anxiety bordering on hysteria about breast cancer [13]. In such a setting, military imagery gave force and direction to screening and treatment programs. Victory over breast cancer, physicians remarked, required a carefully planned military campaign [14] and an increase [in] the caliber of our weapons [15]. Whereas breast lumps had previously been discovered by accident, efforts after 1945 urged both women and physicians to find hidden masses through breast examination.
Hands used in this manner became weapons [13]. This new strategy stemmed from the longstanding theory that breast cancer began as a tiny focus that grew locally in a predictable and gradual manner before spreading. Data showed that stage I cancers (those confined to the breast) carried a better prognosis than those involving axillary nodes or other organs [14]. These statistics encouraged the American Cancer Society to promote early detection of these small, presumably localized lumps [16]. If the value of early detection was a given, so was the surgery that followed: radical mastectomy. Popularized by Halsted in the late 1800s, the operation reflected the centrifugal model of cancer spread and the belief that curative surgery required removal of all cancer cells [17, 18]. Radical mastectomy entailed resection of the breast, surrounding tissues, and both pectoral muscles and complete axillary node dissection. This extensive and disfiguring operation was recommended even for small, apparently localized tumors [19].
Debates over Early Detection
Beginning in the 1950s, controversy arose over the early detection of small breast cancers. Directly challenging the existing paradigm, some commentators claimed that early detection and aggressive treatment had little effect on the natural history of breast cancer. Instead, they argued, biological factors, such as tumor virulence and immune response, determined the fate of patients. MacDonald first proposed this theory of biological predeterminism in 1951, on the basis of research that revealed no consistent association between the extent of breast cancer and the time that elapsed before a woman showed a detected lump to a physician. Using a somewhat biased sample, MacDonald also found that the size of the primary breast lesion was not reliably related to distant disease: Fifty-six percent of tumors 1 cm or smaller had positive axillary nodes, and 23% of tumors larger than 5 cm had negative nodes [20]. A group of biometricians extended MacDonald's conclusions, arguing that the 70% 5-year survival rates (the standard at the time) credited to early detection and radical surgery were somewhat inflated [21-24]. That is, physicians were inappropriately counting as cures patients whose cancers, if left undetected, would not have killed them. As McKinnon wrote, "Curing non-lethal lesions does not reduce mortality" [23]. This work presaged such biostatistical concepts as lead-time and length biases [25]. National data seemed to confirm McKinnon's argument: Despite 50 years of radical surgery, the breast cancer mortality rate, roughly 25 per 100 000 persons, remained essentially unchanged [21-23]. This stationary death rate led biological predeterminists to term the association of early treatment and curability a shibboleth [20]. Black and Speer [22] claimed that without a reliable connection among delay in diagnosis, extent of disease, and curability, it was wrong to equate early lesions with small size and lack of metastasis. Rather, biological type, the propensity of spread and development of remote metastases, plays the predominant role in determining the outcome [26]. Thus, stage I tumors did better not because of early detection and removal but because most of them were non-progressive. Furthermore, small lesions that metastasized early challenged the standard view of how breast cancer spread. "The majority of so-called operable cases of breast cancer," Black and Speer argued, "have already undergone occult dissemination at the time of surgery" [22].
In turn, this conclusion called into question the need to remove all cancerous tissue. Eventually, controlled trials demonstrated that radical mastectomy was unnecessary [17]. Despite their seemingly gloomy message, the biological predeterminists were hardly nihilists. Even MacDonald remarked that the prognosis of 25% of breast cancers could be improved through early detection and radical mastectomy [27]. Nevertheless, many predeterminists were prone to hyperbole. For example, Lees, after summarizing the existing paradigm of breast cancer, wrote, "Now this is all non-sense and contradicted by practical experience" [26]. Crile accused anticancer agencies of creating a new disease, cancer phobia, that causes more suffering than cancer itself [28]. Opponents of biological predeterminism responded with perhaps greater intolerance, seeking to demolish even reasonable claims of the predeterminists [29]. A 1952 presentation by Lees provoked Taylor to suggest that his colleagues might want to "rend [Lees] limb from limb" [26]. Referring to McKinnon, Taylor stated that the devil can quote scripture to his purpose [26]. Behind this banter lay intense frustration and anger that discouraged rational analysis [30, 31]. Even as MacDonald's opponents contested his data, they were especially perturbed that someone had challenged standard cancer-control philosophy. As several physicians wrote in response to Crile's call for less radical surgery, "Dr. Crile offers a dangerous, fatalistic philosophy of cancer" [32]. The war on cancer metaphor had important benefits, both in publicizing breast cancer and in generating substantial research funding [33, 34]. However, it discouraged physicians and patients from acknowledging the ambiguous results that early detection often produced; limited warfare held little appeal. A similarly aggressive attitude would characterize debates about lesions that were not cancers at all.
Debates over Precancers
Physicians had long identified hyperplastic breast lesions that had not invaded the basement membrane [35]. By the 1930s, pathologists, believing that such lesions were precancers, began to term them ductal or lobular carcinoma in situ [36]. In this section, I review debates about the lobular variant. Lesions revealing lobular carcinoma in situ were of limited concern before 1950. Because they were nonpalpable, they were discovered only incidentally, usually during histologic examination of benign nodules. With the growing emphasis on early detection and biopsy of suspicious lumps, diagnosis of lobular carcinoma in situ increased substantially by the 1970s [37]. Since the widespread diffusion of mammography after 1975, detection of in situ carcinomas, particularly the ductal type, has become even more commonplace [38, 39]. Lobular carcinoma in situ raised two questions: 1) Were such lesions inevitably precancerous? 2) Did detection mandate mastectomy? Again, debate ensued. A few surg
Annals of Internal Medicine | 1991
Barron H. Lerner
Because of recent changes in federal Food and Drug Administration (FDA) regulations, new medications may now be marketed before completion of rigorous controlled testing. In order to understand the ramifications of this development, it is instructive to recall the introduction of the sulfonamides in the 1930s. The sulfonamides, the first effective antibacterial agents, were marketed in an era of relatively few regulations. Although investigators at times designed controlled trials to evaluate use of the drugs, both researchers and practitioners generally prescribed them for severe infections, despite a lack of conclusive data as to their efficacy. The clinical usefulness of sulfonamides for a given condition often became known through uncontrolled case studies and comparisons with historical control groups. Given the relaxation of FDA regulations, this method of drug evaluation may again become more commonplace.
The New England Journal of Medicine | 2011
Barron H. Lerner
Can we blend the moral passion of anti–drunk-driving activism with epidemiologically based strategies for saving lives on the roads? The history of efforts to prevent automobile crashes offers lessons on various approaches and their possible synergy.
The Lancet | 2009
Barron H. Lerner
There is no question that Lorenzo Odone lived until the age of 30 years because his parents, Augusto and Michaela Odone, defied doctors and developed a mixture of two cooking oils as a possible treatment for their son's devastating disease. The 1992 film Lorenzo's Oil, which commemorated this heroic effort, became an inspirational saga for other patients and families dealing with incurable conditions. Yet Lorenzo's story tells us as much about the limitations of medical research as it does about its triumphs.
The New England Journal of Medicine | 2009
Barron H. Lerner
The incidence of familial dysautonomia has decreased precipitously since population screening began in 2001. Dr. Barron Lerner writes that the potential disappearance of new cases of a disease raises profound questions.
The American Journal of the Medical Sciences | 2000
Barron H. Lerner
Although clinicians without a sense of history may not be condemned to repeat the past, the historical record offers many informative lessons. For one thing, history demonstrates the changing nature of scientific knowledge; current understandings of health and disease may prove as ephemeral as earlier discarded theories. In addition, history reminds us that social and cultural factors influence how physicians diagnose and treat various medical conditions. When attempting to teach the history of medicine at academic medical centers, instructors should be innovative as opposed to comprehensive. Students and residents are likely to find recent historical issues to be more relevant, particularly when such material can be integrated into the existing curriculum. Provocative topics include depictions of medicine in old Hollywood films, the contributions made by famous physicians at one's own institution, and historical debates over controversial events, such as the Tuskegee syphilis study and the use of lobotomy in mental institutions in the 1950s.