Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John Danaher is active.

Publication


Featured research published by John Danaher.


Big Data & Society | 2017

Algorithmic governance: Developing a research agenda through the power of collective intelligence

John Morison; Michael Hogan; Shankar Kalpana; Chris Noone; Burkhard Schafer; Rónán Kennedy; Su-ming Khoo; Muki Haklay; Anthony Behan; Niall O'Brolchain; Maria Helen Murphy; Heike Felzmann; Aisling de Paor; John Danaher

We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure both that they are an effective means for achieving a legitimate policy goal and that they are procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures meet both standards? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.


American Journal of Bioethics | 2018

The quantified relationship

John Danaher; Sven Nyholm; Brian D. Earp

The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In this article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship (QR). We identify eight core objections to the QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness toward this technology and advocate the development of a research agenda for the positive use of QR technologies.


International Journal of Evidence and Proof | 2011

Blind Expertise and the Problem of Scientific Evidence

John Danaher

Scientific evidence presents a problem for the courts: the subject-matter is often complex; the experts who present the evidence can be cherry-picked and biased; and judges and juries are frequently unsure about how to weigh the evidence once it has been presented. This article diagnoses the problems associated with scientific evidence and then proceeds to consider two possible solutions to those problems: (1) the reliability test solution; and (2) the blind expertise solution. The former is currently favoured by law reform agencies in Ireland and England, but the primary focus of this article is on the latter. It is concluded that the blind expertise solution has considerable attractions and should be seriously considered as a reform option.


Science and Engineering Ethics | 2017

Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life

John Danaher

Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if (as is to be expected) they do not remain confined to the economic sphere. And third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: (1) the literature on technological unemployment and workplace automation; (2) the antiwork critique—which I argue gives reasons to embrace technological unemployment; and (3) the philosophical debate about the conditions for meaning in life—which I argue gives reasons for concern.


Journal of Medical Ethics | 2016

An evaluative conservative case for biomedical enhancement

John Danaher

It is widely believed that a conservative moral outlook is opposed to biomedical forms of human enhancement. In this paper, I argue that this widespread belief is incorrect. Using Cohen's evaluative conservatism as my starting point, I argue that there are strong conservative reasons to prioritise the development of biomedical enhancements. In particular, I suggest that biomedical enhancement may be essential if we are to maintain our current evaluative equilibrium (i.e., the set of values that undergird and permeate our current political, economic and personal lives) against the threats to that equilibrium posed by external, non-biomedical forms of enhancement. I defend this view against modest conservatives who insist that biomedical enhancements pose a greater risk to our current evaluative equilibrium, and against those who see no principled distinction between the forms of human enhancement.


Ethics and Information Technology | 2016

Robots, law and the retribution gap

John Danaher

We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises from a mismatch between the human desire for retribution and the absence of appropriate subjects of retributive blame. I argue for the potential existence of this gap in an era of increased robotisation; suggest that it is much harder to plug this gap than it is to plug those thus far explored in the literature; and then highlight three important social implications of this gap.


Minds and Machines | 2015

Why AI Doomsayers are Like Sceptical Theists and Why it Matters

John Danaher

An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers’ position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.


Law, Innovation and Technology | 2013

On the Need for Epistemic Enhancement: Democratic Legitimacy and the Enhancement Project

John Danaher

Klaming and Vedder (2010) have argued that enhancement technologies that improve the epistemic efficiency of the legal system (“epistemic enhancements”) would benefit the common good. But there are two flaws to Klaming and Vedder’s argument. First, they rely on an under-theorised and under-specified conception of the common good. When theory and specification are supplied, their common good justification (CGJ) for enhancing eyewitness memory and recall becomes significantly less persuasive. And second, although aware of such problems, they fail to give due weight and consideration to the tensions between the individual and common good. Taking these criticisms on board, this article proposes an alternative, and stronger, CGJ for epistemic enhancements. The argument has two prongs. Drawing from the literature on social epistemology and democratic legitimacy, it is first argued that there are strong grounds for thinking that epistemic enhancements are a desirable way to improve the democratic legitimacy of the legal system. This gives prima facie but not decisive weight to the CGJ. It is then argued that due to the ongoing desire to improve the way in which scientific evidence is managed by the legal system, epistemic enhancement is not merely desirable but perhaps morally necessary. Although this may seem to sustain tensions between individual and common interests, I argue that in reality it reveals a deep constitutive harmony between the individual good and the common good, one that is both significant in its own right and one that should be exploited by proponents of enhancement.


Archive | 2015

Responsible Innovation in Social Epistemic Systems: The P300 Memory Detection Test and the Legal Trial

John Danaher

Memory Detection Tests (MDTs) are a general class of psychophysiological tests that can be used to determine whether someone remembers a particular fact or datum. The P300 MDT is a type of MDT that relies on a presumed correlation between a detectable neural signal (the P300 “brainwave”) in a test subject, and the recognition of those facts in the subject’s mind. The P300 MDT belongs to a class of brain-based forensic technologies which have proved popular and controversial in recent years. With such tests increasingly being proffered for use in the courtroom—to either support or call into question testimony—it would behoove the legal system to have some systematic framework for ensuring that they are used responsibly, and for this framework, in turn, to play a part in future research and development of this technology. In this paper, I defend one such framework for ensuring that this is the case: the legitimacy enhancing test. According to this test, it is appropriate to make use of technologies such as the P300 MDT whenever doing so would (probably) enhance the legitimacy of the trial. I argue that this test addresses tensions between scientific and legal norms of evidence, and exhibits a number of additional virtues including unification, simplicity and flexibility.


Science and Engineering Ethics | 2018

Why We Should Create Artificial Offspring: Meaning and the Collective Afterlife

John Danaher

This article argues that the creation of artificial offspring could make our lives more meaningful (i.e. satisfy more meaning-relevant conditions of value). By ‘artificial offspring’ I mean beings that we construct, with a mix of human and non-human-like qualities. Robotic artificial intelligences are paradigmatic examples of the form. There are two reasons for thinking that the creation of such beings could make our lives more meaningful and valuable. The first is that the existence of a collective afterlife—i.e. a set of human-like lives that continue after we die—is likely to be an important source and sustainer of meaning in our present lives (Scheffler in Death and the afterlife, OUP, Oxford, 2013). The second is that the creation of artificial offspring provides a plausible and potentially better pathway to a collective afterlife than the traditional biological pathway (i.e. there are reasons to favour this pathway and there are no good defeaters to trying it out). Both of these arguments are defended from a variety of objections and misunderstandings.

Collaboration


Dive into John Danaher's collaborations.

Top Co-Authors

Sven Nyholm
Eindhoven University of Technology

Chris Noone
National University of Ireland

Heike Felzmann
National University of Ireland

Michael Hogan
National University of Ireland

Niall O'Brolchain
National University of Ireland

Rónán Kennedy
National University of Ireland

Shankar Kalpana
National University of Ireland