Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hin-Yan Liu is active.

Publication


Featured research published by Hin-Yan Liu.


Ethics and Information Technology | 2017

Irresponsibilities, inequalities and injustice for autonomous vehicles

Hin-Yan Liu

Given the prospect of autonomous vehicles causing both novel and familiar forms of damage, harm and injury, responsibility has been a recurring theme in the debate concerning them. Yet the discussion of responsibility has obscured finer distinctions, both between the underlying concepts of responsibility and in their application to the interaction between human beings and artificial decision-making entities. By developing meaningful distinctions and examining their ramifications, this article contributes to the debate by refining the underlying concepts that together inform the idea of responsibility. Two different approaches to the question of responsibility and autonomous vehicles are offered: targeting and risk distribution. The article then introduces a thought experiment which situates autonomous vehicles within the context of crash-optimisation impulses and coordinated or networked decision-making. It argues that guiding ethical frameworks overlook compound or aggregated effects which may arise and which can lead to subtle forms of structural discrimination. Insofar as such effects remain unrecognised by the legal systems relied upon to remedy them, the potential for societal inequality is increased and entrenched, and situations of injustice and impunity may be unwittingly maintained. This second set of concerns may represent a hitherto overlooked type of responsibility gap, arising from the inadequacy of accountability processes for challenging systemic risk displacement.


Archive | 2016

On banning autonomous weapons systems: from deontological to wide consequentialist reasons

Guglielmo Tamburrini; Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress

This chapter examines the ethical reasons supporting a moratorium and, more stringently, a preemptive ban on autonomous weapons systems (AWS). Discussions of AWS presuppose a relatively clear idea of what it is that makes those systems autonomous. In this technological context, the relevant type of autonomy is task autonomy, as opposed to the personal autonomy that usually pervades ethical discourse. Accordingly, a weapons system is regarded here as autonomous if it is capable of carrying out the task of selecting and engaging military targets without any human intervention. Since robotic and artificial intelligence technologies are crucially needed to achieve the required task autonomy in most battlefield scenarios, AWS are identified here with robotic systems of some kind. Ethical issues about AWS are thus closely tied to technical and epistemological assessments of robotic technologies and systems, at least insofar as the operation of AWS must comply with the discrimination and proportionality requirements of international humanitarian law (IHL). A variety of environmental and internal control factors are advanced here as major impediments that prevent both present and foreseeable robotic technologies from meeting IHL discrimination and proportionality demands. These impediments provide overwhelming support for an AWS moratorium, that is, for a suspension of AWS development, production and deployment.


Archive | 2016

Autonomous Weapons Systems: Law, Ethics, Policy

Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress

The intense and polemical debate over the legality and morality of weapons systems to which human cognitive functions are delegated (up to and including the capacity to select targets and release weapons without further human intervention) addresses a phenomenon which does not yet exist but which is widely claimed to be emergent. This groundbreaking collection combines contributions from roboticists, legal scholars, philosophers and sociologists of science in order to recast the debate in a manner that clarifies key areas and articulates questions for future research. The contributors develop insights with direct policy relevance, including who bears responsibility for autonomous weapons systems, whether they would violate fundamental ethical and legal norms, and how to regulate their development. It is essential reading for those concerned about this emerging phenomenon and its consequences for the future of humanity.


Archive | 2016

Autonomous weapons systems: living a dignified life and dying a dignified death

Christof Heyns; Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress

The ever-increasing power of computers is arguably one of the defining characteristics of our time. Computers affect almost all aspects of our lives and have become an integral part not only of our world but also of our very identity as human beings. They offer major advantages and pose serious threats. One of the main challenges of our era is how to respond to this development: to make sure computers enhance, and do not undermine, human objectives. The imposition of force by one individual against another has always been an intensely personal affair: a human being was physically present at the point of the release of force and took the decision that it would be done. It is inherently a highly controversial issue because of the intrusion on people's bodies and even lives. Ethical and legal norms have developed over the millennia to determine when one human may use force against another, in peace and in war, and to assign responsibility for violations of these norms. Perhaps the most dramatic manifestation of the rise of computer power is that we are on the brink of an era when decisions on the use of force against human beings, in the context of armed conflict as well as during law enforcement, lethal and non-lethal, could soon be taken by robots. Unmanned or human-replacing weapons systems first took the form of armed drones and other remote-controlled devices, which allowed human beings to be physically absent from the battlefield. Decisions to release force, however, were still taken by human operatives, albeit from a distance. The increased autonomy in weapons release now points to an era where humans will be able to be not only physically absent from the battlefield but also psychologically absent, in the sense that computers will determine when and against whom force is released. The depersonalization of the use of force brought about by remote-controlled systems is thus taken to the next level through the introduction of the autonomous release of force.


Religion and Human Rights | 2011

The Meaning of Religious Symbols after the Grand Chamber Judgment in Lautsi v. Italy

Hin-Yan Liu

This Comment concerns the question whether there now exists a right of Member States parallel to the individual right to freedom of thought, conscience and religion under Article 9, and analyses the consistency of this potential development with the existing jurisprudence mandating state neutrality and impartiality. The Comment then considers the similarities and differences between manifesting a belief and symbolic speech, and the consequently permissible restrictions that may be imposed on each. It concludes by suggesting that the Grand Chamber erred in its characterisation of the crucifix as ‘an essentially passive symbol’ and failed to consider this question holistically.


Law, Innovation and Technology | 2018

The power structure of artificial intelligence

Hin-Yan Liu

This article argues that AI presents a two-pronged power challenge, introducing a different type of power relationship while simultaneously eroding the efficacy of existing procedures and institutions for resisting power disparities. The first prong of the challenge is analysed as consisting of three levels of power (roughly mapping onto the radical view of power proposed by Steven Lukes), namely: (i) power exercised over individuals or groups in mundane spheres of activity, where certain kinds of everyday decision-making may be displaced; (ii) power impacting upon the trajectories of societal development, and hence impinging upon human rights, values and aspirations, and their track-dependencies; and (iii) power involving existential threats to humanity. The second prong of the challenge is addressed with reference to the tendency of AI both to provoke a sense of human inferiority and to erode our means of checking power. This illustrates shortcomings of our existing systems that have remained hidden because those systems have never been tested in such a manner. In conclusion, it is suggested that the focus upon responding to and regulating AI may be either overly specific or missing an important point. Rather, if the core challenges posed by AI are viewed as problems of power, this will not only unify hitherto divergent responses but also shield us from the technological dazzle that prevents us from seeing these problems clearly.


Ethics and Information Technology | 2017

From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence

Hin-Yan Liu; Karolina Zawieska

The aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, an impetus undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility (meaning a lack of responsibility) is, from a theoretical perspective, inherent in the nature of robotics technologies, and threatens to thwart the endeavour. This is because the concept of responsibility, despite being treated as such, is not monolithic: rather, this seemingly unified concept consists of converging and confluent concepts that shape the idea of what we colloquially call responsibility. Robotics will thus be simultaneously responsible and irresponsible depending on the particular concept of responsibility that is foregrounded, an observation that cuts against the grain of the drive towards responsible robotics. The problem is further compounded by the contrast between responsible design and development on the one hand and responsible use on the other. From a different perspective, the difficulty in defining the concept of responsibility in robotics arises because human responsibility is the main frame of reference: robotic systems are increasingly expected to achieve human-level performance, including the capacities associated with responsibility and the other criteria necessary to act responsibly. This subsists within a larger phenomenon in which the difference between humans and non-humans, be they animals or artificial systems, appears increasingly blurred, thereby disrupting orthodox understandings of responsibility. This paper seeks to supplement the responsible robotics impulse by proposing a complementary set of human rights directed specifically against the harms arising from robotic and artificial intelligence (AI) technologies. The relationship between the responsibilities of the agent and the rights of the patient suggests that a rights regime is the other side of the responsibility coin. The major distinction of this approach is to invert the power relationship: while human agents are perceived to control robotic patients, the prospect of this relationship becoming reversed is beginning to emerge. As robotic technologies become ever more sophisticated, and even genuinely complex, asserting human rights directly against robotic harms becomes increasingly important. Such an approach includes not only developing human rights that ‘protect’ humans (in a negative, defensive sense) but also rights that ‘strengthen’ people against the challenges introduced by robotics and AI (in a positive, empowering manner), a distinction that parallels Berlin’s negative and positive concepts of liberty (Berlin, in Liberty, Oxford University Press, Oxford, 2002), by emphasising the social and reflective character of the notion of humanness as well as the difference between the human and the non-human. This will allow the human frame of reference to be constitutive of, rather than only subject to, robotic and AI technologies, so that it is human, and not technological, characteristics that shape the human rights framework in the first place.


Archive | 2016

A human touch: autonomous weapons, DoD Directive 3000.09 and the interpretation of ‘appropriate levels of human judgment over the use of force’

Dan Saxon; Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress

This chapter addresses the legal, policy and military context of the drafting of ‘Autonomy in weapon systems’, the United States Department of Defense (DoD) Policy Directive 3000.09. More specifically, the author describes the development and interpretation of Directive 3000.09’s requirement that autonomous weapons systems (AWS) be designed to allow commanders and operators to exercise ‘appropriate levels of human judgment over the use of force’. The chapter compares the Directive’s standard with another conceptual vision for the development of autonomous functions and systems known as ‘coactive design’ or ‘human–machine interdependence’. Finally, the author argues that the increasing speed of autonomous technology, and the concomitant pressures to advance the related values of military necessity and advantage, will eventually render the Directive’s standard of ‘appropriate levels of human judgment over the use of force’ ineffective and irrelevant.

The US government has begun to develop formal, albeit somewhat vague, policies concerning the development and use of semi-autonomous and autonomous weapons. In DoD Directive 3000.09, Ashton B. Carter, then deputy secretary of defense, defines ‘autonomous weapon system’ as a ‘weapon system that, once activated, can select and engage targets without further intervention by a human operator’. The drafters of the Directive defined ‘autonomous weapon systems’ as those that select and engage targets because they wanted to focus on the most critical aspect of autonomy, the function of ‘lethality’, where both human judgment and the law of armed conflict (currently) apply. This chapter adopts this definition of autonomous systems for the purposes of the discussion. The Directive defines ‘semi-autonomous weapon system’ as a ‘weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator’. Increasingly, however, the categorization of ‘semi-autonomous’ versus ‘autonomous’ is becoming a distinction without a difference, as the line between the two becomes more difficult to discern. For example, the US military’s new ‘long-range anti-ship missile’, a weapon the United States contends is semi-autonomous, can fly hundreds of miles after release by an aeroplane and identify and destroy a target without human oversight.


Archive | 2016

Autonomous weapons systems and transparency: towards an international dialogue

Sarah Knuckey; Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress

The international debate around autonomous weapons systems (AWS) has addressed the potential ethical, legal and strategic implications of advancing autonomy, and analysis has offered myriad potential concerns and conceivable benefits. Many consider autonomy in selecting and engaging targets to be potentially revolutionary, yet AWS developments are nascent, and the debates are, in many respects and necessarily, heavily circumscribed by the uncertainty of future developments. In particular, legal assessments as to whether AWS might be used in compliance with the conduct of hostilities rules in international humanitarian law (IHL) are at present largely predicated upon a forecast of future facts, including about the sophistication of weapon technologies, likely capacities and circumstances of use, as well as the projected effectiveness of state control over any use. To conclusively assess legal compliance, detailed information about the AWS as developed or used would be required. However, to date, only minimal attention has been paid to transparency in the AWS context. What kinds of AWS information (if any) should governments share, with whom and on what basis? How will the international community know if autonomy is developed in critical functions and, if AWS are deployed, if they are used in compliance with international law? How might autonomy developments be monitored to enable fact-based legal analysis? Or, if autonomous targeting is prohibited or specifically regulated, how might compliance best be assured? Will existing institutions, norms or requirements for transparency be adequate? These crucial questions have not yet been debated. This chapter explores the challenge of fact-based legal assessments for autonomous systems, and proposes that the international community begin to focus directly and systematically on transparency around AWS development and use. As a step towards deepening an international AWS transparency dialogue, this chapter offers a broad framework for disentangling distinct categories of transparency information, relationships and rationales. Given the lack of an existing well-defined transparency architecture for weapons development and the use of lethal force, transparency discussions should not wait until AWS substantive debates further mature or for the international community to settle on a response to the substantive concerns raised. Rather, transparency should be analysed alongside ongoing legal, ethical and strategic debates. Without attending to transparency, states risk developing autonomy in an environment that lacks information-sharing norms designed to advance lawful weapon use and development, democratic legitimacy and states’ strategic and security interests.


Archive | 2016

Refining responsibility: differentiating two types of responsibility issues raised by autonomous weapons systems

Hin-Yan Liu; Nehal Bhuta; Susanne Beck; Robin Geiss; Claus Kress

Collaboration


Dive into Hin-Yan Liu's collaborations.

Top Co-Authors

Nehal Bhuta, European University Institute
Karolina Zawieska, Industrial Research Institute
Guglielmo Tamburrini, University of Naples Federico II
Neha Jain, University of Minnesota