Nehal Bhuta
European University Institute
Publications
Featured research published by Nehal Bhuta.
South Atlantic Quarterly | 2014
Nehal Bhuta
This paper considers how recent historical work on freedom of religion and freedom of conscience opens up a new interpretation of the decisions of the European Court of Human Rights in the headscarf cases. These decisions have been widely criticized as adopting a militantly secularist approach to the presence of Islamic religious symbols in the public sphere, an approach that seems inconsistent or even overtly discriminatory in light of the court's recent decision in Lautsi that the compulsory display of crucifixes in the classroom did not breach Italy's convention obligations. I argue that the headscarf cases turn less on the balance between state neutrality and religious belief than on an understanding of certain religious symbols as a threat to public order and as harbingers of sectarian strife that undermines democracy.
Archive | 2016
Guglielmo Tamburrini; Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress
This chapter examines the ethical reasons supporting a moratorium and, more stringently, a preemptive ban on autonomous weapons systems (AWS). Discussions of AWS presuppose a relatively clear idea of what it is that makes those systems autonomous. In this technological context, the relevant type of autonomy is task autonomy, as opposed to the personal autonomy that usually pervades ethical discourse. Accordingly, a weapons system is regarded here as autonomous if it is capable of carrying out the task of selecting and engaging military targets without any human intervention. Since robotic and artificial intelligence technologies are crucially needed to achieve the required task autonomy in most battlefield scenarios, AWS are identified here with robotic systems of some kind. Thus, ethical issues about AWS are closely related to technical and epistemological assessments of robotic technologies and systems, at least insofar as the operation of AWS must comply with the discrimination and proportionality requirements of international humanitarian law (IHL). A variety of environmental and internal control factors are advanced here as major impediments that prevent both present and foreseeable robotic technologies from meeting IHL discrimination and proportionality demands. These impediments provide overwhelming support for an AWS moratorium, that is, for a suspension of AWS development, production and deployment.
Archive | 2016
Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress
The intense and polemical debate over the legality and morality of weapons systems to which human cognitive functions are delegated (up to and including the capacity to select targets and release weapons without further human intervention) addresses a phenomenon that does not yet exist but is widely claimed to be emergent. This groundbreaking collection combines contributions from roboticists, legal scholars, philosophers and sociologists of science in order to recast the debate in a manner that clarifies key areas and articulates questions for future research. The contributors develop insights with direct policy relevance, including who bears responsibility for autonomous weapons systems, whether they would violate fundamental ethical and legal norms, and how to regulate their development. It is essential reading for those concerned about this emerging phenomenon and its consequences for the future of humanity.
Archive | 2016
Christof Heyns; Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress
Introduction The ever-increasing power of computers is arguably one of the defining characteristics of our time. Computers affect almost all aspects of our lives and have become an integral part not only of our world but also of our very identity as human beings. They offer major advantages and pose serious threats. One of the main challenges of our era is how to respond to this development: to make sure computers enhance and do not undermine human objectives. The imposition of force by one individual against another has always been an intensely personal affair – a human being was physically present at the point of the release of force and took the decision that it would be done. It is inherently a highly controversial issue because of the intrusion on people's bodies and even lives. Ethical and legal norms have developed over the millennia to determine when one human may use force against another, in peace and in war, and have assigned responsibility for violations of these norms. Perhaps the most dramatic manifestation of the rise of computer power is to be found in the fact that we are on the brink of an era when decisions on the use of force against human beings – in the context of armed conflict as well as during law enforcement, lethal and non-lethal – could soon be taken by robots. Unmanned or human-replacing weapons systems first took the form of armed drones and other remote-controlled devices, which allowed human beings to be physically absent from the battlefield. Decisions to release force, however, were still taken by human operatives, albeit from a distance. The increased autonomy in weapons release now points to an era where humans will be able to be not only physically absent from the battlefield but also psychologically absent, in the sense that computers will determine when and against whom force is released. The depersonalization of the use of force brought about by remote-controlled systems is thus taken to the next level through the introduction of the autonomous release of force.
Archive | 2018
Nehal Bhuta; Debora Valentina Malito; Gaby Umbach
There is a pervasive sense in which we seem to be living under a new avalanche of numbers, and in particular an avalanche of indicators beyond the state purporting to create knowledge on a global scale. As much as our indicator culture engenders a "faith in numbers", the very expansion of the power of numbers and their role in (global) governance over the last 20 years has brought with it a heightened sense that quantification, indicators, and rankings are a way of doing politics that must be engaged with from within and without the specific disciplinary knowledge (such as statistics and econometrics) that underwrites their claims to objectivity. The chapters collected in this Handbook aim to capture the contemporary indicator culture, with all its discordant and contrasting orientations. The present introductory chapter considers three main dimensions. First, no chapter in the Handbook adopts a naively metrological understanding of indicators as simply "measuring" reality. Second, the normativity of measurement is a consistent theme of the contributions by both scholars and practitioners. Third, despite their popularity and seeming capacity to shape debates, the power of indicators remains highly contextual and dependent on how they are enrolled in particular, situated networks of actors and influence.
Archive | 2018
Debora Valentina Malito; Nehal Bhuta; Gaby Umbach
Measuring is a way of doing politics. The scholars and practitioners who contributed to this Handbook agreed that measuring matters because of its instrumentality in governing. They nevertheless came to contrasting conclusions about the way forward, and the volume therefore reflects this variety of discordant interpretations. Some contributions contested the effectiveness of the existing indicator culture and proposed methodological solutions inspired by standards of scientific objectivity, such as a focus on performance indicators or institutional quality dimensions, improved techniques, internal validity, and reliability. Others were critical of the contemporary indicator culture because of its normative premises and its intended and unintended consequences, ranging from the creation of distorted ontologies of the real and an instrumental conceptual hybridity that serves the pervasive neo-liberal paradigm of governance and the simplification of social complexity, through external interference in the legitimacy of domestic policy decision-making, to the dissemination of corporate governance mechanisms. Many contributors proposed solutions that in their view better suited the decentralisation of governance. Their chapters did not demand pure mechanical objectivity but rather a better transformation of politics into metrics through locally embedded, disaggregated, micro-level, country-specific data and systems of knowledge creation.
Archive | 2016
Dan Saxon; Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress
Introduction This chapter addresses the legal, policy and military context of the drafting of 'Autonomy in weapon systems', the United States Department of Defense (DoD) Policy Directive 3000.09. More specifically, the author describes the development and interpretation of Directive 3000.09's requirement that autonomous weapons systems (AWS) be designed to allow commanders and operators to exercise 'appropriate levels of human judgment over the use of force'. The chapter compares the Directive's standard with another conceptual vision for the development of autonomous functions and systems known as 'coactive design' or 'human–machine interdependence'. Finally, the author argues that the increasing speed of autonomous technology – and the concomitant pressures to advance the related values of military necessity and advantage – eventually will cause the Directive's standard of 'appropriate levels of human judgment over the use of force' to be ineffective and irrelevant. Directive 3000.09 The US government has begun to develop formal – albeit somewhat vague – policies concerning the development and use of semi-autonomous and autonomous weapons. In DoD Directive 3000.09, Ashton B. Carter, then deputy secretary of defense, defines 'autonomous weapon system' as a 'weapon system that, once activated, can select and engage targets without further intervention by a human operator'. The drafters of the Directive defined 'autonomous weapon systems' as those that select and engage targets because the drafters wanted to focus on the most critical aspect of autonomy – the function of 'lethality' – where both human judgment and the law of armed conflict (currently) apply. This chapter adopts this definition of autonomous systems for the purposes of the discussion. The Directive defines 'semi-autonomous weapon system' as a 'weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator'. Increasingly, the categorization of 'semi-autonomous' versus 'autonomous' is becoming a distinction without a difference as the line between the two becomes more difficult to discern. For example, the US military's new 'long range anti-ship missile' – a weapon the United States contends is semi-autonomous – can fly hundreds of miles after release by an aeroplane and identify and destroy a target without human oversight.
Archive | 2016
Sarah Knuckey; Nehal Bhuta; Susanne Beck; Robin Geiss; Hin-Yan Liu; Claus Kress
Introduction The international debate around autonomous weapons systems (AWS) has addressed the potential ethical, legal and strategic implications of advancing autonomy, and analysis has offered myriad potential concerns and conceivable benefits. Many consider autonomy in selecting and engaging targets to be potentially revolutionary, yet AWS developments are nascent, and the debates are, in many respects and necessarily, heavily circumscribed by the uncertainty of future developments. In particular, legal assessments as to whether AWS might be used in compliance with the conduct of hostilities rules in international humanitarian law (IHL) are at present largely predicated upon a forecast of future facts, including about the sophistication of weapon technologies, likely capacities and circumstances of use, as well as the projected effectiveness of state control over any use. To conclusively assess legal compliance, detailed information about the AWS as developed or used would be required. However, to date, only minimal attention has been paid to transparency in the AWS context. What kinds of AWS information (if any) should governments share, with whom and on what basis? How will the international community know if autonomy is developed in critical functions and, if AWS are deployed, if they are used in compliance with international law? How might autonomy developments be monitored to enable fact-based legal analysis? Or, if autonomous targeting is prohibited or specifically regulated, how might compliance best be assured? Will existing institutions, norms or requirements for transparency be adequate? These crucial questions have not yet been debated. This chapter explores the challenge of fact-based legal assessments for autonomous systems, and proposes that the international community begin to focus directly and systematically on transparency around AWS development and use. As a step towards deepening an international AWS transparency dialogue, this chapter offers a broad framework for disentangling distinct categories of transparency information, relationships and rationales. Given the lack of an existing well-defined transparency architecture for weapons development and the use of lethal force, transparency discussions should not wait until AWS substantive debates further mature or for the international community to settle on a response to the substantive concerns raised. Rather, transparency should be analysed alongside ongoing legal, ethical and strategic debates. Without attending to transparency, states risk developing autonomy in an environment that lacks information-sharing norms designed to advance lawful weapon use and development, democratic legitimacy and states’ strategic and security interests.
Archive | 2014
Nehal Bhuta; Debora Valentina Malito; Gaby Umbach
During the last two decades numerous indicators measuring sustainability and its different dimensions have been created. The 2007 economic crisis led to increased scrutiny of public sector fiscal imbalances and to efforts to create more sophisticated measures of fiscal sustainability. The literature on this recent formulation and use of sustainability indicators is broad and contested; however, it largely focuses on fiscal components, while wider meanings of sustainability receive less attention. This working paper examines the conceptual and empirical questions relating to the production of indicators of sustainability, both in the sense of fiscal sustainability and of sustainable development. It also discusses the uses of sustainability indicators.
European Journal of International Law | 2011
Nehal Bhuta
In this symposium, we publish Jeremy Waldron's article, 'Are Sovereigns Entitled to the Benefit of the Rule of Law?', together with four responses, by Samantha Besson, David Dyzenhaus, Thomas Poole and Alexander Somek. Waldron is justifiably renowned as a jurisprude and theorist of the concept of the rule of law. His engagement with international law is more recent, but no less significant. In this article, he takes a familiar (perhaps even tired) question among international lawyers, namely whether there can be something akin to a rule of law in international affairs, and recasts how we ought to think about it. With characteristically deft and plain-speaking arguments, Waldron burrows to the heart of the issue: What might it mean to speak of an 'international rule of law', and who or what are properly understood as its beneficiaries?