Should Artificial Intelligence Governance be Centralised? Design Lessons from History
Peter Cihon∗, Centre for the Governance of AI, Future of Humanity Institute, University of Oxford, [email protected]
Matthijs M. Maas∗, CILCC, Faculty of Law, University of Copenhagen & Centre for the Governance of AI, Future of Humanity Institute, University of Oxford, [email protected]
Luke Kemp∗, Centre for the Study of Existential Risk, University of Cambridge, [email protected]
ABSTRACT
Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking in an inadequate structure may pose a fate worse than fragmentation. Second, for now fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.
ACM Reference Format:
Peter Cihon, Matthijs M. Maas, and Luke Kemp. 2020. Should Artificial Intelligence Governance be Centralised? Design Lessons from History. In
Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES '20).
ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3375627.3375857
∗ Equal contribution, order selected at random.

AIES '20, February 7–8, 2020, New York, NY, USA. © 2020 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-7110-0/20/02.

In 2018, Canada and France proposed the International Panel on Artificial Intelligence (IPAI). After being rejected at the G7 in 2019, negotiations shifted to the OECD and are presently ongoing. As the field of AI continues to mature and spark public interest and legislative concern [41], the priority of such governance initiatives reflects the growing appreciation that AI has the potential to dramatically change the world for both good and ill [9]. Research into AI governance needs to keep pace with policy-making and technological change. Choices made today may have long-lasting impacts on policymakers' ability to address numerous AI policy problems [7]. Effective governance can promote safety, accountability, and responsible behaviour in the research, development, and deployment of AI systems.

AI governance research to date has predominantly focused at the national and sub-national levels [6, 16, 44]. Research into AI global governance remains relatively nascent (though see [5]). Kemp et al. [24] have called for specialised, centralised intergovernmental agencies to coordinate policy responses globally, and others have called for a centralised 'International Artificial Intelligence Organisation' [14]. Others favour more decentralised arrangements based around 'Governance Coordinating Committees', global standards, or existing international law instruments [8, 28, 47].

No one has taken a step back to inquire: what would the history of multilateralism suggest, given the state and trajectory of AI? Should AI governance be centralised or decentralised? 'Centralisation', in this case, refers to the degree to which the coordination, oversight and/or regulation of a set of AI policy issues or technologies are housed under a single (global) institution. This is not a binary choice; it exists across a spectrum. Trade is highly (but not entirely) centralised under the umbrella of the WTO. In contrast, environmental multilateralism is much more decentralised.

In this paper, we seek to help the community of researchers, policymakers, and other stakeholders in AI governance understand the advantages and disadvantages of centralisation.
This may help set terms and catalyse a much-needed debate to inform governance design decisions. We first outline the international governance challenges of AI, and review early proposed global responses. We then draw on existing literatures on regime fragmentation [3] and 'regime complexes' [36] to assess considerations in centralising the international governance of AI. We draw on the history of other international regimes to identify considerations that speak in favour of or against designing a centralised regime complex for AI. We conclude with two recommendations. First, many trade-offs are contingent on how well-designed a central body would be. An adaptable, powerful institution with a manageable mandate would be beneficial, but a poorly designed body could prove a fate worse than fragmentation. Second, for now there should be structured monitoring of existing efforts to see whether they are self-organising or insufficient.

A regime is a set of 'implicit or explicit principles, norms, rules and decision-making procedures around which actors' expectations converge in a given area of international relations' [27, p.186].

There is debate as to whether AI is a single policy area or a diverse series of issues. Some claim that AI cannot be cohesively regulated as it is a collection of disparate technologies, with different risk profiles across different applications and industries [45]. This is an important but not entirely convincing objection. The technical field has no settled definition for 'AI', so it should be no surprise that defining a manageable scope for AI governance will be difficult. Yet this challenge is not unique to AI: definitional issues abound in areas such as environment and energy, but have not figured prominently in debates over centralisation.
Indeed, energy and environment ministries are common at the domestic level, despite problems in setting the boundaries of natural systems and resources.

We contend that there are numerous ways in which a centralised body could be designed for AI governance. For example, a centralised approach could carve out a subset of interlinked AI issues to cover. This could involve focusing on the potentially high-risk applications of AI systems, such as AI-enabled cyberwarfare, lethal autonomous weapons (LAWS), other advanced military applications, or high-level machine intelligence (HLMI). Another approach could govern underlying hardware resources (e.g. large-scale compute resources) or software libraries. We are agnostic on the specifics of how centralisation could or should be implemented, and instead focus on the costs and benefits of centralisation in the abstract. The exact advantages and disadvantages of centralisation are likely to vary depending on the institutional design. This is an important area of further study, particularly once more specific proposals are put forward. However, such work must be grounded in a higher-level investigation of trade-offs in centralising AI governance. It is this foundational analysis which we seek to offer.

Numerous AI issues could benefit from international cooperation. These include the potentially catastrophic applications mentioned above. They also encompass more quotidian uses, such as AI-enabled cybercrime; human health applications; safety and regulation of autonomous vehicles and drones; surveillance, privacy and data-use; and labour automation. Multilateral coordination could also use AI to tackle other global problems such as climate change [43], or help meet the Sustainable Development Goals [46]. This is an illustrative but not exhaustive list of international AI policy issues. Global regulation across these issues is currently nascent, fragmented, yet evolving.
A wide range of UN institutions have begun to undertake some activities on AI [20]. The bodies covering AI policy issues range across existing organisations including the International Labour Organisation (ILO), International Telecommunication Union (ITU), and UNESCO. This is complemented by budding regulations and working groups across the International Organisation for Standardisation (ISO), International Maritime Organisation (IMO), International Civil Aviation Organisation (ICAO), and other bodies, as well as treaty amendments, such as the updating of the Vienna Convention on Road Traffic to encompass autonomous vehicles [28], or the ongoing negotiations at the Convention on Certain Conventional Weapons (CCW) on LAWS. The UN System Chief Executives Board (CEB) for Coordination, through the High-Level Committee on Programmes, has been empowered to draft a system-wide AI capacity building strategy. The High-level Panel on Digital Cooperation has also sought to gather together common principles and ideas for AI-relevant areas [19]. Whether these initiatives bear fruit, however, remains questionable, as many of the involved international organisations have fragmented membership, were not originally created to address AI issues, and lack effective enforcement or compliance mechanisms [32, p.2].

We define 'AI' as any machine system capable of functioning 'appropriately and with foresight in its environment' [34, p.13]; see too [9, p.5]. 'High-level machine intelligence' has been defined as 'unaided machines [that] can accomplish every task better and more cheaply than human workers' [17, p.1].

The trajectory of these initiatives matters. How governance is initially organised can be central to its success. Debates over centralisation and fragmentation are long-lasting and prominent with good reason. How we structure international cooperation can be critical to its success, and most other debates often implicitly hinge on structural debates.
Fragmentation and centralisation exist across a spectrum. In a world lacking a global government, some fragmentation will always prevail. But the degree to which it prevails is crucial. We define 'fragmentation' as a patchwork of international organisations and institutions which focus on a particular issue area, but differ in scope, membership and often rules [3, p.16]. We define 'centralisation' as an arrangement in which governance of a particular issue lies under the authority of a single umbrella body. A regime complex is a network of three or more international regimes on a common issue area. These should have overlapping membership and cause potentially problematic interactions [36, p.29]. These definitions and terms are by nature normatively loaded. For example, some may find 'decentralisation' to be a positive framing, while others may see 'fragmentation' to possess negative connotations. Recognising this, we seek to use these terms in a primarily analytical manner. We will use findings from each of these theoretical areas to inform our discussion of the history of multilateral fragmentation and its implications for AI governance.

In the following discussion, we explore a series of considerations for AI governance. Political power and efficient participation support centralisation. The breadth vs. depth dilemma, as well as slowness and brittleness, support decentralisation. Policy coordination and forum shopping considerations can cut both ways.
Regimes embody power in their authority over rules, norms, and knowledge beyond states' exclusive control. A more centralised regime will see this power concentrated among fewer institutions. A centralised, powerful architecture is likely to be more influential against competing international organisations and with constituent states [36, pp.36-7].

An absence of centralised authority to manage regime complexes has presented challenges in the past. Across the proliferation of Multilateral Environmental Agreements (MEAs) there is no requirement to cede responsibility to the UN Environment Programme in the case of overlap or competition. This has led to turf wars, inefficiencies and even contradictory policies [3]. One of the most notable examples is that of hydrofluorocarbons (HFCs). HFCs are potent greenhouse gases, and yet their use has been encouraged by the Montreal Protocol since 1987 as a replacement for ozone-depleting substances. This was only recently resolved via the 2016 Kigali Amendment to the Montreal Protocol, which itself has a prolonged implementation period. Similarly, the internet governance regime complex is diffuse. Multiple venues and norms govern technical standards, cyber crime, human rights, and warfare [35]. Although the UN Internet Governance Forum (IGF) discusses several cross-cutting issues, it does not have a mandate to consolidate even principles, let alone negotiate new formal agreements [33].

In contrast, other centralised regimes have supported effective management. For example, under the umbrella of the WTO, norms such as the most-favoured-nation principle (treating all WTO member states equally) have become the bedrock of international trade. The power and track record of the WTO is so formidable that it has created a chilling effect: the fear of colliding with WTO norms and rules has led environmental treaties to self-censor and actively avoid discussing or deploying trade-related measures [12].
Both the chilling effect and the remarkably powerful application of common trade rules were not a marker of international trade until the establishment of the WTO. The power of this centralised body has stretched beyond influencing states in the domain of trade, to moulding related issues.

Political power offers further benefits in governing emerging technologies that are inherently uncertain in both substance and policy impact. Uncertainty in technology and preferences has been associated with some increased centralisation in regimes [25]. There may also be benefits to housing a foresight capacity within the regime complex, to allow for accelerated or even proactive efforts [39]. Centralised AI governance would enable an empowered organisation to more effectively use foresight analyses to inform policy responses across the regime complex.
Decentralised AI governance may undermine efficiency and inhibit participation. States often create centralised regimes to reduce costs, for instance by eliminating duplicate efforts, yielding economies of scale within secretariats, and simplifying participation [15]. Conversely, fragmented regimes may force states to spread resources and funding over many distinct institutions, particularly limiting the ability of less well-resourced states or parties to participate fully [32, p.2].

Historically, decentralised regimes have presented cost and related participation concerns. Hundreds of related and sometimes overlapping international environmental agreements can create 'treaty congestion' [1]. This complicates participation and implementation for both developed and developing nations [15]. This includes costs associated with travel to different forums, monitoring and reporting for a range of different bodies, and duplication of effort by different secretariats (ibid.).

Similar challenges are already being witnessed in AI governance. Simultaneous and globally distributed meetings pose burdensome participation costs for civil society. Fragmented organisations must duplicatively invest in high-demand machine learning subject matter experts to inform their activities. Centralisation would support institutional efficiency and participation.
One potential problem of centralisation lies in the relatively slow process of establishing centralised institutions, which may often be outpaced by the rate of technological change. Another challenge lies in centralised institutions' brittleness after they are established, i.e., their vulnerability to regulatory capture, or failure to react to changes in the problem landscape.

Establishing new international institutions is often a slow process. For example, the Kyoto Protocol took three years of negotiations to create and then another eight to enter into force. This becomes even more onerous with higher participation and stakes. Under the GATT, negotiations for a 26% cut in tariffs between 19 countries took 8 months in 1947. The Uruguay Round, beginning in 1986, took 91 months to achieve a tariff reduction of 38% between 125 parties [31]. International law has been quick to respond to technological changes in some cases, and delayed in others [42, p.184]. Decentralised efforts may prove quicker to respond to complex, 'transversal' issues, if they rely more on informal institutions with a smaller but like-minded membership [32, pp.2-3]. Centralised AI governance may be particularly vulnerable to sparking lengthy negotiations, because progress on centralised regimes for new technologies tends to be hard if a few states hold clearly unequal stakes in the technology, or if there are significant differences in information and expertise among states or between states and private industry [42, pp.187-94]. Both these conditions closely match the context of AI technology.
Moreover, because AI technology develops rapidly, such slow implementation of rules and principles could lead to certain actors taking advantage by setting de facto arrangements or extant state practice.

Even after its creation, a centralised regime can be brittle: the very qualities that provide it with political power may exacerbate the adverse effects of regulatory capture, and the features that ensure institutional stability may also mean that the institution cannot adapt quickly to unanticipated stressors outside its established mission. The regime might break before it bends. The first potential risk is regulatory capture. Given the high profile of AI issue areas, political independence is paramount. However, as illustrated by numerous cases, including undue corporate influence in the WHO during the 2009 H1N1 pandemic [11], no institution is fully immune to regime capture, and centralisation may reduce the costs of lobbying, making capture easier by providing a single locus of influence. On the other hand, a regime complex comprising many parallel institutions could find itself vulnerable to capture by powerful actors, who are better positioned than smaller parties to send representatives to every forum.

Moreover, centralised regimes entail higher stakes. Many issues are in a single basket and thus failure is more likely to be severe if it does occur. International institutions can be notoriously path-dependent and thus fail to adjust to changing circumstances, as seen with the ILO's considerable difficulties in reforming its participation and rulemaking processes in the 1990s [2]. The public failure of a flagship global AI institution or governance effort could have lasting political repercussions. It could strangle subsequent, more well-conceived proposals in the crib, by undermining confidence in multilateral governance generally or capable governance on AI issues specifically.
By contrast, for a decentralised regime complex to similarly fail, all of its component institutions would need to simultaneously 'break' or fail to innovate at once. A centralised institution that does not outright collapse, but which remains ineffective, may become a blockade against better efforts.

Ultimately, brittleness is not an inherent weakness of centralisation, and indeed depends far more on institutional design details. There may be strategies to 'innovation-proof' [29] governance regimes. Periodic renegotiation, modular expansion, 'principles-based regulation', or sunset clauses can also support ongoing reform [30, pp.29-30]. Such approaches have often proved successful historically, due partially to decentralisation but, importantly, also to particular designs.
Pursuing centralisation may create an overly high threshold that limits participation. All multilateral agreements face a trade-off between having higher participation ('breadth') or stricter rules and greater ambition of commitments ('depth'). The dilemma is particularly evident for centralised institutions that are intended to be powerful and require strong commitments from states.

However, the opposite dynamic of sacrificing depth for breadth can also pose risks. The 2015 Paris Agreement on Climate Change was significantly watered down to allow for the legal participation of the US. Anticipated difficulties in ratification through the Senate led negotiators to opt for a 'pledge and review' structure with few legal obligations. Thus, the US could join simply through the approval of the executive [23]. In this case, inclusion of the US (which at any rate proved temporary) came at the cost of significant cutbacks on the demands which the regime sought to make of all parties.

In contrast, decentralisation could allow major powers to engage in relevant regulatory efforts where they would be deterred from signing up to a more comprehensive package. This has precedent in the history of climate governance. Some claim that the US-led Asia-Pacific Partnership on Clean Development and Climate helped, rather than hindered, climate governance, as it bypassed UNFCCC deadlock and secured non-binding commitments from actors not bound by the Kyoto Protocol [49, pp.259-60].

This matters, as buy-in may prove a thorny issue for AI governance. The actors who lead in AI development include powerful states that are potentially most averse to global regulation in this area. They have thus far proved recalcitrant in the global governance of security issues such as anti-personnel mines or cyberwarfare. In response, some have already recommended a critical-mass governance approach to the military uses of AI.
Rather than seeking a comprehensive agreement, devolving and spinning off certain components into separate treaties (e.g. for LAWS testing standards; liability and responsibility; and limits to operational usage) could instead allow the powerful to ratify and move forward on at least a few of those options [48]. (We thank Nicolas Moës for this observation.)
The breadth vs. depth dilemma is a trade-off in multilateralism generally. However, it is a particularly pertinent challenge for centralisation. The key benefit of a centralised body would be to act as a powerful anchor that ensures policy coordination and coherence, without suffering fragmentation in membership. This dilemma suggests it is unlikely to have both: it will likely need to restrict membership to have teeth, or lose its teeth to have wide participation. A critical-mass approach may be able to deliver the best of both worlds. Nonetheless, this dilemma poses a difficult knot for centralisation to unravel.
Forum shopping may help or hinder AI governance, depending on the particular circumstances. Fragmentation enables actors to choose where and how to engage. Such 'forum shopping' may take one of several forms: moving venues, abandoning one organisation, creating new venues, and working across multiple organisations to sow competition between them [4]. Even when there is a natural venue for an issue, actors have reasons to forum-shop. For instance, states may look to maximise their influence, appease domestic pressure [40] and placate constituents by shifting to a toothless forum [18].

The ability to successfully forum-shop depends on an actor's power. Most successful examples of forum-shifting have been led by the US [4]. Intellectual property rights in trade, for example, were subject to prolonged, contentious forum shopping. Developed states resisted attempts of the UN Conference on Trade and Development (UNCTAD) to address intellectual property rights in trade by trying to push them onto the World Intellectual Property Organization (WIPO) (ibid., 566) and then subsequently to the WTO [18], overruling protests from developing states. Outcomes often reflect power, but weak states and non-state actors can also pursue forum shopping strategies in order to challenge the status quo [22].

Forum shopping may help or hurt governance. This is evident in current efforts to regulate LAWS. While the Group of Governmental Experts has made some progress, on the whole the CCW has been slow in its deliberations on LAWS. In response, frustrated activists have threatened to shift to another forum, as happened with the Ottawa Treaty that banned landmines [10]. This strategy could catalyse progress, but also brings risks of further forum shopping and weak or unimplemented agreements. Forum shopping may similarly delay, stall, or weaken regulation of time-sensitive AI policy issues, including potential future HLMI development.
It is plausible that leading AI firms also have sway when they elect to participate in some venues but not others. The OECD Expert Group on AI included representatives from leading firms, whereas engagement at UN efforts, including the Internet Governance Forum (IGF), does not appear to be similarly prioritised. A decentralised regime will enable forum shopping, though further work is needed to determine whether this will help or hurt governance outcomes on the whole.
There are good reasons to believe that either centralisation or fragmentation could enhance coordination. A centralised regime can enable easier coordination both across and within policy issues, acting as a focal point for states. Others argue that this is not always the case, and that fragmentation can yield mutually supportive and even more creative institutions.

Centralisation reduces the occurrence of conflicting mandates and enables communication. These are the ingredients for policy coherence. As noted previously, the WTO has been remarkably successful in ensuring coherent policy and principles across the realm of trade, and even into other areas such as the environment.

However, fragmented regimes can often act as complex adaptive systems. Political requests and communication between secretariats often ensure bottom-up coordination even in the absence of centralisation. Multiple organisations have sought to reduce greenhouse gas emissions within their respective remits, often at the behest of the UNFCCC Conference of Parties. When effective, bottom-up coordination can slowly evolve into centralisation. Indeed, this was the case for the GATT and numerous regional, bilateral and sectoral trade treaties, which all coalesced into the WTO. While this organic self-organisation has occurred, it has taken decades, with forum shopping and inaction prevailing for many years.

Indeed, some have argued that decentralisation does not just deliver 'good enough' global governance [38] that reflects a demand for diverse principles in a multipolar world. Instead, they argue, 'polycentric' governance approaches [37] may be more creative and legitimate than centrally coordinated regimes. Arguments in favour of polycentricity include the notion that it enables governance initiatives to begin having impacts at diverse scales, and that it enables experimentation with diverse policies and approaches, learning from experience and best practices (ibid., 552).
Consequently, these scholars assume 'that the invisible hand of a market of institutions leads to a better distribution of functions and effects' [50, p.7].

It is unclear if the different bodies covering AI issues will self-organise or collide. Many of the issues are interdependent and will need to be addressed in tandem. Some particular policy levers, such as regulating computing power or data, will impact almost all use areas, given that AI progress and use is closely tied to such inputs. Numerous initiatives on AI and robotics are displaying loose coordination [28], but it remains uncertain whether the virtues of a free market of governance will prevail here. Great powers can exercise monopsony-like influence in forum shopping, and the supply of both computing power and machine learning expertise is highly concentrated. In sum, centralisation can reduce competition and enhance coordination, but it may suffocate the creative self-organisation of more fragmented arrangements over time.
The multilateral track record and peculiarities of AI yield suggestions and warnings for the future. A centralised regime could lower costs, support participation, and act as a powerful new linchpin within the international system. Yet centralisation presents risks for AI governance. It could simply produce a brittle dinosaur, of symbolic value but with little meaningful impact on underlying political or technological issues. A poorly executed attempt could lock in a poorly designed centralised body: a fate worse than fragmentation. Accordingly, ongoing efforts at the UN, OECD, and elsewhere could benefit from addressing the considerations presented in this paper, a summary of which is presented in Appendix A.
Structure is not a panacea. Specific provisions such as agendas and decision-making procedures matter greatly, as do the surrounding politics. Underlying political will may be impacted by framing or connecting policy issues [26, pp.770-1]. The success of a regime is not just a result of fragmentation, but of design details.

Moreover, institutions can be dynamic: they can broaden over time by taking in new members, or deepen by strengthening commitments. Successful multilateral efforts, such as those on trade and ozone depletion, tend to do both. We are in the early days of global AI governance. Decisions taken early on will constrain and partially determine the future path. This dependency can even take place across regimes. The Kyoto Protocol was largely shaped by the targets-and-timetables approach of the Montreal Protocol, which in turn drew from the Convention on Long-range Transboundary Air Pollution. This targets-and-timetables approach continues today in the way that most countries frame their climate pledges to the Paris Agreement. The choices we make on governing short-term AI challenges will likely shape the management of other policy issues in the long term [7].

On the other hand, committing to centralisation, even if successful, may amount to solving the wrong problem. The problem may not be structural, but geopolitical. Centralisation could even exacerbate the problem by diluting scarce political attention, incurring heavy transaction costs, and shifting discussions away from bodies which have accumulated experience and practice [21]. For example, the Bretton Woods institutions of the IMF and World Bank, joined later by the WTO, are centralised regimes that engender power. However, those institutions had the express support of the US and may have simply manifested state power in institutional form. Efforts to ban LAWS and create a cyberwarfare convention have been broadly opposed by states with an established technological superiority in these areas [13]. A centralised regime may not unpick these power struggles, but simply add a layer of complexity.
Our framework provides a tool for policy-makers to inform their decisions of whether to join, create, or forgo new institutions that tackle AI policy problems. For instance, the recent choice of whether to support the creation of an independent IPAI involved these considerations. Following the US veto, ongoing negotiations for its replacement at the OECD may similarly benefit from their consideration. For now, it is worth closely monitoring the current landscape of AI governance to see if it exhibits enough policy coordination and political power to effectively deal with mounting AI policy problems. While there are promising initial signs [28], there are also already growing governance failures in LAWS, cyberwarfare, and elsewhere.

We outline a suggested monitoring method in Table 1. There are three key areas to monitor: conflict, coordination, and catalyst. First, conflict should measure the extent to which principles, rules, regulations and other outcomes from different bodies in the AI regime complex undermine or contradict each other, or are in tension in their principles or goals. Second, coordination seeks to measure the proactive steps that AI-related regimes take to work with each other. This includes liaison relationships, joint initiatives, as well as the extent to which their rules, outputs and principles tend to reinforce one another. Third, catalyst raises the important question of governance gaps: is the regime complex self-organising to proactively address international AI policy problems? Numerous AI policy problems currently have no clear coverage under international law, including AI-enabled cyber warfare and HLMI. Whether this changes is of vital importance.

Table 1: Regime Complex Monitoring Suggestions

Key theme      Question                                                                      Methods (applicable across all themes)
Conflict       To what extent are regimes' principles and outputs in opposition over time?   Expert and practitioner survey
Coordination   Are regimes taking steps to complement each other?                            Network analysis (e.g., citation network clustering and centrality)
Catalyst       Is the regime complex self-organising to proactively fill governance gaps?    Natural Language Processing (e.g., entailment and fact checking)

These areas require investigation through multiple methods. Qualitative surveys of relevant organisations and actors can yield data on expert perceptions of these questions. Surveys can be augmented with quantitative methods, including network analyses of the regime complex relations [36, p.32]. Natural language processing could be used to examine contradictions and similarities between different regime outputs, e.g., statements, meeting minutes, and more. Monitoring the outcomes of fragmentation can help to determine whether centralisation is needed. One way forward would be to empower the OECD AI Policy Observatory or the UN CEB to regularly review the monitoring outcomes. This could inform a democratic discussion and decision of whether to centralise AI governance further.

Our framework and discussion may also be useful for non-state actors.
Researchers and leading AI firms can play an important role in sharing technical expertise and informing forecasts of new policy problems on the horizon. The considerations may benefit their decisions of where to engage. Civil society has a key role as participant, watchdog, and catalyst. For example, the Campaign to Stop Killer Robots has sought to boost engagement and support for a LAWS ban within the CCW. Given prolonged delays and a pessimistic outlook, some have articulated a strategy of creating an entirely new forum for the ban, inspired by the Ottawa Treaty which outlawed landmines. Our framework can help reveal the potential virtues (allowing for progress while avoiding high-threshold deadlocks) and vices (enabling forum shopping) of such an approach. It could even help inform the structure of a future international institution, such as allowing for a modular, flexible structure with 'critical mass' agreements. One cross-cutting consideration is clear: a fractured regime imposes higher participation costs that may threaten to exclude many civil society organisations altogether.

The international governance of AI is nascent and fragmented. Centralisation under a well-designed, modular, 'innovation-proof' framework organisation may be a desirable solution. However, such a move must be approached with caution. How to define its scope and mandate is one problem. Ensuring a politically acceptable and well-designed body is perhaps a more daunting one. Failure here risks cementing in place a fate worse than fragmentation. Monitoring conflict and coordination in the current AI regime complex, and whether governance gaps are filled, is a prudent way of knowing whether the existing structure can suffice. For now we should closely watch the trajectory of both AI technology and its governance initiatives to determine whether centralisation is worth the risk.
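As a minimal, self-contained illustration of the quantitative monitoring methods suggested in Table 1, consider the following sketch. It is not part of the paper's method: the bodies, citation links, and principle statements are hypothetical, and `degree_centrality` and `cosine_similarity` are toy stand-ins for full network-analysis and NLP (e.g., entailment) pipelines.

```python
# Illustrative sketch only: toy versions of two Table 1 monitoring methods,
# applied to invented data. Real monitoring would use curated corpora of
# regime outputs and dedicated network-analysis / NLP tooling.
import math
from collections import Counter

# Hypothetical citation network: (citing body, cited body) for AI documents.
citations = {
    ("OECD", "EU"), ("OECD", "IEEE"), ("UN", "OECD"),
    ("EU", "OECD"), ("IEEE", "OECD"), ("UN", "EU"),
}

def degree_centrality(edges):
    """In-degree share: how often a body's outputs are cited by the others."""
    nodes = {n for edge in edges for n in edge}
    indeg = Counter(dst for _, dst in edges)
    denom = len(nodes) - 1
    return {n: indeg[n] / denom for n in sorted(nodes)}

def cosine_similarity(a, b):
    """Bag-of-words cosine similarity between two regime outputs."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

centrality = degree_centrality(citations)
# A frequently cited body suggests de facto coordination around it.
hub = max(centrality, key=centrality.get)

# Hypothetical principle statements from two regimes; persistently low
# similarity across many document pairs would flag potential conflict.
sim = cosine_similarity(
    "ai systems should be transparent accountable and human centred",
    "autonomous systems must remain under meaningful human control",
)
print(hub, round(sim, 2))
```

Tracked over time (per year, per pair of bodies), such scores would give crude but comparable indicators for the conflict and coordination themes; the catalyst theme resists automation and likely requires expert judgment.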
ACKNOWLEDGMENTS
The authors would like to express thanks to Seth Baum, Haydn Belfield, Jessica Cussins-Newman, Martina Kunz, Jade Leung, Nicolas Moës, Robert de Neufville, and Nicolas Zahn for valuable comments. Any remaining errors are our own. No conflict of interest is identified.
REFERENCES
[1] Don Anton. 2012. 'Treaty Congestion' in International Environmental Law. In Routledge Handbook of International Environmental Law.
[2] ILR Review 65, 2 (April 2012), 195–224. https://doi.org/10.1177/001979391206500201
[3] Frank Biermann, Philipp Pattberg, Harro van Asselt, and Fariborz Zelli. 2009. The Fragmentation of Global Governance Architectures: A Framework for Analysis. Global Environmental Politics 9, 4 (Oct. 2009), 14–40. https://doi.org/10.1162/glep.2009.9.4.14
[4] John Braithwaite and Peter Drahos. 2000. Global Business Regulation. Cambridge University Press, Cambridge.
[5] James Butcher and Irakli Beridze. 2019. What is the State of Artificial Intelligence Governance Globally? The RUSI Journal.
[6] UC Davis Law Review 51 (2017), 37. https://lawreview.law.ucdavis.edu/issues/51/2/Symposium/51-2_Calo.pdf
[7] Stephen Cave and Seán S. ÓhÉigeartaigh. 2019. Bridging near- and long-term concerns about AI. Nature Machine Intelligence 1, 1 (Jan. 2019), 5. https://doi.org/10.1038/s42256-018-0003-2
[8] Peter Cihon. 2019. Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development.
[9] AI Governance: A Research Agenda.
[10] POLITICO.
[11] European Journal of International Law 22, 4 (Nov. 2011), 1089–1113. https://doi.org/10.1093/ejil/chr093
[12] Robyn Eckersley. 2004. The Big Chill: The WTO and Multilateral Environmental Agreements. Global Environmental Politics 4, 2 (May 2004), 24–50. https://doi.org/10.1162/152638004323074183
[13] Mette Eilstrup-Sangiovanni. 2018. Why the World Needs an International Cyberwar Convention. Philosophy & Technology 31, 3 (Sept. 2018), 379–407. https://doi.org/10.1007/s13347-017-0271-5
[14] Olivia J. Erdelyi and Judy Goldsmith. 2018. Regulating Artificial Intelligence: Proposal for a Global Solution. In Proceedings of the 2018 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. AAAI, Palo Alto, CA, 7. https://par.nsf.gov/servlets/purl/10066933
[15] Daniel C. Esty and Maria H. Ivanova. 2002. Revitalizing Global Environmental Governance: A Function-Driven Approach. In Global Environmental Governance: Options & Opportunities, Daniel C. Esty and Maria H. Ivanova (Eds.). Yale School of Forestry and Environmental Studies, New Haven, CT. https://environment.yale.edu/publication-series/documents/downloads/a-g/esty-ivanova.pdf
[16] Urs Gasser and Virgilio A. F. Almeida. 2017. A Layered Model for AI Governance. IEEE Internet Computing 21, 6 (Nov. 2017), 58–62. https://doi.org/10.1109/MIC.2017.4180835
[17] Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. 2018. When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research 62 (2018), 729–754.
[18] Laurence Helfer. 2004. Regime Shifting: The TRIPs Agreement and New Dynamics of International Intellectual Property Lawmaking. Yale Journal of International Law 29 (Jan. 2004), 1–83. https://scholarship.law.duke.edu/faculty_scholarship/2014
[19] High-Level Panel on Digital Cooperation. 2019. The Age of Digital Interdependence. Report. UN Secretary General (2019).
[20] ITU. 2019. United Nations Activities on Artificial Intelligence (AI) 2019.
[21] Environment: Science and Policy for Sustainable Development 42, 9 (Nov. 2000), 44–45. https://doi.org/10.1080/00139150009605765
[22] Joseph Jupille, Walter Mattli, and Duncan Snidal. 2013. Institutional Choice and Global Commerce. Cambridge University Press, Cambridge.
[23] Luke Kemp. 2017. US-proofing the Paris Climate Agreement. Climate Policy.
[25] International Organization 55, 4 (2001), 1051–1082. https://doi.org/10.1162/002081801317193691
[26] Barbara Koremenos, Charles Lipson, and Duncan Snidal. 2001. The Rational Design of International Institutions. International Organization 55, 4 (2001), 761–799. https://doi.org/10.1162/002081801317193592
[27] Stephen D. Krasner. 1982. Structural Causes and Regime Consequences: Regimes as Intervening Variables. International Organization 36, 2 (1982), 185–205. https://doi.org/10.1017/S0020818300018920
[28] Martina Kunz and Seán ÓhÉigeartaigh. 2020. Artificial Intelligence and Robotization. In Oxford Handbook on the International Law of Global Security, Robin Geiss and Nils Melzer (Eds.). Oxford University Press, Oxford. https://papers.ssrn.com/abstract=3310421
[29] Matthijs M. Maas. 2019. Innovation-Proof Governance for Military AI? How I Learned to Stop Worrying and Love the Bot. Journal of International Humanitarian Legal Studies 10, 1 (2019), 129–157. https://doi.org/10.1163/18781527-01001006
[30] Gary E. Marchant, Braden R. Allenby, and Joseph R. Herkert. 2011. The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem. Vol. 7. Springer Science & Business Media, Berlin.
[31] Will Martin and Patrick Messerlin. 2007. Why is it so difficult? Trade liberalization under the Doha Agenda. Oxford Review of Economic Policy 23, 3 (2007), 347–366.
[32] Jean-Frédéric Morin, Hugo Dobson, Claire Peacock, Miriam Prys-Hansen, Abdoulaye Anne, Louis Bélanger, Peter Dietsch, Judit Fabian, John Kirton, Raffaele Marchetti, Simone Romano, Miranda Schreurs, Arthur Silve, and Elisabeth Vallet. 2019. How Informality Can Address Emerging Issues: Making the Most of the G7. Global Policy 10, 2 (May 2019), 267–273. https://doi.org/10.1111/1758-5899.12668
[33] Milton Mueller, John Mathiason, and Hans Klein. 2007. The Internet and Global Governance: Principles and Norms for a New Regime. Global Governance 13, 2 (2007), 237–254. https://heinonline.org/HOL/P?h=hein.journals/glogo13&i=245
[34] Nils J. Nilsson. 2009. The Quest for Artificial Intelligence (1st ed.). Cambridge University Press, Cambridge; New York.
[35] Joseph S. Nye. 2014. The Regime Complex for Managing Global Cyber Activities. Technical Report 1. Global Commission on Internet Governance. https://dash.harvard.edu/bitstream/handle/1/12308565/Nye-GlobalCommission.pdf
[36] Amandine Orsini, Jean-Frédéric Morin, and Oran Young. 2013. Regime Complexes: A Buzz, a Boom, or a Boost for Global Governance? Global Governance: A Review of Multilateralism and International Organizations 19, 1 (Aug. 2013), 27–39. https://doi.org/10.1163/19426720-01901003
[37] Elinor Ostrom. 2010. Polycentric systems for coping with collective action and global environmental change. Global Environmental Change 20, 4 (Oct. 2010), 550–557. https://doi.org/10.1016/j.gloenvcha.2010.07.004
[38] Stewart Patrick. 2014. The Unruled World: The Case for Good Enough Global Governance. Foreign Affairs 93, 1 (2014), 58–73.
[39] Eleonore Pauwels. 2019. The New Geopolitics of Converging Risks: The UN and Prevention in the Era of AI. Technical Report. United Nations University - Centre for Policy Research. 83 pages. https://i.unu.edu/media/cpr.unu.edu/attachment/3472/PauwelsAIGeopolitics.pdf
[40] Saadia M. Pekkanen, Mireya Solís, and Saori N. Katada. 2007. Trading Gains for Control: International Trade Forums and Japanese Economic Diplomacy. International Studies Quarterly.
[41] The AI Index 2019 Annual Report. Technical Report. AI Index Steering Committee, Human-Centered AI Initiative, Stanford University, Stanford, CA. https://hai.stanford.edu/sites/g/files/sbiybj10986/f/ai_index_2019_report.pdf
[42] Colin B. Picker. 2001. A View from 40,000 Feet: International Law and the Invisible Hand of Technology. Cardozo Law Review 23 (2001), 151–219. https://papers.ssrn.com/abstract=987524
[43] David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, and Yoshua Bengio. 2019. Tackling Climate Change with Machine Learning. arXiv:cs.CY/1906.05433
[44] Matthew U. Scherer. 2016. Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology 29, 2 (2016), 353–400. http://jolt.law.harvard.edu/articles/pdf/v29/29HarvJLTech353.pdf
[45] Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller. 2016. Artificial Intelligence and Life in 2030. Technical Report. Stanford University, Stanford, CA. http://ai100.stanford.edu/2016-report
[46] Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Langhans, Max Tegmark, and Francesco Fuso Nerini. 2019. The role of artificial intelligence in achieving the Sustainable Development Goals. arXiv:cs.CY/1905.00501
[47] Wendell Wallach and Gary E. Marchant. 2018. An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics. In Proceedings of the 2018 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
[48] Slate Magazine (Dec. 2014). https://slate.com/technology/2014/12/autonomous-weapons-and-international-law-we-need-these-three-treaties-to-govern-killer-robots.html
[49] Fariborz Zelli. 2011. The fragmentation of the global climate governance architecture. Wiley Interdisciplinary Reviews: Climate Change 2, 2 (2011), 255–270. https://doi.org/10.1002/wcc.104
[50] Fariborz Zelli and Harro Van Asselt. 2013. Introduction: The Institutional Fragmentation of Global Environmental Governance: Causes, Consequences, and Responses. Global Environmental Politics 13, 3 (2013), 1–13.
SUMMARY OF CONSIDERATIONS

Political Power — Pro.
Historical example — Shaping other regimes: the WTO has created a chilling effect, where the fear of conflicting with WTO norms and rules has led environmental treaties to self-censor to avoid addressing trade-related measures.
AI policy issue example: an empowered regime using foresight on AI systems development can address policy problems more quickly.

Efficiency & Participation — Pro.
Historical example — Decentralisation raises inefficiencies and barriers: the proliferation of multilateral environmental agreements poses costs and barriers to participation in negotiation, implementation, and monitoring.
AI policy issue example: AI companies engage and share expertise, but if not checked by adversarial civil society, there is a greater concern of regulatory capture; increased costs undermine civil society participation.

Slowness & Brittleness — Con.
Historical examples — Slowness: under the GATT, 1947 tariff negotiations among 19 countries took 8 months; the Uruguay Round, beginning in 1986, took 91 months for 125 parties to agree on reductions. Regulatory capture: the WHO was accused of undue corporate influence in its response to the 2009 H1N1 pandemic. Pathology of path-dependence: failed ILO reform attempts.
AI policy issue example: the process of a centralised regime cannot keep pace with the high speed of AI progress and deployment, and may miss the window of opportunity. Advanced AI issues (especially HLMI) may rapidly shift the risk landscape or problem portfolio of AI beyond the narrow scope of an older institutional mandate.

Breadth vs. Depth Dilemma — Con.
Historical example — Watering down.

Forum Shopping — Depends on design.
Historical examples — Power predicts outcomes: intellectual property in trade shifted from UNCTAD to WIPO to WTO, with developed countries getting their way. Accelerates progress: NGOs and some states shifted discussions of an anti-personnel mines ban away from the CCW, ultimately resulting in the Ottawa Treaty.
AI policy issue example: governance of military AI systems is fractured across the CCW and multiple GGEs. This strategy may catalyze progress, but brings risks of fracture.

Policy Coordination — Depends on design.