AI Can Stop Mass Shootings, and More
Selmer Bringsjord • Naveen Sundar Govindarajulu • Michael Giancola
Rensselaer AI & Reasoning (RAIR) Lab
Department of Cognitive Science; Department of Computer Science
Rensselaer Polytechnic Institute (RPI); Troy NY 12180
[email protected] • [email protected] • [email protected]

Abstract
We propose to build directly upon our longstanding, prior r&d in AI/machine ethics in order to attempt to make real the blue-sky idea of AI that can thwart mass shootings, by bringing to bear its ethical reasoning. The r&d in question is overtly and avowedly logicist in form, and since we are hardly the only ones who have established a firm foundation in the attempt to imbue AIs with their own ethical sensibility, the pursuit of our proposal by those in different methodological camps should, we believe, be considered as well. We seek herein to make our vision at least somewhat concrete by anchoring our exposition to two simulations, one in which the AI saves the lives of innocents by locking out a malevolent human's gun, and a second in which this malevolent agent is allowed by the AI to be neutralized by law enforcement. Along the way, some objections are anticipated, and rebutted.
Introduction
No one reading this sentence is unaware of tragic mass shootings in the past. Can future carnage of this kind be forestalled? If so, how? Many politicians of all stripes confidently answer the first question in the affirmative, but unfortunately then a cacophony of competing answers to the second quickly ensues. We too are optimistic about tomorrow, but the rationale we offer for our sanguinity has nothing to do with debates about background checks and banning particular high-powered weapons or magazines, nor with a hope that the evil and/or insane in our species can somehow be put in a kind of perpetual non-kinetic quarantine, separated from firearms. While we hope that such measures, which of late have thankfully been gaining some traction, will be put in place, our optimism is instead rooted in AI; specifically, in ethically correct AI; and even more specifically still: our hope is in ethically correct AI that guards guns. Unless AI is harnessed in the manner we recommend, it seems inevitable that politicians (at least in the U.S.) will continue to battle each other, and it does not strike us as irrational to hold that even if some legislation emerges from their debates, which of late seems more likely, it will not prevent what can also be seen as a source of the problem in many cases: namely, that guns themselves have no ethical compass.
What Could Have Been
A rather depressing fact about the human condition is that any number of real-life tragedies in the past could be cited in order to make our point regarding what could have been instead; that is, there have been many avoidable mass shootings, in which a human deploys one or more guns that are neither intelligent nor ethically correct, and innocents die or are maimed. Without loss of generality, we ask the reader to recall the recent El Paso shooting in Texas. If the kind of AI we seek had been in place, history would have been very different in this case. To grasp this, let's turn back the clock. The shooter is driving to Walmart with an assault rifle and a massive amount of ammunition in his vehicle. The AI we envisage knows that this weapon is there, and that it can be used only for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are). At Walmart itself, in the parking lot, any attempt on the part of the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the particular case at hand, the AI knows that killing anyone with the gun, except perhaps e.g. for self-defense purposes, is unethical. Since the AI rules out self-defense, the gun is rendered useless, and locked out. This is depicted pictorially in Figure 1.

Continuing with what could have been: Texas Rangers were earlier notified by AI, and now arrive on the scene. If the malevolent human persists in an attempt to kill/maim despite the neutralization of his rifle, say by resorting to a knife, the Rangers are ethically cleared to shoot in order to save lives: their guns, while also guarded by AI that makes sure firing them is ethically permissible, are fully operative, because the Doctrine of Double Effect (or a variant; these doctrines are discussed below) says that it's ethically permissible to save the lives of innocent bystanders by killing the criminal. They do so, and the situation is secure; see the illustration in Figure 2. Unfortunately, what we have just described is an alternate timeline that did not happen; but in the future, in similar situations, we believe it could, and we urge people to at least contemplate whether we are right, and whether, if we are, such AI is worth seeking.

Can This Blue-Sky AI Really be Engineered?

Predictably, some will object as follows: "The concept you introduce is attractive. But unfortunately it's nothing more than a dream; actually, nothing more than a pipe dream. Is this AI really feasible, science- and engineering-wise?" We answer in the affirmative, confidently. The overarching reason for our optimism is that for well over 15 years Bringsjord and colleagues have been developing logicist AI technology to install in artificial agents so as to ensure that these agents are ethically correct [e.g. (Bringsjord, Arkoudas, and Bello 2006; Arkoudas, Bringsjord, and Bello 2005; Bringsjord and Taylor 2012; Bello and Bringsjord 2013; Govindarajulu and Bringsjord 2017)].
This research program has reached a higher degree of maturity during a phase over the past six years, during which the second author, Govindarajulu, has collaborated with Bringsjord, and led on many fronts, including not only papers that seek to formalize and implement ethical theories in AIs [e.g. (Govindarajulu and Bringsjord 2017; Govindarajulu et al. 2019)], but also in the development of high-powered automated-reasoning technology ideal for machine ethics; for instance the automated reasoner ShadowProver (Govindarajulu 2016; Govindarajulu, Bringsjord, and Peveler 2019), and the planner Spectra (Govindarajulu 2017), which is itself built up from automated reasoning.

Importantly, while all of the longstanding work pointed to in the previous paragraph is logicist, and thus in line with arguments in favor of such AI [e.g. (Bringsjord 2008; Bringsjord et al. 2018)], we wish to point out that other work designed to imbue AIs with their own ethical reasoning and decision-making capacity is of a type that in our judgment fits well with our logicist orientation [e.g. (Arkin 2009; Pereira and Saptawijaya 2016a)], and with our blue-sky vision. But beyond this, since of course lives are at stake, we call for an ecumenical outlook; hence if statistical/connectionist ML can somehow be integrated with transparent, rigorous ethical theories, codes, and principles [and in fact some guidance for those who might wish to do just this is provided in (Govindarajulu and Bringsjord 2017)] that can serve as a verifiable, surveyable basis for locking out weapons, we would be thrilled.
Why is Killing Wrong?
As professional ethicists know, it's rather challenging to say why it's wrong to kill people, especially if one is attempting to answer this question on the basis of any consequentialist ethical theory (e.g. utilitarianism); a classic, cogent statement of the problem is provided in (Ewin 1972). We are inclined to affirm the general answer to the question in the present section's title that runs like this: "To kill a human person h is ipso facto to cut off any chance that h can reach any of the future goals that h has. This is what makes killing an innocent person intrinsically wrong." This answer, formalized, undergirds the first of our two simulations.
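In deliberately simplified schematic form (a sketch only: DeprivesOfAllFutureGoals is a hypothetical predicate abbreviating the condition that is unpacked rigorously later via the DCEC predicate Prev, and the epistemic and temporal detail carried by the full formalization is suppressed here), the principle can be rendered as:

∀ a, a′, α : Innocent(a′) ∧ DeprivesOfAllFutureGoals(a, α, a′) → O(a, ¬α)

That is, if a's performing α would cut off a′ from every goal a′ could henceforth pursue, then a is obligated to refrain from α; the qualification "unless overridden by DDE" is the subject of the next section.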
Automating the Doctrine of Double Effect

We referred above to the Doctrine of Double Effect, DDE for short. We now informally but rigorously present this ethical principle, so that the present short paper is self-contained. Our presentation presupposes that we possess an ethical hierarchy that classifies actions (e.g. as forbidden, morally neutral, obligatory); see (Bringsjord 2015). We further assume that we have a utility or goodness function for states of the world or effects; this assumption is roughly in line with a part of all consequentialist ethical theories (e.g. utilitarianism). For an autonomous agent a, an action α in a situation σ at time t is said to be DDE-compliant iff:

C1 the action is not forbidden (where we assume an ethical hierarchy such as the one given by Bringsjord (2015), and require that the action be neutral or above neutral in such a hierarchy);
C2 the net utility or goodness of the action is greater than some positive amount γ;
C3a the agent performing the action intends only the good effects;
C3b the agent does not intend any of the bad effects;
C4 the bad effects are not used as a means to obtain the good effects; and
C5 if there are bad effects, the agent would rather the situation be different and the agent not have to perform the action. That is, the action is unavoidable.

See Clause 6 of Principle III in (Khatchadourian 1988) for a justification of clause C5. C5 has not been discussed in any prior rigorous treatments of DDE, but we feel it captures an important part of DDE as it is normally used, e.g. in unavoidable, ethically thorny situations one would rather not be present in. C5 is necessary, as the condition is subjunctive/counterfactual in nature and hence may not always follow from C1–C4, since there is no subjunctive content in those conditions. Note that while (Pereira and Saptawijaya 2016b) model DDE using counterfactuals, they use counterfactuals to model C4 rather than C5. That said, the formalization of C5 is quite difficult, requiring the use of computationally hard counterfactual and subjunctive reasoning; we leave this aside here, reserved for future work.

Most importantly, note that DDE has long been taken as the ethical basis for self-defense and just war (McIntyre 2004/2014). Our work brings this tradition, which has been informal, into the realm of formal methods, and our second simulation is based upon an AI proving that DDE holds.
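To make the clause structure concrete, here is a minimal, purely illustrative sketch in Python of how a checker for C1–C5 might be organized, assuming the relevant facts (the action's place in the ethical hierarchy, its good and bad effects, what the performing agent intends, and a utility function over effects) have already been obtained. The class and function names below are hypothetical stand-ins, not part of DCEC or ShadowProver, and C5 is reduced to a crude boolean flag precisely because, as noted above, its proper subjunctive formalization is left to future work.

from dataclasses import dataclass, field
from typing import Callable, Set

@dataclass
class Action:
    name: str
    level: int                                    # position in an ethical hierarchy; >= 0 means neutral or above
    good_effects: Set[str] = field(default_factory=set)
    bad_effects: Set[str] = field(default_factory=set)
    means: Set[str] = field(default_factory=set)  # effects used as a means to the good effects
    unavoidable: bool = False                     # crude stand-in for the subjunctive clause C5

def dde_compliant(action: Action,
                  intended: Set[str],
                  utility: Callable[[str], float],
                  gamma: float) -> bool:
    """Illustrative check of clauses C1-C5; not the authors' formalization."""
    # C1: the action itself is not forbidden (neutral or above in the hierarchy).
    c1 = action.level >= 0
    # C2: the net utility of all effects exceeds the positive threshold gamma.
    net = sum(utility(e) for e in action.good_effects | action.bad_effects)
    c2 = net > gamma
    # C3a: the agent intends only (some of) the good effects ...
    c3a = intended <= action.good_effects
    # C3b: ... and intends none of the bad effects.
    c3b = intended.isdisjoint(action.bad_effects)
    # C4: no bad effect is used as a means to obtaining the good effects.
    c4 = action.means.isdisjoint(action.bad_effects)
    # C5: the agent would rather not have to act at all; the action is unavoidable.
    c5 = action.unavoidable
    return all([c1, c2, c3a, c3b, c4, c5])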
Two Simulations
A pair of simulations, each confessedly simple, nonetheless lend credence to our claim that our blue-sky conception is feasible. In the first, an AI blocks the pivotal human action α because the action is (given, of course, a background ethical theory that is presumed) ethically impermissible. Essentially, the AI is able to prove O(a, ¬α) by using a principle of the form Φ → O(a, ¬α). Here Φ says that performance of α by a would deprive an innocent person a′ of the ability to continue to pursue, after this deprivation, any of his/her goals. Once the AI, powered by ShadowProver, proves that α is ethically impermissible for a, an inability to prove by DDE that there is an "override" entails in this simulation that the pivotal action cannot be performed by the human. In the second simulation, the AI allows, by DDE, a human action that directly kills one (the malevolent shooter) to save four human members of law enforcement (see Fig. 2). Here now is a brutally brief look at the more technical side of the simulations in question.

As discussed earlier, it is difficult to state exactly why it's intrinsically wrong to kill people. Yet we must do exactly this if we are to enable a machine to generate a proof (or even just a cogent argument) that the assailant's gun should, on ethical grounds, be locked. Moreover, we must state this as formulae expressed in a formal logic that an automated theorem prover can reason over. In our case, we utilize the Deontic Cognitive Event Calculus (DCEC) and the aforementioned ShadowProver, respectively. Much has been written elsewhere about
DCEC and the class of calculi that subsumes it; these details are out of scope here, and we direct interested readers to (Govindarajulu and Bringsjord 2017), which makes a nice starting place for those in AI. The original cognitive calculus appeared long ago, in (Arkoudas and Bringsjord 2009); but this calculus had no ethical dimension in the form of deontic operators, and pre-dated ShadowProver [and used Athena instead, a still-vibrant system that anchors the recent (Arkoudas and Musser 2017)]. Here it should be sufficient to say only that dialects of DCEC have been used to formalize and automate highly intensional reasoning processes, such as the false-belief task (Arkoudas and Bringsjord 2009) and akrasia (succumbing to temptation to violate moral principles) (Bringsjord et al. 2014). DCEC is a sorted (i.e. typed) quantified multi-operator modal logic. The calculus has a well-defined syntax and proof calculus; the latter is based on natural deduction (Gentzen 1935), and includes all the introduction and elimination rules for second-order logic, as well as inference schemata for the modal operators and related structures. The modal operators in
DCEC include the standard ones for knowledge K, belief B, desire D, and intention I, and in some dialects operators for perception and communication as well. The general format of an intensional operator is e.g. K(a, t, φ), which says that agent a knows at time t the proposition φ. Here φ can in turn be any arbitrary formula.

As to the pair of simulations themselves, while a full discussion of them would not fit within the limitations of this short paper, we do discuss one critical definition next, that of the (abstracted) predicate Prev(x, y, g, a, t), which means that x prevents y from achieving goal g via action a at time t; in a form expressed in DCEC syntax:

∃ t1, t2 : Moment
    prior(t, t1) ∧ prior(t1, t2) ∧
    K(x, t, D(y, t, Holds(g, t2)) ∧ I(y, t, happens(g, t2))) ∧
    K(x, t, ∃ a′ : ActionType
        I(y, t1, happens(action(y, a′), t1)) ∧
        (happens(action(y, a′), t1) ∧ ¬Block(x, y, g, a, t1) → happens(g, t2))) ∧
    K(x, t, happens(action(x, a), t1) → Block(x, y, g, a, t1)) ∧
    happens(action(x, a), t1)

The key components in this definition are:
1. x knows that y desires a goal g and intends to accomplish g;
2. x knows that y intends to perform an action a′ that will lead to the accomplishment of y's goal g, unless x does something to block that goal;
3. x knows that if x performs action a, then y's goal g will be blocked; and
4. x performs a.

Utilizing this definition, along with a few other formulae in DCEC (chiefly, that preventing another human from achieving their goals, unless overridden by DDE, is forbidden), ShadowProver can prove, on an Apple laptop and without any human-engineered optimization, that lock-out must happen in Simulation 1 (in less than a second), and that lock-out must not happen in Simulation 2 (in roughly three seconds).
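To convey the control flow behind the two simulations at a glance, the following is a minimal sketch in Python, offered purely as an expository device: the prove parameter is a hypothetical stand-in for a call to an automated reasoner such as ShadowProver, and the formula strings are schematic rather than genuine DCEC syntax. The point is only the decision structure just described: the weapon is locked out iff the impermissibility of the pivotal action is provable and no DDE override is provable.

from typing import Callable, List

# Hypothetical proving interface: prove(assumptions, goal) returns True iff
# the goal is provable from the assumptions. In the actual simulations this
# role is played by ShadowProver; here it is only a stand-in.
Prover = Callable[[List[str], str], bool]

def weapon_locked(prove: Prover,
                  background_ethics: List[str],
                  situation: List[str],
                  pivotal_action: str) -> bool:
    """Schematic decision procedure corresponding to Simulations 1 and 2."""
    kb = background_ethics + situation
    # Step 1: try to prove that the pivotal action is forbidden for agent a,
    # via a principle of the form Phi -> O(a, not alpha), where Phi says the
    # action would deprive an innocent of the pursuit of all future goals.
    forbidden = prove(kb, f"O(a, not({pivotal_action}))")
    if not forbidden:
        return False        # no impermissibility established; no lock-out
    # Step 2: check whether DDE licenses an override, as in Simulation 2,
    # where shooting the assailant is sanctioned to save innocent lives.
    overridden = prove(kb, f"DDE(a, {pivotal_action})")
    return not overridden   # lock out only when no override is provable

In Simulation 1 the first proof succeeds and the second fails, so the weapon is locked out; in Simulation 2 the DDE override is provable, so it is not.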
Figure 1: Prohibition Against Killing in Force; AI Thwarts Malevolent Assailant. This corresponds to Simulation 1.

Figure 2: DDE Sanctions Shooting Malevolent Assailant; AI Refrains from Thwarting. This corresponds to Simulation 2.
Why Not Legally Correct AIs Instead?
We expect some readers to sensibly ask why we don't restrict the AI we seek to legal correctness, instead of ethical correctness. After all (as it will be said), the shootings in question are illegal. The answer is that, one, much of our work on the deontic-logic side conforms to a framework that Leibniz espoused, in which legal obligations are the "weakest" kind of moral obligations/prohibitions, and come just before, but connected to, ethical obligations in the hierarchy EH, first introduced in (Bringsjord 2015). In this Leibnizian approach, there is no hard-and-fast breakage between legal obligations/prohibitions and moral ones; the underlying logic is seamless across the two spheres. Hence, any and all of our formalisms and technology can be used directly in a "law-only" manner. This is in fact provably the case; some relevant theorems appear in (Bringsjord 2015). The second part of our reply to the present objection is that we wish to ensure that AIs can be ethically correct even in cases where the local laws are wildly divergent from standard Occidental ethical theories.
Additional Objections
Of course, there are any number of additional objections that will be raised against the research direction we seek to catalyze by the present short paper. It is fairly easy to anticipate many of them, but current space constraints preclude presenting them, and then providing rebuttals. We rest content with a speedy treatment of but two objections, the first of which is:

"Consider the Charlie Hebdo tragedy, in Paris. Here, high-powered rifles were legally purchased in Slovakia, modified, and then smuggled into France, where they were then horribly unleashed upon innocent journalists. Even if the major gun manufacturers, like the major car manufacturers, willingly subject themselves to the requirement that their products are infused with ethically correct AI of the type you are engineering, surely there will still be 'outlaw' manufacturers that elude any AI aboard their weapons."

In reply, we note that our blue-sky conception is in no way restricted to the idea that the guarding AI is only in the weapons in question. Turn back the clock to the Hebdo tragedy, and assume for the sake of argument that the brothers' rifles in question are devoid of any overseeing AI of the type present in the two simulations described above. It still remains true, for example, that the terrorists in this case must travel to Rue Nicolas-Appert with their weapons, and there would in general be any number of options available to AIs that perceive the brothers in transit with their illegal cargo to thwart such transit. Ethically correct AI, with the power to guard human life on the basis of suitable ethical theory/ies, ethical codes, and legal theory/ies/codes, deployed in and across a sensor-rich city like Paris, would have any number of actions available to it by which a violent future can be avoided in favor of life. Whether guarding AI is in weapons or outside them looking on, certain core requirements must be met in order to ensure efficacy. For instance, here are two (put roughly) things that a guarding AI should be able to come to know/believe:
Epistemic Requirements for Weapon-Guarding AI
Given any human h, at any point of time t, an ethically correct, overseeing AI should at least be able to come to know/believe the following, in order to verify that relevant actions on the part of h are DDE-compliant (where φ is a state-of-affairs that includes use of a weapon):

1. The human's intentions: (¬)I(h, t, φ)
2. Forbiddenness/Permissibility: (¬)O(a, t, σ, ¬φ)

Now here is the second objection:

"Your hope for AI will be dashed by the brute fact that AI in weapons can be discarded by hackers."

This is an objection that we have long anticipated in our work devoted to installing ethical controls in such things as robots, and we see no reason why our approach there, which is to bring machine ethics down to an immutable hardware level (Govindarajulu and Bringsjord 2015; Govindarajulu et al. 2018), cannot be pursued for weapons as well. Of course, a longer discussion of the very real challenge here is needed.
Concluding Remarks
Alert readers may ask why the ", and More" appears in our title. The phrase is there because machine ethics, once one is willing to look to AI itself for moral correctness, and protective actions flowing therefrom, can be infused in other artifacts the full "AI-absent" human control of which often results in carnage. A classic example is driving. We all know that AI has made amazing strides in self-driving vehicles, but there is no need to wait for lives to be saved by broad implementation of self-driving AI: ethically correct AI, today, can shut down a car if the would-be human driver is perceived by an artificial agent to be intoxicated (above, say, .08 BAC). In 2017 alone, over 10,000 people died in the U.S. because intoxicated human drivers used their vehicles immorally/illegally (NHTSA). Ethically correct AI, indeed relatively simple such AI, can stop this, today.

We end with a simple observation, and from it a single question: Many researchers are already working on the challenge of bringing ethically correct AIs to the world. Why not channel some of this ingenious work specifically into the engineering of AIs that are employed to guard artifacts that, indisputably, are all too often vehicles for unethical agents of the human sort to cause horrible harm?

Acknowledgments
The authors are indebted to ONR for a 6-year MURI grant devoted to the science and engineering of morally competent AI/robots (M. Scheutz PI, Co-PIs S. Bringsjord & B. Malle, N. S. Govindarajulu Senior Research Scientist), focused in our case on the use of multi-operator quantified modal logics to specify and implement such competence; and to AFOSR (S. Bringsjord PI) for support that continues to enable the invention and implementation of unprecedentedly expressive computational logics and automated reasoners that in turn enable human-level computational intelligence.

NHTSA: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812630

References

[Arkin 2009] Arkin, R. 2009. Governing Lethal Behavior in Autonomous Robots. CRC Press.

[Arkoudas and Bringsjord 2009] Arkoudas, K., and Bringsjord, S. 2009. Propositional Attitudes and Causation. International Journal of Software and Informatics.

[Arkoudas and Musser 2017] Arkoudas, K., and Musser, D. 2017. Fundamental Proof Methods in Computer Science: A Computer-Based Approach. Cambridge, MA: MIT Press.

[Arkoudas, Bringsjord, and Bello 2005] Arkoudas, K.; Bringsjord, S.; and Bello, P. 2005. Toward Ethical Robots via Mechanized Deontic Logic. In Machine Ethics: Papers from the AAAI Fall Symposium; FS–05–06. Menlo Park, CA: American Association for Artificial Intelligence. 17–23.

[Bello and Bringsjord 2013] Bello, P., and Bringsjord, S. 2013. On How to Build a Moral Machine. Topoi.

[Bringsjord and Taylor 2012] Bringsjord, S., and Taylor, J. 2012. The Divine-Command Approach to Robot Ethics. In Lin, P.; Abney, K.; and Bekey, G., eds., Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press. 85–108.

[Bringsjord, Arkoudas, and Bello 2006] Bringsjord, S.; Arkoudas, K.; and Bello, P. 2006. Toward a General Logicist Methodology for Engineering Ethically Correct Robots. IEEE Intelligent Systems.

[Bringsjord et al. 2014] Bringsjord, S., et al. 2014. Akratic Robots and the Computational Logic Thereof. In Proceedings of ETHICS 2014, 22–29. IEEE Catalog Number: CFP14ETI-POD.

[Bringsjord et al. 2018] Bringsjord, S.; Govindarajulu, N.; Banerjee, S.; and Hummel, J. 2018. Do Machine-Learning Machines Learn? In Müller, V., ed., Philosophy and Theory of Artificial Intelligence 2017. Berlin, Germany: Springer SAPERE. 136–157. This book is Vol. 44 in the book series. The paper answers the question that is its title with a resounding No.

[Bringsjord 2008] Bringsjord, S. 2008. The Logicist Manifesto: At Long Last Let Logic-Based AI Become a Field Unto Itself. Journal of Applied Logic.

[Bringsjord 2015] Bringsjord, S. 2015. A 21st-Century Ethical Hierarchy for Robots and Persons: EH. In Ferreira, I.; Sequeira, J.; Tokhi, M.; Kadar, E.; and Virk, G., eds., A World With Robots: International Conference on Robot Ethics (ICRE 2015). Berlin, Germany: Springer. 47–61. This paper was published in the compilation of ICRE 2015 papers, distributed at the location of ICRE 2015, where the paper was presented: Lisbon, Portugal.

[Ewin 1972] Ewin, R. E. 1972. What is Wrong with Killing People? The Philosophical Quarterly.

[Gentzen 1935] Gentzen, G. 1935. Investigations into Logical Deduction. In Szabo, M. E., ed., The Collected Papers of Gerhard Gentzen. Amsterdam, The Netherlands: North-Holland. 68–131. This is an English version of the well-known 1935 German version.

[Govindarajulu and Bringsjord 2015] Govindarajulu, N. S., and Bringsjord, S. 2015. Ethical Regulation of Robots Must be Embedded in Their Operating Systems. In Trappl, R., ed., A Construction Manual for Robots' Ethical Systems: Requirements, Methods, Implementations. Basel, Switzerland: Springer. 85–100.

[Govindarajulu and Bringsjord 2017] Govindarajulu, N., and Bringsjord, S. 2017. On Automating the Doctrine of Double Effect. In Sierra, C., ed., Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), 4722–4730. International Joint Conferences on Artificial Intelligence.

[Govindarajulu et al. 2018] Govindarajulu, N.; Bringsjord, S.; Sen, A.; Paquin, J.; and O'Neill, K. 2018. Ethical Operating Systems. In De Mol, L., and Primiero, G., eds., Reflections on Programming Systems, volume 133 of Philosophical Studies. Springer. 235–260.

[Govindarajulu et al. 2019] Govindarajulu, N. S.; Bringsjord, S.; Ghosh, R.; and Sarathy, V. 2019. Toward the Engineering of Virtuous Machines. In Conitzer, V.; Hadfield, G.; and Vallor, S., eds., Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2019), 29–35. New York, NY: ACM.

[Govindarajulu, Bringsjord, and Peveler 2019] Govindarajulu, N. S.; Bringsjord, S.; and Peveler, M. 2019. On Quantified Modal Theorem Proving for Modeling Ethics. In Suda, M., and Winkler, S., eds., Proceedings of the Second International Workshop on Automated Reasoning: Challenges, Applications, Directions, Exemplary Achievements (ARCADE 2019), Natal, Brazil, August 26, 2019, volume 311 of EPTCS, 43–49.

[Govindarajulu 2016] Govindarajulu, N. S. 2016. ShadowProver.

[Govindarajulu 2017] Govindarajulu, N. S. 2017. Spectra.

[Khatchadourian 1988] Khatchadourian, H. 1988. Is the Principle of Double Effect Morally Acceptable? International Philosophical Quarterly.

[McIntyre 2004/2014] McIntyre, A. 2004/2014. Doctrine of Double Effect. The Stanford Encyclopedia of Philosophy.

[Pereira and Saptawijaya 2016a] Pereira, L., and Saptawijaya, A. 2016a. Programming Machine Ethics. Berlin, Germany: Springer. This book is in Springer's SAPERE series.

[Pereira and Saptawijaya 2016b] Pereira, L. M., and Saptawijaya, A. 2016b. Counterfactuals, Logic Programming and Agent Morality. In Rahman, S., and Redmond, J., eds.,