AGI and the Knight-Darwin Law: why idealized AGI reproduction requires collaboration
Samuel Allen Alexander
The U.S. Securities and Exchange Commission
[email protected]
https://philpeople.org/profiles/samuel-alexander/publications
Abstract.
Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: "Not without outside help". This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar law holds for AGIs.
Keywords:
Intelligence Measurement · Knight-Darwin Law · Ordinal Notations · Intelligence Explosion
1 Introduction

It is difficult to reason about agents with Artificial General Intelligence (AGIs) programming AGIs. To get our hands on something solid, we have attempted to find structures that abstractly capture the core essence of AGIs programming AGIs. This led us to discover what we call the Intuitive Ordinal Notation System (presented in Section 2), an ordinal notation system that gets directly at the heart of AGIs creating AGIs. Our approach to AGI is what Goertzel [11] describes as the Universalist Approach: we consider "...an idealized case of AGI, similar to assumptions like the frictionless plane in physics", with the hope that by understanding this "simplified special case, we can use the understanding we've gained to address more realistic cases."
We call an AGI truthful if the things it knows are true. In [4], we argued that if a truthful AGI X creates (without external help) a truthful AGI Y in such a way that X knows the truthfulness of Y, then X must be more intelligent than Y in a certain formal sense. The argument is based on the key assumption that if X creates Y without external help, then X necessarily knows Y's source code.

Iterating the above argument, suppose X_1, X_2, ... are truthful AGIs such that each X_i creates, and knows the truthfulness and the code of, X_{i+1}. Assuming the previous paragraph, X_1 would be more intelligent than X_2, which would be more intelligent than X_3, and so on (in our certain formal sense). In Section 3 we will argue that this implies it is impossible for such a list X_1, X_2, ... to go on forever: it would have to stop after finitely many elements.

At first glance, the above results might seem to suggest skepticism regarding the singularity, regarding what Hutter [15] calls intelligence explosion: the idea of AGIs creating better AGIs, which create even better AGIs, and so on. But there is a loophole (discussed further in Section 4). Suppose AGIs X and X′ collaborate to create Y. Suppose X does part of the programming work, but keeps the code secret from X′, and suppose X′ does another part of the programming work, but keeps the code secret from X. Then neither X nor X′ knows Y's full source code, and yet if X and X′ trust each other, then both X and X′ should be able to trust Y, so the above-mentioned argument breaks down.

Darwin and his contemporaries observed that even seemingly asexual plant species occasionally reproduce sexually. For example, a plant in which pollen is ordinarily isolated might release pollen into the air if a storm damages the part of the plant that would otherwise shield the pollen.
The Knight-Darwin Law [8], named after Charles Darwin and Andrew Knight, is the principle (rephrased in modern language) that there cannot be an infinite sequence X_1, X_2, ... of biological organisms such that each X_i asexually parents X_{i+1}. In other words, if X_1, X_2, ... is any infinite list of organisms such that each X_i is a biological parent of X_{i+1}, then some of the X_i would need to be multi-parent organisms. The reader will immediately notice a striking parallel between this principle and the discussion in the previous two paragraphs.

(Knowledge and truth are formally treated in [4], but here we aim at a more general audience. For the purposes of this paper, an AGI can be thought of as knowing a fact if and only if the AGI would list that fact if commanded to spend eternity listing all the facts that it knows. We assume such knowledge is closed under deduction, an assumption which is ubiquitous in modal logic, where it often appears in a form like K(φ → ψ) → (K(φ) → K(ψ)). Of course, it is only in the idealized context of this paper that one should assume AGIs satisfy such closure.)

(This may initially seem to contradict some mathematical constructions [18] [22] of infinite descending chains of theories. But those constructions only work for weaker languages, making them inapplicable to AGIs which comprehend linguistically strong second-order predicates.)

(Even prokaryotes can be considered to occasionally have multiple parents, if lateral gene transfer is taken into account.)

In Section 2 we present the Intuitive Ordinal Notation System.
In Section 3 we argue that if truthful AGI X creates truthful AGI Y, such that X knows the code and truthfulness of Y, then, in a certain formal sense, Y is less intelligent than X. In Section 4 we adapt the Knight-Darwin Law from biology to AGI and speculate about what it might mean for AGI. In Section 5 we address some anticipated objections.

Sections 2–3 are not new (except for new motivation and discussion). Their content appeared in [4], and was more rigorously formalized there. Sections 4–5 contain this paper's new material. Of this, some was hinted at in [4], and some appeared (weaker and less approachably) in the author's dissertation [2].

2 The Intuitive Ordinal Notation System

If humans can write AGIs, and AGIs are at least as smart as humans, then AGIs should be capable of writing AGIs. Based on the conviction that an AGI should be capable of writing AGIs, we would like to come up with a more concrete structure, easier to reason about, which we can use to better understand AGIs.

To capture the essence of an AGI's AGI-programming capability, one might try: "computer program that prints computer programs." But this only captures the AGI's capability to write computer programs, not to write AGIs.

How about: "computer program that prints computer programs that print computer programs"? This second attempt seems to capture an AGI's ability to write program-writing programs, not to write AGIs.

Likewise, "computer program that prints computer programs that print computer programs that print computer programs" captures the ability to write program-writing-program-writing programs, not AGIs.

We need to short-circuit the above process. We need to come up with a notion X which is equivalent to "computer program that prints members of X".
Definition 1 (See the following examples). We define the Intuitive Ordinal Notations to be the smallest set P of computer programs such that:

– A computer program p is in P iff all of p's outputs are also in P.

Example 2 (Some simple examples).

1. Let P_0 be "End", a program which immediately stops without any outputs. Vacuously, all of P_0's outputs are in P (there are no such outputs). So P_0 is an Intuitive Ordinal Notation.
2. Let P_1 be "Print('End')", a program which outputs "End" and then stops. By (1), all of P_1's outputs are Intuitive Ordinal Notations; therefore, so is P_1.
3. Let P_2 be "Print('Print('End')')", which outputs "Print('End')" and then stops. By (2), all of P_2's outputs are Intuitive Ordinal Notations; therefore, so is P_2.

(This argument appeared in a fully rigorous form in [4], but in this paper we attempt to make it more approachable.)
Example 3 (A more interesting example). Let P_ω be the program:

  Let X = 'End';
  While(True) {
    Print(X);
    X = "Print('" + X + "')";
  }

When executed, P_ω outputs "End", "Print('End')", "Print('Print('End')')", and so on forever. As in Example 2, all of these are Intuitive Ordinal Notations. Therefore, P_ω is an Intuitive Ordinal Notation.

To make Definition 1 fully rigorous, one would need to work in a formal model of computation; see [4] (Section 3), where we do exactly that. Examples 2 and 3 are reminiscent of Franz's approach of "head[ing] for general algorithms at low complexity levels and fill[ing] the task cup from the bottom up" [9]. For a much larger collection of examples, see [3]. A different type of example will be sketched in the proof of Theorem 7 below.
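To make Examples 2 and 3 concrete, here is a minimal Python sketch (our own illustration, not part of the formal development): a toy interpreter for the two-instruction example language, together with a generator reproducing the first outputs of P_ω.

```python
def run(program: str) -> list:
    """Toy interpreter for the example language: 'End' halts with no
    outputs; "Print('<p>')" outputs the quoted program <p> and halts."""
    if program == "End":
        return []
    if program.startswith("Print('") and program.endswith("')"):
        return [program[len("Print('"):-len("')")]]
    raise ValueError("unrecognized program")

def p_omega_outputs(n: int) -> list:
    """First n outputs of P_omega from Example 3: each iteration prints
    the current program, then wraps it in one more Print(...)."""
    x, outputs = "End", []
    for _ in range(n):
        outputs.append(x)
        x = "Print('" + x + "')"
    return outputs

print(p_omega_outputs(3))
```

Each string in the returned list, fed back into run, yields only strings already shown to be notations, mirroring the inductive clause of Definition 1.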
Definition 4. For any Intuitive Ordinal Notation x, we define an ordinal |x| inductively as follows: |x| is the smallest ordinal α such that α > |y| for every output y of x.

Example 5.

– Since P_0 (from Example 2) has no outputs, it follows that |P_0| = 0, the smallest ordinal.
– Likewise, |P_1| = 1 and |P_2| = 2.
– Likewise, P_ω (from Example 3) has outputs notating 0, 1, 2, ...: all the finite natural numbers. It follows that |P_ω| = ω, the smallest infinite ordinal.
– Let P_{ω+1} be the program "Print(P_ω)", where P_ω is as in Example 3. It follows that |P_{ω+1}| = ω + 1, the next ordinal after ω.

The Intuitive Ordinal Notation System is a more intuitive simplification of an ordinal notation system known as Kleene's O.

3 Intuitive Ordinal Intelligence

Whatever an AGI is, an AGI should know certain mathematical facts. The following is a universal notion of an AGI's intelligence based solely on said facts. In [4] we argue that this notion captures key components of intelligence such as pattern recognition, creativity, and the ability to generalize. We will give further justification in Section 5. Even if the reader refuses to accept this as a genuine intelligence measure, that is merely a name we have chosen for it: we could give it any other name without compromising this paper's structural results.
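The rank assignment of Definition 4 can be checked mechanically on the finite cases of Example 5. A small Python sketch (our illustration, reusing the toy two-instruction language of Examples 2 and 3):

```python
def run(program: str) -> list:
    # toy interpreter: 'End' has no outputs; "Print('<p>')" outputs <p>
    if program == "End":
        return []
    return [program[len("Print('"):-len("')")]]

def rank(program: str) -> int:
    """|x| from Definition 4, restricted to programs with finitely many
    outputs: the least ordinal exceeding the rank of every output. For
    P_omega, whose outputs have unbounded rank, the answer would be the
    infinite ordinal omega, beyond what an int can represent."""
    outs = run(program)
    return 0 if not outs else 1 + max(rank(y) for y in outs)

assert rank("End") == 0                      # |P_0| = 0
assert rank("Print('End')") == 1             # |P_1| = 1
assert rank("Print('Print('End')')") == 2    # |P_2| = 2
```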
Definition 6. The Intuitive Ordinal Intelligence of a truthful AGI X is the smallest ordinal |X| such that |X| > |p| for every Intuitive Ordinal Notation p such that X knows that p is an Intuitive Ordinal Notation.

The following theorem provides a relationship between Intuitive Ordinal Intelligence and AGI creation of AGI. Here, we give an informal version of the proof; for a version spelled out in complete formal detail, see [4].

(Possibly formalizing a relationship implied offhandedly by Chaitin, who suggests ordinal computation as a mathematical challenge intended to encourage evolution, "and the larger the ordinal, the fitter the organism" [7].)
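In the finite fragment, Definition 6 can likewise be made concrete. The sketch below is our illustration only (a real AGI's knowledge set is of course not a small Python list): given the notations an agent knows to be notations, it returns the least ordinal exceeding all of their ranks.

```python
def run(program: str) -> list:
    # toy interpreter from Section 2's examples
    return [] if program == "End" else [program[len("Print('"):-len("')")]]

def rank(program: str) -> int:
    # |p| of Definition 4, finite case
    outs = run(program)
    return 0 if not outs else 1 + max(rank(y) for y in outs)

def intuitive_ordinal_intelligence(known_notations: list) -> int:
    """Definition 6, finite case: the smallest ordinal |X| such that
    |X| > |p| for every notation p the agent knows to be a notation."""
    return 1 + max(map(rank, known_notations), default=-1)

assert intuitive_ordinal_intelligence([]) == 0
assert intuitive_ordinal_intelligence(["End", "Print('End')"]) == 2
```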
Theorem 7. Suppose X is a truthful AGI, and X creates a truthful AGI Y in such a way that X knows Y's code and truthfulness. Then |X| > |Y|.

Proof. Suppose Y were commanded to spend eternity enumerating the biggest Intuitive Ordinal Notations Y could think of. This would result in some list L of Intuitive Ordinal Notations enumerated by Y. Since Y is an AGI, L must be computable. Thus, there is some computer program P whose outputs are exactly L. Since X knows Y's code, and, as an AGI, X is capable of reasoning about code, it follows that X can infer a program P that lists L. Having constructed P this way, X knows: "P outputs L, the list of things Y would output if Y were commanded to spend eternity trying to enumerate large Intuitive Ordinal Notations". Since X knows Y is truthful, X knows that L contains nothing except Intuitive Ordinal Notations, thus X knows that P's outputs are Intuitive Ordinal Notations, and so X knows that P is an Intuitive Ordinal Notation. So |X| > |P|. But |P| is the least ordinal > |Q| for every Q in L; in other words, |P| = |Y|. ⊓⊔

Theorem 7 is mainly intended for the situation where parent X creates independent child Y, but can also be applied in case X self-modifies, viewing the original X as being replaced by the new self-modified Y (assuming X has prior knowledge of the code and truthfulness of the modified result).

It would be straightforward to extend Theorem 7 to cases where X creates Y non-deterministically. Suppose X creates Y using random numbers, such that X knows Y is one of Y_1, Y_2, ..., Y_k but X does not know which. If X knows that Y is truthful, then X must know that each Y_i is truthful (otherwise, if some Y_i were not truthful, X could not rule out that Y was that non-truthful Y_i). So by Theorem 7, each |Y_i| would be < |X|. Since Y is one of the Y_i, we would still have |Y| < |X|.
4 The Knight-Darwin Law for AGIs

"...it is a general law of nature that no organic being self-fertilises itself for a perpetuity of generations; but that a cross with another individual is occasionally—perhaps at long intervals of time—indispensable." (Charles Darwin)

In his Origin of Species, Darwin devotes many pages to the above-quoted principle, later called the Knight-Darwin Law [8]. In [1] we translate the Knight-Darwin Law into mathematical language.

(A note on the proof of Theorem 7: for example, X could write a general program Sim(c) that simulates an input AGI c waking up in an empty room and being commanded to spend eternity enumerating Intuitive Ordinal Notations. This program Sim(c) would then output whatever outputs AGI c outputs under those circumstances. Having written Sim(c), X could then obtain P by pasting Y's code into Sim: a string operation, not actually running Sim on Y's code. Nowhere in this process do we require X to actually execute Sim, which might be computationally infeasible.)
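The string-pasting step just described can be pictured in Python. Everything here is illustrative: the wrapper body and the simulate_enumeration harness are hypothetical placeholders of our own; the point is only that building P from Y's code is string manipulation, never execution.

```python
def build_P(y_source: str) -> str:
    """X's construction of P: paste Y's source code into a simulation
    wrapper. This is a pure string operation; neither Y's code nor the
    wrapper is ever executed here."""
    return (
        "def P():\n"
        "    # hypothetical harness: run the pasted agent in an empty room\n"
        "    # under the command to enumerate Intuitive Ordinal Notations\n"
        "    agent_source = " + repr(y_source) + "\n"
        "    return simulate_enumeration(agent_source)\n"
    )

p_source = build_P("def Y(): pass  # Y's code, known to X")
assert "def Y" in p_source   # Y's code appears verbatim inside P
```

Executing P might be infeasible for X, but constructing and reasoning about the string p_source is cheap, which is all the proof of Theorem 7 needs.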
Principle 8 (The Knight-Darwin Law). There cannot be an infinite sequence x_1, x_2, ... of organisms such that each x_i is the lone biological parent of x_{i+1}. If each x_i is a parent of x_{i+1}, then some x_{i+1} must have multiple parents.

A key fact about the ordinals is that they are well-founded: there is no infinite sequence o_1, o_2, ... of ordinals such that each o_i > o_{i+1}. In Theorem 7 we showed that if truthful AGI X creates truthful AGI Y in such a way as to know the truthfulness and code of Y, then X has a higher Intuitive Ordinal Intelligence than Y. Combining this with the well-foundedness of the ordinals yields a theorem extremely similar to the Knight-Darwin Law.

Theorem 9 (The Knight-Darwin Law for AGIs). There cannot be an infinite sequence X_1, X_2, ... of truthful AGIs such that each X_i creates X_{i+1} in such a way as to know X_{i+1}'s truthfulness and code. If each X_i creates X_{i+1} so as to know X_{i+1} is truthful, then occasionally certain X_{i+1}'s must be co-created by multiple creators (assuming that creation by a lone creator implies the lone creator would know X_{i+1}'s code).

Proof. By Theorem 7, the Intuitive Ordinal Intelligences of X_1, X_2, ... would form an infinite strictly-descending sequence of ordinals, violating the well-foundedness of the ordinals. ⊓⊔

It is perfectly consistent with Theorem 7 that Y might operate faster than X, performing better in realtime environments (as in [10]). It may even be that Y performs so much faster that it would be infeasible for X to use the knowledge of Y's code to simulate Y. Theorems 7 and 9 are profound because they suggest that descendants might initially appear more practical (faster, better at problem-solving, etc.), yet, without outside help, their knowledge must degenerate.
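Well-foundedness is the engine of Theorem 9's proof. In the finite fragment of our notation system every rank is a natural number, and the forced termination of a lone-parent chain can be watched directly. A sketch of ours, under the assumption (from Theorem 7) that each creation strictly lowers rank:

```python
def lineage(start_rank: int, child_rank) -> list:
    """Follow a chain of lone-parent creations, where child_rank maps a
    parent's rank to its child's strictly smaller rank. The chain is
    forced to stop once rank 0 is reached."""
    chain = [start_rank]
    while chain[-1] > 0:
        r = child_rank(chain[-1])
        assert 0 <= r < chain[-1], "Theorem 7: child ranks strictly lower"
        chain.append(r)
    return chain

# however children are chosen, a lineage starting at rank n
# contains at most n + 1 AGIs
assert len(lineage(10, lambda r: r - 1)) == 11
assert len(lineage(10, lambda r: r // 2)) <= 11
```

For genuinely transfinite ranks the same conclusion follows from the well-foundedness of the ordinals; no finite simulation captures that case, but the shape of the argument is identical.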
This parallels the hydra game of Kirby and Paris [16], where a hydra seems to grow as the player cuts off its heads, yet inevitably dies if the player keeps cutting.

If AGI Y has distinct parents X and X′, neither of which fully knows Y's code, then Theorem 7 does not apply to X, Y or X′, Y and does not force |Y| < |X| or |Y| < |X′|. This does not necessarily mean that |Y| can be arbitrarily large, though. If X and X′ were themselves created single-handedly by a lone parent X_0, similar reasoning to Theorem 7 would force |Y| < |X_0| (assuming X_0 could infer the code and truthfulness of Y from those of X and X′).

In the remainder of this section, we will non-rigorously speculate about three implications Theorem 9 might have for AGIs and for AGI research.

(Well-foundedness is essentially true by definition; unfortunately the formal definition of ordinal numbers is outside the scope of this paper.)

(This suggests possible generalizations of the Knight-Darwin Law such as "There cannot be an infinite sequence x_1, x_2, ... of biological organisms such that each x_i is the lone grandparent of x_{i+1}," and AGI versions of same. This also raises questions about the relationship between the set of AGIs initially created by humans and how intelligent the offspring of those initial AGIs can be. These questions go beyond the scope of this paper, but perhaps they could be a fruitful area for future research.)

If AGI ought to be capable of programming AGI, Theorem 9 suggests that a fundamental aspect of AGI should be the ability to collaborate with other AGIs in the creation of new AGIs. This seems to suggest there should be no such thing as a solipsistic
AGI, or at least, solipsistic AGIs would be limited in their reproduction ability. For, if an AGI were solipsistic, it seems like it would be difficult for this AGI to collaborate with other AGIs to create child AGIs. To quote Hernández-Orallo et al.: "The appearance of multi-agent systems is a sign that the future of machine intelligence will not be found in monolithic systems solving tasks without other agents to compete or collaborate with" [12]. More practically, Theorem 9 might suggest prioritizing research on multi-agent approaches to AGI, such as [6], [12], [14], [17], [19], [21], and similar work.

Darwin used the Knight-Darwin Law as a foundation for a broader thesis that the survival of a species depends on the inter-breeding of many members. By analogy, if our goal is to create robust AGIs, perhaps we should focus on creating a wide variety of AGIs, so that those AGIs can co-create more AGIs. On the other hand, if we want to reduce the danger of AGI getting out of control, perhaps we should limit
AGI variety. At the extreme end of the spectrum, if humankind were to limit itself to only creating one single AGI, then Theorem 9 would constrain the extent to which that AGI could reproduce.

If AGI collaboration is a fundamental requirement for AGI "populations" to propagate, it might someday be possible to view AGI through a genetic lens. For example, if AGIs X and X′ co-create child Y, and X runs operating system O while X′ runs operating system O′, perhaps Y will somehow exhibit traces of both O and O′.

5 Anticipated objections

In this section, we discuss some anticipated objections.
We do not claim that Definition 6 is the "one true measure" of intelligence. Maybe there is no such thing: maybe intelligence is inherently multi-dimensional.

(A solipsistic AGI: an AGI which believes itself to be the only entity in the universe.)

(Or to perfectly isolate different AGIs away from one another; see [25].)
Definition 6 measures a type of intelligence based on mathematical knowledge closed under logical deduction. An AGI could be good at problem-solving but poor at ordinals. But the broad AGIs we are talking about in this paper should be capable (if properly instructed) of attempting any reasonable well-defined task, including that of notating ordinals. So Definition 6 does measure one aspect of an AGI's abilities. Perhaps a word like "mathematical-knowledge-level" would fit better, but that would not change the Knight-Darwin Law implications.

Intelligence has core components like pattern-matching, creativity, and the ability to generalize. We claim that these components are needed if one wants to competitively name large ordinals. If p is an Intuitive Ordinal Notation obtained using certain facts and techniques, then any AGI who used those facts and techniques to construct p should also be able to iterate those same facts and techniques. Thus, to advance from p to a larger ordinal which not just any p-knowing AGI could obtain must require the creative invention of some new facts or techniques, and this invention requires some amount of creativity, pattern-matching, etc. This becomes clear if the reader tries to notate ordinals qualitatively larger than Example 3; see the more extensive examples in [3].

For analogy's sake, imagine a ladder which different AGIs can climb, and suppose advancing up the ladder requires exercising intelligence. One way to measure (or at least estimate) intelligence would be to measure how high an AGI can climb said ladder.

Not all ladders are equally good. A ladder would be particularly poor if it had a top rung which many AGIs could reach: for then it would fail to distinguish between AGIs who could reach that top rung, even if one AGI reaches it with ease and another with difficulty. Even if the ladder were infinite and had no top rung, it would still be suboptimal if there were AGIs capable of scaling the whole ladder (i.e., of ascending however high they like, on demand). A good ladder should have, for each particular AGI, a rung which that AGI cannot reach.

Definition 6 offers a good ladder. The rungs which an AGI manages to reach, we have argued, require core components of intelligence to reach. And no particular AGI can scale the whole ladder, because no AGI can enumerate all the Intuitive Ordinal Notations: it can be shown that they are not computably enumerable.

(Wang has correctly pointed out [23] that an AGI consists of much more than merely a knowledge-set of mathematical facts. Still, we feel mathematical knowledge is at least one important aspect of an AGI's intelligence.)

(Hibbard's intelligence measure [13] is an infinite ladder which is nevertheless short enough that many AGIs can scale the whole ladder: the AGIs which do not "have finite intelligence" in Hibbard's words (see Hibbard's Proposition 3). It should be possible to use a fast-growing hierarchy [24] to transfinitely extend Hibbard's ladder and reduce the set of whole-ladder-scalers. This would make Hibbard's measurement ordinal-valued (perhaps Hibbard intuited this; his abstract uses the word "ordinal" in its everyday sense as a synonym for "natural number").)

(Thus, this ladder avoids a common problem that arises when trying to measure machine intelligence using IQ tests, namely, that for any IQ test, an algorithm can be designed to dominate that test, despite being otherwise unintelligent [5].)

If a truthful AGI knows its own code, then it can certainly print a copy of itself. But if so, then it necessarily cannot know the truthfulness of that copy, lest it would know the truthfulness of itself.
Versions of Gödel's incompleteness theorems adapted [20] to mechanical knowing agents imply that a suitably idealized truthful AGI cannot know its own code and its own truthfulness.
The reader might object that Theorem 7 breaks down if Y is prohibitively expensive for X to simulate. But Theorem 7 and its proof have nothing to do with simulation. In functional languages like Haskell, functions can be manipulated, filtered, formally composed with other functions, and so on, without needing to be executed. Likewise, if X knows the code of Y, then X can manipulate and reason about that code without executing a single line of it.

6 Conclusion

The Intuitive Ordinal Intelligence of a truthful AGI is defined to be the supremum of the ordinals notated by programs the AGI knows to be Intuitive Ordinal Notations. We argued that this notion measures (a type of) intelligence. We proved that if a truthful AGI single-handedly creates a child truthful AGI, in such a way as to know the child's truthfulness and code, then the parent must have greater Intuitive Ordinal Intelligence than the child. This allowed us to establish a structural property for AGI populations, resembling the Knight-Darwin Law from biology. We speculated about implications of this biology-AGI parallel. We hope that by better understanding how AGIs create new AGIs, we can better understand methods of AGI-creation by humans.
Acknowledgments
We gratefully acknowledge Jordi Bieger, Thomas Forster, José Hernández-Orallo, Bill Hibbard, Mike Steel, Albert Visser, and the reviewers for discussion and feedback.

(Namely, because if the set of Intuitive Ordinal Notations were computably enumerable, the program p which enumerates them would itself be an Intuitive Ordinal Notation, which would force |p| > |p|.)

References
1. Alexander, S.A.: Infinite graphs in systematic biology, with an application to the species problem. Acta Biotheoretica, 181–201 (2013)
2. Alexander, S.A.: The theory of several knowing machines. Ph.D. thesis, The Ohio State University (2013)
3. Alexander, S.A.: Intuitive ordinal notations (IONs). GitHub repository, https://github.com/semitrivial/ions (2019)
4. Alexander, S.A.: Measuring the intelligence of an idealized mechanical knowing agent. In: CIFMA (2019)
5. Besold, T., Hernández-Orallo, J., Schmid, U.: Can machine intelligence be measured in the same way as human intelligence? KI-Künstliche Intelligenz, 291–297 (2015)
6. Castelfranchi, C.: Modelling social action for AI agents. AI, 157–182 (1998)
7. Chaitin, G.: Metaphysics, metamathematics and metabiology. In: Zenil, H. (ed.) Randomness through computation. World Scientific (2011)
8. Darwin, F.: The Knight-Darwin Law. Nature, 630–632 (1898)
9. Franz, A.: Toward tractable universal induction through recursive program learning. In: ICAGI. pp. 251–260 (2015)
10. Gavane, V.: A measure of real-time intelligence. JAGI, 31–48 (2013)
11. Goertzel, B.: Artificial general intelligence: concept, state of the art, and future prospects. JAGI, 1–48 (2014)
12. Hernández-Orallo, J., Dowe, D.L., España-Cubillo, S., Hernández-Lloreda, M.V., Insa-Cabrera, J.: On more realistic environment distributions for defining, evaluating and developing intelligence. In: ICAGI. pp. 82–91 (2011)
13. Hibbard, B.: Measuring agent intelligence via hierarchies of environments. In: ICAGI. pp. 303–308 (2011)
14. Hibbard, B.: Societies of intelligent agents. In: ICAGI. pp. 286–290 (2011)
15. Hutter, M.: Can intelligence explode? JCS, 143–166 (2012)
16. Kirby, L., Paris, J.: Accessible independence results for Peano arithmetic. Bulletin of the London Mathematical Society, 285–293 (1982)
17. Kolonin, A., Goertzel, B., Duong, D., Ikle, M.: A reputation system for artificial societies. arXiv preprint arXiv:1806.07342 (2018)
18. Kripke, S.A.: Ungroundedness in Tarskian languages. JPL, 603–609 (2019)
19. Potyka, N., Acar, E., Thimm, M., Stuckenschmidt, H.: Group decision making via probabilistic belief merging. In: 25th IJCAI. AAAI Press (2016)
20. Reinhardt, W.N.: Absolute versions of incompleteness theorems. Nous, 317–346 (1985)
21. Thórisson, K.R., Benko, H., Abramov, D., Arnold, A., Maskey, S., Vaseekaran, A.: Constructionist design methodology for interactive intelligences. AI Magazine, 77–90 (2004)
22. Visser, A.: Semantics and the liar paradox. In: Handbook of philosophical logic, pp. 149–240. Springer (2002)
23. Wang, P.: Three fundamental misconceptions of artificial intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 249–268 (2007)
24. Weiermann, A.: Slow versus fast growing. Synthese, 13–29 (2002)
25. Yampolskiy, R.V.: Leakproofing the singularity: artificial intelligence confinement problem. JCS 19 (2012)