A Popperian Falsification of Artificial Intelligence -- Lighthill Defended
Steven Meyer [email protected]
April 17, 2018
Abstract
The area of computation called artificial intelligence (AI) is falsified by describing a previous 1972 falsification of AI by British mathematical physicist James Lighthill. How Lighthill's arguments continue to apply to current AI is explained. It is argued that AI should use the Popperian scientific method, in which it is the duty of scientists to attempt to falsify theories and, if theories are falsified, to replace or modify them. The paper describes the Popperian method in detail and discusses Paul Nurse's application of the method to cell biology, which also involves questions of mechanism and behavior. Arguments used by Lighthill in his original 1972 report that falsified AI are discussed. The argument uses recent scholarship to explain Lighthill's assumptions and to show how the arguments based on those assumptions continue to falsify modern AI. An important focus of the argument involves Hilbert's philosophical programme, which defined knowledge and truth as provable formal sentences. Current AI takes the Hilbert programme as dogma beyond criticism, while Lighthill, as a mid-20th-century mathematical physicist, had abandoned it. The paper explains John von Neumann's criticism of AI that I claim was assumed by Lighthill. Next, computer chess programs are discussed to show that Lighthill's combinatorial explosion still applies to AI computer programs but not to humans. An argument showing that Turing Machines (TM) are not the correct description of computation is given. The paper concludes by advocating studying computation as Peter Naur's Dataology.
1. Introduction
This paper applies the method of falsification discovered by Karl Popper to show that artificial intelligence (AI) programs are not intelligent and are in fact just normal computer programs in which programmers express their ideas by writing computer code. AI is meaningless metaphysics in the Popperian sense of metaphysics, based on a number of incorrect assumptions and dogmas, and was falsified by James Lighthill in his evaluation of AI for the British science funding agency (Lighthill[1972]). This paper defends Lighthill's 20th century falsification of AI and explains how it applies to current AI.

Material is presented that the author developed from being encouraged to criticize AI as a 1960s Stanford University undergraduate and from a talk given to Paul Feyerabend's philosophy of science seminar while the author was a computer science student (CS in the Letters and Science school) at UC Berkeley. In order to understand why Lighthill's criticism falsifies the AI research programme and why his arguments still apply to AI now, in the second decade of the 21st century, in spite of vast improvements in computer speed and capacity, it is necessary to understand the development of modern computers, primarily by physicists, after WWII. The paper uses recent historical scholarship to explain Lighthill's background assumptions and shows how that background knowledge also falsifies current AI.

It was not just Lighthill who was skeptical of AI. Physicists in general are critics of AI. See for example Roger Penrose's two books that show the impossibility of artificial minds (Penrose[1994] and Penrose[2016]). There is a file in David Bohm's archive at Birkbeck College in London that appears to be Bohm planning to write a paper criticizing AI that I believe was never written.

The current lack of criticism of the AI research programme may be related to a historical accident of academic organization at Stanford University in the middle of the 20th century. The accident was that for some reason Stanford decided not to offer academic appointments to SLAC physicists who had held academic appointments at the institutions they came from. Assistant SLAC director Matthew Sands discusses this in his American Physical Society (APS) interview (Sands[1987], p. 192). Administrator Albert Bowker, who was responsible for starting Stanford's computer science as an academic discipline, also discusses the problem with SLAC appointments (Bowker[1979], p. 6). Physicists encouraged the study of computation. For example, Niklaus Wirth developed his various computer languages while working at and being funded by the Stanford Linear Accelerator Center (SLAC).

The effect of having few physicists with academic appointments, together with what I claim was the incorrect understanding of the questionable intellectual standing of AI by William Miller, who was responsible for Stanford computer science after Bowker had departed from Stanford, was that Stanford computer science became the Stanford AI Lab. For example, assistant CS professor Jeffrey Barth was fired in 1977 because he refused to work at the Stanford AI Lab. Miller explained the idea for Stanford computer science this way: "I think we tended to focus on fairly rigorous problems that could be recognized as rigorous problems. We followed the paradigms of more rigorous disciplines and established it as a science as opposed to an engineering or applied discipline" (Miller[1979], p. 11). Unfortunately, the Stanford organization resulted in almost total suppression of criticism of AI.
2. What is Popperian falsification?
Falsification is a method discovered by Karl Popper which argues that general statements do not have scientific merit on their own. Only singular statements, which Popper calls basic statements and which have a simple structure, have meaning. Such statements can be disproven either by scientific experiments or by logic (Popper[1968], p. 74). Popper's major contribution to the philosophy of science is to insist that it is the duty of every scientist to criticize one's own theories to the fullest extent possible so that false theories can be modified or replaced. Popperians believe scientific method consists of numerous bold conjectures that are then tested and, if falsified, eliminated or modified. Popper's method calls for bold conjecture followed by stringent criticism.

Popper's original falsification theory, developed in the late 1920s and early 1930s, is called naive falsification (Lakatos[1999], pp. 64-85). The theory was improved and generalized by Popper and his colleagues during most of the 20th century. I am using the term Popperian philosophy in a sense that includes the modifications and improvements to Popper's theory mostly carried out at the London School of Economics, not just by Popper but also by Imre Lakatos, Paul Feyerabend and Thomas Kuhn. The other aspects of Popperian methodology are most clearly expressed by Imre Lakatos as the Methodology of Scientific Research Programmes (MSRP) (Lakatos[1970]). There were disagreements among the Popperians about questions of emphasis but not about methodology or the importance of rationality in science. James Lighthill, as holder of the Lucasian chair of mathematics at Cambridge University, was familiar with and part of the milieu that developed Popperian theory.

It is important to understand that falsification needs to be a necessary condition for scientific research. It is not sufficient, because there are situations in which falsified theories need to be kept, for example because there is no acceptable alternative. Lakatos calls this research programme competition (Lakatos[1970]).

Falsification as a theory in the philosophy of science is usually discussed in terms of physics because its developers were trained as physicists. Physics is possibly not a good fit for the study of AI methodology because there is no mechanism or functional explanation involved in attempting to understand physical reality (describing fields or particle interactions, for example). The connection to cell biology, which attempts to understand and utilize the mechanisms of cell behavior, is closer. Paul Nurse in his 2016 Popper Memorial Lecture discusses the importance of bold conjectures and diligent attempts to eliminate incorrect theory by falsification (Nurse[2016]). Nurse also discussed data analysis in cell biology. For readers unfamiliar with Popperian falsification, the Nurse lecture provides an excellent introduction.

Falsification of AI is important because it is claimed that computational intelligence is now so successful that discussions of ethical issues involving how inferior humans will deal with the superior intellect of AI robots are required. The author believes the primary obligation of scientists is to eliminate incorrect theories.
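The logical asymmetry behind the method can be summarized in one schematic (my sketch, not Popper's own formulation; R and B are placeholder predicates):

    \forall x\, (R(x) \rightarrow B(x))   % universal conjecture, e.g. "all ravens are black"
    R(a) \wedge \neg B(a)                 % a single accepted basic statement that refutes it

No finite collection of confirming observations verifies the universal conjecture, but one accepted basic statement of the second form refutes it, which is why Popper makes the search for such statements the scientist's duty.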
3. Lighthill’s falsification of AI
Lighthill's falsification of AI is quite simple (Lighthill[1972]) and, I claim, continues to apply to AI in spite of changes, mostly vastly faster computers that execute machine instructions in parallel and new names for algorithms, such as "deep learning", which replaces alpha-beta heuristics used to improve logic resolution algorithms as the implementation of intelligence. Lighthill argues AI is just CS described using the language of human intelligence, and views computers and computation as tools for expressing people's ideas.

Lighthill divides AI into three areas. Category A: Automation (feedback control engineering); Category C: computer based studies of the central nervous system; and Category B: the bridge area between A and C that is supposedly going to provide the magic synergy that allows creation of intelligent robots (p. 3). For example, current deep learning would fall into areas A and B. It falls into category B because it involves automatic logical deduction without any need for a person to program ideas into the algorithm, but it is also in category A because it "looks beyond conventional data processing to the problems involved in large-scale data banking and retrieval" (p. 5). I think Lighthill is arguing here that AI studies normal computer science but rephrases problems in terms of human attributes (p. 7 paragraph 2).

According to Lighthill, for control engineering it should not matter how the engineering is accomplished. Lighthill writes in the section discussing category A: "Nevertheless it (AI) must be looked at as a natural extension of previous work of automation of human activities, and be judged by essentially the same criteria" (p. 4 paragraph 4). After more than 40 years of computer development, programmable digital computers are usually the best choice for control engineering. In modern terms, current feedback control engineering is based on improvements in camera technology allowing more precise location measurements and more complex feedback. Advances and cost reductions in computer and storage technology allow large amounts of data to be processed faster and at lower cost.

In criticizing AI's approach to area C (since obviously it makes sense to study neurophysiology), Lighthill distinguishes the syntactic automation currently advocated by AI from conceptual automation (p. 6). He asks whether "a device that mimics some human function somehow assists in studying and making a theory of the function of the central nervous system" (p. 6 paragraph 4).

Lighthill criticizes the use of mathematical logic in AI by arguing that practical use runs into a combinatorial explosion (p. 10 paragraph 5) and argues there are difficulties in storing axioms favored by logicians versus heuristic knowledge favored by AI (p. 10, paragraph 6). In my view this is the crucial falsifier of AI. Namely, although Lighthill was attempting to provide a neutral assessment of AI, he did not believe in the Hilbert Programme that is the central tenet of AI. Lighthill also discusses organizational problems with AI methodology. He questions claims such as "robots better than humans by 2000" (p. 13) (now presumably replaced with 2030). Lighthill, as a mathematical physicist, also discusses the combinatorial explosion and argues that humans can solve problems that cannot be solved by formal algorithms.
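To make the scale of the combinatorial explosion concrete, the following sketch counts the leaf positions an exhaustive game-tree search must examine as the search depth grows. It is my illustration, not Lighthill's; the branching factor of roughly 35 legal moves per chess position is only a commonly quoted estimate, and the names are mine.

    # Rough illustration of combinatorial explosion in exhaustive game-tree search.
    # Assumes an average branching factor of about 35 legal moves per chess position
    # (a commonly quoted estimate); the exact value does not matter for the point.

    BRANCHING_FACTOR = 35

    def positions_at_depth(depth_in_plies: int) -> int:
        """Leaf positions an exhaustive search must examine at the given depth."""
        return BRANCHING_FACTOR ** depth_in_plies

    if __name__ == "__main__":
        for plies in (2, 4, 8, 16):
            print(f"{plies:2d} plies: about {float(positions_at_depth(plies)):.1e} positions")

Heuristics such as alpha-beta pruning reduce the effective branching factor but do not change the exponential form of the growth, which is the point of Lighthill's argument.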
In 1972 Lighthill falsified AI by showing its individual claims were false and by arguing there was no unified subject but rather just normal problems in the area of computation involving computer applications and the study of data. AI researchers were not convinced at the time, I think, because Lighthill did not make his Popperian view of science clear. The remainder of this paper discusses how 1970s scientific background knowledge, especially in the physics and applied mathematics areas, falsifies current AI methods. The discussion is possible because of recent scholarship, especially in the areas of Hilbert's philosophical programme and the study of John von Neumann's thinking during the development of digital computers.
4. Skepticism toward Hilbert’s programme of truth as formal proof
In the 1920s, mathematician David Hilbert conjectured that knowledge and truth consist solely of all sentences that can be proven from axioms. Hilbert's original conjecture was a mathematical problem. However, it was interpreted as a philosophical theory in which truth became formal proof from axioms. A paradigmatic example is the Birkhoff and von Neumann formalization of quantum mechanics as axiomatized logic (Birkhoff[1936]; Popper[1968] attempted to falsify it). Hilbert's programme as the basic assumption of AI is that knowledge about the world can be expressed as formal sentences. Knowledge is then expressed as formulas that can be derived using logic (usually predicate calculus) from other sentences about the world that are true.

In addition to the belief that knowledge is formal sentences, the foundation of AI is the belief that the Church-Turing Thesis (Copeland[2015]) is true, namely that nothing can exist outside of formal sentences proven from axioms. In the AI community this dogma is beyond criticism. However, the philosophical Hilbert programme was abandoned starting in the 1930s for various reasons. The reason most often given is that Godel's incompleteness results showed the Hilbert programme could not succeed. The Hilbert programme is still believed in the logic area, and AI seems to be grasping at straws by attempting to mitigate the Godel disproof by finding areas where, in practice, Godel's results do not apply. The Zach[2015] Stanford Encyclopedia of Philosophy article discusses some attempts to mitigate Godel's results. See Detlefsen[2017] for a more skeptical view of Hilbert's programme.

There were a number of other reasons Hilbert's philosophical programme was rejected. These other reasons explain what is wrong with the AI argument that, since people have intelligence, computer programs can also have intelligence. In the view of AI, the problem is just building faster computers and developing better algorithms so that computers can discover and learn the formal sentences in people's heads. In fact the other reasons the Hilbert programme was abandoned show why Lighthill's falsification is correct and why AI is meaningless metaphysics.
During the second half of the 20th century, John von Neumann's work on computers and computation was widely accepted. Publication of von Neumann's work on computing did not occur until years after Lighthill's falsification was written (in particular Aspray[1990], Neumann[2005] and Kohler[2001]). Lighthill was certainly familiar with von Neumann's work. John von Neumann studied automata and neural networks when he was developing his von Neumann computer architecture. Von Neumann combined all his skepticism toward linguistics and automata as sources of AI algorithms in discussing problems with formal neural networks when he wrote:
The insight that a formal neuron network can do anything which you can describe in words is a very important insight and simplifies matters enormously at low complication levels. It is by no means certain that it is a simplification on high complication levels. It is perfectly possible that on high complication levels the value of the theorem is in the reverse direction, namely, that you can express logics in terms of these efforts and the converse may not be true (Von Neumann[1966], quoted in Aspray[1990], note 94, p. 321).
Von Neumann also considered and rejected current AI methodology when he developed the von Neumann computer architecture. In a 1946 paper with Herman Goldstine on the design of a digital computer, von Neumann wrote that some sort of intuition had to be built into programs instead of using brute force searching (Aspray[1990], p. 62). Edward Kohler (Kohler[2001], p. 118) describes von Neumann's discovery in developing modern computer architecture in an article "Why von Neumann Rejected Carnap's Duality of Information Concepts" as:
Most readers are tempted to regard the claim as trivial that automata can simulate arbitrarily complex behavior, assuming it is described exactly enough. But in fact, describing behavior exactly in the first place constitutes genuine scientific creativity. It is just such a prima facie superficial task which von Neumann achieved in his [1945] famous explication of the "von Neumann machine" regarded as the standard architecture for most post World-War-II computers.
The problem context in solution-space searching that influenced both von Neumann and Lighthill was pre-computer algorithmic operations research (see Budiansky[2013] for the detailed story). Understanding the limitations imposed by combinatorial explosion arises naturally from that experience.
Starting with Ludwig Wittgenstein in the late 1930s, skepticism toward linguistics and especially formal languages became prevalent. Wittgenstein's claim was that mathematical (and other) language was nothing more than pointing (Wittgenstein[1939]). The Popperians and English science in general were receptive to Wittgenstein and his "pointing" philosophy of mathematics. Popperians avoided linguistic philosophy because they viewed it as creating more problems than it solved. I read Lighthill's falsification as assuming this attitude toward language. Modern AI still claims knowledge and truth are limited to provable formal sentences.
5. Physicist skepticism towards mathematics as axiomatized logic
In my view there was a more important reason for the rejection of Hilbert's programme. Physicists were always skeptical toward axiomatized mathematics. Albert Einstein in his 1921 lecture on geometry expressed this skepticism. Einstein believed that formal mathematics was incomplete and disconnected from physical reality. Einstein stated:
This view of axioms, advocated by modern axiomatics, purges mathematics of all extraneous elements. ... such an expurgated exposition of mathematics makes it also evident that mathematics as such cannot predicate anything about objects of our intuition or real objects (Einstein[1921]).
Niels Bohr argued that first comes the conceptual theory, then the calculation. John von Neumann expressed the physicist attitude with a story relating a conversation with Wolfgang Pauli, a founder of quantum physics: "If a mathematical proof is what matters in physics, you would be a great physicist" (Thirring[2001], p. 5).
6. Finsler’s rejection of axiomatics and general 1926 inconsistency result
In addition to skepticism toward axiomatics, there was also skepticism toward set theory and its core claim that only sentences that are derivable from axioms (Zermelo-Fraenkel, presumably) can exist. Swiss mathematician Paul Finsler believed that mathematics exists outside of language (formal sentences). Finsler claimed to have shown incompleteness in formal systems before Godel, in 1925, and that his proof was superior because it was not tied to Russell's logic as Godel's was. See "A Restoration that Failed: Paul Finsler's Theory of Sets" in Breger[1996], p. 257 for discussion of Finsler's result on undecidability and formal proofs and its history (see also Finsler[1996] and Finsler[1969]).
7. Chess - elite human players' response to chess programs
Superiority of chess programs over even the best human chess players is cited as evidence that in the future AI robots will be superior in all areas involving intelligence. In fact the situation is more complicated. The world's best chess players are responding in interesting ways. This corroborates Lighthill's claim that even in a toy world based on formal sentences, combinatorial explosion limits the problem solving ability of algorithms. Study of chess playing programs and evaluation of their efficacy show the problems with recent claims of AI successes in general.

In 1997, the Deep Blue chess program defeated then world champion Garry Kasparov. Since then the world's best chess players have adjusted to computer chess programs. In the December 31 Financial Times newspaper chess column, Leonard Barton, referring to US champion Fabiano Caruana, writes: "The US champion and world No. 2 unleashed a brilliant opening novelty, which incidentally showed the limitations of the most powerful computers" (Barton[2016]).

In the October 14, 2017 Financial Times weekend edition, Barton discusses newer responses of the best chess players to computer chess programs. The best human chess players are changing to seemingly inferior openings such as Magnus Carlsen's a3 (the left rook pawn advances one square) because "Grandmasters are turning to the byways of opening theory as powerful programs analyze main lines to a depth unimaginable before the age of computers." Computer chess program "intelligence" is not way beyond human skill; rather, computers are a tool that allows a large improvement in the human ability to analyze chess moves. This is similar to microscopes as tools that allow understanding biological cells in previously impossible ways. Also, US champion Fabiano Caruana is still at the forefront of using computers as a tool to analyze positions. He found a variation on a well established opening: "Caruana found a nuance at move 19 which was so strong that he had a won game while still in his prep." Barton sums up the reaction to computers as "Carlsen's message is clear. Offbeat openings can save a lot of wasted preparation."

It has taken two decades, and Caruana was only five years old when Kasparov lost to Deep Blue, but it appears computer algorithms will encounter combinatorial explosion problems so that more and more of the best players will be able to defeat computers.

In May 2017, Garry Kasparov published a book on his 1997 match against Deep Blue, "Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins" (Kasparov[2017]). Kasparov blames psychological harassment by the Deep Blue team for his loss. In Kasparov's view, it was a group of chess masters with access to a very good chess position analysis tool versus Kasparov alone. This illustrates a common pattern in computer problem solving: injection of human knowledge by means of writing computer code is an integral part of machine learning.

Possibly more interesting is how the claims show problems with AI scientific methodology and emphasize the lack of diligent attempts to falsify AI theory. First, the financial incentive structure of the challenge meant that Kasparov made more money by losing than by winning. From Kasparov's viewpoint, he could win and go back to collecting meager chess tournament prize money, or lose and collect a large appearance fee plus numerous other appearance fees as a marketing representative. Many AI claims of success involving human competition with computers follow this pattern. At a minimum, AI tests of this type need to use double blind protocols. A better method for determining whether computers can defeat the best human players would be to use double blind tournaments where opponents may be humans or computers and participants and officials are not allowed to know who is who. Even better would be a system where chess players' natural competitiveness was utilized so that losing to a lower rated human player would result in a large deduction of rating points.

Finally, progress in chess playing computer programs shows that chess programs are normal data processing applications in the Lighthill sense, in which human knowledge of chess can be expressed and amplified by injecting it into computers by writing programs.
8. Turing Machine incorrect model for computation
The central argument for AI is based on the Church-Turing thesis, namely that Turing machines (TMs) are universal and anything that involves intelligence can be calculated by TMs. Applying Lighthill's combinatorial explosion arguments, it seems to me that TMs are the wrong model of computation. Instead, a different computational model called MRAMs (random access machines with unit multiply and a bounded number of unbounded size memory cells) is a better model of computation (Meyer[2016]). Von Neumann understood the need for random access memory in his design of the von Neumann architecture (ibid., pp. 5-6). For MRAM machines, deterministic and non-deterministic computations are both solvable in polynomially bounded time, so at least for some problems in the class NP the combinatorial explosion is mitigated. TMs are universal in the sense that they can compute anything that a von Neumann MRAM machine can calculate. The problem is that Lighthill's combinatorial explosion is much worse for TMs than for MRAMs because for TMs, problems in NP are probably not computable in polynomially bounded time, while for MRAMs P=NP, so there is no advantage to guessing or using heuristics. I claim von Neumann understood that the random access and bit select capability that is missing from TMs allows more problems to avoid the combinatorial explosion.

A problem such as asking whether two regular expressions are equivalent is outside the class NP, so the calculation is inherently exponential, implying that for algorithms there is no solution to the combinatorial explosion problem. This suggests that algorithms should be studied as normal data processing, because AI's assumption that heuristics and guessing will somehow improve algorithms is problematic.
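A minimal sketch of why the unit-cost multiply changes the accounting, assuming only the standard observation behind unit-cost RAM models and not reproducing the construction in Meyer[2016] (the function name and constants are mine): repeated squaring lets k unit-cost multiplications build operands of roughly 2^k bits, which a Turing machine would need exponentially many steps merely to write down.

    # Illustration of the unit-cost multiply assumption in the MRAM model:
    # k unit-cost squarings produce an operand whose bit length grows like 2**k,
    # so a Turing machine needs exponentially many steps just to write it down.

    def bits_after_squarings(k: int) -> int:
        """Bit length of the operand reached after k unit-cost squarings of 2."""
        x = 2
        for _ in range(k):
            x = x * x  # counted as a single step under the unit-cost assumption
        return x.bit_length()

    if __name__ == "__main__":
        for k in (5, 10, 20):
            print(f"{k:2d} unit-cost squarings -> operand of {bits_after_squarings(k):,} bits")

This accounting difference is what the cited MRAM results exploit; it does not by itself settle how physical computers should be modeled.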
9. Conclusion - suggestion to replace AI with Naur’s Dataology
A problem with this paper is that people trained, primarily by physicists, to perform advanced computational research before the 1970s cannot imagine AI as having any content, while people trained after CS became formalized as object oriented programming, computer programs verified by correctness proofs, and axiomatized proofs of algorithm efficiency cannot imagine anything but computation as formalized logic. Computation researchers trained after the 1970s are unable to imagine alternatives to the AI dogmas. My suggestion is to adopt the ideas of the Danish computer scientist Peter Naur, who was trained as an astronomer. Naur argued that computation should be studied as Dataology, a theory neutral term for the study of data. Naur wrote that "mental life during the twentieth century has become entirely misguided into an ideological position such that only discussions that adopt the computer inspired form" are accepted (Naur[2007], p. 87).

In the 1990s, Peter Naur, one of the founders of computer science, realized that CS had become too much formal mathematics separated from reality. Naur advocated the importance of programmer specific program development that does not use preconceptions. I would put it as: computation allows people to express their ideas by writing computer programs.

The clearest explanation of Naur's method appears in the book Conversations - Pluralism in Software Engineering (Naur[2011]). This book amplifies the program development method Naur described in his 2005 Turing Award lecture (Naur[2007]). In Naur[2011], page 30, the interviewer asks "... you basically say that there are no foundations, there is no such thing as computer science, and we must not formalize for the sake of formalization alone." Naur answers, "I am not sure I see it this way. I see these techniques as tools which are applicable in some cases, but which definitely are not basic in any sense." Naur continues (p. 44): "The programmer has to realize what these alternatives are and then choose the one that suits his understanding best. This has nothing to do with formal proofs." Dataology without preconceptions and without predictions of the imminent replacement of human intelligence by robots would improve the scientific study of computation. The next step for advocates of AI would be to try to falsify Naur's Dataology.
10. References
Aspray[1990] Aspray, W. John von Neumann and The Origins of Modern Computing. MIT Press, 1990.
Barton[2016] Barton, L. Chess column, Financial Times, games page, weekend life and style section, Dec. 30, 2016 and Oct. 14, 2017 editions.
Birkhoff[1936] Birkhoff, G. and von Neumann, J. The Logic of Quantum Mechanics. Annals of Mathematics 37, no. 4 (1936), 823-843.
Breger[1996] Breger, H. A Restoration that Failed: Paul Finsler's Theory of Sets. In Revolutions in Mathematics. Oxford, 1992, 249-264.
Budiansky[2013] Budiansky, S. Blackett's War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare. Knopf, 2013.
Copeland[2015] Copeland, J. The Church-Turing Thesis. The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), plato.stanford.edu/archives/sum2015/entries/church-turing
Detlefsen[2017] Detlefsen, M. Hilbert's Programme and Formalism. Routledge Encyclopedia of Philosophy. March 2018 URL: formalism/v-1
Einstein[1921] Einstein, A. Geometry and Experience. Lecture before the Prussian Academy of Sciences, Berlin, January 27, 1921. URL accessed March 2018.
Finsler[1969] Finsler, P. Ueber die Unabhaengigkeit der Continuumshypothese. Dialectica 23, 1969, 67-78.
Finsler[1996] Finsler, P. Finsler Set Theory: Platonism and Circularity. D. Booth and R. Ziegler eds. Birkhauser, 1996.
Kasparov[2017] Kasparov, G. Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. Public Affairs, 2017.
Kohler[2001] Kohler, E. Why von Neumann Rejected Carnap's Duality of Information Concepts. In Redei, M. and Stoltzner, M. eds. John von Neumann and the Foundations of Quantum Physics. Vienna Circle Institute Yearbook 8, Kluwer, 2001, 97-134.
Lakatos[1970] Lakatos, I. Falsification and the Methodology of Scientific Research Programmes. In I. Lakatos and A. Musgrave eds. Criticism and the Growth of Knowledge. Cambridge University Press, 1970, 91-196.
Lakatos[1999] Lakatos, I. and Feyerabend, P. For and Against Method. M. Motterlini ed. University of Chicago Press, 1999.
Lighthill[1972] Lighthill, J. "Artificial Intelligence: A General Survey." In Artificial Intelligence: A Paper Symposium. UK Science Research Council, 1973. URL accessed March 2018.
Meyer[2013] Meyer, S. Adding Methodological Testing to Naur's Anti-formalism. IACAP 2013 Proceedings, College Park, Maryland, 2013.
Meyer[2016] Meyer, S. Philosophical Solution to P=?NP: P is Equal to NP. arXiv:1603.06018, 2016.
Naur[1995] Naur, P. Knowing and the Mystique of Logic and Rules. Kluwer Academic, 1995.
Naur[2005] Naur, P. "Computing as Science." In An Anatomy of Human Mental Life. naur.com Publishing, Appendix 2, 208-217, 2005. URL accessed March 2018.
Naur[2007] Naur, P. Computing versus Human Thinking. Comm. ACM 50(1), 2007, 85-94.
Naur[2011] Naur, P. Conversations - Pluralism in Software Engineering. E. Daylight ed. Belgium: Lonely Scholar Publishing, 2011.
Neumann[2005] von Neumann, J. Redei, M. ed. John von Neumann: Selected Letters. History of Mathematics Series, Vol. 27, American Mathematical Society, 2005.
Nurse[2016] Nurse, P. "How Philosophy Drives Discovery: A Scientist's View of Popper." 2016 Popper Memorial Lecture, London School of Economics Podcast, 2016. URL March 2018: http://richmedia.lse.ac.uk/publiclecturesandevents/20160928_1830_howPhilosophyDrivesDiscovery.mp3
Penrose[1994] Penrose, R. Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford University Press, 1994.
Penrose[2016] Penrose, R. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press, revised edition, 2016.
Popper[1934] Popper, K. The Logic of Scientific Discovery. Harper & Row, 1968 (original in German, 1934).
Popper[1968] Popper, K. Birkhoff and Von Neumann's Interpretation of Quantum Mechanics. Nature 219 (1968).
Thirring[2001] Thirring, W. In Redei, M. and Stoltzner, M. eds. John von Neumann and the Foundations of Quantum Physics. Vienna Circle Institute Yearbook 8, Kluwer, 2001, 5-10.
Wittgenstein[1939] Wittgenstein, L. Wittgenstein's Lectures on the Foundations of Mathematics, Cambridge 1939. C. Diamond ed. University of Chicago Press.
Zach[2015] Zach, R. Hilbert's Program. The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), 2016. URL accessed March 2018.