Huma Shah
Coventry University
Publications
Featured research published by Huma Shah.
Kybernetes | 2010
Huma Shah; Kevin Warwick
The purpose of this paper is to consider Turing's two tests for machine intelligence: the parallel-paired, three-participant game presented in his 1950 paper, and the "jury-service" one-to-one measure described two years later in a radio broadcast. Both versions were instantiated in practical Turing tests during the 18th Loebner Prize for artificial intelligence, hosted at the University of Reading, UK, in October 2008, with jury-service tests in the preliminary phase and parallel-paired tests in the final phase. Almost 100 test results from the final have been evaluated, and this paper reports some intriguing nuances which arose as a result of the unique contest. In the 2008 competition, Turing's 30 per cent pass rate was not achieved by any machine in the parallel-paired tests, but Turing's modified prediction, "at least in a hundred years' time", is remembered. The paper presents actual responses from "modern Elizas" to human interrogators during contest dialogues that show considerable improvement in artificial conversational entities (ACE). Unlike their ancestor, Weizenbaum's natural language understanding system, ACE are now able to recall and share information and to disclose personal interests.
IEEE Transactions on Computational Intelligence and AI in Games | 2014
Kevin Warwick; Huma Shah
In this paper, we consider transcripts which originated from a practical series of Turing's Imitation Game that was held on June 23, 2012, at Bletchley Park, U.K. In some cases, the tests involved a three-participant simultaneous comparison of two hidden entities, whereas others were the result of a direct two-participant interaction. Each of the transcripts considered here resulted in a human interrogator being fooled, by a machine, into concluding that they had been conversing with a human. Particular features of the conversation are highlighted, successful ploys on the part of each machine are discussed, and likely reasons for the interrogator being fooled are considered. Subsequent feedback from the interrogators involved is also included.
Minds and Machines | 2013
Kevin Warwick; Huma Shah; James H. Moor
A series of imitation games involving 3-participant tests (simultaneous comparison of two hidden entities) and 2-participant tests (direct interrogation of a hidden entity) was conducted at Bletchley Park on the 100th anniversary of Alan Turing's birth: 23 June 2012. From the ongoing analysis of over 150 games involving judges (expert and non-expert, male and female, adult and child), machines and hidden humans (foils for the machines), we present six particular conversations that took place between human judges and a hidden entity and that produced unexpected results. From this sample we focus on a feature of Turing's machine intelligence test that the mathematician and codebreaker did not consider in his examination of machine thinking: the subjective nature of attributing intelligence to another mind.
Journal of Experimental and Theoretical Artificial Intelligence | 2015
Kevin Warwick; Huma Shah
This paper presents some important issues concerning the misidentification of human interlocutors in text-based communication during practical Turing tests. The study presents transcripts in which human judges succumbed to the confederate effect, misidentifying hidden human foils as machines. An attempt is made to assess the reasons for this. The practical Turing tests in question were held on 23 June 2012 at Bletchley Park, England. A selection of actual full transcripts from the tests is shown and an analysis is given in each case. As a result of these tests, conclusions are drawn with regard to the sorts of strategies which can lead an interrogator to erroneous conclusions. Such results also serve to indicate conversational directions to avoid for those machine designers who wish to create a conversational entity that performs well on the Turing test.
AI & Society | 2016
Kevin Warwick; Huma Shah
Interpretation of utterances affects an interrogator's determination of human from machine during live Turing tests. Here, we consider transcripts realised as a result of a series of practical Turing tests held on 23 June 2012 at Bletchley Park, England. The focus of this paper is the effect that lying and truth-telling by the hidden entities, whether human or machine, have on the human judges. Turing test transcripts provide a glimpse into short text communication, the type that occurs in emails: how does the reader determine truth from the content of a stranger's textual message? Different types of lying in the conversations are explored, and the judge's attribution of human or machine is investigated in each test.
Journal of Experimental and Theoretical Artificial Intelligence | 2016
Kevin Warwick; Huma Shah
In this article we consider transcripts that originated from a practical series of Turing's Imitation Game held on 6 and 7 June 2014 at the Royal Society, London. In all cases the tests involved a three-participant simultaneous comparison by an interrogator of two hidden entities, one being a human and the other a machine. Each of the transcripts considered here resulted in a human interrogator being fooled such that they could not make the 'right identification', that is, they could not say for certain which was the machine and which was the human. The transcripts presented all involve one machine only, namely 'Eugene Goostman', the result being that the machine became the first to pass the Turing test, as set out by Alan Turing, on unrestricted conversation. This is the first time that results from the Royal Society tests have been disclosed and discussed in a paper.
AI Communications | 2014
Kevin Warwick; Huma Shah
Whilst common-sense knowledge has been well researched in terms of intelligence, and in particular artificial intelligence, specific, factual knowledge also plays a critical part in practice. When it comes to testing for intelligence, testing for factual knowledge is, in everyday life, frequently used as a front-line tool. This paper presents new results which were the outcome of a series of practical Turing tests held on 23 June 2012 at Bletchley Park, England. The focus of this paper is the employment of specific knowledge testing by interrogators. Of interest are the prejudiced assumptions made by interrogators as to what they believe should be widely known, and the conclusions subsequently drawn if an entity does or does not appear to know a particular fact known to the interrogator. The paper is not at all about the performance of machines or hidden humans, but rather about the strategies, based on assumptions, of Turing test interrogators. Full, unedited transcripts from the tests are shown for the reader as working examples. As a result, it might be possible to draw critical conclusions about human concepts of intelligence, in terms of the role played by specific, factual knowledge in our understanding of intelligence, whether this is exhibited by a human or a machine. This is specifically intended as a position paper: firstly, it claims that practicalising Turing's test is a useful exercise that throws light on how we humans think; secondly, it takes a potentially controversial stance, because some interrogators adopt a solipsistic style of questioning hidden entities, with the view that an entity is a thinking, intelligent human if it thinks like them and knows what they know. The paper is aimed at opening discussion with regard to the different aspects considered.
AI & Society | 2016
Kevin Warwick; Huma Shah
When judging the capabilities of technology, different humans can have very different perspectives and come to quite diverse conclusions over the same data set. In this paper we consider the capabilities of humans when it comes to judging conversational abilities, namely whether they are conversing with a human or a machine. In particular, the issue in question is the importance of the human judge's role as interrogator in practical Turing tests. As supportive evidence for this we make use of transcripts which originated from a series of practical Turing tests held on 6–7 June 2014 at the Royal Society, London. Each of the tests involved a three-participant simultaneous comparison by a judge of two hidden entities, one being a human and the other a machine. Thirty different judges took part in total. Each of the transcripts considered in the paper resulted in a judge being unable to say for certain which was the machine and which was the human. The main point we consider here is the fallibility of humans in deciding whether they are conversing with a machine or a human; hence we are concerned specifically with the decision-making process.
Journal of Experimental and Theoretical Artificial Intelligence | 2017
Kevin Warwick; Huma Shah
In this paper, we look at a specific issue with practical Turing tests, namely the right of the machine to remain silent during interrogation. In particular, we consider the possibility of a machine passing the Turing test simply by not saying anything. We include a number of transcripts from practical Turing tests in which silence has actually occurred on the part of a hidden entity. Each of the transcripts considered here resulted in a judge being unable to make the 'right identification', i.e., they could not say for certain which hidden entity was the machine.
Archive | 2016
Kevin Warwick; Huma Shah
Can you tell the difference between talking to a human and talking to a machine? Or is it possible to create a machine which is able to converse like a human? In fact, what is it that even makes us human? Turing's Imitation Game, commonly known as the Turing Test, is fundamental to the science of artificial intelligence. Involving an interrogator conversing with hidden identities, both human and machine, the test strikes at the heart of any question about the capacity of machines to behave as humans. While this subject area has shifted dramatically in the last few years, this book offers an up-to-date assessment of Turing's Imitation Game, its history, context and implications, all illustrated with practical Turing tests. The contemporary relevance of this topic and the strong emphasis on example transcripts make this book an ideal companion for undergraduate courses in artificial intelligence, engineering or computer science.