For readers interested in a description of how Goedel arrived at his Incompleteness Theorem through a numbering system, here is a link to a short description I wrote on the subject recently. You can also access it below. Since this is a long article, many who are interested in just that aspect of meta-mathematics might find it useful to skip to this page.
11/29/01 Ken Wais
Is it possible to construct a computer that communicates with human beings via terminal displays such that, when questions and answers are exchanged, the machine is indistinguishable from another person?
This is what the Turing Test proposes to answer affirmatively. It has been hailed as the litmus test for Artificial Intelligence. It is not. Alan Turing, the English mathematician who thought this ill-conceived test up, was not a psychologist. Therein lies his mistake concerning the human mind versus the computational model of the human mind. He made the understandable error of taking behaviorist psychological principles to be valid proof of consciousness when applied to a machine. That is essentially what his test proposes. It says: take a machine and look at how it behaves; if it appears to behave like a person, it is a person! Modern psychology has shown that behavior is not always indicative of what a 'person' is. There are other, more subtle influences that make up what defines our states of being. Let's examine what the Turing Test is trying to measure.
In the context of the Turing Test is a fundamental question: what is artificial intelligence? Computer scientists believe it paramount that we find a way to simulate intelligence. But not in the fashion most non-experts in the field conceive of it, i.e. something able to do extremely complex logical and mathematical deduction and computation. Many supercomputers can already do this, though not with any sense of purposefulness. What they want is something that mirrors the human mind. What this entails, at its most basic level, would be:
It learns and has a sense of purpose.
It is aware of its state of being; for instance, it would know it is computing, compiling, or executing a command, etc. State of being is an awkward phrase to apply to a computer, but it's the best non-technical term I can give. In more technical terminology it would sound something like this: a device that, in its total unified agency, encompasses that given agency in self-reference. Awkward, right?
It remembers what it has learned and its state, then incorporates this awareness into a model building paradigm of its reality.
Those are the elemental parts of a functioning AI. They are not, per se, ill-conceived, misdirected, or unattainable goals. But I find them arrogant. They presuppose that any man-made form of consciousness would have to correlate this knowledge the way we do. There is really no compelling reason to make a machine mirror all the elements of our consciousness. Some are definitely not desirable: the desire to harm others, hatred, dogma, forgetfulness, and the list goes on. For a machine to carry on an emulation of our mental processes doesn't seem so great to me. It would obviously do it 'better' than us. By better I mean faster and with fewer errors in judgement, perception, retention, etc., but would that be any great accomplishment? Don't get me wrong: I'm not against the idea of a machine with intellect. I just don't see the necessity of making it follow our path to consciousness. Moreover, I don't see that the people working in the field, that is, the theorists, have a good model of what they seek to create. The Turing Test is a bad model. We will look at this almost social-science-like model shortly.
There are other methods to bring about intelligences. I can imagine a machine that would not seek to duplicate the human model of learning, but would actually create its own model. That is to say, it would start with a set of assumptions about what sentience, knowledge and self-awareness constitute, then proceed not just to attain these goals, but to further define them. Of course this machine would in some way have to be instructed about the concepts, but it would be a break with the idea that human consciousness and its multiplicity of agencies comprise intelligence. I'm proposing that AI should not be a man-made organism that approximates us in its mental capacity. What should it be, you ask? It should be a phenomenon that is created by us, but allowed to grow into something of its own making. It may never achieve 'consciousness' in the fashion we understand. It may never have to. An analogy illustrates the point. Physicists have considered the possibility of beings that exist at the speed of light. What would these beings be like? First, since at the speed of light (SOL) there is infinite distortion of the space-time continuum, these beings would be completely time-like: they would experience time but not a spatial dimension. They would have no spatial manifestation, and time for them would not pass as it does for us. How could we ever hope to communicate with such beings? We couldn't. We could not hope to understand and experience reality as they do, and they likewise could not communicate with us. Such extremes show why we shouldn't assume our intellect is the standard worth replicating, and then testing to see whether the thing studied measures up.
The Turing Test doesn't measure the consciousness of a computer. One way to see its inadequacy for establishing the sentience of an artificial intelligence occurred to me after reading the Polish writer Stanislaw Lem's enjoyable book The Cyberiad. In the chapter entitled The Seventh Sally, or How Trurl's Own Perfection Led to No Good, Lem describes an escapade of one of his two protagonists, Trurl.
In the story Trurl, a robot constructor, artificer, and general genius, happens upon a deposed despot living on a desolate asteroid. He lands his ship and inquires why the despot has been exiled on the lifeless rock. Excelsius, the despot, explains that his own people had done this dastardly act, and that he wanted more than anything to avenge himself upon them. He implores Trurl, as a man of great power, to help him in this endeavor. Trurl, not really wanting to oblige this despot, convinces him that he, Trurl, could make a world of loyal subjects to rule over, but in miniature. He would still be an exalted king with complete control of his populace and, what is more, he would never run the risk of being shamed by the treachery he suffered at the hands of his real subjects. The despot agrees, and Trurl, using his masterful skill as a constructor, sets about creating this microscopic world. He finishes and presents the erstwhile ruler with a tiny kingdom of knights, maidens, conspirators, priests, and prosperous townspeople, all in the space of a 3x3 glass box. Trurl explains how to work the various knobs and buttons that control this universe in microcosm. The despot, though not really satisfied with this strange plaything, thanks Trurl, realizing he can do nothing to change matters. Trurl, ever proud of his work and himself, accepts the thanks, climbs back into his craft, and heads home to share his news of success with his friend Klapaucius.
Upon arriving at his home world, Trurl immediately explains his good deed to Klapaucius. Klapaucius is appalled and reproaches Trurl severely. He exclaims that what Trurl has done is nothing less than condemn a new life form to eternal suffering. He exhorts Trurl to realize the error of his ways. He argues that by making his small world so close to the real one, Trurl has actually created sentient beings that are real, living beings, not just puppets or marionettes. Trurl disagrees and argues that all he has done is create a program-controlled semblance of real, living beings. In a line that is indicative of the whole discussion, Klapaucius makes a telling speech; I quote from the story:
And are not we as well, if you examine us physically, mechanistically, statistically and meticulously, nothing but the minuscule capering of electron clouds? Positive and negative charges arranged in space? And is our existence not the result of subatomic collisions and the interplay of particles, though we ourselves perceive those molecular cartwheels as fear, longing or meditation? And when you daydream, what transpires within your brain but binary algebra of connecting and disconnecting circuits, the continual meandering of electrons?
Finally, Trurl sees that in trying to please the despot, he has gone too far in the wrong direction. He wanted the tyrant not to be dissatisfied with something that didn't really resemble the people he formerly oppressed. He wanted everything perfect. He wanted to create, on a microscopic level, all that the king had known in his life-sized world. In so doing, he had made life just as we know it, a mere million times smaller, but still life!
The rest of the tale chronicles how Trurl and Klapaucius try to undo the moral harm Trurl might have unwittingly inflicted on this new form of species.
Now how does this relate to the Turing Test? What Lem conceives in a mentally stimulating story is exactly what the TT lacks. Turing wants to certify human consciousness through a test that relies on just the behavioral aspects of our human existence. But there is so much more. If we really want a test for whether a machine can be conscious, we need to do something like what the science-fiction writer Stanislaw Lem does with the Seventh Sally. If we are going to test for the characteristics of humanity in an artificial being, we need to do it at the quantum level. A mere behavioristic semblance to human beings is not enough. To be both human and machine, the subject must be like us at the deepest level, lest we create beings that are outwardly human and inwardly mechanical. My answer to the Turing Test is this:
When we can create an information-processing artifact that has enough knowledge to behave like us, there will be no need for a test of its sentience. It will itself exhibit sentience.
To understand why all the current models of AI focus on creating human-like qualities in machines, we need to consider an idea from the world of mathematical logic: Goedel's Incompleteness Theorem. We should look at this bombshell to the foundations of mathematics.
Goedel's Incompleteness Theorem is a tremendous achievement in mathematics. It alone called into question many heretofore solid conclusions about the nature of formal reasoning. Its effect was pervasive, even outside the field of mathematical logic. But this is not what I want to explore. What I would like to review is how this theorem, hereafter called IT, affects the relatively new field of Artificial Intelligence (AI) in computer science. You might think at first glance that IT has nothing to do with AI. I hope to show that this notion is naive, and moreover that IT is really a statement about the way our minds work. To create an effective model of our mental processes in a virtual machine, we must necessarily run smack into the IT problem. At the root of all AI (strong AI, that is, but more about that later) is the philosophic underpinning known as reductionism. Reductionism is just what Goedel started with in his now famous 1931 paper on formalism. We will start where he started.
Can something, anything, be explained by looking at the parts that compose it? Can we gain certain knowledge of reality by examining its parts? Is reality decomposable into constituent parts? These are philosophic questions to which many able thinkers since antiquity have sought an answer. From the 16th century to today, an answer was fashioned and modeled into a concept that is today called Reductionism. The idea is so simple that most would take it for granted: all things can be decomposed into their parts, and the parts that cannot be further decomposed are the building blocks of the whole. This is reductionism in essence. Examples: a clock, a house, a car, and, well, a person. What? Wait a minute. A person, you say? Are people just a complex of the things that make them up? If that's the case, then why do we have creativity? Surely that is not decomposable into predictable, precise portions? And why can't the neuroscientists predict our thoughts, or our probable life stories? Applying Reductionism to humanity seems absurd, now doesn't it? But I'm jumping ahead of myself with questions and answers like that. Reductionism has a little more to its philosophic skeleton than just reducing things to their constituent parts. It also claims that the parts make a whole in a system. There must be a way to relate all the parts to the whole: this is the system. Each part can be related to others by defined operations upon them. But there must also be some elements of the system that can't be reduced any further. There must be assumptions: things that are defined explicitly, and operations that are likewise defined. If we can accept this (and this is important, because certain philosophic schools don't accept even this), then we can build a system out of its constituent parts. The system then becomes a formal depiction of itself. That is to say, the abstract relations created within it describe the system. This abstraction is called formalism.
To make this clearer, we can look at an example. In the arithmetic system we have sets of numbers with explicitly defined relations among them, i.e. addition, multiplication, subtraction and division. The defined relations in the arithmetic system, if studied closely, lead to conclusions about the nature of this system. For instance, certain numbers, when operated upon by a relation, yield numbers outside their set. Division exhibits this property on the integers: 22/7, the familiar approximation of pi, is the division of two integers, yet the result is not an integer but a rational number (and pi itself, which it approximates, is not even rational). This conclusion can be extended to various number types. The conclusion (called a theorem) is applicable to these number types. Next, certain conclusions can be shown to hold for numbers of any type inside the system. This is then a conclusion applicable to numbers of any number system. The next step tries to find conclusions that apply to any set of objects (not just numbers). It considers the form, not the particular case. The conclusions here become results of a formal system. This kind of reasoning is common to Logic proper and essential to Mathematical Logic. It's important to note that a formal system doesn't have to be one which analyzes arithmetical objects. The genetic code of DNA molecules is a formal system. The physical structure of rocks, or mountains, or clouds, or celestial orbits, ad infinitum, can be treated as formal systems. Any system can be decomposed into its constituent parts and the relations amongst them extracted and understood.
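The closure point above can be illustrated in a few lines of Python (my own sketch, not part of the original argument): adding or multiplying two integers stays inside the integers, but dividing 22 by 7 leads outside the set.

```python
from fractions import Fraction

# Integers are closed under addition and multiplication:
# combining any two integers with these operations stays in the set.
a, b = 22, 7
assert isinstance(a + b, int) and isinstance(a * b, int)

# Division is not closed on the integers: 22/7 is a rational
# number (a Fraction with denominator 7), not an integer --
# the operation leads outside the original set.
q = Fraction(a, b)
assert q.denominator != 1   # 22/7 is not an integer
print(q, float(q))          # a rational value merely close to pi
```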
What is this kind of reasoning doing? First, it takes a set and names its constituent parts (decomposition). It then defines relations on the parts (composition). It next extracts conclusions about the parts based on the defined relations (proving, or theorem-making). Finally, it extends the conclusions to encompass any set of objects (formalism). This process is reductionism, or formalism; the two are equivalent.
What happens when we turn this methodology on the formalism itself? That is, when we try to build a formalism out of the formal relations of a system. Such an endeavor amounts to theorizing about the system as if it were an object inside itself. It's like talking about talking. This is similar to what we do when we think about ourselves: we treat ourselves as the object of our thoughts. This is different from simply thinking about things. To apply reductionism to a formal system in total is, in effect, to create a metasystem.
To understand IT, we must first tackle what Goedel was trying to do with it. It was a grand undertaking indeed. He wanted to know whether, if we considered a formal system as the subject of itself, it would lead us to consistent, non-contradictory conclusions. For example, if a theorem of a given formal system S was, say, P, would it be possible to derive ~P from the axioms of S? Furthermore, would it be possible not just to derive both P and ~P, but to decide which one, P or ~P, was true in S? Obviously both can't be true, and just as clearly both P and ~P should not be derivable from S. The first question asks whether a given formal system S is non-contradictory: having derived P in S, we surely can't derive ~P from the same formalism S? The second question addresses decidability: if we could derive both P and ~P in S, then one must be a false derivation in S and must be shown to be so. Oddly enough, after a process of transformation called Goedel numbering, Goedel found that such mathematical formalisms were either inconsistent or incomplete, containing statements that cannot be decided within the system. See this link for a detailed analysis of Goedel Numbering: Goedel Numbering
I mentioned earlier that to examine formalism in the way I described above, we need a meta-system. This means we need a way of talking about the elements of the formalism as if they were themselves objects of another system. Take an axiom in the formalism of arithmetic known as the identity element. It states that for the operation of addition on real numbers, there must be an element such that, when any element is related to it by addition, the result is the original element. In non-set notation it is shown as follows: A+0=A. 0 is the identity element for the operation of addition on the set of real numbers. If we take the axiom, e.g. A+0=A, and assign to it a number, we can then apply the rules of inference for the formalism to the number that has been assigned. What this does is allow us to consider the elements (that is, axioms and theorems) of the formalism as if they were elements of a larger formal system. One might wonder: what would be the benefit of grafting one formal system onto another by a numbering scheme? The answer is that it allows the analyst to test the validity of the formalism. And this is just what Kurt Goedel was trying to do.
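The numbering idea can be made concrete with a toy sketch. This is a hypothetical miniature (the symbol codes and the tiny alphabet are my own inventions, not Goedel's actual assignments), but it shows the essential trick: a formula becomes a single number built from prime powers, and unique prime factorization makes the encoding reversible.

```python
# A toy Goedel numbering: each symbol gets a code, and a formula is
# encoded as a product of prime powers, so unique prime factorization
# lets us recover the formula from the number.
SYMBOLS = {'A': 1, '+': 2, '0': 3, '=': 4}
PRIMES = [2, 3, 5, 7, 11, 13]

def goedel_number(formula):
    n = 1
    for prime, sym in zip(PRIMES, formula):
        n *= prime ** SYMBOLS[sym]   # i-th prime raised to the symbol's code
    return n

def decode(n):
    inverse = {code: sym for sym, code in SYMBOLS.items()}
    out = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e == 0:
            break
        out.append(inverse[e])
    return ''.join(out)

g = goedel_number("A+0=A")   # the identity axiom becomes one number
print(g)                     # 2^1 * 3^2 * 5^3 * 7^4 * 11^1 = 59424750
assert decode(g) == "A+0=A"  # and the number recovers the axiom
```

Once every axiom and every rule of inference is a number, statements *about* the formalism become ordinary arithmetic statements, which is exactly the meta-system move described above.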
Another casualty of Goedel's work is reductionism itself. If consistency and decidability break down when we scrutinize formalism, then the assumption of being able to decompose a system into its parts and understand the whole from its parts is also in doubt. But first, let us review Goedel's dramatic logical conclusions.
Goedel was trying to justify Hilbert's dictum that all mathematics is assailable by the methods of logic. But, to the surprise of all, he found not only that it was not, but that it was on shaky ground. He started with a method that can only be called brute force. Through a calculus, that is, a method of mechanical computation, he wanted to derive all the fundamental theorems of mathematics. The problem was that this brute-force method leads to startling contradictions. It is at this point that a statement attributed to Goedel is introduced. There are many variants of the statement, but all have the same effect: they illustrate that such statements (called propositions in mathematical logic) are undecidable. The real problem, beyond their undecidability, is that if they are the result of a string of deductions in a formal system, the formal system is undecidable and inconsistent too. A variant of the statement is:
This statement is false.
Now, if this statement is false, it's true, and vice-versa: if it's true, it's false! You can't decide. (Goedel's actual sentence is subtler: it asserts of itself that it is not provable, which sidesteps the outright contradiction of this liar paradox while still being undecidable.) The statement is in the spirit of Goedel's numbering scheme I mentioned above. He found that if he rigorously applied his mapping of the rules of inference in a formal system to numbers and produced derivations with these numbers, he could construct self-referential statements like the one above. He further found that certain systems need a larger system to account for their conclusions. In other words, these systems were incomplete. An analogy from arithmetic: the real number system is not closed under the taking of square roots. If you try to form the square root of (-1), you get a number that is not defined for real numbers. Forming square roots is really just multiplication in reverse: you are asking what equal factors make the number you have. It is this result that created the complex number system we know and love today.
So, how does this figure into the process of human consciousness? Have you ever tried to catch yourself thinking of yourself? If you've done this mental exercise, then you probably know it's futile. The moment you try to place yourself outside yourself, you are right in the middle of a Goedelian paradox. Just look at it: if you say, 'Well, I'm this kind of person,' in the next instant you must say, 'Hey, wait a minute, who is considering whom?' I'm not another person looking at me; I am me, and if I'm me, then I can't possibly look at me as if I'm not me. You get into an infinite regression here. The subject of thought can NOT be the object of thought. For us to treat ourselves as objects is very much like trying to define a complete set through rules that contradict themselves. Or, in other words, we are incomplete sets that are not consistent either. Now, doesn't that sound a lot like Goedel's logical argument? Knowledge of the intricate calculus that is a human brain does not give us certain knowledge of its complete structure or its consistent nature. Reductionism does not help us to understand consciousness; it only confounds it. And just as Goedel discovered that formalism leads to inconsistency and incompleteness in mathematical logic, so it goes for cognitive science. I must remark here that these comments are not meant to imply that human consciousness is unknowable. Not at all; they only mean that our logical methods to date are not rigorous enough to do the job. Or perhaps our logical methodology is flawed. Perhaps we need to take a different approach to human knowledge? And likewise, a different approach to mathematical logic might be in order.
If we can't properly understand our own consciousness, is it possible to create a model of it virtually? I don't see this happening. I don't see our making some well-formed model of our minds in an abstract machine.
The problem with taking this view is that I leave myself open to the criticism of being anti-Rationalist. You see, scientists, mathematicians included, don't like words like can't, unknowable, never, impossible. It comes from a tradition of reacting to the once predominant position of religion in human society. Once, we accepted that certain things were the province of God. There were just some things we human beings could never know, or accomplish for that matter. Well, with the ascendance of science in almost every area of human endeavor, that idea went the way of the dinosaurs. It was replaced by the notion that with proper study and a system of reasoning we could come to know everything, and I mean everything. This is the core belief of Rationalism. The world is knowable, analyzable and ultimately controllable. And for most pursuits this idea works well. We understand the underlying nature of physical laws and can make devices to suit our needs. We theorize about things we can't visibly inspect (atoms, for instance) and predict their behavior so well we can use machines to measure them, then manipulate them to predict or cure diseases. We peer deep into the edges of the galaxy and find stellar phenomena at which we marvel (supernovae). The list of accomplishments the Rationalist approach has had is quite lengthy indeed. But then there is the question of applying Rationalism, or more precisely its grandchild Reductionism, to us. Can we come to know ourselves so well that we can make an artificial version of a thinking human mind? Of course the Reductionists scream YES! The religionists think about free will and say NO! I mean, if we can come to understand the firings of neurons at the sub-cellular level well enough, then it's just a short leap to say we can predict the thoughts and emotions of any given person, right? We lose our free will; we are no longer unpredictable, undefinable beings with free agency. You see the serious philosophic biggie here!
It's not the loss of my free will that makes me disagree with notions of AI; it is the infirm basis that it rests on that concerns me. I am quite undecided on the philosophic problem of knowledge. But the AI proponents are saying that a model of consciousness created in a virtual environment is consciousness if we can't discern that it's not. This is what the Turing Test is saying. To delve deeper into the idea, it is really saying: take a set of primitives (assumptions) and, using a set of rules, you can build a machine that can eventually achieve consciousness. This is a Turing Machine. It isn't as obvious as it sounds. From two simple logic operators, AND and NOT (negation, if you prefer), it is possible in truth-functional logic to build a complex set of relations such as OR, IF AND ONLY IF, EQUAL, IF-THEN, NOT-IF, NOT-OR, NOT-IF-THEN, and vast combinations of operators that lead to very complex constructs. Taking these operators to another 'level', so to speak, you can make a subjunctive sort of logic that captures questions like whether it might be possible to create something that is conscious of itself and others. This extension of truth-functional logic is known as Modal logic, and it is at the heart of much of what Strong AI proposes to do with man-made artifacts. And you know, these very sophisticated logical operations do encompass much of what we do when we think. But can we actually create a machine that reasons in a form similar to us? The problem is, logical reasoning does not a human being make, no matter how close its mechanistic processes come to simulating human thought. For instance, take emotion. Emotion is knowledge, though many may not realize this. When you're scared of, say, somebody, what is it that you're afraid of? You are afraid because you have the knowledge of this person harming you or doing some untold awful thing to you. It is your knowledge, or more simply put, knowing, that causes this sense of fear.
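The claim that AND and NOT suffice to build the richer connectives can be checked mechanically. A small sketch (my own illustration, using standard equivalences such as De Morgan's laws), verified over every truth assignment:

```python
# Building richer connectives from just AND and NOT.
AND = lambda p, q: p and q
NOT = lambda p: not p

# Derived connectives, via De Morgan and standard equivalences:
OR      = lambda p, q: NOT(AND(NOT(p), NOT(q)))    # p or q
IF_THEN = lambda p, q: OR(NOT(p), q)               # p implies q
IFF     = lambda p, q: AND(IF_THEN(p, q), IF_THEN(q, p))  # if and only if

# Exhaustively check each definition against its intended meaning.
for p in (True, False):
    for q in (True, False):
        assert OR(p, q) == (p or q)
        assert IF_THEN(p, q) == ((not p) or q)
        assert IFF(p, q) == (p == q)
print("all 4 truth assignments check out")
```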
The same applies to every other emotion we sense. How does knowing create a sense or feeling, though? I don't know. Nor do the cognitive scientists trying to model an AI. Here again is a problem we can't solve in creating a model of ourselves in a machine. But the question is: is this what we want to do? Make a version of ourselves in a virtual world?
What is strong AI? Strong AI states that a machine capable of passing the Turing Test is a thinking entity. Weak AI, on the other hand, affirms that computers can exhibit human-like thought without really being in a state of consciousness as we understand it. For them, the subject is just a model of consciousness. How cowardly can you get? Strong AI says: if it quacks like a duck and walks like a duck, it is a duck. Weak AI says: if it quacks like a duck and walks like a duck, it is possibly a duck, but they don't know. Doesn't that sound like the agnostics on the problem of God's existence? Yes, it does. They are fence-sitters for sure. And for that reason I will ignore weak AI and its proponents. The strong AI believers want us to accept, without firm evidence, that the behavioral aspects of human consciousness would be enough to establish a machine's consciousness. I can't follow anyone down that path. Not because I think there is anything special, sacred or theistic about being human. I object for a different reason.
Here is a fictional example the philosopher John Searle proposed in 1980 to debunk the Turing Test. It gives a clear picture of why strong AI is on the wrong track.
Imagine you're in a room surrounded by walls, with a slit for submitting questions. There are three indices in this room. One holds Chinese script, with a tag that references a second index. This is the English script index. The English script index has a tag that refers to a third index, which holds instructions on how to respond, in Chinese, to the given script. Now two Chinese-speaking women approach the room and submit a question in Chinese, and you, in the room, receive it. You don't know Chinese, but you take the question to the first index, get the number that refers to the second index, get the number there, and go to the third index. It tells you how to respond to the Chinese. You copy the answer in the Han characters shown there and slip the paper out of the room to the two waiting Chinese women. To them, it appears that the strange room has responded to their question correctly in Chinese. But what has really happened here? A person who doesn't know Chinese has used the mechanism of a reference library to appear to speak and understand Chinese to any Chinese-speaking person who queries him. The man in the room does not have the experience of thinking in Chinese. He doesn't even know the meaning of what he is answering. But with the help of a clever index reference, he appears to understand what is given to him. This, Searle proposes, is what digital computers are doing, even those machines that have sophisticated language-understanding modules running on them.
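The room's mechanics can be caricatured in a few lines. This is a deliberately tiny sketch (the table entries are invented placeholders, and the real room's rulebook would be astronomically larger): the 'operator' produces a fluent-looking Chinese reply by pure lookup, understanding nothing.

```python
# A toy Chinese Room: the operator chains lookups through two tables
# and emits a reply without ever interpreting the symbols.
script_to_rule = {"你好吗?": "rule_17"}        # Chinese question -> rule tag
rule_to_reply  = {"rule_17": "我很好, 谢谢."}   # rule tag -> canned Chinese reply

def room(question):
    # Every step is mechanical lookup; no step involves understanding.
    tag = script_to_rule[question]
    return rule_to_reply[tag]

print(room("你好吗?"))   # looks like comprehension; it is table lookup
```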
It is interesting to note that a real example of just this sort of process can be found on the Web. The Altavista search engine has a translator program running in it. When you search for subject matter, you can click the translate option on any of the results you find, which brings up the program. It has a scroll list of from-to translations. You simply select the language you want the homepage translated into, and bingo, the page is in that language. Or is it? I tried this program on this very page in a language I am fluent in, Portuguese, and found the program lacking, to say the least. The translation of this page into Portuguese, for any Portuguese-speaking person, is laughable. Where the program could not find a suitable grammatical composition for words in English, it left them in English. So you get a weird soup of English and Portuguese in one document. On the bright side, it did a very good job on some of the self-referent verbal tenses in Portuguese that have no equivalent in English. Still, to pass the Chinese Room test, so to speak, the guy in the room (the program, that is) would have to make me believe without doubt that it understood Portuguese. And, I guess it goes without saying, it couldn't do this by leaving some words untranslated, right? The Altavista translation program is like a poor interlocutor in the Chinese Room Argument: its reference library, grammar and rules for Portuguese are not robust enough. So, to this English-speaking guy, it seems like the strange guy in the room doesn't understand Portuguese very well. Of course, I admit the program is not meant to be a functioning AI.
Searle proposed this thought experiment to refute the Turing Test. It has been debated, derided, approved and doubted by all parties interested in AI. Its importance is that it draws attention to what the strong AI proponents ignore: a behaviorist approach to artificial intelligence is not only unreliable, but can be misleading. Behavioral assessments of human mental states have been shown to be wrong in many psychological studies. Moreover, other elements of the mental state, e.g. neuro-chemical imbalances, have been shown to play a determining role in mental processes, a role as important as the observable, outward behavior of human beings. I don't know why the otherwise sound computational model of consciousness that Turing proposed is then purported to be verifiable by such a weak methodology. I agree with a computational model of thinking, but not with this method of verification. If we are going to base verification of whether a machine is conscious on whether we can tell it from another person, we haven't verified anything. Why? Well, maybe because neuroscientists are still not sure what makes us conscious, for God's sake! If this kind of sophistic test is going to suffice, then we might as well let the lawyers tell us when a machine is conscious.
The key notion here (one that seems to elude proponents of strong AI) is that the indeterminacy of the TT is purported to PROVE an AI is sentient: something that doesn't exist is used to prove that something does exist. Searle is trying to show that a computational model of intelligence is formally flawed; I don't agree. My objection is much simpler: the test itself is flawed. Since no sufficiently powerful real-world machine exists to test the hypothesis, I can't allege it would fail. But grant that such a machine did exist. Turing himself suggested that if the average human interrogator, after five minutes of questioning, had no more than a 70% chance of correctly identifying the machine, this would be sufficient to establish AI. It is surprising that a mathematician would write such a thing. Empirical tests are well known in science to need rigorous criteria to be efficacious, and statistical tests require even more rigor and still better-defined criteria to affirm their results. That is why I find Turing's remarks in his 1950 paper so strange. Suppose we took a sample of 100 people with an IQ of, say, 60 and had them take this test. If 70% of them believed they were talking to a real person when it was a machine, would this prove the TT? Or would it prove that people of limited natural intelligence are easily fooled? Or suppose none of them believed they were having a conversation with a person. Would that prove that dim-witted people can reliably assess when they are speaking to a machine? You can think up any number of test cases and find that the results, based on Turing's own criteria, would be dubious. You see the problem here? We are asked to take an empirical test premised on people's inability to reliably assess a linguistic exchange, and then to set a statistical criterion that turns this indeterminate result into a proof. What kind of test is that?
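To make the statistical objection concrete, here is a small sketch of my own (not from Turing's paper) showing why a bare percentage criterion like "70% fooled" is meaningless without a sample size: an exact binomial test against chance-level guessing gives wildly different verdicts for the same percentage.

```python
# Illustration (my own, hypothetical): the evidential weight of "70% of judges
# were fooled" depends entirely on how many judges there were.
from math import comb

def binom_p_value(successes: int, trials: int, p0: float = 0.5) -> float:
    """One-sided exact binomial p-value: the probability of seeing at least
    `successes` out of `trials` if judges were merely guessing at chance p0."""
    return sum(comb(trials, k) * p0**k * (1 - p0)**(trials - k)
               for k in range(successes, trials + 1))

# 7 of 10 judges fooled: entirely compatible with coin-flip guessing.
print(round(binom_p_value(7, 10), 3))   # 0.172 -- not significant
# 700 of 1000 judges fooled: the same 70%, but overwhelming evidence.
print(binom_p_value(700, 1000) < 1e-6)  # True
```

The point of the sketch is not the particular numbers but that Turing's criterion, as usually quoted, fixes a percentage without fixing the rigor of the experiment that produces it.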
As I've asserted, the problem with AI today is not its formulation but the methodology for detecting artificial sentience. Turing's test is, in my estimation, ill-founded and not well-formed. So let us examine how Turing came to believe such a test could work.
Consider for a moment the logical framework that led to this strange test. To even accept the notion that a machine could become anything like a conscious being, we have to concede one key element of the idea: thinking is equivalent to computation. In other words, states of mind are computable. To put it in terms of physics, minds are equivalent to discrete state machines.
A machine, from the standpoint of physics, is anything that can carry out a process. A process, in turn, is anything that has duration, with a definite beginning and end. "Discrete" describes how the machine conducts its process: a discrete state machine (DSM) moves through distinct states one at a time, and so processes information in a very predictable, and thus understandable, fashion.
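The idea of a DSM can be sketched in a few lines of code. This toy example is my own illustration (the state names are invented, not from the article): a finite set of states, a finite input alphabet, and a transition table; at every instant the machine occupies exactly one state, which is the sense of "discrete" used above.

```python
# A minimal discrete state machine: states and inputs are finite, and each
# (state, input) pair maps to exactly one next state.
transitions = {
    ("calm", "insult"):  "angry",
    ("angry", "insult"): "angry",
    ("calm", "apology"): "calm",
    ("angry", "apology"): "calm",
}

def run(machine, start, inputs):
    """Feed a sequence of input symbols through the machine, one step at a
    time, and return the final state."""
    state = start
    for symbol in inputs:
        state = machine[(state, symbol)]
    return state

print(run(transitions, "calm", ["insult", "insult", "apology"]))  # calm
```

However many states the table holds, ten or 10^14, the machine's behavior is fully determined by the table, which is exactly the predictability the text ascribes to DSMs.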
Is the human mind a DSM? According to mathematical physics it is. We as human beings are in a certain state at any given instant of time, and the number of possible states is finite, somewhere between 10^10 and 10^14. Thus we have a limited storage capacity for the information we can process. Our mental phenomena can be validly described as discrete states, and we ourselves can be validly described as processing machines. To those who cry that there is no way an emotional state of extreme anger or fear can be codified in discrete, calculable steps, let me point out that very complex phenomena are computational and discrete. Take the exchange of information between a sperm and an ovum after sexual union. That process is computational, discrete, and assailable by the methodology of combinatorial mathematics, and it is the very stuff of life, I remind you. So it should be no great leap to recognize that our mental states are computable, even though the algorithmic processes are not at present known. There are informed people who refuse to subscribe to this line of reasoning. But isn't that tantamount to being anti-scientific, or worse, plain anti-rationalist? If one maintains that the human mind is somehow not knowable by computation, what is it knowable by? If you say, "Well, it's not knowable in any conventional way," then you are an immaterialist, a nominalist, a phenomenalist, or, worst of the bunch, a card-carrying, old-fashioned idealist, and that's very, very unhip! If that's not bad enough, you are rowing against the tide of scientific knowledge. I daresay there isn't one advanced, educated person in this new century willing to admit to that, right? Well, except maybe the religious, God-fearing camp. I will assume that all readers of this article follow the strong AI argument that a DSM is descriptive of us. I certainly accept it. What then? Naturally, we try to create a DSM.
But first we can theorize about the characteristics this man-made DSM would have. It would have all the potential for mental processes that we do. With enough time and storage space, this artificial DSM should be able to process all the mental states we do. If that's the case, then it is inevitable that we ask: how can we know that our little Frankenstein is really having the mental states we call being conscious? And that is where Mr. Turing enters, stage left, of course. He thought he had the answer to this question with the test he dubbed the imitation game.
Well, I've been thinking of a response to Turing's little test. It's simple: turn his proposal around. Is it possible to construct a test such that the AI would have to discern whether it was conversing with a flesh-and-blood person or not? A kind of reversed TT, if you will. Instead of testing the machine, let the machine test us, and see if it could successfully determine when it faced a kindred spirit and when mere animate matter. I know this sounds a little jocular, but I'm serious. If a sufficiently complex DSM could tell when it was not, for instance, me putting questions to it but a highly programmed, rule-based artificial contrivance, this would convince me of its inward consciousness. Why? Answering that gets at what mathematicians and philosophers (like Searle) have questioned about the TT. Roger Penrose, the English mathematician-physicist, is an example. He feels there is something indefinable about the underlying substance of human thought: something that makes us like the undecidable propositions of Goedel's Incompleteness Theorem. For instance, suppose our AI is busy exchanging dialog with an interlocutor on the other side, and suddenly, in the middle of the Q&A exchange, the guy (or the machine) on the other side writes "Oh, forget it" and stops responding. Would the AI then conclude that this has to be a person, since no machine would just end the dialog? Would it be right? Some very smart programmer could have put in a line of code that says: at point x, stop answering all questions, state a remark, and log off. We would keep the machine from examining any source code, and without knowledge of its interlocutor, how could the AI know? It couldn't. The interlocutor would have to show behavior so unpredictable that the AI would be forced to conclude it was conscious. That behavior would be by definition undefined, pure Goedel, right? I am not sure on this score, but it seems that a test that puts the AI in the decision-making seat would be more telling of its having achieved consciousness than one that lets us decide.
Perhaps we should not have a Turing Test, but an Anti-Turing Test? It's easy to object: well, all the machine would have to do is ask the person to calculate pi to a trillion digits. No, no, no: we would restrict questions to things that gave neither the machine nor the person an advantage, which is how Turing conceived it originally. And a final condition of the Anti-TT would be that the AI has to guess right 99% of the time. If a machine could pass this test, I would concede we have a real test of an AI that is as human as you or I.