Artificial Intelligence, Turing, and Gödel

Artificial Intelligence, Gödel's Incompleteness Theorem, and the Turing Test

For readers who are interested in a description of how Gödel arrived at his Incompleteness Theorem through a numbering system, here is a link to a short description I wrote on the subject recently. You can also access it below. Since this is a long article, those interested in just that aspect of meta-mathematics might find it useful to skip straight to that page.

Gödel Numbering

11/29/01 Robleh Wais

Is it possible to construct a computer that communicates with human beings via terminal displays such that, when questions and answers are exchanged, the machine is indistinguishable from another person?

This is what the Turing Test proposes to answer affirmatively. It has been hailed as the litmus test for Artificial Intelligence. It is not. Alan Turing, the English mathematician who thought up this ill-conceived test, was not a psychologist. Therein lies his mistake concerning the human mind versus the computational model of the human mind. He made the understandable error of taking behaviorist psychological principles to be valid proof of consciousness when applied to a machine. That is essentially what his test proposes. It says: take a machine and look at how it behaves; if it appears to behave like a person, it is a person! Modern psychology has shown that behavior alone is not always indicative of what constitutes a person. Other, more subtle influences make up what defines our states of being. Let's examine what the Turing Test is trying to measure.

At the heart of the Turing Test is a fundamental question: What is artificial intelligence?

Computer scientists believe it is paramount that we find a way to simulate intelligence. But not in the fashion most non-experts in the field conceive of it, i.e., something that is able to do extremely complex logical and mathematical deduction and computation. Many supercomputers can already do this, though not with any sense of purposefulness. What they want is something that mirrors the human mind. What this entails, at its most basic level, would be:

           It learns and has a sense of purpose.

           It is aware of its state of being; for instance, it would know it's computing, compiling, or executing a command. State of being is an awkward phrase to apply to a computer, but it's the best non-technical term I can give. In more technical terminology, it would sound something like this: a device whose total unified agency encompasses that very agency in self-reference. What? Right?

           It remembers what it has learned and what states it has been in, then incorporates this awareness into a model-building paradigm of its reality.

Those are the elemental parts of a functioning AI. They are not ill-conceived, misdirected, or unattainable goals. I find them arrogant, though: they presuppose that any man-made form of consciousness would have to correlate knowledge the way we do. There is no compelling reason to make a machine mirror all the elements of our consciousness. Some are not desirable: the desire to harm others, hatred, dogma, forgetfulness, and the list goes on. For a machine to carry on an emulation of our mental processes doesn't seem so great to me. It would do it better than us. By better, I mean faster and with fewer errors in judgment, perception, retention, etc. But would that be any great accomplishment? Don't get me wrong, I'm not against the idea of a machine with intellect. I just don't see the necessity of making it follow our path to consciousness. Moreover, I don't see that the people working in the field, that is, the theorists, have a good model of what they seek to create. The Turing Test is a bad model.

There are other methods to bring about intelligence. I can imagine a machine that would not seek to duplicate the human model of learning but create its own model. That is to say, it would start with a set of assumptions about what sentience, knowledge, and self-awareness constitute, then proceed not just to attain these goals but to further define them. This machine would, in some way, have to be instructed about the concepts, but it would be a break with the idea that human consciousness and its multiplicity of agencies comprise intelligence. I'm proposing that AI should not be a man-made organism that is similar to us in its mental capacity. What should it be, you ask? It should be a phenomenon that is created by us but allowed to grow into something of its own making. It may never achieve 'consciousness' in the fashion we understand. It may never have to. An analogy illustrates the point. Physicists have considered the possibility of beings that exist at the speed of light. What would these beings be like? First, since at the speed of light (SOL) there is infinite distortion of the space-time continuum, these beings would be completely time-like; that is, they would experience time and not a spatial dimension. They would have no spatial manifestation, and time for them would not pass as it does for us. How could we ever hope to communicate with such beings? We couldn't. We could not hope to understand and experience reality as they do, and they likewise couldn't communicate with us. Such extremes are examples of why we shouldn't assume our intellect is worth replicating, then testing to see if the thing studied measures up.


The Turing Test doesn't measure the consciousness of a computer. One way to see the inadequacy of the Turing Test in establishing the sentience of artificial intelligence occurred to me after reading the Polish writer Stanislaw Lem's enjoyable book The Cyberiad. In the chapter entitled The Seventh Sally, or How Trurl's Own Perfection Led to No Good, Lem describes an escapade of one of his two protagonists, Trurl.

In the story, Trurl, a robot constructor, artificer, and general genius, happens upon a deposed despot living on a desolate asteroid. He lands his ship and inquires why the king has been exiled on the lifeless rock. Excelsius, the despot, explains that his own people did this dastardly act and that he wants, more than anything, to avenge himself upon them. He implores Trurl, as a man of great power, to help him in this endeavor. Trurl, not really wanting to oblige this despot, convinces him that he, Trurl, could make a world of loyal subjects to rule over, but in miniature. He would still be an exalted king with complete control of his populace, and what is more, he would never run the risk of being shamed by the treachery he suffered at the hands of his real subjects. The despot agrees, and Trurl, using his masterful skill as a constructor, sets about creating this microscopic world. He finishes and presents the erstwhile ruler with a tiny kingdom of knights, maidens, conspirators, priests, and prosperous townspeople, all in the space of a 3x3 glass box. Trurl explains how to work the various knobs and buttons to control this universe in microcosm. The despot, though not really satisfied with this strange plaything, thanks Trurl, realizing he can do nothing to change matters. Trurl, ever proud of his work and himself, accepts the thanks and climbs back into his craft, then heads home to share his news of success with his friend, Klapaucius.

Upon arriving at his home world, Trurl immediately explains his good deed to Klapaucius. Klapaucius is appalled and reproaches Trurl severely. He exclaims that what Trurl has done is nothing less than condemn a new life form to eternal suffering. He exhorts Trurl to realize the error of what he has done. He argues that in making his small world so close to the real one, Trurl has created sentient beings that are real, living beings, not just puppets or marionettes. Trurl disagrees and argues that all he has done is create a program-controlled semblance of real, living beings. Klapaucius replies:

And are we not as well, if you examine us physically, mechanistically, statistically, and meticulously, nothing but the minuscule capering of electron clouds? Positive and negative charges arranged in space? And is our existence not the result of subatomic collisions and the interplay of particles, though we perceive those molecular cartwheels as fear, longing or meditation? And when you daydream, what transpires within your brain but binary algebra of connecting and disconnecting circuits, the continual meandering of electrons?

Finally, Trurl sees that in trying to please the despot, he has gone too far in the wrong direction. He did not want the tyrant to be dissatisfied with something that failed to resemble the people he formerly oppressed. He wanted everything to be perfect. He wanted to create, on a microscopic level, all that the king had known in his life-sized world. In so doing, he had made life just as we know it, a mere million times smaller, yet still life!

Now, how does this relate to the Turing Test? What Lem conceives in this mentally stimulating story is exactly what the Turing Test lacks. Turing wants to simulate human consciousness through a test that relies on just the behavioral aspects of our human existence. But there is so much more. If we really want a test for whether a machine can be conscious, we need to do something like what Lem does in The Seventh Sally. If we are going to test for the characteristics of humanity in an artificial being, we need to do it at the quantum level. A mere behavioristic semblance to human beings is not enough. To be both human and machine, the subject must be like us at the deepest level, lest we create beings that are outwardly human and inwardly mechanical. My answer to the Turing Test is this:

When we can create an information-processing artifact that has enough knowledge to behave like us, there will be no need for a test of its sentience. It will, itself, exhibit sentience.

Gödel's Incompleteness Theorem

Gödel's Incompleteness Theorem is a tremendous achievement in mathematics. It alone called into question many heretofore solid conclusions about the nature of formal reasoning. Its effect was pervasive, even outside the field of mathematical logic. But this is not what I want to explore. What I would like to review is how this theorem, hereafter called IT, affects the relatively new field of Artificial Intelligence (AI) in computer science. You might think at first glance that IT has nothing to do with AI. I hope to show that this notion is naive. Moreover, IT is a statement about the way our minds work. To create an effective model of our mental processes in a virtual machine, we must necessarily run smack into the IT problem. At the root of all AI (strong AI, that is, but more about that later) is the philosophic underpinning known as reductionism. Reductionism is just what Gödel started with in his now-famous 1931 paper on formalism. We will start where he started.

To understand IT, we must first tackle what Gödel was trying to do with it. It was a grand undertaking indeed. He wanted to know: if we considered a formal system as the subject of itself, would it lead us to consistent, non-contradictory conclusions? For example, if a theorem of a given formal system S was, say, P, would it be possible to derive ~P from the axioms of S? Furthermore, would it be possible not just to derive both P and ~P, but to decide which one, P or ~P, was true in S? Obviously, both can't be true, just as both P and ~P should not be derivable from S. The first question asks whether a given formal system S is non-contradictory. That is, having derived P in S, we surely can't derive ~P from the same formalism? The second question addresses decidability. That is, if we could derive both P and ~P in S, then one must be a false derivation in S and be shown to be so. Oddly enough, after a process of transformation called Gödel numbering, Gödel found that such mathematical formalisms were either inconsistent or incomplete: a consistent system strong enough to express arithmetic must contain statements that are undecidable within it. See this link for a detailed analysis of Gödel numbering: Gödel Numbering
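As a rough formal sketch of the two questions above (my notation, not Gödel's original symbolism), where S ⊢ P means "P is derivable in S":

```latex
% Consistency: no statement is derivable together with its negation.
\mathrm{Con}(S) \;\equiv\; \neg \exists P \,\bigl( S \vdash P \;\wedge\; S \vdash \neg P \bigr)

% Decidability of a statement P within S: one of the two is derivable.
S \vdash P \quad \text{or} \quad S \vdash \neg P

% Gödel's conclusion, informally: if S is consistent and strong enough
% to encode arithmetic, some sentence G satisfies
S \nvdash G \quad \text{and} \quad S \nvdash \neg G
```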

To examine a formalism in this way, we need a meta-system. This means we need a way of talking about the elements of the formalism as if they were objects of another system. Take an axiom in the formalism of arithmetic known as the identity element. It states that for the operation of addition on the real numbers, there must be an element such that adding it to any number yields that number unchanged. In non-set notation, it is shown as follows: A+0=A. 0 is the identity element for the operation of addition on the set of real numbers. If we take the axiom, e.g., A+0=A, and assign to it a number, we can then apply the formalism's rules of inference to the number that has been assigned. What this does is allow us to consider the elements (that is, axioms and theorems) of the formalism as if they were elements of a larger formal system. One might wonder: what would be the benefit of mapping one formal system onto another by a numbering scheme? The answer is that it allows the analyst to test the validity of the formalism. And this is just what Kurt Gödel was trying to do.
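Here is a toy sketch of the idea in Python. The symbol codes are invented for illustration, and this is a simplification rather than Gödel's exact scheme, but the mechanism is the same: every formula becomes a unique natural number, recoverable by prime factorization.

```python
# Toy Goedel numbering: give each symbol a code, then encode a formula as a
# product of successive primes raised to those codes. Unique factorization
# guarantees the formula can be recovered from its number.

SYMBOL_CODES = {"A": 1, "+": 2, "0": 3, "=": 4}  # invented codes

def first_primes(n):
    """Return the first n primes by trial division (fine for short formulas)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode formula as 2^c1 * 3^c2 * 5^c3 * ... for symbol codes c1, c2, ..."""
    codes = [SYMBOL_CODES[ch] for ch in formula]
    number = 1
    for prime, code in zip(first_primes(len(codes)), codes):
        number *= prime ** code
    return number

# The identity axiom A+0=A becomes a single natural number:
print(godel_number("A+0=A"))  # 2^1 * 3^2 * 5^3 * 7^4 * 11^1 = 59424750
```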

Another casualty of Gödel's work is reductionism itself. If consistency and decidability break down when we scrutinize a formalism, then the assumption that we can decompose a system into its parts and understand the whole from those parts is also in doubt. First, let us review Gödel's dramatic logical conclusions. Look at the sentence below:

This statement is false.

Now, if this statement is false, it's true, and vice versa: if it's true, it's false! You can't decide. A statement very much like this one (Gödel's says, in effect, "This statement is not provable") was a direct result of the numbering scheme I mentioned above. He found that if he rigorously applied his mapping of a formal system's rules of inference to numbers and produced derivations with those numbers, he would end up with self-referential statements like the one above. He further found that certain systems need a larger system to account for their conclusions. In other words, these systems were incomplete. By rough analogy, the real number system is not closed under root extraction, the reverse of multiplication. If you try to form the square root of -1, you get a number that is not defined among the real numbers: you are asking what factor, multiplied by itself, makes the number you have, and no real number answers. It is this result that created the complex number system we know and love today.
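To make that closure failure concrete (standard textbook material, my phrasing):

```latex
x^2 = -1 \ \text{has no solution in } \mathbb{R}, \ \text{since } x^2 \ge 0 \ \text{for every real } x.
\ \text{Defining a new unit } i \ \text{with } i^2 = -1 \ \text{extends the reals to}
\ \mathbb{C} = \{\, a + b\,i \;:\; a, b \in \mathbb{R} \,\}.
```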

The problem with taking this view is that I leave myself open to the criticism of being anti-rationalist. You see, scientists, mathematicians included, don't like words like can't, unknowable, never, impossible. It comes from a tradition of reacting to the position of religion in human society. Once, we accepted that certain things were the province of God. There were just some things we human beings could never know, or accomplish for that matter. Well, with the ascendance of science in almost every area of human endeavor, that idea went the way of the dinosaurs. It was replaced by the notion that with proper study and a system of reasoning, we could come to know everything, and I mean everything. This is the core belief of Rationalism. The world is knowable, analyzable, and ultimately controllable. And for most pursuits, this idea works well. We understand the underlying nature of physical laws and can make devices to suit our needs. We theorize about things we can't visibly inspect (atoms are a good example) and predict their behavior so well that we can use machines to measure them, then manipulate them to diagnose or cure diseases. We peer deep into the edges of the galaxy and find stellar phenomena at which we marvel (supernovae). The list of accomplishments a Rationalist approach has had is quite lengthy indeed.

But then there is the question of applying Rationalism, or more precisely its grandchild, Reductionism, to us. Can we come to know ourselves so well that we can make an artificial version of a thinking human mind? The Reductionists scream YES! The religionists think about free will and say NO! I mean, if we can come to understand the firings of neurons at the sub-cellular level well enough, then it's just a short leap to say we can predict the thoughts and emotions of any given person, right? We lose our free will; we are no longer unpredictable, undefinable beings with free agency. It's not the loss of my free will that makes me disagree with notions of AI; it's the flimsy basis they rest on that concerns me. I am quite undecided on the philosophical problem of knowledge.

The AI proponents are saying that a model of consciousness created in a virtual environment is consciousness if we can't discern that it isn't. This is what the Turing Test is saying. To delve deeper into the idea, it is saying: take a set of primitives (assumptions), and using a set of rules, you can build a machine that can eventually achieve consciousness. This, in essence, is a Turing machine. It isn't as obvious as it sounds. From two simple logic operators, AND and NOT (or negation, if you prefer), it is possible in truth-functional logic to build a complex set of relations: OR, IF AND ONLY IF, EQUALITY, IF-THEN, NOT-IF, NOT-OR, NOT-IF-THEN, and vast combinations of these operators lead to very complex constructs (a sketch of this construction appears at the end of this passage). Taking these operators to another 'level', so to speak, you can build a subjunctive sort of logic that captures questions like whether it might be possible to create something conscious of itself and others. This extension of truth-functional logic is known as modal logic, and it is at the heart of much of what strong AI proposes to do with man-made artifacts. And you know, these very sophisticated logical operations do encompass much of what we do when we think.

But can we create a machine that reasons in a form similar to us? The problem is that logical reasoning does not a human being make, no matter how close its mechanistic processes come to simulating human thought. For instance, take emotion. Emotion is knowledge, though many may not realize this. It is. When you're scared of somebody, what is it that you're afraid of? You are afraid because you have knowledge of this person harming you or doing some untold awful thing to you. It is your knowledge, or more simply put, your knowing, that causes this sense of fear. The same applies to every other emotion we sense. How does knowing create a sense or feeling, though? I don't know. And neither do the cognitive scientists trying to model an AI. Here again is a problem we can't solve in creating a model of ourselves in a machine. Is this what we want to do? Make a version of ourselves in a virtual world?
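Here is the promised sketch of that construction in Python. The function names are mine; the underlying identities, such as De Morgan's law, are standard truth-functional logic:

```python
# Sketch: deriving other truth-functional connectives from just AND and NOT,
# illustrating the functional completeness described in the text.

def AND(p, q): return p and q
def NOT(p):    return not p

# De Morgan: p OR q == NOT(NOT p AND NOT q)
def OR(p, q):      return NOT(AND(NOT(p), NOT(q)))
# p -> q == (NOT p) OR q
def IF_THEN(p, q): return OR(NOT(p), q)
# p <-> q == (p -> q) AND (q -> p)
def IFF(p, q):     return AND(IF_THEN(p, q), IF_THEN(q, p))

# Verify against Python's own operators over all truth assignments.
for p in (False, True):
    for q in (False, True):
        assert OR(p, q) == (p or q)
        assert IF_THEN(p, q) == ((not p) or q)
        assert IFF(p, q) == (p == q)
print("OR, IF-THEN, and IFF all reconstructed from AND and NOT alone.")
```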

Searle proposed his now-famous Chinese Room thought experiment to refute the Turing Test. It has been debated, derided, approved, and doubted by all parties interested in AI. The importance of this argument is that it draws attention to what the strong AI proponents ignore: a behaviorist approach to artificial intelligence is not only unreliable but can be misleading. Behavioral assessments of human mental states have proven wrong in many psychological studies of people. Moreover, other elements of the mental state, e.g., neurochemical imbalances, have been shown to play a determining role in mental processes, a role as important as the observable, outward behavior of human beings. I don't know why the otherwise sound computational model of consciousness that Turing proposed is then purported to be verifiable by such a weak methodology. I agree with a computational model of thinking, but not with this method of verification. If we are going to base verification of whether a machine is conscious on whether we can tell it from another person, we haven't verified anything.

The key notion here (one that seems to elude proponents of strong AI) is that the indeterminacy of the TT is purported to PROVE that an AI is sentient. Something that doesn't exist is used to prove that something does exist. Searle is trying to show that a computational model of intelligence is formally flawed; there I don't agree. My objection is much simpler: the test itself is flawed. Since there is no sufficiently powerful real-world machine to test this hypothesis, I can't allege it would fail. But grant that such a machine did exist. Turing himself alleged that if such a machine existed and was able to trick its human interlocutors into believing it was human in 70% of cases, this would be sufficient to establish AI. Surprising that a mathematician would write such a thing. Empirical tests are well known in science to require rigorous criteria to be efficacious. Statistical testing requires even more rigor and well-defined criteria to affirm the results. This is why I find Turing's comments in his 1950 paper so strange. Suppose we took a sample of 10,000 people with an IQ of, say, 60 and had them take this test. If 70% of these people believed they were talking to a real person when it was a machine, would this prove the TT? Or would it prove that people of limited natural intelligence can easily be fooled? Or suppose none of them believed they were having a conversation with a person. Would that prove that dim-witted people can reliably assess when they are speaking to a machine? You can think up any number of test cases and find that results based on Turing's criterion would be dubious. You see the problem here? We are taking an empirical test that depends on people being unable to reliably assess a linguistic exchange, then setting a statistical criterion to prove an indeterminate result... I mean, what the fuck?
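A quick simulation makes the point. Everything here is invented for illustration (the judge pools and their gullibility rates are hypothetical); it shows only that a fixed 70% criterion measures the judges as much as the machine:

```python
# Sketch: why a bare "70% fooled" threshold is statistically weak. The same
# machine passes or fails depending only on the judge pool it faces.

import random

def run_trials(p_fooled, n_judges=10000, seed=0):
    """Fraction of judges fooled, where p_fooled is each judge's true
    probability of being fooled (a property of the pool, not the machine)."""
    rng = random.Random(seed)
    fooled = sum(rng.random() < p_fooled for _ in range(n_judges))
    return fooled / n_judges

for label, p in [("credulous judges", 0.9),
                 ("average judges", 0.7),
                 ("skeptical judges", 0.3)]:
    rate = run_trials(p)
    verdict = "passes" if rate >= 0.7 else "fails"
    print(f"{label}: {rate:.1%} fooled -> machine {verdict} the 70% criterion")
# One machine, three verdicts: the criterion is about the interrogators.
```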

Consider for a moment the logical framework that led to this strange test. To even accept the notion that a machine could become anything like conscious, we have to concede one key element of the idea: thinking is equivalent to computation. In other words, states of mind are computable. To put it in terms of physics, minds are equivalent to discrete state machines (DSMs).

Machines, from the standpoint of physics, are anything that can carry out a process. A process, in turn, is anything that has duration, with a definite beginning and end. Discrete describes how such a machine conducts its process: it moves in distinct, countable steps from one well-defined state to the next. DSMs can therefore process information in a very predictable, and thus understandable, fashion.
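A minimal sketch of a DSM in Python (the states and inputs are invented for illustration):

```python
# A discrete state machine: finitely many states, deterministic transitions,
# one step at a time.

class DiscreteStateMachine:
    def __init__(self, transitions, state):
        self.transitions = transitions  # maps (state, input) -> next state
        self.state = state

    def step(self, symbol):
        self.state = self.transitions[(self.state, symbol)]
        return self.state

# A toy machine that is either 'idle' or 'computing':
dsm = DiscreteStateMachine(
    {("idle", "run"): "computing",
     ("computing", "run"): "computing",
     ("computing", "halt"): "idle",
     ("idle", "halt"): "idle"},
    state="idle",
)
for event in ["run", "run", "halt"]:
    print(event, "->", dsm.step(event))
# Every future state is fully determined by the current state and input --
# the predictability the text ascribes to DSMs.
```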

Is the human mind a DSM? According to mathematical physics, it is. We, as human beings, occupy one definite state at any given instant, drawn from a finite repertoire; estimates of the brain's storage capacity run somewhere between 10^10 and 10^14 bits. Thus, we have a limited storage capacity for the information we can process. Our mental phenomena can be validly described as discrete states, and we can be validly described as processing machines. I'm sure there will be those who cry: that can't be true, there is no way an emotional state of extreme anger or fear can be codified in discrete, calculable steps. Let me point out that very complex phenomena are computational and discrete. Take the exchange of information between a sperm and an ovum cell after sexual union. This process is computational, discrete, and understandable by the methodology of combinatorial mathematics. This process is the very stuff of life, I remind you. So, it should be no great leap to recognize that our mental states are computable, even though the algorithmic processes are not at present known.

Still, there are those who are informed and yet refuse to subscribe to this line of reasoning. But isn't that tantamount to being anti-scientific, or worse, plain anti-rationalist? I mean, if one maintains the notion that the human mind is somehow not knowable by computation, well, what is it knowable by? If you say: uh, well, it's not knowable in any conventional way, then you're being an immaterialist, nominalist, or phenomenalist, or, the worst of the bunch, a card-carrying, old-fashioned idealist, and that's very, very unhip! If that's not bad enough, you are rowing against the tide of scientific knowledge. I point out there isn't one advanced, educated person in this new century who would be willing to admit to that, right? Well, except maybe the religious, God-fearing camp.

I will assume that all the readers of this article follow the strong AI argument for a DSM being descriptive of us. I certainly accept this argument. What then? Naturally, we try to create a DSM. But first, we can theorize about the characteristics that this man-made DSM would have. It would have all the potential for mental processes that we do. With enough time and storage space, this artificial DSM should be able to process all the mental states we do. If that's the case, then it is inevitable that we ask: how can we know that our little Frankenstein is having the mental states we call being conscious? And that is where Mr. Turing enters the picture. He thought he had the answer to this question with the test he dubbed the imitation game.
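To put the state-counting claim in concrete terms (my arithmetic, applied to the figures above):

```latex
\text{A machine with } n \text{ binary storage elements has at most } 2^{n} \text{ distinct states.}
\text{With } n = 10^{14}: \quad 2^{10^{14}} = 10^{\,10^{14} \log_{10} 2} \approx 10^{\,3 \times 10^{13}},
\text{a finite, but astronomically large, state space.}
```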
