February 16, 2006

HARDLY SEEMS FAIR TO CRITICIZE HIM FOR FAULTS ENDEMIC TO SCIENCE:

The Trouble with the Turing Test (Mark Halpern, Winter 2006, New Atlantis)

The part that has seized our imagination, to the point where thousands who have never seen the paper nevertheless clearly remember it, is Turing’s proposed test for determining whether a computer is thinking—an experiment he calls the Imitation Game, but which is now known as the Turing Test.

The Test calls for an interrogator to question a hidden entity, which is either a computer or another human being. The questioner must then decide, based solely on the hidden entity’s answers, whether he has been interrogating a man or a machine. If the interrogator cannot distinguish computers from humans any better than he can distinguish, say, men from women by the same means of interrogation, then we have no good reason to deny that the computer that deceived him was thinking. And the only way a computer could imitate a human being that successfully, Turing implies, would be to actually think like a human being.

Turing’s thought experiment was simple and powerful, but problematic from the start. Turing does not argue for the premise that the ability to convince an unspecified number of observers, of unspecified qualifications, for some unspecified length of time, and on an unspecified number of occasions, would justify the conclusion that the computer was thinking—he simply asserts it. Some of his defenders have tried to supply the underpinning that Turing himself apparently thought unnecessary by arguing that the Test merely asks us to judge the unseen entity in the same way we regularly judge our fellow humans: if they answer our questions in a reasonable way, we say they’re thinking. Why not apply the same criterion to other, non-human entities that might also think?

But this defense fails, because we do not really judge our fellow humans as thinking beings based on how they answer our questions—we generally accept any human being on sight and without question as a thinking being, just as we distinguish a man from a woman on sight. A conversation may allow us to judge the quality or depth of another’s thought, but not whether he is a thinking being at all; his membership in the species Homo sapiens settles that question—or rather, prevents it from even arising. If such a person’s words were incoherent, we might judge him to be stupid, injured, drugged, or drunk. If his responses seemed like nothing more than reshufflings and echoes of the words we had addressed to him, or if they seemed to parry or evade our questions rather than address them, we might conclude that he was not acting in good faith, or that he was gravely brain-damaged and thus accidentally deprived of his birthright ability to think.

Perhaps our automatic attribution of thinking ability to anyone who is visibly human is deplorably superficial, lacking in philosophic or scientific rigor. But for better or worse, that is what we do, and our concept of thinking being is tightly bound up, first, with human appearance, and then with coherence of response. If we are to credit some non-human entity with thinking, that entity had better respond in such a way as to make us see it, in our mind’s eye, as a human being. And Turing, to his credit, accepted that criterion.

Turing expressed his judgment that computers can think in the form of a prediction: namely, that the general public of fifty years hence will have no qualms about using “thinking” to describe what computers do.

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Note that Turing bases that prediction not on an expectation that the computer will perform any notable mathematical, scientific, or logical feat, such as playing grandmaster-level chess or proving mathematical theorems, but on the expectation that it will be able, within two generations or so, to carry on a sustained question-and-answer exchange well enough to leave most people, most of the time, unable to distinguish it from a human being.

And what Turing grasped better than most of his followers is that the characteristic sign of the ability to think is not giving correct answers, but responsive ones—replies that show an understanding of the remarks that prompted them. If we are to regard an interlocutor as a thinking being, his responses need to be autonomous; to think is to think for yourself. The belief that a hidden entity is thinking depends heavily on the words he addresses to us being not re-hashings of the words we just said to him, but words we did not use or think of ourselves—words that are not derivative but original. By this criterion, no computer, however sophisticated, has come anywhere near real thinking.


Posted by Orrin Judd at February 16, 2006 11:29 AM
Comments

There are lots of problems with Turing's conception of consciousness. Here's one of mine: It is a necessary characteristic of Turing machines that any Turing machine can mimic any other Turing machine. Therefore, if any Turing machine is capable of consciousness, then all Turing machines are capable of consciousness.

My PC is a Turing machine. Should I have a crisis of conscience every time I turn it off?

Posted by: David Cohen at February 16, 2006 11:47 AM

How is your PC a Turing machine by that definition?

Posted by: oj at February 16, 2006 11:52 AM

I've always thought that blogs & their comments would make a pretty good Turing test. And I'm pretty sure I'm not the first--for example, the "bart" program that regularly appeared here for a while followed a fairly simple algorithm:
1) Read in a post from the main site
2) Look up whichever nationality and/or ethnicity was mentioned in the post in a library of standard stereotypes and slurs
3) Post "personal anecdote" (taken from lookup table) about visit to region inhabited by the mentioned people.
4) Add stereotypes and slurs, and note that said group of people should all be killed.
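The steps above could be sketched in a few lines of Python. Everything here is hypothetical reconstruction: the table entries, names, and matching logic are invented for illustration, with the offensive content replaced by placeholders.

```python
# Hypothetical sketch of the "bart" algorithm described above.
# The lookup table and all names are invented for illustration.
STEREOTYPES = {
    "french": "I once spent a week in Paris. [stereotype omitted]",
    "german": "When I was in Munich on business... [stereotype omitted]",
}

def bart_reply(post_text):
    """Steps 1-4: scan a post for a known group, then emit the canned
    'personal anecdote' and the invariable closing line."""
    text = post_text.lower()                      # step 1: read in the post
    for group, anecdote in STEREOTYPES.items():   # step 2: table lookup
        if group in text:
            # steps 3-4: anecdote plus the standard conclusion
            return f"{anecdote} They should all be killed."
    return None  # no recognized group mentioned; stay silent
```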

Posted by: b at February 16, 2006 12:07 PM

Oops, my bad. "Turing Machine" is a name for a type of machine that follows a series of orders one after another. All digital computers are Turing machines. It has nothing to do with whether the computer, in following those orders, can pass the Turing Test.

Posted by: David Cohen at February 16, 2006 12:08 PM

b: But who was fooled?

Posted by: David Cohen at February 16, 2006 12:12 PM

David: It was obviously a very elementary program, hence its withdrawal by the programmer. Presumably its purpose at this site was to acquire information for tuning the next generation...

Posted by: b at February 16, 2006 12:20 PM

Mr. Cohen;

A Turing Machine is a computational device with a very specific structure. For instance, an automatic loom is a machine that follows instructions, but it is not a Turing Machine. What makes a TM interesting is that every computation we know how to do can be done on a TM. In the field, "computable" really means "computable by a TM". This may change with quantum computers, but currently we cannot build any device that can compute something a TM cannot.
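To make that "very specific structure" concrete: a TM is just a state, a head, an unbounded tape, and a transition table. A minimal simulator in Python (the machine and its transition table are invented for the example) might look like:

```python
def run_tm(transitions, tape, state="q0", accept="halt", max_steps=10_000):
    """Run a Turing Machine. `transitions` maps (state, symbol) to
    (new_state, symbol_to_write, head_move), with head_move in {-1, +1}."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank "_"
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return "".join(cells[i] for i in sorted(cells)).strip("_")
        symbol = cells.get(head, "_")
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    raise RuntimeError("step limit exceeded")

# A toy machine that flips every bit, then halts at the first blank.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", +1),
}
```

Here `run_tm(flip, "0110")` yields `"1001"`. An automatic loom has the instruction-following but not the read/write tape and state transitions, which is what disqualifies it.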

As for turning off your PC, would you be concerned about breaking a dummy even if it had the same chemical composition as a human?

Posted by: Annoying Old Guy at February 16, 2006 12:22 PM

For a rationalist/materialist what's the difference between a dummy and a human of identical chemical composition?

Posted by: oj at February 16, 2006 12:29 PM

One of my kids did a science project last year on the Turing Test, using a "chatbot" program written specifically to mimic human conversation. (The link is to the website of the author of one of the chatbots we used; there's some fascinating stuff there, and the gentleman was quite helpful and encouraging to my son.) The chatbot is a delightful little toy, but it's a long way from being able to hold its own for more than a few minutes at a time--though, as a "learning" program, it gets better the more you use it.

Posted by: Mike Morley at February 16, 2006 12:35 PM

David:

Well, if we're going to be picky, a Turing machine has unlimited memory, so your PC is merely a Finite State Machine (FSM). Even if the Weak AI Hypothesis (that a Turing machine can be intelligent) is true, it may be that a PC lacks the capacity.

Posted by: Mike Earl at February 16, 2006 12:37 PM

Another problem with the Turing test is that some human beings are easier to imitate than others. I have relatives whose mental processes can be 96% emulated under MS-DOS by setting the prompt to "HUH>". (And, the town dump refused to take both my relatives and my old 386.) But somehow I don't feel that they're at all equivalent.

Posted by: Bob Hawkins at February 16, 2006 12:39 PM

AOG:

Actually, interestingly - and sorry to everyone for getting pedantic here about one of my favored branches of mathematics - quantum computers (at least of the normal sort) can't actually solve any problems a Turing machine can't.

Now, to be fair, what you said is true in practice - there is a class of interesting problems that are impractically slow on Turing machines and fast on quantum computers (e.g., breaking common codes). But any problem solvable by a quantum computer is solvable in theory on a Turing machine, and problems unsolvable on Turing machines (e.g., the Halting Problem) cannot be solved by quantum computers, either.

Posted by: Mike Earl at February 16, 2006 12:50 PM

b: Heh. Yeah, I was always fascinated with how Bart ALWAYS had a personal anecdote that defined for him how everything should be viewed.

Posted by: RC at February 16, 2006 12:54 PM
For a rationalist/materialist what's the difference between a dummy and a human of identical chemical composition?
Information content. But as far as I know, Mr. Cohen isn't a rationalist/materialist, making the question and answer moot.

Posted by: Annoying Old Guy at February 16, 2006 2:17 PM

Mr. Earl;

That is my view as well (because current quantum designs are just non-deterministic finite state automata), but there are serious researchers who claim that quantum computations can be more powerful than a Turing Machine (Roger Penrose being one). Whether that claim is true is yet to be determined. We simply do not understand enough about QCs to be able to say definitively at this time.

P.S. Do you think if we discussed P and NP problems and their relationship to TMs and QCs long enough, we could make OJ's head explode?

Posted by: Annoying Old Guy at February 16, 2006 2:24 PM

Mmmm, a non-deterministic finite state automaton sounds like a contradiction in terms. Got a link, AOG? Don't know much about QCs but they sound interesting.

Posted by: joe shropshire at February 16, 2006 3:03 PM

Do you think if we discussed P and NP problems and their relationship to TMs and QCs long enough, we could make OJ's head explode?

You've already got mine spinning.

Posted by: Mike Morley at February 16, 2006 3:21 PM

Joe:

Here's an unnecessarily technical wikipedia article.

Essentially, a FSM is just a flowchart, and a nondeterministic one can make decisions based not only on inputs but on coin-flips (or die rolls, if you like).

Posted by: Mike Earl at February 16, 2006 3:44 PM

Non-deterministic finite state automata. Basically, an NFA is a machine that can be in an arbitrary set of states at the same time. It's very similar to the many-worlds theory, in that an NFA never makes a choice; instead it does every possible state transition at the same time. You can think of an NFA as a deterministic finite state automaton that "splits" into multiple copies whenever a program branch is encountered.
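That "splitting" is easy to simulate: just track the set of states all the simultaneous copies occupy. A toy sketch in Python (the automaton itself is invented for the example; it accepts binary strings ending in "01"):

```python
# Simulate an NFA by tracking the set of states its "copies" occupy.
TRANSITIONS = {
    ("s0", "0"): {"s0", "s1"},  # on "0" the machine splits: one copy stays put
    ("s0", "1"): {"s0"},
    ("s1", "1"): {"s2"},        # the advanced copy reaches the accept state
}

def nfa_accepts(string, start="s0", accepting=frozenset({"s2"})):
    states = {start}  # the set of "worlds" the machine is currently in
    for ch in string:
        # take every possible transition from every current state at once
        states = set().union(*(TRANSITIONS.get((s, ch), set()) for s in states))
    return bool(states & accepting)
```

Note the simulation is deterministic once you track the whole set, which is the classical way of showing an NFA is no more powerful than a DFA, just more compact; the quantum analogue trades the set for a superposition.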

Current efforts in quantum computing are trying to replicate this ability using quantum superposition. Mike Earl is correct that such a QC is not fundamentally more powerful than a TM, just faster. It is simply my personal preference to be a bit more cautious in making assertions about the ultimate results of new fields of science and technology. There is some speculation about "hypercomputation," but none that is known to be physically realizable, even approximately (as your computer is an approximation of a TM).

Posted by: Annoying Old Guy at February 16, 2006 3:45 PM

AOG:

Heh - but Penrose is an interesting case. He's trying to answer OJ's objection: "Materialism implies that people are automata" by positing that people are some sort of magical quantum super-turing-machine.

Frankly, I don't buy it, but I'm not sure I understand it, either...

Posted by: Mike Earl at February 16, 2006 3:47 PM

AOG:

If the chemical composition is identical why wouldn't the information content be?

Posted by: oj at February 16, 2006 3:50 PM

Cool, thanks Mike.

Posted by: joe shropshire at February 16, 2006 3:55 PM

Emperor's New Mind was a decent book, but at bottom turned out to be just a lot of handwaving because he couldn't stand the conclusion logic was forcing on him.

Posted by: David Cohen at February 16, 2006 3:58 PM

Of course the Turing Test can establish that some thinking is going on. The question is who is doing the thinking. If a computer system passes the Turing Test, that might mean that the programmer was doing the thinking.

Posted by: Joseph Hertzlinger at February 16, 2006 5:12 PM

Geez, a set up like that combined with all the grief I'm taking today from our resident programmers, and still I won't rise to the bait. I'm a Saint; a Saint, I tell ya.

Posted by: David Cohen at February 16, 2006 5:25 PM

Joseph:

Note that humans pass the Turing Test.

Posted by: oj at February 16, 2006 5:35 PM
If the chemical composition is identical why wouldn't the information content be?
No. Try giving your wife a pile of graphite instead of a diamond with the explanation "the chemical content is the same".

Here's another way to think of it. Take the hard disk in your computer out and run an industrial strength magnet over it. You haven't changed the chemical composition, but do you think the information content is the same as it was?

Actually, OJ, you should really study information theory. It would fit your theology quite nicely. Just think of "The Word" as "information" and you'd be set. What's the difference between a pile of chemicals and a human? The breath of life (information) breathed into it by The Word. Best of all, quantum information can't be duplicated, making every person truly unique.

P.S. I dropped a comment here earlier with some links, but it got eaten by the spam protection. For non-TM computation, netsearch for "hypercomputation". Some speculative theory, but nothing physically realizable, even in approximation (as your computer is an approximation of a TM).

Non-deterministic finite state automata are like the "many worlds" theories. Whenever a decision has to be made, the NFA "splits" and makes all choices simultaneously. QCs use quantum superposition to do the same thing, which is why QCs and NFAs are so similar.

Posted by: Annoying Old Guy at February 16, 2006 8:59 PM

AOG:

Sure you can fiddle the electro-magnetics a bit but assume they're identical?

Posted by: oj at February 16, 2006 9:06 PM

(Hrm, apparently AOG's definition for NFA is the standard one these days, and the ones I was thinking of are called something else, but that's neither here nor there...)

AOG:

Well, yes, I was assuming that Quantum computers would just be a physical implementation of (currently theoretical) NFAs. If they turn out to be something else entirely... but the toy ones we have now work that way.

Information theory is fine stuff. I have an outline for an essay using it to argue that media bias is inescapable (because any news report is lossy compression, which implies a world-model, which implies a political viewpoint...)

Posted by: Mike Earl at February 16, 2006 9:39 PM

Oh, God, it's lex. Thanks AOG, that will do for me.

Posted by: joe shropshire at February 17, 2006 12:11 AM
Sure you can fiddle the electro-magnetics a bit but assume they're identical?
I have no idea what you are asking here. Your hard disk's information content is essentially the result of fiddling the electro-magnetics a bit. It is that information content, that fiddling of the electromagnetics, that distinguishes a disk capable of making your computer work from one that isn't. I will reiterate because this is the key point - it is purely and solely the arrangement of those electromagnetic fields that distinguishes a "live" disk from a "dead" one. Not the materials in the disk, but purely the arrangement of them.

In exactly the same way, it is the arrangement of the materials, the information of that arrangement, that distinguishes a human being from a pile of equivalent chemicals. If I were to disrupt that arrangement by, say, dropping you in an industrial shredder (as all materialists secretly long to do to believers), then you'd turn into a pile of bio-chemicals instead of a person despite the lack of change in your chemical composition.

Posted by: Annoying Old Guy at February 17, 2006 12:53 AM

But rearrange the materials so they're identical again and you've got me, no? That's the materialist version, right?

Posted by: oj at February 17, 2006 12:58 AM

You can't duplicate the arrangement because you cannot observe the relationships precisely enough without affecting what is observed.

Posted by: ted welter at February 17, 2006 10:00 AM

Possibly. Quantum theory may prevent such a re-arrangement.

However, your original question was how a pile of chemicals differed from a human being in the materialist view. Hopefully I've answered that one.

Posted by: Annoying Old Guy at February 17, 2006 11:01 AM

No. You haven't.

Posted by: oj at February 17, 2006 11:14 AM

ted:

Observation though is immaterial.

Posted by: oj at February 17, 2006 11:18 AM

i often wonder, what changes, as an individual passes from living to non-living, like a genie being released from its bottle.

Posted by: toe at February 17, 2006 12:15 PM