Culture: Automaton
Improved Turing Test   (+2, -2)  [vote for, against]
How to tell a Borg from a Cyborg

As most sentient beings know, the Turing test was proposed by Alan Turing as a means to determine a machine's ability to exhibit intelligent behavior. A human judge remotely engages in a natural language conversation with a machine, which tries to appear human. If the judge cannot reliably tell the machine from a human, the machine is said to have passed the test.

With all the smart machines around now (some of which could possibly pass the above test, but which are far from sentient), I think we need a more interesting Turing test. (First, a definition - Sentience: the ability to feel, or perceive, or be conscious, or have subjective experiences.) I propose this: A human judge remotely engages in a natural language conversation with a machine, which tries to appear human.* If the judge is convinced that the machine BELIEVES it is a sentient being, the machine is said to have passed the test.

* This should be amended to: tries to appear sentient - not necessarily human.
-- sqeaketh the wheel, Jun 10 2011
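For the mechanically minded, the difference between the two pass criteria can be contrasted in a toy Python sketch. Everything here (the `Scripted` agent, the judge callables) is invented purely for illustration; neither Turing's test nor this idea prescribes an implementation.

```python
# Toy sketch contrasting the two pass criteria. All names here
# (Scripted, the judge callables) are invented for illustration.

class Scripted:
    """A canned conversational agent: replies from a fixed script."""
    def __init__(self, lines):
        self.lines = lines

    def reply(self, prompt):
        # Pick a scripted line deterministically per prompt.
        return self.lines[len(prompt) % len(self.lines)]

def classic_pass(judge_pick_machine, machine, human, questions):
    """Turing's criterion: the judge does no better than chance at
    spotting which respondent is the machine."""
    correct = 0
    for q in questions:
        pair = {"A": machine.reply(q), "B": human.reply(q)}
        if judge_pick_machine(pair) == "A":  # machine is always "A" here
            correct += 1
    return correct / len(questions) <= 0.5  # chance level or worse => pass

def improved_pass(judge_believes, machine, questions):
    """Proposed criterion: the judge is convinced the machine BELIEVES
    itself sentient -- no comparison against a human at all."""
    transcript = [machine.reply(q) for q in questions]
    return judge_believes(transcript)  # a purely subjective verdict
```

Note the structural difference: the classic test needs a human foil and a discrimination judgement, while the proposed variant needs only the machine and a judge's verdict about the machine's self-belief.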

Britney Spears' Guide to Semiconductor Physics http://britneyspears.ac/lasers.htm
[sqeaketh the wheel, Jun 10 2011]

Chinese Room http://en.wikipedia.org/wiki/Chinese_room
Why the original Turing test is stupid. [DIYMatt, Jun 11 2011]

Run, [8th]! Run!
-- Grogster, Jun 10 2011


Cut the chit-chat. Shoot 3 of the buggers. If the 3rd one gets up, it's a Borg.
-- gnomethang, Jun 10 2011


So why is that any better than the previous test? Surely in both tests, it's all down to the quality of your judge's subjective powers for recognising sentience, isn't it? Sure, a clever judge in the original Turing test might adopt this approach in deciding whether they give their conversation partner the meat-tick or not, so I accept this as a valid judging strategy, but is it a whole new test? Not really.
-- zen_tom, Jun 10 2011


[zen-T] The proposal is supposed to incorporate the idea that intelligence or consciousness is self-referential (see blowhard books by D Hofstadter). A judge really has no way to determine if a machine IS sentient. The idea is that what matters is not that, but whether the machine THINKS it is. Then, for all practical purposes (FAPP), it is. If you don't follow that, I'm afraid you might be failing the original test. Where's my blaster?
-- sqeaketh the wheel, Jun 10 2011


What if none of us really pass the test, but are just chemical machines? Where's [nineteenthly] to philosophize on this?
-- RayfordSteele, Jun 10 2011


Come on then, [squeak], if you think you're tough enough to take us. We have adaptive shielding, you know.

And [gnome], that shoot-first-and-dodge-questions-later approach isn't going to get you anywhere (except perhaps rapid promotion within the Metropolitan Police).

The test fails at the simplest challenge: what if the testee truly believes it is intelligent and sentient, but is in fact dumb as a plank, like, for instance, Britney Spears?
-- 8th of 7, Jun 10 2011


[87] Do you mean you are not familiar with "Britney Spears' Guide to Semiconductor Physics"? <link>
-- sqeaketh the wheel, Jun 10 2011


I can't see how this works at all, let alone how it is in any way better than the original Turing test.

The point of the test is to establish a benchmark for determining a computer's ability to exhibit intelligent behavior. Is sentience a necessary attribute of intelligent behavior? I'm not sure that its presence is an indicator of said behavior. So I think your test is looking for the wrong thing - that is, if it endeavors to achieve the same goal as the original test.

If sentience is a part of intelligence, it is only a part, and I believe it would be a much simpler task to design a computer system that feigns only that part.

Also, in the original test concept, the human judge is engaged with another human as well as a machine, and the goal is to sort out which is which. Do you intend to keep this arrangement?
-- tatterdemalion, Jun 10 2011


//With all the smart machines around now (some of which could possibly pass the above test, but which are far from sentient), I think we need a more interesting Turing test.//

That's cheating.

An aside: there's a guy in my lab who works on the behaviour of nematodes, trying to understand it at the level of individual cells and, eventually, individual molecules. Every time he starts to understand one type of behaviour in this way, it stops being behaviour - people just call it chemotaxis or whatever.

Your proposal is that, since there are machines which can appear human within a limited framework, it's time to make the test harder.

By this reasoning, there will never be a sentient machine, since you will just invent more subtle and idiosyncratic tests to differentiate it from humans.
-- MaxwellBuchanan, Jun 10 2011


//By this reasoning, there will never be a sentient machine//

Wrong. By the new test, there will be new sentient beings recognized, depending on who is judging. That is the point - no test like this can be absolute, even regarding Britney Spears. The test does not replace Turing's test, it is a variant for a different purpose.
-- sqeaketh the wheel, Jun 10 2011


Not an improvement, not even a variant: it's the opposite of what Turing proposed*. It puts the subjective judgements about inaccessible mental states back in.

*Admittedly, that's an improvement if you disagree with Turing.
-- mouseposture, Jun 11 2011


I've always felt that the Turing test is merely an exercise in human arrogance. As though the only measure of intelligence or sentience is the ability to imitate a human.
-- CyberCod, Jun 11 2011


Everybody should go read "On Intelligence" by Jeff Hawkins right now. RIGHT NOW! It's a pretty fascinating book that addresses exactly why this idea is much better than the original (stupid) Turing Test.
-- DIYMatt, Jun 11 2011


The Turing test is a stupid answer because it's a stupid question. Presumably, that was Turing's point.
-- mouseposture, Jun 11 2011


// there are machines which can appear human within a limited framework //

Conversely, there are humans that can't quite manage "human" in any framework whatsoever, even when hanging upside-down from said framework and eating a banana.
-- 8th of 7, Jun 11 2011


// Everybody should go read "On Intelligence" by Jeff Hawkins right now. RIGHT NOW! //

Agree. It is one of the best books out there, and deserves multiple reads. The (unstated) analogies between his ideas on how the mind/brain works and how physics (physics theories) works jump right out at you. (That is, everything is perceived as a model or through a model-building exercise.)
-- sqeaketh the wheel, Jun 11 2011


//I've always felt that the Turing test is merely an exercise in human arrogance. As though the only measure of intelligence or sentience is the ability to imitate a human.//

Notice that my improved test does not require the machine to imitate a human.
-- sqeaketh the wheel, Jun 11 2011


//Cut the chit-chat. Shoot 3 of the buggers. If the 3rd one gets up, it's a Borg.//

And if it runs before you shoot it, it's Sentient?
-- Ling, Jun 11 2011


Based on the way humans behave, it's sentient if it shoots you first.
-- 8th of 7, Jun 11 2011


//Based on the way humans behave, it's sentient if it shoots you first.//
KETTLE! - Looking a bit black over there, my son!
-- gnomethang, Jun 11 2011


... says a self-confessed member of the species that invented such evergreen favourites as "Isandlwana", "Custer's Last Stand", "Omaha Beach" and the classic travel movie "Ypres to Passchendaele; six miles, six months, and six hundred thousand dead".
-- 8th of 7, Jun 11 2011


