2012: Advent Philosophy: What’s so special about being human? Part 1

December 5, 2012

Philosophers are very fond of comparing computers to brains. It’s debatable whether this is actually valid*, but it does lead to a lot of debate about what makes humanity human, whether artificial intelligences can ever achieve true consciousness in the same way humans can, and what makes that special in the first place. Lieutenant Kusanagi of Ghost in the Shell puts it much more eloquently:

So, today I wanted to start with a classic counter-argument to the idea that a true artificial intelligence could ever exist: John Searle's Chinese Room.

Imagine that we lock my girlfriend in a bare room. All we give her is a notebook, a pencil, and a manual with instructions like “If you see this character followed by this character, write this other character.” Every once in a while, someone drops a note written in Chinese into her room. My girlfriend does not speak Chinese, so she has no idea what the messages mean, but based on the instructions we've given her she can still write down an appropriate answer. She tosses the answer out of the room, the unseen people on the other side are satisfied, and at some point they give her another note. (Also maybe some food.) In this situation, my girlfriend's ability to give appropriate answers in Chinese does not reflect any ability to understand the information she's processing.


No girlfriends were harmed in the making of this blog.

The Chinese Room is supposed to be an easy explanation of How Computers Work. The manual my girlfriend has is equivalent to the code programmers write for the computer. The code provides information on how to process specific types of information (Chinese characters, say) which are fed into it. As a result, the computer can react to those types of information in ways we think are appropriate, and it might look like the computer knows Chinese. We can expand the types of information it can react to – we could give my girlfriend equivalent manuals for German, Tagalog, and Morse code. This would make her seem even more intelligent**. We might eventually be tricked into thinking that behind the hole in the door there was a linguistic genius.

But the fact would remain that she still wouldn't understand the information she was processing. And neither does the computer. We have given our computers metaphorical “manuals” for a huge range of things, including producing their own manuals. But they still can't really speak Chinese.
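The manual-as-code analogy can be sketched in a few lines. This is a toy illustration of pure symbol lookup – the rules, messages, and replies here are all invented for the example, and a real system would need vastly more rules – but the principle is the same: symbols in, symbols out, with no understanding anywhere in the loop.

```python
# A minimal sketch of the Chinese Room "manual" as a lookup table.
# Neither the table nor the function below knows what any of these
# characters mean; it only matches shapes and copies out answers.

RULEBOOK = {
    "你好吗": "我很好，谢谢",      # "How are you?" -> "I'm fine, thanks"
    "你会说中文吗": "会一点",      # "Do you speak Chinese?" -> "A little"
}

def room_reply(note: str) -> str:
    """Match the incoming characters against the manual and copy out
    the prescribed answer. No step here involves understanding."""
    # Fallback for unrecognized notes: "Please say that again"
    return RULEBOOK.get(note, "请再说一遍")

print(room_reply("你好吗"))  # prints 我很好，谢谢
```

From the outside, the room's replies look fluent; on the inside, it's just a dictionary lookup.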

The question that naturally comes up at this point is – well, what's the difference, really? If we can give our computers such detailed instructions, and they can start coming up with their own instructions based on those, to the point where it's easy for us to forget that they're just processing the information without really “understanding” it, then what's the difference?

The answer is something called “qualia”, which we'll go over tomorrow. In the meantime, I will leave you with a picture of Lieutenant Kusanagi and the recommendation that really, really, you should watch the Ghost in the Shell movies if you find this stuff interesting. Great intro.

*I tend to think that if it is, a lot of modern philosophy needs to be handed off to neuroscientists. And this is why my Philosophy of Mind professor hated me.

**For the record, my girlfriend is intelligent. Brilliant! And creative. And beautiful. Please don’t kill me for using you in this example, sweetheart.

7 Responses to “What’s so special about being human? Part 1”

  • Jason says:

    That’s not an argument, it’s an arbitrary definition. It’s equivalent to saying “AI can’t exist because I say so”!

    • ZenGwen says:

      Well, the Chinese Room example is an argument. The definition part of it is “This is how computers work. They just process inputs and outputs.” The argument itself really comes from the fact that most people intuitively think that this definition doesn't constitute understanding something, and so for them it follows that computers aren't actually “intelligent”, as such.

      I agree with you, though. In the end this does usually come down to philosophers saying “AI can’t exist because I say so!” Don’t worry – I haven’t finished with this topic. 🙂

      • Jason says:

        But since all the brain does is process inputs and outputs, it’s an argument based on a simple misunderstanding. I look forward to the rest with interest 🙂

  • Sandy says:

    Here’s a take on it – In Comp Sci we have a conceptual test: the well-known Turing test. Admittedly, passing the test is down to the subjective judgement of the tester, as it comes down to a simple criterion – can you tell me whether you are talking to a computer or a person? – and this carries with it the assertion that sentient, intelligent beings would be able to recognize other sentient, intelligent beings using a suite of faculties that would be near impossible to codify in a more controlled experiment. (O.K., that’s a lie; it’s built more on the assertion that if a sentient, intelligent entity, using all their faculties, can’t tell the difference, then any further naysaying is pissing in the wind – in the words of Dr. Graystone of TV’s Caprica, “A difference that makes no difference is no difference”. However, for the sake of argument, please bear with me.)

    I’ll put aside the assumption that a person ought to be able to tell the difference between something that understands what it is talking about and something just faking it, and cut straight to an assertion that the Chinese Room, as a response to the Turing test, must make:

    It’s impossible to tell the difference between a black box (be it mechanical, electronic or biological) with real understanding and a black box sufficiently well programmed.

    If it were possible, then the Turing test would pick it up.

    I’d say that assertion has to mark the concept out as bunk, as it is an attempt to make assertions about a thing that the model itself declares unknowable.

    (And that says nothing of the fact that, given the rise of the non-deterministic algorithm, the model paints an inaccurate picture of the insides of the black box – it is, in fact, a straw man fallacy.)

    • ZenGwen says:

      Yeah. I always thought that our Philosophy classes would have benefited from the presence of some actual computer scientists, because so few people there really understood computers. I’d at least done some actual programming. Again – this is why my Philosophy of Mind professor hated me. (The Prof hating you for being knowledgeable about something which contradicts him is actually a GREAT way to learn, by the way.)

      But what he would say is: the difference is Qualia. Which I will talk about tomorrow. You and Jason are both going to hate it, for the same reasons I do, I think!

    • Dragon Dave says:

      I’m with Randall Munroe on this one: http://xkcd.com/810/
