2012: Advent Philosophy: What’s so special about being human? Part 2

December 6, 2012

A couple of people, unsurprisingly, have taken issue with The Chinese Room. (Hooray! We are succeeding at Philosophy!) The question that we finished off with was – what’s the difference? If a computer can become so complex as to convince us that it really understands rather than just blindly processing information, why shouldn’t we say that it does understand? This seems like philosophers are just imposing an arbitrary definition because they don’t like that computer scientists came up with a good definition of sentience before they did.

But the philosophers have come up with a solution. Although modern philosophers have more or less stopped defending the notion of a soul, there has always been a divide between those who believe that things which don’t physically exist can still exist in another way, and those who don’t believe that. Thousands of years of talking about souls is a hard habit to break. It started with Descartes suggesting that maybe the soul and brain interfaced in the pineal gland, which was pretty advanced for the 1600s! Now we have progressed to… drumroll please…. qualia.

Qualia are supposed to be the unique mental qualities associated with experiences. The ability to have qualia is dependent on the physical experience-sensing machinery of the brain, but it is not the same as this machinery, and that machinery being present does not necessarily mean that the being it’s present in can have qualia. Philosophers would say that the ability to have qualia supervenes on the physical properties. Let’s break out the Venn diagrams:

Actually, I have no independent verification that Karl qualifies.

Let me guess: this is still seeming like an arbitrary definition, right? Then let’s go through another popular thought experiment.

Fine wine is big money. Connoisseurs will pay a lot for the most subtle, fine wines. Practically every wine, including the cheap £5 white I had with dinner, has a flowery description on the back. Dry and subtle, with undertones of crisp apple! What an interesting bouquet! Try it with the most expensive fish you can buy. The same is true of most other alcohols; substitute in your favorite tipple here. Personally, I favour a peaty single malt whisky, with a bit of water added to unlock the flavors. Now, we should be able to map all of these subtleties of flavor onto the physical properties of the alcohol. Given enough funding from vintners, it would be possible to build a machine which very accurately analyses these properties and comes back to us not only with a flowery description to put on the bottle, but also suggestions of some foods to try with it, whether it would taste best chilled, and so on. Everything that I would be able to tell you about my favorite malt. If we plugged it into a nice sociable AI, that AI could chat Highland vs. Speyside whiskies with me.

So it wouldn’t be totally irrational to say that this machine experiences the same thing we do when we drink alcohol. And yet something seems like it’s missing. Intuitively, it’s somehow just not the same. I kind of pity the AI. So close, and yet so far!

What’s missing here is qualia. Because the AI doesn’t have qualia, it can’t get the full experience. And the reason that it doesn’t have qualia is that it is not a sentient mind like I am.

Bender disagrees.

Well, that’s the argument. That is supposed to be what distinguishes artificial intelligences from natural intelligences. I have to be honest here; I think it’s bull. Neuroscience is making it increasingly clear that our brains really are just incredibly complex processors of inputs and outputs, and the “Computer = Brain” metaphor is valid, even if we don’t have a computer anywhere near as powerful as a human brain yet. I think a heck of a lot of Philosophy of Mind needs to be handed off to neuroscientists and programmers. But the problem is, that would put a lot of philosophers out of work, so you’re getting a bunch of old guys going LALALALALA HUMANS ARE SUPER-SPECIAL AND MY FIELD IS STILL RELEVANT I CAN’T HEEEEAAAAR YOOOOOUUUU!

Philosophical genius.

Tomorrow I’m going to start a couple days of profiling really cool and lesser-known philosophers, starting with the guy who does the best job of pointing out exactly why all this qualia stuff is bunk.

4 Responses to “What’s so special about being human? Part 2”

  • Sandy says:

    If philosophers want to make qualia part of the human condition, then part of the Turing Test would have to be the ability to convincingly discuss qualia.

    If a given black box can convincingly discuss qualia (we do not know whether it is an organic entity in the box or not), can talk about how it feels the warmth and relaxant qualities of beer, and can give reasoned preferences (I just love the strong malty flavor that tends to come from a ruby ale, not so keen on the charcoal flavor that can accompany some darks), on what basis do we decide whether or not that black box is experiencing qualia?

    Another question – It’s reasonable to presuppose that any Turing-passing machine may have different sensory inputs from humans. Unlikely to have a nerve-saturated membrane covering whatever physical structure houses it. Likely to be able to accurately observe almost all wavelengths of light and types of energy. Likely to have all of human knowledge directly accessible, if only it can navigate the archive. If that AI can start talking about patterns in background radiation, or the structural patterns of data organization on the world wide web, in a similar fashion to the way we talk about beer, or The Bible, or that band we saw playing last week, where does that leave the concept of qualia?

  • Jason says:

    In a way, I think you’re being a little too harsh on the idea of qualia 🙂

    It seems like a perfectly sensible (though unnecessarily fancy) description of an emergent property of a complex system; it’s just not an argument against the possibility of AI.

    In fact, I’d argue it’s already in evidence with quite simple computational models. How would you define qualia to exclude the pattern that leads a neural network to recognise a hand-written digit “2”?

    At the very least, though, the innermost set of your Venn diagram should include “all mammals, all birds, and octopuses” 🙂

    • ZenGwen says:

      Well, maybe octopuses have qualia. I wouldn’t be too surprised! But with most animals, it’s hard to know, as they can’t exactly testify to it, and I think the consensus is that they probably don’t have the required subtlety of thought. To be able to have qualia you need to not just be able to do things, but be able to reflect on what it’s like to have done them.

      I’m not sure I understand what you mean by the “recognizing a hand-written 2” example. Having qualia doesn’t necessarily follow from the ability to recognize a hand-written digit, although being able to do that is necessary to have qualia (that’s what the Venn diagram was supposed to get across). The qualia of recognizing it would be the mental/emotional feeling of experiencing that recognition. How doing that seems to you.

      I think it would be natural to say, for example, that no computer will ever be able to quite replicate the experience of recognizing your toddler’s scribble as the digit “2”. Intuitively, artificial minds just couldn’t reach that level of subjective emotional experience.

  • Jason says:

    I mention octopuses as they were specifically mentioned in the “Declaration of Consciousness” earlier this year: http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf

    (thinking about the rest still)
