essentialsaltes wrote, 2012-09-07 03:42 pm
Guilty Robots, Happy Dogs, by David McFarland
Subtitled The Question of Alien Minds, this book purports to investigate that question. Unfortunately, I wasn't too impressed. The author is an expert in animal behavior, who's gone on to work in robotics. Perhaps the best part of the book is the discussion of various interesting results in animal behavior, and some related results in human behavior.
For instance, if rats are deprived of water and then let into a special room where they can drink as much as they like, but where the temperature can be varied, they will drink less the hotter the room is. So in some way, there is a trade-off in comfort (or discomfort). If you take college students and let them play videogames in a room that gets colder and colder, they too will call it quits when the fun of the game no longer outweighs the un-fun of how cold the room is.
In either case, it seems to show that critters have an intuitive or instinctual grasp of 'economics'. Researchers are currently (very correctly) quite skeptical of claims of animal minds, and it's not too hard to see that certain 'reflexes' or procedural rules could produce the same behavior without any mentality being involved. But I wonder why human beings get a pass. Introspectively, I generally don't consciously decide when it's too cold; I just become aware that it's too cold. Whatever process is going on is unconscious.
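Just to make that concrete, here's a toy sketch of my own (nothing from the book, and the numbers and the `heat_weight` knob are made up for illustration) showing how a purely procedural rule, with no mentality anywhere, could generate the same trade-off behavior:

```python
# Toy sketch: a mindless rule-based "rat" that trades thirst against heat.
# Nothing here represents McFarland's models; it's just an illustration of
# how a simple stopping rule can look like 'economic' behavior.

def sips_taken(thirst, room_temp, comfortable_temp=20.0, heat_weight=0.5):
    """Keep drinking while the drive to drink outweighs heat discomfort."""
    sips = 0
    while thirst > 0:
        benefit = thirst  # the drive to drink falls as thirst is slaked
        discomfort = heat_weight * max(0.0, room_temp - comfortable_temp)
        if benefit <= discomfort:  # purely procedural stopping rule
            break
        thirst -= 1
        sips += 1
    return sips

# The hotter the room, the fewer sips -- no subjective experience required.
for temp in (20, 25, 30, 35):
    print(temp, sips_taken(thirst=10, room_temp=temp))
```

Run it and the number of sips drops as the room heats up, which is about all the 'economics' the observed behavior requires.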
Probably a necessary evil in the book is the repetition and explanation of various philosophical takes on behavior. But the unhappy result is that, as everything gets explained five different ways, the book turns into a kind of unfunny monologue like, "Functionalists drive like this, while realists drive like this."
And in some cases, McFarland just dodges the question of alien minds:
"[The putatively unconscious robot] may be programmed to refer to its machine self, but there would be nothing that it is like to be such a robot. However, there are some scientists and engineers that are confident that they can, and will, produce robots that are ‘conscious’ in the sense that they do have phenomenological awareness and ‘emotions’. The question is: do you want one? Would you buy one?"
No, I think the question of interest is whether it's possible or not. Thanks for telling us that experts are on both sides of the issue. Sorry you couldn't be bothered to flesh out the arguments on each side, much less pick a side.
Occasionally, the author seems to suggest that some sort of scientific experiment will be able to settle the question of whether animals have subjective experience (aka qualia). I just don't see how that's possible; the idea is faintly ridiculous [and ultimately, I think this reflects the fact that qualia don't exist (at least not in the way that trees and rocks exist)]. I mean, when you were in third grade and someone suggested that what you see as green is what she sees as red... it becomes clear that you really can't see what someone else sees as they see it. The only subjective experience you have any access to is your own. I'm not denying anyone their subjective experience, but if you had to demonstrate it in an experiment, you'd be in a pickle. You show someone a red card and they say they see red, but the neuroscientist shows you some MRI scans and says... yep, there's the visual center lighting up, and the red neuron is wiggling. That's all it is, just a reflex reaction of light-sensitive pigments in the eye relaying some information to the proper brain center... there's no redness there. It's just electrically fizzing meat. Sure, the poor sucker 'says' he sees red, but that's just some meat flapping. That's not data output from our science things.
This isn't entirely academic either. Sure, we now consider that all humans have basically the same subjective experience, but it wasn't always the case. Two hundred years ago, it was commonly believed that blacks were congenitally insensitive to pain. And to be sure, there are differences in how pain is rated, based on gender and race. But what does a study like that tell us? Does it tell us that the experienced pain from the 'same' stimulus was actually of different intensity, or that it was of the same intensity, but some people are more 'stoic' than others, and report it as less intense? Or some combination of those effects? Subjects can point at frowny faces, or speak a number referring to their pain, but we can't measure that subjective pain itself.
A related idea that occurred to me some time ago involves the Turing Test. Some people argue that a computer that could pass the Turing Test would still not have demonstrated that it was conscious. Strangely, I agree, but they're not going far enough. While humans can presumably pass the Turing Test, it doesn't prove that they have subjective experiences either.
TGIF