@ mods: this isn't a troll thread. About six or seven months ago, IIRC, Spiffor and I were engaged in a pretty good discussion on animal rights that boiled down to an argument about consciousness. We never finished, and rather than resurrect the thread (which is probably in the archives now) I'm just starting it off again, principally because I've arrived at a somewhat different (and simpler) argument than previously. While I know that there is a general rule against threads mentioning a specific poster, as I understand it this thread does not violate the spirit of the rule, it being an honest request for a serious discussion. I'm sorry if I've misunderstood the intent behind the rule.
[/disclaimer]
Ultimately, my case is that humans are sentient/self-aware/conscious (terms which I will use interchangeably) and that animals are not, except maybe our very close primate relatives. I'm going to lay down a number of axioms about the nature of consciousness and the nature of reality which seem reasonable to me.
First, I'd like to make a specific distinction between sensation and feeling, as I will use them. Sensation is simply what we call a stimulus to some object that acts as if it performed some sort of logical computation on its stimuli to determine its response; thus, a flower senses light but an electron does not sense an electric field. Fundamentally there is no difference between those two, to me, but it's a useful distinction for practical purposes. Feeling is what results when a conscious being senses things - for instance, your eyes don't merely sense light; you feel (or rather see) it as an actual image, in your mind. Unfortunately, it is extremely difficult or impossible to describe feeling and consciousness without talking in circles, but since (hopefully) you are also self-aware you should understand what I mean.
Pain is a sensation. When we sense pain, we also feel it, and since we dislike it strongly we attribute (in general) a negative moral value to it. However, at a basic level pain is merely one stimulus among many, one that happens to signal that some part of the body is damaged or in danger of becoming damaged. A robot with internal sensors to measure stress on its components could be said to sense pain when those sensors were triggered, but we would not attribute any negative moral value to that pain - in fact, I doubt anyone would have a problem if we "tortured" several of these robots to death in order to test that the pain sensors functioned properly and that the robots responded properly to them, because they're not self-aware. There is no negative moral value to pain in a nonsentient being - we routinely, constantly commit genocide against millions or billions of beings in a horrific way, by causing them to implode, but no one cares because they're bacteria.
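To make that robot example concrete, here's a minimal sketch in Python (all the names and numbers are hypothetical, just my own illustration): "sensing pain" for a nonsentient machine is nothing more than a number crossing a threshold plus a canned response.

[code]
# Hypothetical sketch: a nonsentient machine "sensing pain".
# "Pain" here is just a reading crossing a threshold, plus a programmed response.

DAMAGE_THRESHOLD = 0.8  # arbitrary stress level at which a part is at risk

def read_stress_sensors():
    # Stand-in for real hardware; returns stress per component (0.0 - 1.0).
    return {"left_arm": 0.3, "right_arm": 0.95, "torso": 0.1}

def respond_to_pain(component):
    # The robot's entire "reaction to pain": stop using the damaged part.
    print(f"Stress limit exceeded in {component}: disabling actuator.")

def control_loop():
    for component, stress in read_stress_sensors().items():
        if stress > DAMAGE_THRESHOLD:   # this comparison is the "sensation"
            respond_to_pain(component)  # and this is its whole response to it
control_loop()
[/code]

Nothing in there feels anything, which is exactly the point: sensing and responding to damage does not, by itself, carry any moral weight.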
My first assumption is that consciousness is observable. The point of this is that if we *cannot* observe, even indirectly (especially since there is no direct way to observe consciousness the way one can measure an electric field), that something is conscious, then we must conclude that it is not. While there's no particular reason why an electron couldn't be self-aware, there's no more reason to believe that electrons are self-aware than to believe that their surfaces harbor entire clans of dancing leprechauns (or that God exists). If we do observe whatever particular criteria we determine to be sufficient evidence of consciousness, then we must conclude that the being is self-aware. My criterion for this is that *any* being that communicates the concept of sentience, without having had the concept imparted to it by another sentient being (as in the case of an intelligent but nonsentient being that has read about consciousness), must be sentient. This is because I do not see any way an intelligent but nonsentient being could come up with the concept of sentience on its own.
My second assumption is that consciousness arises solely from the emergent behavior of the fundamental particles that make up our brains. By this I don't mean that we are the only possible conscious beings, but that reality is not dualist: there is no ethereal soul from another plane that decides to attach itself to our bodies. Effectively, we too are just machines, and it is some fantastic property of matter that consciousness somehow sometimes arises from its interactions.
My third assumption is that any physical system can be modelled with a Turing machine - the important result of this being that the behavior of any physical system can be predicted with arbitrary precision by an electronic computer with sufficient size and power (the specific requisite amounts of each being dependent on the particular system being modelled).
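As an illustration of what I mean by the third assumption (my own sketch, not part of the argument itself): even a few lines of ordinary Python can approximate a physical system - here a mass falling under gravity - to whatever precision you like, simply by shrinking the time step.

[code]
# Sketch: simulating a simple physical system (a mass falling under gravity)
# on an ordinary computer. Smaller time steps give arbitrarily better precision.

def simulate_fall(height_m, dt):
    """Step a falling object forward in time until it reaches the ground."""
    g = 9.81           # gravitational acceleration, m/s^2
    velocity = 0.0
    position = height_m
    time = 0.0
    while position > 0.0:
        velocity += g * dt         # update velocity from acceleration
        position -= velocity * dt  # update position from velocity
        time += dt
    return time

# The exact answer for a 10 m drop is sqrt(2*10/9.81), about 1.428 s;
# the simulated value approaches it as dt shrinks.
for dt in (0.1, 0.01, 0.001):
    print(dt, simulate_fall(10.0, dt))
[/code]

Obviously a brain is enormously more complicated than a falling rock, but the claim is only that the required computer grows with the system; nothing in principle stops the simulation.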
The first conclusion is from the first and second assumptions: human beings (in general; I will address the particular limits and exceptions later) are sentient. I know I am sentient. In addition, I know that there are other sentient humans, because of all the philosophers in the past who talked about it - by the criterion in my first assumption, beings who communicate the concept of sentience without having had it imparted to them must be sentient. Knowing from my second assumption that my sentience and theirs is the result of the construction of our brains, I begin to suspect that human brains are all fundamentally similar, in such a way as to make humans sentient. This is, I believe, reasonable.
The second conclusion is from the previous conclusion and all three assumptions: a computer can be sentient. Since a computer of sufficient size and power can simulate the behavior of a physical system containing a human being, it can simulate the behavior of that human being, and since consciousness is due only to the physical constitution of the human brain, it can simulate sentience. But then it would satisfy the criterion in the first assumption. The computer would, in effect, be a human being.
(Actually, I forget how I was going to use that conclusion in my argument.)
Finally, I conclude that animals, except possibly our close primate relatives, are not sentient, simply because there's no reason to believe that they share the fundamental similarity that human brains have, and because I've never had the concept of consciousness communicated to me in any way by them.
It may be objected that certain emotional displays of animals, especially mammals, suggest that they are sentient, but I see no necessary connection, either way, between sentience and such displays, especially given that it would be possible to create a relatively simple robot that would be furry and such and make identical displays while being incontrovertibly nonsentient. (This isn't currently possible mostly because we don't have access to quite as good motors and servos as nature has designed over millions of years.)
It may also be objected that certain animals recognize themselves in a mirror, and therefore must be self-aware. If the term self-awareness is used so literally, it is not interchangeable with sentience, since there is no necessary or even plausible connection between image recognition and sentience. For example, a program could be written that identifies an object in a camera view based on images and 3D data provided to the program at the beginning ("look for this"), and then this program could be loaded onto a robot which is also the object the program is set to recognize. However, the same program would recognize something else as self if one simply gave it the data of another robot. In addition, I think it's just generally absurd to claim that any good image-recognition software is by definition sentient.
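Here is a rough Python sketch of the kind of "self-recognition" program I have in mind (hypothetical, toy-sized, just to show the point): it calls "self" whatever template it was handed at startup, so loading another robot's data makes it "recognize" that robot as itself instead.

[code]
# Sketch: "self-recognition" as plain template matching. The program labels
# as "self" whatever it was told to look for; swap the template and it will
# happily recognize a different robot as itself. No sentience required.

def pixel_difference(image_a, image_b):
    # Sum of absolute differences between two equal-sized grayscale images.
    return sum(abs(a - b) for a, b in zip(image_a, image_b))

def make_self_recognizer(self_template, tolerance=10):
    # Returns a function that labels a camera frame "self" or "not self"
    # based only on similarity to the template provided here.
    def recognize(camera_frame):
        return pixel_difference(camera_frame, self_template) <= tolerance
    return recognize

# "This robot's" appearance vs. another robot's appearance (toy 4-pixel images).
robot_a_image = [10, 200, 30, 40]
robot_b_image = [90, 15, 220, 60]

recognizer = make_self_recognizer(robot_a_image)
print(recognizer(robot_a_image))  # True  - "that's me in the mirror"
print(recognizer(robot_b_image))  # False - unless we had loaded B's template instead
[/code]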
It has been objected in the past that sentience could have evolved gradually, but I don't even understand what the heck it would mean to be, say, "half sentient", and I don't think sentience is something on a continuum. In response to this, the question has been raised: "then how did humans suddenly, magically get it?" This is why I'm willing to concede that some primate relatives may be sentient; I think sentience may have evolved as an aid to empathy, allowing primates to create larger and stronger social organizations, and ultimately giving people the ability to form arbitrarily large social organizations, up to and including the concept of the human race as a whole. Such organizations are formed from the extension of the concept of self onto ever-larger bodies - my family, my tribe, my city, my nation, my race... and once you consider others as part of yourself, you consider their self-interest as part of your own, which is a very good trait for a society in evolutionary terms.