Personally, I find the entire "training vs. programming" debate absurdly lame. Who cares? The only difference is that training is indirect and works through self-programming via positive and negative stimuli, while programming is direct, "writing-to-metal". Learning is basically the "self-service" version of the first one, and I don't understand how the flexibility of nervous systems makes them holy or anything like that.
Queen bans fox hunting!
-
Originally posted by Kuciwalker
Why not? It seems to take its philosophical roots directly from social liberalism (or they both take their roots from the same source).
-
Not sure if anybody else has addressed these issues, but going back to the initial post, I'm not sure why the Queen is being talked about; it's her constitutional duty to introduce the bills chosen by the elected government, so I don't think she really has much say in the matter.
As for why it's taken so long for the Labour government to address this, that's because of the processes in Parliament. First they had to put together a law, but because Tony Blair wanted to compromise, the Labour party kept voting it down, and whenever they did reach agreement, the House of Lords would throw it out. The Lords are allowed to do this several times. In the end, it just wasn't classed as important enough to push through, so a private member's bill was put together, and again the Lords threw it out twice. Seven years later, the Parliament Act was finally used to push it through without the Lords' approval.
Personally, I don't agree with this law. It will lead to foxes being shot instead of hunted (as has previously been said) and is only serving to create divisions between the countryside and the cities. I also think the law is technically unworkable. I will be writing to my local police to ask them how much time they will waste on this issue, and whether they were given additional funding to chase the real criminals in our society... I think not.
"Wherever wood floats, you will find the British." - Napoleon
-
Originally posted by Azazel
Personally, I find the entire "training vs. programming" debate absurdly lame. Who cares? The only difference is that training is indirect and works through self-programming via positive and negative stimuli, while programming is direct, "writing-to-metal".
Computers and robots do not learn new skills; they can only be programmed. Programs cannot improve and enhance themselves in general, and certainly cannot develop new routines to perform tasks not written in the original code.
(\__/) 07/07/1937 - Never forget
(='.'=) "Claims demand evidence; extraordinary claims demand extraordinary evidence." -- Carl Sagan
(")_(") "Starting the fire from within."
-
Wait, so if there was a program that would develop itself (which is theoretically possible), it would suddenly acquire rights, feel pain, etc.? Crap-o-plenty.
After all, remember we learn through the evolution of neural connections, not because we "choose" to learn, since any such choice is in the neural connections as well, and amounts to tautology.
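For what it's worth, here is a minimal, hypothetical sketch (in Python, not anything from this thread) of what "self-programming via positive and negative stimuli" could look like: a program whose behaviour improves purely from reward feedback, with nothing about the best choice written directly into its code. All names and numbers are made up for illustration.
[code]
import random

# Hidden reward probabilities of two actions; the program never reads these directly.
TRUE_REWARD_PROB = [0.3, 0.7]

estimates = [0.0, 0.0]   # the program's learned value for each action
counts = [0, 0]

def stimulus(action):
    """The environment returns a positive (1) or negative (0) stimulus."""
    return 1 if random.random() < TRUE_REWARD_PROB[action] else 0

for step in range(1000):
    # Mostly exploit what has been learned so far, occasionally explore.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])

    reward = stimulus(action)

    # Incremental update: behaviour changes as a result of experience alone.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned estimates:", estimates)  # should approach [0.3, 0.7]
[/code]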
-
Originally posted by Urban Ranger
The question is whether these small variations have any meaning in the macroscopic level.
Since these small variations can occur in the womb, when the embryo consists of only a few cells, yes, these "small variations" (assuming you could get everything else exact) would have an effect! In fact, if you had read my post properly, you would have noticed that I mentioned that it's a chaotic system, and therefore small perturbations cause large variations.
Not quite. Our brain works sufficiently differently from a computer that it is rather meaningless to apply the term "computation" as we defined the term.
1) No it's not.
2) I'm not talking about computation in the sense of specifically a desktop computer. I'm talking about computation in the sense of, say, a liter of water - which computes the motions of all its particles. Every particle is a computer that computes/determines its own actions. Every system of particles is therefore a large number of computers running in parallel. These can be arranged to act as a single processor that takes inputs and gives outputs that are meaningful to people, but that's not what I'm talking about.
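To make that picture concrete, here is a toy, purely illustrative sketch: each "particle" computes only its own next state from simple local rules, and the collection of particles then amounts to a large number of tiny computers running in parallel, in the loose sense used above. The class, values, and rules are invented for the example.
[code]
import random

class Particle:
    """A tiny 'computer' that determines only its own motion."""
    def __init__(self):
        self.position = random.uniform(0.0, 1.0)
        self.velocity = random.uniform(-0.1, 0.1)

    def step(self, dt):
        self.position += self.velocity * dt
        # Bounce off the walls of a unit box.
        if self.position < 0.0 or self.position > 1.0:
            self.velocity = -self.velocity
            self.position = min(max(self.position, 0.0), 1.0)

particles = [Particle() for _ in range(1000)]
for _ in range(100):
    for p in particles:   # conceptually parallel; written serially here
        p.step(dt=0.01)

print("mean position:", sum(p.position for p in particles) / len(particles))
[/code]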
-
Originally posted by Urban Ranger
This is most definitely untrue, if "training" is "learning new skills from experience." No robot can do that. In fact, I am not sure if there are robots that can improve known skills from experience.
2) A sufficiently powerful robot certainly could learn new skills towards a certain goal. Humans are just very powerful computers with very general goals. Anyway, the lack of a robot that does this presently (I'm not sure that's true, but whatever) has nothing to do with what I'm saying. It's obviously possible.
Humans are also robots. What's your point?
I've said that humans are robots too! At least twenty times in this thread! READ THE ****ING THREAD!
This is wrong, as a computer microphone does not hear. It merely converts one form of energy into another.
Same with the human ear. It converts energy in a sound wave (it actually, IIRC, uses very little of the actual energy) into electrochemical impulses. You lose.
An animal does not need to be self-aware to be able to interpret signals in its head. A dog actually recognises different things, such as its master, other people, food, cats (nemesis), etc.
That's my point. It's OBVIOUS that dogs recognize those things; but I can have a robot that recognizes things. The problem is that a dog isn't aware of them, in the sense of cogito ergo sum.
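As a rough illustration of "a robot that recognizes things" without any inner awareness, here is a hypothetical sketch that labels an input by its nearest stored prototype - recognition in the purely mechanical sense meant here. The categories and feature values are made up.
[code]
# Made-up feature vectors (height in cm, weight in kg) for a few categories.
PROTOTYPES = {
    "master": (180.0, 75.0),
    "cat":    (25.0, 4.0),
    "food":   (10.0, 0.5),
}

def recognize(features):
    """Return the label of the closest stored prototype; nothing here is 'aware' of anything."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PROTOTYPES, key=lambda label: distance(features, PROTOTYPES[label]))

print(recognize((178.0, 80.0)))  # -> "master"
print(recognize((24.0, 5.0)))    # -> "cat"
[/code]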
-
Originally posted by Urban Ranger
How do you show that you are sentient?
I know that I am sentient - I'm aware of it! DUH!
That's not true. Or rather, the devil is in the details.
No two persons' "patterns" are the same - by "pattern" I am presuming you mean the arrangement of the neurons in your brain and their interconnections.
Now, even though the "pattern" of any two random persons are largely the same, it's the minor differences that cause the big differences in mental abilities.
I know this, that's what I was saying! The overall pattern, the overall mechanism of thought, is largely the same; therefore, even though at the very small level there are individual differences, they are practically similar.
-
Originally posted by Urban Ranger
Trees don't have nerves. Isn't that simple?
-
Originally posted by General Ludd
As planets or life are an emergent property of matter, they are also made of matter.
If consciousness is born of computation, it must also be made of computation. And so in that respect, saying "I am conscious" is in effect nothing more than acknowledging or confirming that your thoughts are calculated responses to stimuli - it's recognizing yourself for what you are. But being aware that your thoughts are computational responses does not mean that you have any control over them. And if it did, would they still be computational responses?
But if you don't think that other animals (except perhaps primates) are self-aware, then I again go back to my example of training a dog and ask how a dog is capable of leading a blind man down a street safely if it is not first aware of itself, its surroundings, how other beings perceive it, and how it can influence its environment.
Because it's a different type of awareness. In the sense you are using, a computer is "aware" of anything in memory. In the sense I'm using, you are "aware" in the sense of cogito ergo sum (yes, I'm using that a lot, but it's the only way I can express it, really). When you see, you aren't just "aware" (in your sense) of a matrix of nervous impulses; you actually see the picture in your mind (which is not quite the same thing as your brain).
-
Originally posted by General Ludd
As planets or life are an emergent property of matter, they are also made of matter.
If consciousness is born of computation, it must also be made of computation. And so in that respect, saying "I am conscious" is in effect nothing more than acknowledging or confirming that your thoughts are calculated responses to stimuli - it's recognizing yourself for what you are.
That assumes that the person has the logical abilities to identify that consciousness arises from computation - actually, more than that, because it is an empirical claim. Simply identifying oneself as conscious does not lead to acknowledging that one is deterministic (obviously, else we wouldn't be arguing over that point).
But being aware that your thoughts are computational responses does not mean that you have any control over them. And if it did, would they still be computational responses?
We do have control over them, because we do the computations.
-
Originally posted by Kuciwalker
But if you don't think that other animals (except perhaps primates) are self-aware, then I again go back to my example of training a dog and ask how a dog is capable of leading a blind man down a street safely if it is not first aware of itself, its surroundings, how other beings perceive it, and how it can influence its environment.
Because it's a different type of awareness. In the sense you are using, a computer is "aware" of anything in memory. In the sense I'm using, you are "aware" in the sense of cogito ergo sum (yes, I'm using that a lot, but it's the only way I can express it, really). When you see, you aren't just "aware" (in your sense) of a matrix of nervous impulses; you actually see the picture in your mind (which is not quite the same thing as your brain).
Read my example again. It shows that a dog can
1. Identify itself.
2. Identify its surroundings.
3. Recognize other (sentient) beings.
4. Perceive how those other beings perceive it.
5. Understand how its actions impact the surrounding environment, and other beings.
How can a dog do this without the sense of "cogito ergo sum", as you put it?
That assumes that the person has the logical abilities to identify that consciousness arises from computation - actually, more than that, because it is an empirical claim. Simply identifying oneself as conscious does not lead to acknowledging that one is deterministic (obviously, else we wouldn't be arguing over that point).
What I am saying is that the concept of consciousness or self, if determined through calculations, can be nothing but an affirmation that all thoughts - including that of the self - are results of similar computations. That, if your argument is to be followed, there can be no free will. A machine - whether it be a human, a fox, or a sci-fi robot - cannot control itself (or, more precisely, control the computations that make up "itself").
Rethink Refuse Reduce Reuse
Do It Ourselves
-
Originally posted by General Ludd
Read my example again. It shows that a dog can
1. Identify itself.
2. Identify its surroundings.
3. Recognize other (sentient) beings.
4. Perceive how those other beings perceive it.
5. Understand how its actions impact the surrounding environment, and other beings.
How can a dog do this without the sense of "cogito ergo sum", as you put it?
Because a computer can do all of those things too. Cogito ergo sum = "I think, therefore I am." I am aware, therefore I am. As I said, when a computer gets a feed from a camera, it is "aware" of a string of 1's and 0's. When a dog sees something, it is "aware" of a set of electrochemical impulses. When you see something, you are aware of the image itself. You see it, in your mind.
It's not a matter of individuals identifying that consciousness is a result of computations, it's a matter of what the implications are if that is true - that consciousness is, indeed, a result of computations.
What I am saying is that the concept of consciousness or self, if determined through calculations, can be nothing but an affirmation that all thoughts - including that of the self - are results of similar computations.
Which is what I said.
That, if your argument is to be followed, there can be no free will. A machine - whether it be a human, a fox, or a sci-fi robot - cannot control itself (or, more precisely, control the computations that make up "itself").
Yes, it can! It is controlled by itself. These computations which control it are done by itself.
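One way to picture "the computations which control it are done by itself" is a closed loop in which the same system both computes its next action and is then changed by that action. A hypothetical sketch, with all names and values invented for illustration:
[code]
# A machine "controlled by itself": the decision is computed from the machine's
# own state, and the action it computes then changes that same state.
state = {"temperature": 30.0, "threshold": 22.0}

def decide(s):
    """The machine's own decision rule - no outside controller."""
    return "cool" if s["temperature"] > s["threshold"] else "idle"

for step in range(10):
    action = decide(state)
    if action == "cool":
        state["temperature"] -= 1.5   # its own action feeds back into its own state
    else:
        state["temperature"] += 0.2
    print(step, action, round(state["temperature"], 2))
[/code]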
-
Originally posted by Kuciwalker
Read my example again. It shows that a dog can
1. Identify itself.
2. Identify its surroundings.
3. Recognize other (sentient) beings.
4. Perceive how those other beings perceive it.
5. Understand how its actions impact the surrounding environment, and other beings.
How can a dog do this without the sense of "cogito ergo sum", as you put it?
Because a computer can do all of those things too.
Cogito ergo sum = "I think, therefore I am." I am aware, therefore I am. As I said, when a computer gets a feed from a camera, it is "aware" of a string of 1's and 0's. When a dog sees something, it is "aware" of a set of electrochemical impulses. When you see something, you are aware of the image itself. You see it, in your mind.
It can't recognize a sentient being, neither can it put itself into that other being's "eyes" and understand how that being is perceiving it; likewise, it cannot understand or anticipate how other beings would react to its actions. That is awareness.
That, if your argument is to be followed, there can be no free will. A machine - whether it be a human, a fox, or a sci-fi robot - cannot control itself (or, more precisely, control the computations that make up "itself").
Yes, it can! It is controlled by itself. These computations which control it are done by itself.
Or do you suggest that there is something within us that is outside of these calculations - outside of our mechanical body - that is able to input extra information and influence the calculations that we make?
Rethink Refuse Reduce Reuse
Do It Ourselves