I propose that everyone gets communication equipment implanted in their heads. That way, everyone would be able to access Apolyton 24 hours a day (even while sleeping), and thus everyone would obviously be happier.
Prove to Stefu that your ideal society would result in the greatest good for the greatest number
-
My ideal society would be focused entirely on the survival of the human race. Probably utilitarian most of the time, to keep people from self-destructing, but they're not getting lollipops if there's an asteroid headed for Earth.
Nuclear-armed nations are a threat to human survival. Solution: Erase nations.
Economically, pragmatic.
-
Hmph. I say you're all looking at it the wrong way.
From a utilitarian perspective, the ideal society would mean:
All the matter of the universe has been converted into gigantic and optimally efficient (quantum?) computers. On these computers, minds are simulated in very happy situations. Or, you could just "stimulate their pleasure centers" directly.
This means I have all of you beaten by a factor of maybe 10^50 or 10^100 or infinity (this depends on physics).
It also follows that for a utilitarian, this goal outweighs all other considerations if there is a non-zero probability that you're able to influence it (and there is).
I'm no longer a utilitarian though. I've never seen any proof of utilitarianism. This bothers me. Though all other moral systems I've seen suck even more. And most of the arguments used against utilitarianism are ridiculous. ("utilitarianism says X is moral. X is immoral according to my moral system. Therefore, utilitarianism is wrong")
-
That scenario of yours would take a long time to complete, and we can't predict what calamities could occur while we're trying to do it. Therefore, it's more reasonable to just try to make society a better place.
And even if you are striving for your scenario to happen, it doesn't stop you from applying the utilitarian principles to smaller things in life.
"Spirit merges with matter to sanctify the universe. Matter transcends to return to spirit. The interchangeability of matter and spirit means the starlit magic of the outermost life of our universe becomes the soul-light magic of the innermost life of our self." - Dennis Kucinich, candidate for the U. S. presidency
"That’s the future of the Democratic Party: providing Republicans with a number of cute (but not that bright) comfort women." - Adam Yoshida, Canada's gift to the world
-
Originally posted by Stefu
That scenario of yours would take a long time to complete, and we can't predict what calamities could occur while we're trying to do it. Therefore, it's more reasonable to just try to make society a better place.
Besides, even if it takes a long time to complete, it shouldn't take too long to start the process, and once the process is underway most of the danger is gone.
And even if you are striving for your scenario to happen, it doesn't stop you from applying the utilitarian principles to smaller things in life.
I say most or all "folk morality" comes from a lack of intuition for very large numbers.
-
You should apply them (almost) only in as far as it makes the scenario more likely. Again: 50 or 100 or an infinity of orders of magnitude (or something somewhere in between of course). Making society a better place should be irrelevant to a utilitarian unless "better" only means "more likely to colonize the universe and convert it to happiness-generating material".
Present and near-future happiness is irrelevant because there's, relatively speaking, almost nothing of it.
-
Originally posted by Stefu
Well, first of all, what is the deal with colonizing the whole universe to turn it into happiness-generating material, anyway? There's clearly a limit to how happy a person can be, even if his happiness centers are constantly stimulated.
How do you know that? How do you define happiness? Is happiness the same as pleasure minus pain or something else?
And maximizing happiness doesn't mean that there should be as many happy people around as possible, but that the average person should be as happy as possible.
In that case, I'm not sure you're wrong. Though you would then have to consider all the aliens on other planets that screw up the average, so colonization would probably outweigh the rest anyway.
But that's a weird freakish watered-down sissy version of utilitarianism, not worthy of the name. Not the real thing. Real utilitarianism means maximizing total, not average happiness.
Maximizing average happiness is a silly idea. For one thing, it would mean that in some situations, it would be moral to kill billions of happy people as long as the few people that were left were on average a little happier.
It would mean that a world with 10^1000 perfectly happy people is just as valuable as a world with 1 perfectly happy person. And that a world with 10^1000 people being constantly tortured is just as bad as a world with only one person being constantly tortured. I.e. if one person was already being tortured, a mad scientist could create billions of other people in his lab and torture them a little less than that one person, and it wouldn't be immoral.
It would mean that people's lives don't have any moral value if their happiness happens to be below average. And how do you figure out that average? You'd need to know the average happiness of the *entire multiverse*. The morality of decisions that only affect, say, planet Earth or your family, would depend (at least in some situations) on how the aliens in the Andromeda galaxy were feeling. Or if there exist parallel worlds, the aliens in the Andromeda galaxy in every parallel world. Or if they don't count, your moral thinking is racist. Average-utilitarianism is a non-local theory!
For another, taking the average requires that you divide by the number of people, which is well-defined in our current situation but can't be well-defined in more extreme situations - AI minds could half-overlap, and there would be no non-arbitrary way to count them.
You should recognize that when you take average happiness, instead of looking at the amount of happiness you're looking at the amount of happiness that happens to be generated in a certain location - in this case inside human skulls.
(I'm assuming you're integrating the average happiness at each moment over time?)
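The mad-scientist argument above can be put in numbers. A minimal sketch (with the scenario scaled down to a million lab victims, and the utility figures of -100 and -99 invented purely for illustration):

```python
# Utility of a world, given per-person happiness values.
def total_utility(utils):
    return sum(utils)

def average_utility(utils):
    return sum(utils) / len(utils)

# World A: one person tortured at utility -100.
world_a = [-100]
# World B: the same person, plus a million new lab victims each tortured
# slightly less (utility -99).
world_b = [-100] + [-99] * 1_000_000

# Average-utilitarianism ranks B above A: the new victims raise the average.
print(average_utility(world_b) > average_utility(world_a))  # True

# Total-utilitarianism ranks B far below A: suffering was multiplied a
# million-fold.
print(total_utility(world_b) < total_utility(world_a))  # True
```

The two criteria give opposite verdicts on the same act of creating extra tortured people, which is the point of the objection.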
Oh, it's never irrelevant. For one thing, it's all we've got at present, considering that it's going to take a long time after our deaths before this project is finished.
Also, a little bit more happiness is *always* superior to no more happiness at all, even if it is just a little bit.
And it wouldn't be too utilitarian to blow away this generation's happiness, and that of many other generations, just to reach for some utopian goal that has a million ways of possibly going wrong.
-
How do you know that? How do you define happiness? Is happiness the same as pleasure minus pain or something else?
But that's a weird freakish watered-down sissy version of utilitarianism, not worthy of the name. Not the real thing. Real utilitarianism means maximizing total, not average happiness.
For one thing, it would mean that in some situations, it would be moral to kill billions of happy people as long as the few people that were left were on average a little happier.
I.e. if one person was already being tortured, a mad scientist could create billions of other people in his lab and torture them a little less than that one person, and it wouldn't be immoral.
It would mean that people's lives don't have any moral value if their happiness happens to be below average. And how do you figure out that average? You'd need to know the average happiness of the *entire multiverse*. The morality of decisions that only affect, say, planet Earth or your family, would depend (at least in some situations) on how the aliens in the Andromeda galaxy were feeling. Or if there exist parallel worlds, the aliens in the Andromeda galaxy in every parallel world. Or if they don't count, your moral thinking is racist. Average-utilitarianism is a non-local theory!
For another, taking the average requires that you divide by the number of people, which is well-defined in our current situation but can't be well-defined in more extreme situations - AI minds could half-overlap, and there would be no non-arbitrary way to count them.
And generally, there are only so many people any given decision touches upon. If that decision increases the happiness among those people, then it's a good decision. If it decreases the happiness, or doesn't increase happiness as much as some other decision would, it's a bad decision. Simple, eh?
You should recognize that when you take average happiness, instead of looking at the amount of happiness you're looking at the amount of happiness that happens to be generated in a certain location - in this case inside human skulls.
Maybe. Are you saying you can't affect anything that happens long after your death? Why not?
And a very very little extra chance of achieving the above-mentioned scenario is superior to that little bit more happiness you get when concentrating on the present.
-
Originally posted by Stefu
I've already defined happiness in this thread: Happiness is having your wants and needs satisfied.
And would someone who has no desires at all constantly be extremely happy?
For one thing, you would be saying that it's better for the world to have 30 billion people who are so-and-so than to have 1 billion people who are really happy.
This doesn't seem anywhere near as counter-intuitive as the examples I gave of what you must believe.
To turn this around, you would deny 29 billion people their (slightly happy) lives if it meant that the remaining 1 billion were 30 times as happy (whatever that means, if happy means satisfaction of desires).
Killing people tends to make them *most* unhappy.
Point is, killing people painlessly wouldn't even be immoral under Averageism, if those people are below average happiness. And average happiness depends on the happiness of aliens in Andromeda.
The fact he's torturing that one person alone makes it pretty damn immoral.
Well, first of all, since we don't know about aliens in Andromeda and our decisions can't affect them, we might as well treat them as non-existing.
Let's say the objective is to maximize average happiness in the universe.
Let's say the average happiness on Earth is 50.
Let's say you are in doubt whether to have a child, which you know will have happiness 60.
Let's say there are 600 billion aliens in the Andromeda galaxy, and no sentient beings anywhere but Earth and Andromeda.
If the average happiness of Andromedans is 80 (maybe the weather is really nice there), then the average happiness in the universe is 79.7 (I think). Having a child will decrease the happiness of the average sentient being.
If the average happiness of Andromedans is 20, then the average happiness in the universe is 20.3 (I think). Having a child will increase the happiness of the average sentient being.
Thus, whether you should have a child or not depends on the happiness of aliens in Andromeda.
This is absurd.
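The arithmetic above can be checked with a small sketch. It assumes roughly 6 billion humans on Earth - a figure the post doesn't state, but the one its 79.7 and 20.3 results imply:

```python
def average_happiness(groups):
    """Population-weighted mean happiness over (population, happiness) pairs."""
    total_pop = sum(pop for pop, _ in groups)
    total_happiness = sum(pop * h for pop, h in groups)
    return total_happiness / total_pop

EARTH = (6_000_000_000, 50)              # assumed: ~6 billion humans at happiness 50
ANDROMEDA_HAPPY = (600_000_000_000, 80)  # 600 billion cheerful Andromedans
ANDROMEDA_SAD = (600_000_000_000, 20)    # 600 billion miserable Andromedans
CHILD = (1, 60)                          # the prospective child

# Happy Andromedans: the universal average is ~79.7, so a child at 60 lowers it.
print(average_happiness([EARTH, ANDROMEDA_HAPPY]))      # ~79.70
print(average_happiness([EARTH, ANDROMEDA_HAPPY, CHILD])
      < average_happiness([EARTH, ANDROMEDA_HAPPY]))    # True

# Sad Andromedans: the universal average is ~20.3, so the same child raises it.
print(average_happiness([EARTH, ANDROMEDA_SAD]))        # ~20.30
print(average_happiness([EARTH, ANDROMEDA_SAD, CHILD])
      > average_happiness([EARTH, ANDROMEDA_SAD]))      # True
```

The same decision flips from average-lowering to average-raising with nothing on Earth changing, which is exactly the non-locality complaint.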
And generally, there are only so many people any given decision touches upon.
If that decision increases the happiness among those people, then it's a good decision. If it decreases the happiness, or doesn't increase happiness as much as some other decision would, it's a bad decision. Simple, eh?
Well, basically, thanks to every other decision made affecting everything else, in time your decision might as well have achieved nothing, or achieved the exact opposite of what you wanted.
-
How do you define wants and needs?
How do you know any person's wants and needs are finite?
Do you agree with the "expressed" or "informed" preferences as defined in the FAQ?
We just have to find a way that, in the end, leads to most happiness.
And would someone who has no desires at all constantly be extremely happy?
To turn this around, you would deny 29 billion people their (slightly happy) lives if it meant that the remaining 1 billion were 30 times as happy (whatever that means, if happy means satisfaction of desires).
Only for a very short time, if you do it the right way.
Point is, killing people painlessly wouldn't even be immoral under Averageism, if those people are below average happiness.
Of course killing people would be immoral. Not only because they don't want it, but because a society which kills people because of such things would be so twisted merely living in it would make you unhappy.
I'm just saying that, if someone else is torturing that person, it would not be immoral for him to torture billions of other people, because it wouldn't affect the average.
Are you saying you'd rather have 1 last person on Earth heavily tortured than 100 last people on Earth only lightly tortured?
Wrong.
Let's say the objective is to maximize average happiness in the universe.
Let's say the average happiness on Earth is 50.
Let's say you are in doubt whether to have a child, which you know will have happiness 60.
Let's say there are 600 billion aliens in the Andromeda galaxy, and no sentient beings anywhere but Earth and Andromeda.
If the average happiness of Andromedans is 80 (maybe the weather is really nice there), then the average happiness in the universe is 79.7 (I think). Having a child will decrease the happiness of the average sentient being.
If the average happiness of Andromedans is 20, then the average happiness in the universe is 20.3 (I think). Having a child will increase the happiness of the average sentient being.
Thus, whether you should have a child or not depends on the happiness of aliens in Andromeda.
This is absurd.
Maybe. Probably not. When you're not sure of the consequences, make an estimate.
-
Originally posted by Stefu
Again, I've already defined these earlier in thread. Most every person has need to (...)
I would still like to know:
- how you define one "person" (in the case of, say, the Borg)
- how you quantify how many of someone's needs/wants are satisfied.
We just have to find a way that, in the end, leads to most happiness.
If this means that I'd make decisions that would lead to there being 30 billion slightly happy people rather than 1 billion very happy people, with no killing and no unnecessarily forcing anyone to do anything they don't want, then yes, I would.
But after they're dead, the happiness/suffering quotient, if we could imagine such a thing, is zero.
And death, considering only very few people want it, is a very very bad thing. Thus, it ends up on the negative side.
Does the utility of someone not living depend on whether that person has lived or not? (if death is a negative utility, and not being born isn't)
If so, what if that person has lived for one microsecond? What if it's undefined whether he lived or not? (this can be true in some QM interpretations)
Well, if you look at perspective of any one of those people being tortured, then it doesn't matter whether there are other people being tortured.
Or do you mean tortured as opposed to leading a more happy life? In that case - when no people are created or destroyed - Averageism is equivalent to Totalism.
Are you saying you'd rather have 1 last person on Earth heavily tortured than 100 last people on Earth only lightly tortured?
I think that "amount of suffering" scales up more quickly than "heaviness of the torture". I.e. you could do something that you would intuitively consider only a little bit worse, and the tortured person would have 100 times as much negative utility.
I also think lack of pain is usually much more important than pleasure, but I'm not a negative utilitarian in that I say lack of pain is the *only* important thing. If I had to stick a pin into my thumb to have eternal happiness after death, I would do that.
Again, we don't know anything about aliens in Andromeda. Heck, they're imaginary, as far as we know. Nothing we do affects them. Nothing they do affects us. They might be happy. They might be unhappy. We don't know. Therefore, it's handiest to treat them as non-existent, because that's the only way we'll get anything done. Not getting anything done is a surefire way to have less happiness on Earth.
So. Now we should be making decisions which would lead to a finite amount of happiness, in an amount of time we are not sure of, which might or might not lead to the preferred results, and we should sacrifice our own happiness to do so? Hot damn, how utilitarian!
-
Those are examples, not a definition (though I should probably read the entire thread...)
- how you define one "person" (in the case of, say, the Borg)
I didn't watch Star Trek, so I can't be too sure. Did Borg have just one mind between them, or were there many minds, just somehow... connected?
- how you quantify how many of someone's needs/wants are satisfied.
Killing should be included though if no one knows of it afterward, i.e. if you consider only the direct effects.
So we should consider the average happiness of the dead as well as the living? Or do past preferences count instead of just preferences in the present?
Does the utility of someone not living depend on whether that person has lived or not? (if death is a negative utility, and not being born isn't)
If so, what if that person has lived for one microsecond? What if it's undefined whether he lived or not? (this can be true in some QM interpretations)
The act of dying causes negative utility in immense amounts. After that, there's neither negative nor positive utility.
I think that "amount of suffering" scales up more quickly than "heaviness of the torture". I.e. you could do something that you would intuitively consider only a little bit worse, and the tortured person would have 100 times as much negative utility.
Averageism really does say that the presence and happiness of aliens determines what is moral here on Earth.
And the point still stands. There are no beings from Andromeda that we know of. There are no other sentient beings we know of than humans. Therefore, we should base our decisions on what is good for humankind and what isn't.
In that case, nothing we do would do more than an extremely small bit to increase the average happiness in the universe. The colonizing/pleasure-machine approach would, because it would also eliminate all non-Earth suffering. Even a very small probability of influencing this would be more likely to increase average happiness than anything we do here, if the number of aliens compared to humans is large enough.
Also, since progress will take us to pleasure-machine land anyway, what's the rush? Considering the amount of time available, any amount of utility gained, even according to totalism, is minimal.
-
Originally posted by Stefu
I didn't watch Star Trek, so I can't be too sure. Did Borg have just one mind between them, or were there many minds, just somehow... connected?
I make no bones about my moral support for [terrorist] organizations. - chegitz guevara
For those who aspire to live in a high cost, high tax, big government place, our nation and the world offers plenty of options. Vermont, Canada and Venezuela all offer you the opportunity to live in the socialist, big government paradise you long for. –Senator Rubio
-
Hmm. Well. Err. If there's basically one collective mind, I guess the utilitarian thing to do would be to keep it happy, i.e. satisfy its preferences. Of course, making sure those preferences don't unnecessarily conflict with other beings' preferences.
Of course, since it hasn't been proven that there are any beings with a hive mind, or that it's even possible, it's all theoretical.
-
I did not read the thread so far, but here's what I think: what happens to those who are left out of that "greatest number"? Well, they suffer in some way. Can/should you build a society that causes somebody to suffer? No. So, shouldn't it then be "greatest good for all"?
another good site for this kind of discussion:
My Words Are Backed With Bad Attitude And VETERAN KNIGHTS!