Prove to Stefu your ideal society would result in greatest good for greatest number


  • #91
    I propose that everyone gets communication equipment implanted in their heads. That way, everyone would be able to access Apolyton 24 hours a day (even while sleeping) and thus, everyone would obviously be happier.
    I'm building a wagon! On some other part of the internets, obviously (but not that other site).

    • #92
      My ideal society would be focused entirely on the survival of the human race. Probably utilitarian most of the time, to keep people from self-destructing, but they're not getting lollipops if there's an asteroid headed for Earth.

      Nuclear-armed nations are a threat to human survival. Solution: Erase nations.

      Economically, pragmatic.

      • #93
        Hmph. I say you're all looking at it the wrong way.

        From a utilitarian perspective, the ideal society would mean:
        All the matter of the universe has been converted into gigantic and optimally efficient (quantum?) computers. On these computers, minds are simulated in very happy situations. Or, you could just "stimulate their pleasure centers" directly.

        This means I have all of you beaten by a factor of maybe 10^50 or 10^100 or infinity (this depends on physics).

        It also follows that for a utilitarian, this goal outweighs all other considerations if there is a non-zero probability that you're able to influence it (and there is).

        I'm no longer a utilitarian though. I've never seen any proof of utilitarianism. This bothers me. Though all other moral systems I've seen suck even more. And most of the arguments used against utilitarianism are ridiculous. ("utilitarianism says X is moral. X is immoral according to my moral system. Therefore, utilitarianism is wrong")

        • #94
          That scenario of yours would take a long time to complete, and we can't predict what calamities could occur while we're trying to do it. Therefore, it's more reasonable to just try to make society a better place.

          And even if you are striving for your scenario to happen, it doesn't stop you from applying the utilitarian principles to smaller things in life.
          "Spirit merges with matter to sanctify the universe. Matter transcends to return to spirit. The interchangeability of matter and spirit means the starlit magic of the outermost life of our universe becomes the soul-light magic of the innermost life of our self." - Dennis Kucinich, candidate for the U. S. presidency
          "That’s the future of the Democratic Party: providing Republicans with a number of cute (but not that bright) comfort women." - Adam Yoshida, Canada's gift to the world

          • #95
            Originally posted by Stefu
            That scenario of yours would take a long time to complete, and we can't predict what calamities could occur while we're trying to do it. Therefore, it's more reasonable to just try to make society a better place.
            This doesn't follow. You're correct we can't predict what calamities will occur, but that doesn't mean we can't influence this scenario at all. A good place to start would be preventing the calamities that might occur.

            Besides, even if it takes a long time to complete, it shouldn't take too long to start the process, and once the process is underway most of the danger is gone.

            And even if you are striving for your scenario to happen, it doesn't stop you from applying the utilitarian principles to smaller things in life.
            You should apply them (almost) only insofar as it makes the scenario more likely. Again: 50 or 100 or an infinity of orders of magnitude (or something somewhere in between, of course). Making society a better place should be irrelevant to a utilitarian unless "better" only means "more likely to colonize the universe and convert it to happiness-generating material". Present and near-future happiness is irrelevant because there's, relatively speaking, almost nothing of it.

            I say most or all "folk morality" comes from lack of intuition for very large numbers.

            • #96
              You should apply them (almost) only insofar as it makes the scenario more likely. Again: 50 or 100 or an infinity of orders of magnitude (or something somewhere in between, of course). Making society a better place should be irrelevant to a utilitarian unless "better" only means "more likely to colonize the universe and convert it to happiness-generating material".
              Well, first of all, what is the deal with colonizing the whole universe to turn it into happiness-generating material, anyway? There's clearly a limit to how happy a person can be, even if his happiness centers are constantly stimulated. And maximizing happiness doesn't mean that there should be as many happy people around as possible, but that the average person should be as happy as possible.

              Present and near-future happiness is irrelevant because there's, relatively speaking, almost nothing of it.
              Oh, it's never irrelevant. For one thing, it's all we got at present, considering that this project won't be finished until long after our deaths. Also, a little bit more happiness is *always* superior to no more happiness at all, even if it is just a little bit. And it wouldn't be too utilitarian to blow away this generation's happiness, and those of many other generations, just to reach for some utopian goal that would have a million ways of possibly going wrong.
              "Spirit merges with matter to sanctify the universe. Matter transcends to return to spirit. The interchangeability of matter and spirit means the starlit magic of the outermost life of our universe becomes the soul-light magic of the innermost life of our self." - Dennis Kucinich, candidate for the U. S. presidency
              "That’s the future of the Democratic Party: providing Republicans with a number of cute (but not that bright) comfort women." - Adam Yoshida, Canada's gift to the world

              • #97
                Originally posted by Stefu
                Well, first of all, what is the deal with colonizing the whole universe to turn it into happiness-generating material, anyway? There's clearly a limit to how happy a person can be, even if his happiness centers are constantly stimulated.

                How do you know that? How do you define happiness? Is happiness the same as pleasure minus pain or something else?

                And maximizing happiness doesn't mean that there should be as many happy people around as possible, but that the average person should be as happy as possible.
                I see.
                In that case, I'm not sure you're wrong. Though you would then have to consider all the aliens on other planets that screw up the average, so that colonization would probably outweigh the rest anyway.
                But that's a weird freakish watered-down sissy version of utilitarianism, not worthy of the name. Not the real thing. Real utilitarianism means maximizing total, not average happiness.
                Maximizing average happiness is a silly idea. For one thing, it would mean that in some situations, it would be moral to kill billions of happy people as long as the few people that were left were on average a little happier.
                It would mean that a world with 10^1000 perfectly happy people is just as valuable as a world with 1 perfectly happy person. And that a world with 10^1000 people being constantly tortured is just as bad as a world with only one person being constantly tortured. I.e. if one person was already being tortured, a mad scientist could create billions of other people in his lab and torture them a little less than that one person, and it wouldn't be immoral.
                It would mean that people's lives don't have any moral value if their happiness happens to be below average. And how do you figure out that average? You'd need to know the average happiness of the *entire multiverse*. The morality of decisions that only affect, say, planet Earth or your family, would depend (at least in some situations) on how the aliens in the Andromeda galaxy were feeling. Or if there exist parallel worlds, the aliens in the Andromeda galaxy in every parallel world. Or if they don't count, your moral thinking is racist. Average-utilitarianism is a non-local theory!
                For another, taking the average requires that you divide by the number of people which is well-defined in our current situation but can't be well-defined in more extreme situations - AI minds could half-overlap, and there would be no non-arbitrary way to count them.
                You should recognize that when you take average happiness, instead of looking at the amount of happiness you're looking at the amount of happiness that happens to be generated in a certain location - in this case inside human skulls.
                (I'm assuming you're integrating the average happiness at each moment over time?)
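
                A minimal sketch of that mad-scientist example in Python, under the two aggregation rules (the utility values -100 and -99 and the billion-victim count are assumptions for illustration, not figures from the post):

                [code]
                # Compare total vs. average utilitarianism ("Totalism" vs. "Averageism")
                # on the mad-scientist example. All utility values are invented.

                def total(groups):
                    # groups: list of (number_of_people, utility_per_person) pairs
                    return sum(n * u for n, u in groups)

                def average(groups):
                    return total(groups) / sum(n for n, _ in groups)

                world_a = [(1, -100)]                # one person tortured at -100
                world_b = [(1, -100), (10**9, -99)]  # plus a billion lab victims at -99

                print(total(world_a), total(world_b))      # -100 vs. -99000000100
                print(average(world_a), average(world_b))  # -100.0 vs. ~ -99.0
                # Averageism ranks world B better (the average rises from -100 to ~ -99);
                # Totalism ranks it vastly worse - which is the post's point.
                [/code]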

                Oh, it's never irrelevant. For one thing, it's all we got at present, considering that this project won't be finished until long after our deaths.
                Maybe. Are you saying you can't affect anything that happens long after your death? Why not?

                Also, a little bit more happiness is *always* superior to no more happiness at all, even if it is just a little bit.
                And a very very little extra chance of achieving the above mentioned scenario is superior to that little bit more happiness you get when concentrating on the present.

                And it wouldn't be too utilitarian to blow away this generation's happiness, and those of many other generations, just to reach for some utopian goal that would have a million ways of possibly going wrong.
                Even if you would need to blow away this generation's happiness AND that of many other generations, and even if it did have a million ways of going wrong, it would still be the utilitarian thing to do. Do the math, it works out nicely. A real utilitarian should not be afraid of doing the math.

                • #98
                  How do you know that? How do you define happiness? Is happiness the same as pleasure minus pain or something else?
                  I've already defined happiness in this thread: Happiness is having your wants and needs satisfied.

                  But that's a weird freakish watered-down sissy version of utilitarianism, not worthy of the name. Not the real thing. Real utilitarianism means maximizing total, not average happiness.
                  Check http://www.ianmontgomerie.com/manife...#MAJORVERSIONS and the subsection "Aggregation". Basically, there are differing ideas, and I think that maximizing average happiness is better. For one thing, you would be saying that it's better for the world to have 30 billion people who are so-and-so than having 1 billion people who are really happy.

                  For one thing, it would mean that in some situations, it would be moral to kill billions of happy people as long as the few people that were left were on average a little happier.
                  Killing people tends to make them *most* unhappy.

                  I.e. if one person was already being tortured, a mad scientist could create billions of other people in his lab and torture them a little less than that one person, and it wouldn't be immoral.
                  The fact he's torturing that one person alone makes it pretty damn immoral.

                  It would mean that people's lives don't have any moral value if their happiness happens to be below average. And how do you figure out that average? You'd need to know the average happiness of the *entire multiverse*. The morality of decisions that only affect, say, planet Earth or your family, would depend (at least in some situations) on how the aliens in the Andromeda galaxy were feeling. Or if there exist parallel worlds, the aliens in the Andromeda galaxy in every parallel world. Or if they don't count, your moral thinking is racist. Average-utilitarianism is a non-local theory!
                  For another, taking the average requires that you divide by the number of people which is well-defined in our current situation but can't be well-defined in more extreme situations - AI minds could half-overlap, and there would be no non-arbitrary way to count them.
                  Well, first of all, since we don't know about aliens in Andromeda and our decisions can't affect them, we might as well treat them as non-existent.

                  And generally, there are only so many people any given decision touches upon. If that decision increases the happiness among those people, then it's a good decision. If it decreases the happiness, or doesn't increase happiness as much as some other decision would, it's a bad decision. Simple, eh?

                  You should recognize that when you take average happiness, instead of looking at the amount of happiness you're looking at the amount of happiness that happens to be generated in a certain location - in this case inside human skulls.
                  Yep. Happiness isn't valuable in itself. It's only valuable as it is connected to people. People want to be happy, therefore we should make them happy.

                  Maybe. Are you saying you can't affect anything that happens long after your death? Why not?
                  Well, basically, thanks to every other decision made affecting everything else, in time your decision might as well have achieved nothing or achieved the exact opposite of what you wanted.

                  And a very very little extra chance of achieving the above mentioned scenario is superior to that little bit more happiness you get when concentrating on the present.
                  Again, there's only a finite amount of happiness possible for one person to have.
                  "Spirit merges with matter to sanctify the universe. Matter transcends to return to spirit. The interchangeability of matter and spirit means the starlit magic of the outermost life of our universe becomes the soul-light magic of the innermost life of our self." - Dennis Kucinich, candidate for the U. S. presidency
                  "That’s the future of the Democratic Party: providing Republicans with a number of cute (but not that bright) comfort women." - Adam Yoshida, Canada's gift to the world

                  • #99
                    Originally posted by Stefu
                    I've already defined happiness in this thread: Happiness is having your wants and needs satisfied.
                    How do you define wants and needs? How do you measure to what degree someone has his wants and needs satisfied? How do you know any person's wants and needs are finite? What if most of someone's wants and needs have been satisfied but he thinks they haven't? Do you agree with the "expressed" or "informed" preferences as defined in the FAQ?
                    And would someone who has no desires at all constantly be extremely happy?

                    For one thing, you would be saying that it's better for the world to have 30 billion people who are so-and-so than having 1 billion people who are really happy.
                    Depends on what you mean by "so-and-so". If "so-and-so" means happier than the "zero-point" (and more than 1/30 as far beyond this "zero-point" as "really happy"), then I would indeed be saying that.
                    This doesn't seem anywhere near as counter-intuitive as the examples I gave of what you must believe.
                    To turn this around, you would deny 29 billion people their (slightly happy) lives if it meant that the remaining 1 billion were 30 times as happy (whatever that means, if happy means satisfaction of desires).
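
                    A quick sketch of the break-even arithmetic behind that 1/30 figure (the value 30 for "really happy" and the linear happiness scale are assumptions for illustration):

                    [code]
                    # Total happiness of the two worlds, measured above the "zero-point".
                    # The value 30.0 for "really happy" is an invented assumption.
                    really_happy = 30.0
                    threshold = really_happy / 30   # "1/30 as far beyond the zero-point"

                    world_few  = 1e9  * really_happy  # 1 billion really happy people
                    world_many = 30e9 * threshold     # 30 billion people at the threshold

                    print(world_few, world_many)      # equal totals: 3e10 each
                    # Any "so-and-so" level above the threshold makes the 30-billion
                    # world the better one under the total-happiness rule.
                    [/code]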

                    Killing people tends to make them *most* unhappy.
                    Only for a very short time, if you do it the right way.
                    Point is, killing people painlessly wouldn't even be immoral under Averageism, if those people are below average happiness. And average happiness depends on the happiness of aliens in Andromeda.

                    The fact he's torturing that one person alone makes it pretty damn immoral.
                    Who says he is torturing that person? I'm just saying that, if someone else is torturing that person, it would not be immoral for him to torture billions of other people, because it wouldn't affect the average.

                    Well, first of all, since we don't know about aliens in Andromeda and our decisions can't affect them, we might as well treat them as non-existent.
                    Wrong.
                    Let's say the objective is to maximize average happiness in the universe.
                    Let's say the average happiness on Earth is 50.
                    Let's say you are in doubt whether to have a child, which you know will have happiness 60.
                    Let's say there are 600 billion aliens in the Andromeda galaxy, and no sentient beings anywhere but Earth and Andromeda.
                    If the average happiness of Andromedans is 80 (maybe the weather is really nice there), then the average happiness in the universe is 79.7 (I think). Having a child will decrease the happiness of the average sentient being.
                    If the average happiness of Andromedans is 20, then the average happiness in the universe is 20.3 (I think). Having a child will increase the happiness of the average sentient being.

                    Thus, whether you should have a child or not depends on the happiness of aliens in Andromeda.

                    This is absurd.
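
                    For what it's worth, the arithmetic checks out if you assume roughly 6 billion people on Earth - the post leaves Earth's population implicit, so that figure is an assumption:

                    [code]
                    # Recompute the universe-wide averages for the two Andromeda cases.
                    # Earth's population of 6 billion is an assumed figure.
                    earth_pop, earth_avg, child = 6e9, 50, 60
                    andromeda_pop = 600e9

                    for andromeda_avg in (80, 20):
                        total_pop = earth_pop + andromeda_pop
                        universe_avg = (earth_pop * earth_avg
                                        + andromeda_pop * andromeda_avg) / total_pop
                        verdict = "lowers" if child < universe_avg else "raises"
                        print(f"Andromedans at {andromeda_avg}: universe average "
                              f"{universe_avg:.1f}; a child at {child} {verdict} it")
                    # Andromedans at 80: universe average 79.7; a child at 60 lowers it
                    # Andromedans at 20: universe average 20.3; a child at 60 raises it
                    [/code]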

                    And generally, there are only so many people any given decision touches upon.
                    No, if you take the colonizing-etc. idea, then it's clear that any decision can affect infinitely many people.

                    If that decision increases the happiness among those people, then it's a good decision. If it decreases the happiness, or doesn't increase happiness as much as some other decision would, it's a bad decision. Simple, eh?
                    Yes, very simple, but you haven't considered what happens if people are created or destroyed. In that case Averageism and Totalism are no longer equivalent.

                    Well, basically, thanks to every other decision made affecting everything else, in time your decision might as well have achieved nothing or achieved the exact opposite of what you wanted.
                    Maybe. Probably not. When you're not sure of the consequences, make an estimate.

                    • #100
                      How do you define wants and needs?
                      Again, I've already defined these earlier in the thread. Almost every person has a need to continue living and not to be inconvenienced. These needs encompass the need to eat, the need to sleep, the need to have shelter and so on. Then, some people have more specific needs - for instance, some person might be happy when spanked. In a relatively free society, people will find a way to satisfy as many of these needs as possible.

                      How do you know any person's wants and needs are finite?
                      Oh, they probably aren't. However, happiness achieved by being hooked up to a pleasure machine is.

                      Do you agree with the "expressed" or "informed" preferences as defined in the FAQ?
                      Well, I'm on the fence. On the one hand, some people can be conditioned to want things that are harmful for them. On the other hand, if a person can find joy in things that are harmful to them (but don't harm others), should we stop it?

                      We just have to find a way that, in the end, leads to the most happiness.

                      And would someone who has no desires at all constantly be extremely happy?
                      There is no such person who has no desires at all. We all have the desire to eat, to sleep, and so on. If there was a person whose desires *truly* were only limited to basic things, and he could satisfy all these needs, then yes, he's very very happy. Good luck finding such a person, though.

                      To turn this around, you would deny 29 billion people their (slightly happy) lives if it meant that the remaining 1 billion were 30 times as happy (whatever that means, if happy means satisfaction of desires).
                      If this means I'd make decisions that would lead to there being 30 billion slightly happy people or 1 billion very happy people, and there's no killing or unnecessarily forcing anyone to do anything they don't want, then yes, I would.

                      Only for a very short time, if you do it the right way.
                      But after they're dead, the happiness/suffering quotient, if we could imagine such a thing, is zero. Assuming no afterlife, of course. And death, considering only very few people want it, is a very very bad thing. Thus, it ends up on the negative side.

                      Point is, killing people painlessly wouldn't even be immoral under Averageism, if those people are below average happiness.
                      You are sounding like an anti-utilitarian trying to find a way to debunk utilitarianism.

                      Of course killing people would be immoral. Not only because they don't want it, but because a society which kills people because of such things would be so twisted merely living in it would make you unhappy.

                      I'm just saying that, if someone else is torturing that person, it would not be immoral for him to torture billions of other people, because it wouldn't affect the average.
                      Well, if you look at the perspective of any one of those people being tortured, then it doesn't matter whether there are other people being tortured.

                      Are you saying you'd rather have 1 last person on Earth heavily tortured than 100 last people on Earth only lightly tortured?

                      Wrong.
                      Let's say the objective is to maximize average happiness in the universe.
                      Let's say the average happiness on Earth is 50.
                      Let's say you are in doubt whether to have a child, which you know will have happiness 60.
                      Let's say there are 600 billion aliens in the Andromeda galaxy, and no sentient beings anywhere but Earth and Andromeda.
                      If the average happiness of Andromedans is 80 (maybe the weather is really nice there), then the average happiness in the universe is 79.7 (I think). Having a child will decrease the happiness of the average sentient being.
                      If the average happiness of Andromedans is 20, then the average happiness in the universe is 20.3 (I think). Having a child will increase the happiness of the average sentient being.

                      Thus, whether you should have a child or not depends on the happiness of aliens in Andromeda.

                      This is absurd.
                      Again, we don't know anything about aliens in Andromeda. Heck, they're imaginary, as far as we know. Nothing we do affects them. Nothing they do affects us. They might be happy. They might be unhappy. We don't know. Therefore, it's handiest to treat them as non-existent, because that's the only way we'll get anything done. Not getting anything done is a surefire way to have less happiness on Earth.

                      Maybe. Probably not. When you're not sure of the consequences, make an estimate.
                      So. Now we should be making decisions which would lead to a finite amount of happiness, in an amount of time we are not sure of, which might or might not lead to preferred results, and we should sacrifice our own happiness to do so? Hot damn, how utilitarian!
                      "Spirit merges with matter to sanctify the universe. Matter transcends to return to spirit. The interchangeability of matter and spirit means the starlit magic of the outermost life of our universe becomes the soul-light magic of the innermost life of our self." - Dennis Kucinich, candidate for the U. S. presidency
                      "That’s the future of the Democratic Party: providing Republicans with a number of cute (but not that bright) comfort women." - Adam Yoshida, Canada's gift to the world

                      • #101
                        Originally posted by Stefu
                        Again, I've already defined these earlier in the thread. Almost every person has a need to (...)
                        Those are examples, not a definition (though I should probably read the entire thread...)

                        I would still like to know:
                        - how you define one "person" (in the case of, say, the Borg)
                        - how you quantify how many of someone's needs/wants are satisfied.

                        We just have to find a way that, in the end, leads to the most happiness.
                        Circular. What we were trying to do is define want/need/desire/preference (on which the definition of happiness depends), instead of thinking of a strategy to maximize an already defined happiness.

                        If this means I'd make decisions that would lead to there being 30 billion slightly happy people or 1 billion very happy people, and there's no killing or unnecessarily forcing anyone to do anything they don't want, then yes, I would.
                        I suppose this is a matter of intuition and I can't convince you on this. Killing should be included though if no one knows of it afterward, i.e. if you consider only the direct effects.

                        But after they're dead, the happiness/suffering quotient, if we could imagine such a thing, is zero.
                        The happiness/suffering quotient is 0/0, which is undefined. The quantity that matters is the difference between the amount of happiness and the amount of suffering.

                        And death, considering only very few people want it, is a very very bad thing. Thus, it ends up on the negative side.
                        So we should consider the average happiness of the dead as well as the living? Or do past preferences count instead of just preferences in the present?
                        Does the utility of someone not living depend on whether that person has lived or not? (if death is a negative utility, and not being born isn't)
                        If so, what if that person has lived for one microsecond? What if it's undefined whether he lived or not? (this can be true in some QM interpretations)

                        Well, if you look at the perspective of any one of those people being tortured, then it doesn't matter whether there are other people being tortured.
                        So how do you justify looking at the average instead of total utility, when Averageism says it does matter whether there are other people being tortured?
                        Or do you mean tortured as opposed to leading a more happy life? In that case - when no people are created or destroyed - Averageism is equivalent to Totalism.

                        Are you saying you'd rather have 1 last person on Earth heavily tortured than 100 last people on Earth only lightly tortured?
                        Yes and no
                        I think that "amount of suffering" scales up more quickly than "heaviness of the torture". I.e. you could do something that you would intuitively consider only a little bit worse, and the tortured person would have 100 times as much negative utility.
                        I also think lack of pain is usually much more important than pleasure, but I'm not a negative utilitarian in that I say lack of pain is the *only* important thing. If I had to stick a pin into my thumb to have eternal happiness after death, I would do that.
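
                        One toy way to make that scaling claim concrete (the power-law form and the exponent are invented for illustration; the post specifies no function):

                        [code]
                        # Superlinear mapping from torture "intensity" to suffering,
                        # illustrating "suffering scales up more quickly than heaviness".
                        # Both the power-law form and the exponent are assumptions.
                        def disutility(intensity, k=13.7):
                            return intensity ** k

                        a, b = 10, 14   # b looks "only a little bit worse" than a
                        print(disutility(b) / disutility(a))   # ~100x the suffering
                        [/code]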

                        Again, we don't know anything about aliens in Andromeda. Heck, they're imaginary, as far as we know. Nothing we do affects them. Nothing they do affects us. They might be happy. They might be unhappy. We don't know. Therefore, it's handiest to treat them as non-existent, because that's the only way we'll get anything done. Not getting anything done is a surefire way to have less happiness on Earth.
                        I agree completely, but you're not arguing against me, you're arguing against Averageism. Averageism really does say that the presence and happiness of aliens determines what is moral here on Earth. You seem to agree that this is ridiculous, therefore you should abandon Averageism and become a Totalist. Or something in between, but that would not be an elegant theory.

                        So. Now we should be making decisions which would lead to a finite amount of happiness, in an amount of time we are not sure of, which might or might not lead to preferred results, and we should sacrifice our own happiness to do so? Hot damn, how utilitarian!
                        Under Averageism I can at least understand this point, but I think that even under Averageism this sort of strategy is correct if there are many more aliens than humans in the universe. In that case, nothing we do would do more than an extremely small bit to increase the average happiness in the universe. The colonizing/pleasure-machine approach would, because it would also eliminate all non-Earth suffering. Even a very small probability of influencing this would be more likely to increase average happiness than anything we do here, if the number of aliens compared to humans is large enough.

                        • #102
                          Those are examples, not a definition (though I should probably read the entire thread...)
                          Want is a... want. Considering want is a pretty basic concept, that's really as good as it can be defined.

                          - how you define one "person" (in the case of, say, the Borg)
                          We aren't straying too far away from realistic scenarios, are we.

                          I didn't watch Star Trek, so I can't be too sure. Did the Borg have just one mind between them, or were there many minds, just somehow... connected?

                          - how you quantify how many of someone's needs/wants are satisfied.
                          Well, that's one of the little problems each application of utilitarianism has to deal with. Just generally go with what seems to produce the best result.

                          Killing should be included though if no one knows of it afterward, i.e. if you consider only the direct effects.
                          Well, first of all, there almost always are direct effects, and as I've said many many times, since almost every person wants to live, killing them would go against this preference.

                          So we should consider the average happiness of the dead as well as the living? Or do past preferences count instead of just preferences in the present?
                          Does the utility of someone not living depend on whether that person has lived or not? (if death is a negative utility, and not being born isn't)
                          If so, what if that person has lived for one microsecond? What if it's undefined whether he lived or not? (this can be true in some QM interpretations)
                          I'm not entirely getting your point here.

                          The act of dying causes negative utility in immense amounts. After that, there's neither negative nor positive utility.

                          I think that "amount of suffering" scales up more quickly than "heaviness of the torture". I.e. you could do something that you would intuitively consider only a little bit worse, and the tortured person would have 100 times as much negative utility.
                          Nitpicking. I meant whether you'd rather have 1 person with acts done to him causing a negative utility of 99, or 100 people with acts causing a negative utility of 1 each.

                          Averageism really does say that the presence and happiness of aliens determines what is moral here on Earth.
                          You're creating a blatant strawman here. There obviously are averageists, since averageism is mentioned as an alternative to totalism. I've never heard of any utilitarian speaking about basing acts on what some imaginary creatures do, so there can't be too many of them supporting this. Therefore, averageism doesn't depend on any kind of imaginary beings. And therefore, I can be an averageist. You're arguing against your own creation, and those usually are really easy to beat.

                          And the point still stands. There are no beings from Andromeda that we know of. There are no other sentient beings we know of than humans. Therefore, we should base our decisions on what is good for humankind and what isn't.

                          In that case, nothing we do would do more than an extremely small bit to increase the average happiness in the universe. The colonizing/pleasure-machine approach would, because it would also eliminate all non-Earth suffering. Even a very small probability of influencing this would be more likely to increase average happiness than anything we do here, if the number of aliens compared to humans is large enough.
                          Consider this. AFAIK, the Sun is a comparatively young star. Our civilization is also comparatively young - a mere 10,000 years old. I believe that whether we know it or not, we're progressing in a more and more utilitarian direction - many of the principles and laws guiding our lives are utilitarian, or at least beneficial. Also, we already see signs of pleasure machines being a possibility - heck, even that we can have a rudimentary image in our minds of what it'd be like tells us something. So, we might find all civilizations older than us already hooked up to pleasure machines. (That'd explain why no-one has visited us.)

                          Also, since progress will take us to pleasure-machine land anyway, what's the rush? Considering the amount of time available, any amount of utility gained by hurrying, even according to totalism, is minimal.
                          "Spirit merges with matter to sanctify the universe. Matter transcends to return to spirit. The interchangeability of matter and spirit means the starlit magic of the outermost life of our universe becomes the soul-light magic of the innermost life of our self." - Dennis Kucinich, candidate for the U. S. presidency
                          "That’s the future of the Democratic Party: providing Republicans with a number of cute (but not that bright) comfort women." - Adam Yoshida, Canada's gift to the world

                          • #103
                            Originally posted by Stefu
                            I didn't watch Star Trek, so I can't be too sure. Did the Borg have just one mind between them, or were there many minds, just somehow... connected?
                            They were a hive mind. Many voices speaking as one.
                            I make no bones about my moral support for [terrorist] organizations. - chegitz guevara
                            For those who aspire to live in a high cost, high tax, big government place, our nation and the world offers plenty of options. Vermont, Canada and Venezuela all offer you the opportunity to live in the socialist, big government paradise you long for. –Senator Rubio

                            • #104
                              Hmm. Well. Err. If there's basically one collective mind, I guess the utilitarian thing to do would be to keep it happy, i.e. satisfy its preferences. Of course, making sure those preferences don't unnecessarily conflict with other beings' preferences.

                              Of course, since it hasn't been proven that there are any beings with a hive mind or that it's even possible, it's all theoretical.
                              "Spirit merges with matter to sanctify the universe. Matter transcends to return to spirit. The interchangeability of matter and spirit means the starlit magic of the outermost life of our universe becomes the soul-light magic of the innermost life of our self." - Dennis Kucinich, candidate for the U. S. presidency
                              "That’s the future of the Democratic Party: providing Republicans with a number of cute (but not that bright) comfort women." - Adam Yoshida, Canada's gift to the world

                              • #105
                                I did not read the thread so far, but here's what I think: what happens to those who are left out of that "greatest number"? Well, they suffer in some way. Can/should you build a society that causes somebody to suffer? No. So shouldn't it then be "greatest good for all"?

                                Another good site for this kind of discussion:
                                My Words Are Backed With Bad Attitude And VETERAN KNIGHTS!
