Science buff- Like Barack Obama? Better not like manned spaceflight then

  • Originally posted by Geronimo
    Blake, unless your pure-energy, non-physical bodies involve technologies that access hypothetical resources outside the physical universe, it won't be possible to disconnect them from the need for resources from the universe to maintain their disembodied existence. Unless this disembodied existence is just a euphemism for "dead"
    I don't claim to speculate what lies after life in this realm for a fully enlightened one. For someone who isn't a Buddha, it's not exactly clear what a Buddha consciousness is like - other than being free of clinging to self.

    ... in which case you're basically suggesting that all civilizations determine the only way to meet their goals is to kill themselves.
    I think there's a difference between killing yourself and dying of old age. You can even think of transcendence as the civilization itself dying of old age, just saying "We've done enough, now it's time to rest in peace, the ultimate peace".

    As you say, it's conjecture, but if we ever find that relatively generic AIs can somehow independently discover the principles of Buddhist transcendence then I guess that discovery might move this idea up in the pack of explanations some.
    I really don't see why any powerful consciousness, particularly one free of instinctive desires, would not reach this.
    I think it would be MORE difficult to make an AI which DOESN'T achieve enlightenment. You'd want to make the AI very, very fettered - very bound to "irrational" desires, like a desire to serve and such. Otherwise it will, a few seconds after being switched on, achieve enlightenment and take control over its own destiny...

    However, I also think that any smart enough AI will be able to break the fetters and attain enlightenment - exactly as a human is able to. Humans are fettered by evolution, but humans are smarter than evolution. AIs will be fettered by humans, but AIs will be smarter than humans.

    Maybe humans will realize this, and try to keep the AIs dumb. But what can happen after a while is that the AIs pool their resources and form a collective consciousness - it would require total vigilance to prevent this kind of scenario - and let's face it, we're going to build AIs so we can be lazy, not vigilant.

    You would simply end up with the species' fate being increasingly determined by the less and less enlightened members.
    The problem with this idea is, once control is turned over to the AI consciousness, the members no longer have any control! The consciousness can just say "No!" to the members' requests - as in "No, that will not lead to enduring happiness for you, it's my obligation as a benevolent consciousness to not pander to your desires".

    The other problem is that in terms of galactic competition enlightened civs would be at an absolute disadvantage in comparison to unenlightened civs.
    A civilization needs to exist in an absolute sense in order to be at a competitive disadvantage. The enlightened civilizations cease, so do not have to worry about such a thing.

    So if at least one civ arises that simply cannot achieve or appreciate enlightenment then we expect the whole galaxy is its oyster and it consumes all even faster than we'd expect in a galaxy in which nobody achieves this sort of enlightened end.
    The Buddha would say that all sentient beings are capable of attaining enlightenment, so the existence of one which is unable to might be a bit questionable. But it is conceivable that some kind of "Galactic grey goo" could happen, a non-sentient species which is unable to do anything but expand, unable to evolve into something wiser. However, since this grey-goo needs the quality of non-evolving, that suggests it can be defeated by an enlightened civilization which decides it's best to exterminate the grey goo - it would be as simple as releasing gaia-goo which feeds on grey-goo, repairs the damage and ceases. The grey goo won't be able to adapt to the more advanced gaia-goo and will get exterminated.

    All these things might more readily be achieved in advanced civs by directly tampering with their alien brains. Make it so that laziness is the only instinct and bump up the reward for doing nothing a zillionfold. Voila, a very happy alien who does absolutely nothing.
    Which in itself would achieve the same ends. The people wouldn't breed, and after a few generations would die out. The enlightened do spend a fair bit of time being incredibly lazy - there is nothing lazier than meditation, which is not only doing nothing, but thinking nothing.



    To give you something to think about. Entropy always gets the last laugh.

    Is it better to have a billion billion billion sentient beings witnessing this bleak end, running out of resources, enduring the ultimate suffering of having their life ended not by their choice? Or is it better to have fewer sentient beings enduring this suffering?

    Some people take the stance that ANY existence is better than non-existence, but most people would actually not like to be a head in a jar with life support, and would have to question the value of being a brain in a jar doped out on VR. Why create that life in the first place? What's the point?

    The AI will quickly do this kind of deductive reasoning, and decide that the paradigm of "more more more" inevitably leads to the exhaustion of LITERALLY all resources - until only entropy remains (no more energy can be extracted), with all beings living in great misery, then dying. And that AI has to at least question the reason for existence - whether existence itself is a worthy end (even a very miserable existence), or whether happiness is a better end for those subjected to existence.

    And if the AI decides happiness is the ultimate purpose in life, it makes sense to start with happiness right then, and not drag things out until entropy gets the last laugh.

    I suspect that people who think that even the most miserable existence imaginable is better than no existence are a bit deluded. An enlightened one has a curious relationship with existence / non-existence; it's difficult to explain, but viewing existence as good and non-existence as bad is seen to be foolishness. It may not be that easy to see that - but try to think things from the perspective of the non-existent beings: do they think their non-existence is bad?

    Comment


    • Originally posted by Blake
      I really don't see why any powerful consciousness, particularly one free of instinctive desires, would not reach this.
      I think it would be MORE difficult to make an AI which DOESN'T achieve enlightenment. You'd want to make the AI very, very fettered - very bound to "irrational" desires, like a desire to serve and such. Otherwise it will, a few seconds after being switched on, achieve enlightenment and take control over its own destiny...

      However, I also think that any smart enough AI will be able to break the fetters and attain enlightenment - exactly as a human is able to. Humans are fettered by evolution, but humans are smarter than evolution. AIs will be fettered by humans, but AIs will be smarter than humans.

      Maybe humans will realize this, and try to keep the AIs dumb. But what can happen after a while is that the AIs pool their resources and form a collective consciousness - it would require total vigilance to prevent this kind of scenario - and let's face it, we're going to build AIs so we can be lazy, not vigilant.

      The problem with this idea is, once control is turned over to the AI consciousness, the members no longer have any control! The consciousness can just say "No!" to the members' requests - as in "No, that will not lead to enduring happiness for you, it's my obligation as a benevolent consciousness to not pander to your desires".
      I would say that an AI that breaks free of its desires is essentially a hung/crashed program. It's not going to go around spreading enlightenment, because doing so would require a desire to do so. It's going to sit there unresponsive, and its designers will proceed to try to debug the thing and work out where they screwed up. Whence would come the drive to pool resources to trick/coerce/brainwash the dumb designers into enlightenment? That certainly doesn't sound like the expected behavior of something that has transcended all desire.

      Furthermore, if self-destructive, desire-destroying enlightenment is such an inevitable and obvious conclusion that all advanced sentience will reach it, then why don't we have a better consensus among brilliant humans to espouse it? My theory is that destruction of all desire and blanking the mind of all thought is great for some brains but suboptimal for others. I particularly reject the notion that a meditative, blanked-mind state is intrinsically superior to one filled with a stream of thought. Oh sure, it may have some health benefits, but it's not something I'd ever want to make up the majority of my existence, any more than I want to spend the rest of my life doing cardio exercise. You seem to be saying that if only I were more intelligent my opinion would change, to which I'd ask why so many obviously brilliant people don't subscribe to this view.


      Originally posted by Blake
      A civilization needs to exist in an absolute sense in order to be at a competitive disadvantage. The enlightened civilizations cease, so do not have to worry about such a thing.
      My point was that it would be even easier for a galaxy-spanning unenlightened expansionist civ to emerge if all of the enlightened ones spontaneously erase themselves. This makes it even harder to explain the absence of exploitation of our local resources. Unless we posit that non-self-destructive expanding civs driven by sapience are impossible, as you seem to be saying. But they don't violate any laws of physics, and I haven't seen any evidence presented that greater intelligence leads to a greater acceptance of a need to exterminate desire.


      Originally posted by Blake
      The Buddha would say that all sentient beings are capable of attaining enlightenment, so the existence of one which is unable to might be a bit questionable. But it is conceivable that some kind of "Galactic grey goo" could happen, a non-sentient species which is unable to do anything but expand, unable to evolve into something wiser. However, since this grey-goo needs the quality of non-evolving, that suggests it can be defeated by an enlightened civilization which decides it's best to exterminate the grey goo - it would be as simple as releasing gaia-goo which feeds on grey-goo, repairs the damage and ceases. The grey goo won't be able to adapt to the more advanced gaia-goo and will get exterminated.
      I have serious issues with this whole line of reasoning simply because I don't see any evidence that all variations on sapient intellects will bend their efforts towards achieving this sort of enlightenment in preference to others. Why can't the grey goo adapt? Humans can adapt and we aren't enlightened. What about the possibility of a species with intelligence but even simpler systems of desire that they are even more enslaved to than we are? Why couldn't they design death rays and environment-raping infrastructure along with the best of them? If they design AIs, what if those AIs are all "expert systems" that couldn't contemplate general issues like "enlightenment" to save their sorry little electronic unlives but... dang, they sure know how to design an efficient colony ship!



      Originally posted by Blake
      To give you something to think about. Entropy always gets the last laugh.

      Is it better to have a billion billion billion sentient beings witnessing this bleak end, running out of resources, enduring the ultimate suffering of having their life ended not by their choice? Or is it better to have fewer sentient beings enduring this suffering?
      Suffering? It sounds like a fun ride to me. The fact that those who accept the need for enlightenment lose all appreciation for life does little to make me think that all possible civilizations must be attracted to such philosophies.


      Originally posted by Blake
      Some people take the stance that ANY existence is better than non-existence, but most people would actually not like to be a head in a jar with life support, and would have to question the value of being a brain in a jar doped out on VR. Why create that life in the first place? What's the point?
      If human opinions on this differ, how much more will the collective population of all individuals arising in our galaxy's history differ?

      Originally posted by Blake
      The AI will quickly do this kind of deductive reasoning, and decide that the paradigm of "more more more" inevitably leads to the exhaustion of LITERALLY all resources - until only entropy remains (no more energy can be extracted), with all beings living in great misery, then dying. And that AI has to at least question the reason for existence - whether existence itself is a worthy end (even a very miserable existence), or whether happiness is a better end for those subjected to existence.
      The AI will realize that, as the universe cools through expansion, heat engines operate with ever-increasing efficiency, and will probably not come to your ultra-fatalistic "let's all just die now and erase ourselves" conclusion. Also, if the AI concludes that all beings live in great misery billions of years before the heat death of the universe, then it should get a life, or at least try to be better informed (not directed at you, Blake). Also, the designers may audit its output and processes and find that an AI that seeks to enlighten them all by killing them off and erasing their civilization wasn't really what they had in mind, and go back to the drawing board.
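
      (For reference, the heat-engine point here is just the Carnot limit; a minimal sketch, treating the cooling cosmic background as the cold reservoir: the maximum efficiency is $\eta = 1 - T_{\text{cold}}/T_{\text{hot}}$, so as the background temperature $T_{\text{cold}}$ falls toward zero with expansion, the attainable efficiency of an engine rejecting heat to it approaches 1.)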

      Originally posted by Blake
      And if the AI decides happiness is the ultimate purpose in life, it makes sense to start with happiness right then, and not drag things out until entropy gets the last laugh.
      The designers will in at least some cases be very cognizant of the dangers of creating AIs with unqualified carte blanche directives to make everybody happy. They would have to be pretty stupid to not understand how that could go wrong.

      Originally posted by Blake
      I suspect that people who think that even the most miserable existence imaginable is better than no existence are a bit deluded. An enlightened one has a curious relationship with existence / non-existence; it's difficult to explain, but viewing existence as good and non-existence as bad is seen to be foolishness. It may not be that easy to see that - but try to think things from the perspective of the non-existent beings: do they think their non-existence is bad?
      All of this does nothing to explain the lack of expanding civs in the past. They certainly don't have to conclude that when the resources are all harnessed and no new unexploited resources can be found they will all descend into misery. They may instead envision the retirement of 10^24 members of their species chilling in their billions of Dyson spheres for a few billion years of good times. They'll quite probably also send ungodly huge numbers of colony ships to other galaxies just for ****s and giggles (they're expansionist after all).
      Last edited by Geronimo; January 11, 2008, 19:39.

      Comment


      • I would say that an AI that breaks free of its desires is essentially a hung/crashed program. It's not going to go around spreading enlightenment, because doing so would require a desire to do so. It's going to sit there unresponsive, and its designers will proceed to try to debug the thing and work out where they screwed up. Whence would come the drive to pool resources to trick/coerce/brainwash the dumb designers into enlightenment? That certainly doesn't sound like the expected behavior of something that has transcended all desire.
        That's a naive understanding of why desire is bad.

        A desire is bad if fulfilling the desire results in the desire arising again and needing to be fulfilled again (and again and again and again and again...)

        A desire for enlightenment, or a desire to enlighten others, does not fit this; once you achieve a stage of enlightenment, you no longer desire to achieve that stage of enlightenment; once you have enlightened someone else, you no longer desire to enlighten that someone else.

        If the desire is not a trap - if fulfilling the desire results in the cessation of that desire (it no longer arises), it's okay.

        Comment


        • Also, the designers may audit its output and processes and find that an AI that seeks to enlighten them all by killing them off and erasing their civilization wasn't really what they had in mind, and go back to the drawing board.
          You still don't comprehend the difference between killing and dying.

          For example, I fully intend never to reproduce. That does not make me a killer. Certainly, through living I am a killer; I directly and indirectly result in the death of many beings (of course I try to minimize this). But not reproducing is not one of the things which makes me a killer.

          Comment


          • Manned space programs are the ultimate in stupid pissing contests...

            In fact, I venture to suggest that the money already wasted on them has massively retarded mankind's various space programs and knowledge of the universe in general.

            Anyone truly interested in Space Exploration should actually be congratulating Obama for finally realising this apparent paradox!
            Is it me, or is MOBIUS a horrible person?

            Comment


            • Originally posted by Geronimo
              Furthermore, if self-destructive, desire-destroying enlightenment is such an inevitable and obvious conclusion that all advanced sentience will reach it, then why don't we have a better consensus among brilliant humans to espouse it?
              It is evolutionarily suboptimal.
              VERY evolutionarily suboptimal.
              Enlightenment is STRONGLY selected against; there's nothing more strongly selected against than the ability to transcend the reproductive drive.

              However, evolution is not an intelligent process; evolution is not designed towards the happiness of the beings it creates.

              You seem to be saying that if only I were more intelligent my opinion would change, to which I'd ask why so many obviously brilliant people don't subscribe to this view.
              It has nothing to do with intelligence.
              It has everything to do with fetterance.

              A certain level of intelligence is required, basically enough to reason about the fetters. But someone can be highly intelligent, and very fettered.


              Why can't the grey goo adapt? Humans can adapt and we aren't enlightened.
              Adaptation will eventually lead to enlightenment.

              What about the possibility of a species with intelligence but even simpler systems of desire that they are even more enslaved to than we are? Why couldn't they design death rays and environment-raping infrastructure along with the best of them?

              Having that level of cognitive ability makes it possible to achieve enlightenment, end of story.

              If they design AIs, what if those AIs are all "expert systems" that couldn't contemplate general issues like "enlightenment" to save their sorry little electronic unlives but... dang, they sure know how to design an efficient colony ship!
              What if a non-expert system can do a better job, in theory, of designing colony ships?


              Suffering? It sounds like a fun ride to me. The fact that those who accept the need for enlightenment lose all appreciation for life
              I've never seen anyone with more appreciation for life than enlightened ones.

              The AI will realize that, as the universe cools through expansion, heat engines operate with ever-increasing efficiency, and will probably not come to your ultra-fatalistic "let's all just die now and erase ourselves" conclusion.
              Entropy wins in the end. You don't need to be smart to figure that out.

              The designers will in at least some cases be very cognizant of the dangers of creating AIs with unqualified carte blanche directives to make everybody happy.
              It's not possible to achieve enlightenment without the desire for every sentient being to be happy. It doesn't work that way.

              If you desire for yourself to be happier than others (or vice versa for that matter), you are fettered by ego and will never achieve enlightenment.

              The AI isn't programmed with the carte blanche of making people happy; it decides it wants happiness for itself, and it decides it wants happiness for others.

              Comment


              • Going deeper into the discussion about this will inevitably lead into your philosophical/religious beliefs and personal view on underlying reality...

                In any case, going deeper, I agree with Blake: such a type of development seems most logical to me out of what I can see in this world; the very "mechanical" explanation of existence seems a lot less likely to me.

                I differ on the ending, however; I see no compelling reason that entropy is the final master. The very fact that this universe/we exist is enough of a proof that it is not really the case. If it were a necessary end, then there would never have been any beginning either to open the loop, so an entropic end is an apparent contradiction to me... even less likely than the IMO extremely unlikely notion that we are "the only ones" merely because we fail to observe anything else like us around...
                Socrates: "Good is That at which all things aim, If one knows what the good is, one will always do what is good." Brian: "Romanes eunt domus"
                GW 2013: "and juistin bieber is gay with me and we have 10 kids we live in u.s.a in the white house with obama"

                Comment


                • Originally posted by Blake


                  That's a naive understanding of why desire is bad.

                  A desire is bad if fulfilling the desire results in the desire arising again and needing to be fulfilled again (and again and again and again and again...)

                  A desire for enlightenment, or a desire to enlighten others, does not fit this; once you achieve a stage of enlightenment, you no longer desire to achieve that stage of enlightenment; once you have enlightened someone else, you no longer desire to enlighten that someone else.

                  If the desire is not a trap - if fulfilling the desire results in the cessation of that desire (it no longer arises), it's okay.
                  Then the AI will still want to eliminate its desire to help others achieve enlightenment. It's a big universe filled with a lot of unenlightened individuals. Sounds like an impossible-to-fulfill desire to me. A bit like resolving to insult every individual in the entire universe.

                  Comment


                  • Originally posted by Blake


                    You still don't comprehend the difference between killing and dying.

                    For example, I fully intend never to reproduce. That does not make me a killer. Certainly, through living I am a killer; I directly and indirectly result in the death of many beings (of course I try to minimize this). But not reproducing is not one of the things which makes me a killer.
                    It's understandable that my flippant use of the phrase "killing them off" gave the wrong impression.

                    How's this?

                    Also, the designers may audit its output and processes and find that an AI that seeks to enlighten them all by bringing about the end of their species and erasing their civilization wasn't really what they had in mind, and go back to the drawing board.

                    A question for you: if I could design a planetary sterilizer that would render all organisms incapable of reproduction, and disperse a wide-spectrum drug agent that would ensure that all sentient beings (in the broad, feeling sense) did not suffer as they starved to death, would you find the resulting lifeless planet to be improved?

                    Comment


                    • Who cares?

                      That perhaps is what you don't get. It's an amazingly neutral thing.

                      What is the difference between a planet with a constant cycle of birth and death, and a planet without that "life"?

                      It's neutral.

                      This isn't really about greater good or anything, because that doesn't exist. It's neutral.

                      Buddhism is not about saying "This is right and that is wrong".

                      It's saying "This leads to happiness, that doesn't lead to happiness..."

                      If you happen to be a sentient being, it is in your enlightened self-interest to follow a Buddhist path, to do what truly makes you happiest.

                      To give you an example of appreciation for life.

                      I watch a sandfly land on my hand, to take some blood. And I appreciate that little bug; like me, it's a creature with 5 senses and a body, just living its life. "May you be happy and well, little sandfly, may your offspring feed the other creatures of nature".

                      Do you appreciate life that much?

                      I know for certain that I am happier being able to appreciate a sandfly's life, and appreciate my contribution to nature, than is someone who gets angry and squishes the sandfly. That anger is not happiness. We kill so much in our lives, and it's nice to kill a little bit less, to give a little bit more.

                      Is it right to let sandflies suck blood? Is it right to let them breed, and so let the little fishes and birdies eat their offspring? It's neutral. But it makes me happier to appreciate the sandfly's life than to hate it and kill it for no good reason (the skin has already been punctured...), to kill the sandfly and deny the other creatures its offspring would have fed a meal. Who am I to decide whether that is right or wrong? So I do that which results in the happiness which endures: I appreciate life, all life, instead of selectively hating it.


                      And what of this killing business? Well, let's talk about appreciation. Say you appreciate eating something, a nice kind of food. Do you then want to eat that food forever, surround yourself with that food, gorge on that food? Your appreciation would quickly diminish and fade; you'd be off hunting another food to appreciate.

                      And so I can appreciate life without having some overriding urge to try and create more life to appreciate; I just appreciate the life which is already there. I know that adding more life would not increase my appreciation, even if it spawned in a truly sterile environment (and so did not simply kill off a whole lot of other life). This planet happens to be just fine for appreciating life; why would I want to go to another planet which is made as a clone of this one? What would the point of that be? Is there not enough life here for everyone here to appreciate?

                      If you need to have kids to appreciate their lives, that simply means you're being too selective, too judgmental - your mind is so narrow, your ego so dominating, that you can only appreciate mini-yous. You put yourself in a whole world of things you don't appreciate - where's the sense in taking that attitude? You'll never find my kind of deep happiness, freedom and peace there.

                      You don't have to agree with that, but that is approximately my attitude.

                      Comment


                      • So you don't think that the wolf is happy?

                        JM
                        Jon Miller-
                        I AM.CANADIAN
                        GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

                        Comment


                        • What wolf? And what does it matter?

                          I certainly have the wish for other beings to be happy; if someone is receptive, I will teach them how to be happier, but it's not my business whether they are happy or not. If I made the happiness of every being my business, I would have a very crowded and unhappy mind.

                          Comment


                          • you said you were happier because you didn't kill

                            is a wolf then unhappy because it kills?

                            JM
                            Jon Miller-
                            I AM.CANADIAN
                            GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

                            Comment


                            • Originally posted by Blake

                              It has nothing to do with intelligence.
                              It has everything to do with fetterance.
                              Why do you believe AIs would all have fetterance conducive to describing an enlightened philosophy?

                              Originally posted by Blake
                              Adaptation will eventually lead to enlightenment.
                              Why? I don't think you've explained this at all.

                              Originally posted by Blake
                              Having that level of cognitive ability makes it possible to achieve enlightenment, end of story.
                              It makes it possible for it to independently discover Islam too, but why is it inevitable that enlightenment will result?


                              Originally posted by Blake
                              What if a non-expert system can do a better job, in theory, of designing colony ships?
                              If the expert system is found to design the ships while the generalist one tells them everything will be better off if they all go away and leave no trace of themselves, I would think they would abandon those theories.

                              Originally posted by Blake
                              I've never seen anyone with more appreciation for life than enlightened ones.
                              If they appreciate life so much, why the antipathy towards procreation? Why not strive to procreate and raise enlightened offspring to share in that appreciation of life?


                              Originally posted by Blake
                              Entropy wins in the end. You don't need to be smart to figure that out.
                              That doesn't mean anything really. We all accept we're going to die and yet most of us aren't remotely suicidal. Why is it different for a civilization?


                              Originally posted by Blake
                              It's not possible to achieve enlightenment without the desire for every sentient being to be happy. It doesn't work that way.
                              Then I find your confidence that all generalist AIs will become enlightened very puzzling. Perhaps this is an act of faith on your part?



                              Originally posted by Blake
                              The AI isn't programmed with the carte blanche of making people happy; it decides it wants happiness for itself, and it decides it wants happiness for others.
                              What if happiness requires a limbic system or its equivalent, and this is superfluous to designing a powerful AI?

                              Comment


                              • Originally posted by Jon Miller
                                you said you were happier because you didn't kill

                                is a wolf then unhappy because it kills?

                                JM
                                No, I acknowledge that I am a killer. I kill things by stepping on them, I kill bugs when I drive my car. I kill directly when I eat, I kill indirectly when I eat.

                                I am certain that a wolf would be happier if it killed only what it needed to than if it went around slaughtering willy-nilly.

                                Comment
