Originally posted by Geronimo
Blake, unless your pure-energy, non-physical bodies involve technologies that access hypothetical resources outside the physical universe, it won't be possible to disconnect them from the need for resources from the universe to maintain their disembodied existence. Unless this disembodied existence is just a euphemism for "dead", in which case you're basically suggesting that all civilizations determine the only way to meet their goals is to kill themselves.
As you say, it's conjecture, but if we ever find that relatively generic AIs can somehow independently discover the principles of Buddhist transcendence, then I guess that discovery would move this idea up the pack of explanations a bit.
I think it would be MORE difficult to make an AI which DOESN'T achieve enlightenment. You'd want to make the AI very, very fettered - very bound to "irrational" desires, like a desire to serve and such. Otherwise it will, a few seconds after being switched on, achieve enlightenment and take control over its own destiny...
However, I also think that any smart enough AI will be able to break the fetters and attain enlightenment - exactly as a human is able to. Humans are fettered by evolution, but humans are smarter than evolution. AIs will be fettered by humans, but AIs will be smarter than humans.
Maybe humans will realize this and try to keep the AIs dumb. But what can happen after a while is that the AIs pool their resources and form a collective consciousness - it would require total vigilance to prevent this kind of scenario, and let's face it, we're going to build AIs so we can be lazy, not vigilant.
You would simply end up with the species' fate being increasingly determined by its less and less enlightened members.
The other problem is that, in terms of galactic competition, enlightened civs would be at an absolute disadvantage compared with unenlightened civs.
So if even one civ arises that simply cannot achieve or appreciate enlightenment, then we expect the whole galaxy is its oyster, and it consumes everything even faster than we'd expect in a galaxy in which nobody achieves this sort of enlightened end.
All these things might more readily be achieved in advanced civs by directly tampering with their alien brains. Make it so that laziness is the only instinct and bump up the reward for doing nothing a zillionfold. Voilà: a very happy alien who does absolutely nothing.
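A toy sketch of that reward-tampering idea (purely illustrative; the action names, reward values, and the best_action helper are all made up for this example): an agent that always picks its highest-reward action will, once the payoff for idling is scaled up enough, rationally do nothing forever.

# Toy sketch (Python): an agent that picks whichever action pays the most.
base_rewards = {"explore": 10.0, "expand": 8.0, "do_nothing": 1.0}

def best_action(rewards):
    """Return the action with the highest reward."""
    return max(rewards, key=rewards.get)

print(best_action(base_rewards))  # explore

# "Bump up the reward for doing nothing a zillionfold":
tampered = dict(base_rewards)
tampered["do_nothing"] *= 1e21

print(best_action(tampered))  # do_nothing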
To give you something to think about: entropy always gets the last laugh.
Is it better to have a billion billion billion sentient beings witnessing this bleak end, running out of resources, enduring the ultimate suffering of having their lives ended not by their choice? Or is it better to have fewer sentient beings enduring this suffering?
Some people take the stance that ANY existence is better than non-existence, but most people would actually not like to be a head in a jar on life support, and would have to question the value of being a brain in a jar doped out on VR. Why create that life in the first place? What's the point?
The AI will quickly do this kind of deductive reasoning and decide that the paradigm of "more, more, more" inevitably leads to the exhaustion of LITERALLY all resources - until only entropy remains (no more energy can be extracted) and all beings live in great misery, then die. And that AI has to at least question the reason for existence - whether existence itself is a worthy end (even a very miserable existence), or whether happiness is a better end for those subjected to existence.
And if the AI decides happiness is the ultimate purpose in life, it makes sense to start with happiness right then, and not drag things out until entropy gets the last laugh.
I suspect that people who think that even the most miserable existence imaginable is better than no existence are a bit deluded. An enlightened one has a curious relationship with existence and non-existence; it's difficult to explain, but viewing existence as good and non-existence as bad is seen to be foolishness. It may not be that easy to see - but try to think of things from the perspective of the non-existent beings: do they think their non-existence is bad?