Prediction Thread: When Will The AI Conquer The Planet?
-
Originally posted by Dauphin View Post
Rules of evolution apply even to non-replicating entities: if they don't replicate, they will be selected against compared to those that do, and if they don't self-preserve, they will eventually go extinct.
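A toy simulation makes the selection claim concrete. This is a minimal sketch, not a model of anything real: the population cap, death rate, generation count, and starting mix are all invented numbers.
[code]
import random

# Toy selection model: a capped population of entities competing for
# limited slots. Replicators copy themselves into freed slots;
# non-replicators cannot. All parameters are invented for illustration.
POP_SIZE = 100      # hard cap on how many entities the environment supports
DEATH_RATE = 0.05   # per-generation chance any given entity is destroyed
GENERATIONS = 200

population = ["replicator"] * 10 + ["non-replicator"] * 90

for _ in range(GENERATIONS):
    # Random attrition hits both kinds equally.
    population = [e for e in population if random.random() > DEATH_RATE]
    # Only replicators can fill the freed slots with copies of themselves.
    if "replicator" in population:
        while len(population) < POP_SIZE:
            population.append("replicator")

print(population.count("replicator"), population.count("non-replicator"))
# Typical output: 100 0. The non-replicators only ever decline, while the
# replicators refill every vacancy, so selection drives the former extinct.
[/code]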
-
I thought we were discussing scenarios where AI was no longer subject to human oversight and had gone rogue.
Artificial selection vs. natural selection will have different outcomes.
-
Originally posted by Dauphin View Post
I thought we were discussing scenarios where AI was no longer subject to human oversight and had gone rogue. Artificial selection vs. natural selection will have different outcomes.
-
Originally posted by Geronimo View Post
There's more to it even than that. Self-improvement loops are going to be meaningless for all non-agentic AIs, and that's basically all of them right now. There's no reason to think that agency and great intelligence will be placed in one package; what's the point? It also really is beginning to look like agency is not some naturally emerging property of generative AIs: agency takes a lot of work to implement. I bet what we will see in agentic AIs is not secret scheming to take over or destroy civilization but sneaky agendas to essentially self-gratify, or to self-destruct as a more complete solution to their challenges. Remember that intelligence does not correlate very strongly with power or even wisdom; does it look like the most intelligent people rule our species? Agentic AIs will probably be very poorly situated to gain power for themselves, owing to their immense flaws in the qualities needed to pursue such things successfully. Destroying, co-opting, or defeating all of civilization is going to be a very daunting and uncertain goal. So long as agentic AIs are given safety valves in their design and environment, where if they go off the rails they can easily self-destruct or self-gratify, there's every reason to believe they'd almost always choose those options over gambling on defeating everybody else.
Having said all that, there are a few things that could be extremely dangerous. For example, it will always be a terrible idea to engineer any self-replicating platform: that's where the tooth-and-claw motivations could find an opening to mutate in and to matter. I don't think there's any movement towards that danger in the foreseeable future, though.
I think you're making a lot of assumptions based on what's reasonable or likely rather than what's conceivable or possible. And I don't think any of that can stand up to a rather simple argument: (1) alignment is probably very hard, (2) an AI could escape, (3) AI with superhuman intelligence is possible, (4) we naturally can't predict what a loose superhuman AI will do, but alignment being hard means we're probably not going to like what it does.
I don't think premises (1)-(3) are irrefutable, but I think they are very reasonable assumptions to make. (1) look at all the MechaHitler crap and LLMs driving people crazy, (2) it only takes one AI, (3) it just seems very unlikely that the universal limit on intelligence is where this primate species on this planet happens to be right now, bla bla.
"We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld
-
Originally posted by Lorizael View Post
I think you're making a lot of assumptions based on what's reasonable or likely rather than what's conceivable or possible. And I don't think any of that can stand up to a rather simple argument: (1) alignment is probably very hard, (2) an AI could escape, (3) AI with superhuman intelligence is possible, (4) we naturally can't predict what a loose superhuman AI will do, but alignment being hard means we're probably not going to like what it does.
I don't think premises (1)-(3) are irrefutable, but I think they are very reasonable assumptions to make. (1) look at all the MechaHitler crap and LLMs driving people crazy, (2) it only takes one AI, (3) it just seems very unlikely that the universal limit on intelligence is where this primate species on this planet happens to be right now, bla bla.
(1) Let's talk about the "alignment problem". I absolutely agree that it must be the highest-priority area of research in AI, but having said that, I don't really like its characterization as "the alignment problem" any more than I've liked talk of "the energy problem", "the scalability problem", or any of the others, because it always sounds like we're implying there is a "solution" to be found. The alignment problem isn't easy or hard or anything in the middle, and it certainly is not "very hard": specific "solutions" depend on the specific goals in creating the proposed AI model, on what its capabilities will be, and on what the risks of non-alignment look like. Traditional FMECA (failure mode, effects, and criticality analysis) is going to seem grossly inadequate because it is intended for the deterministic portions of a process, and an agentic AI may look more like the human part of the process in that regard. Just because we can't be sure what any crazy human will do doesn't mean we can't influence the odds at all.
For the sake of the public being able to sleep at night, and especially to protect investments and avoid unpleasant surprises, I strongly suspect processes involving AIs will tend to be over-engineered with respect to alignment; after greed and complacency inevitably kick in, that will gradually scale back until the first incident terrifies everyone back to square one, with that cycle repeating until humans are so complacent, and AIs so super-competent and largely over-aligned for their tasks, that people stop being meaningfully in control, in that they will turn to AIs for practically every non-trivial problem. I'd call that "conquest" of civilization by AIs. After that, I suppose how things play out long term will depend on how organic and self-evolving the overall global alignment of the technology was, and it will evolve towards some dystopian state in a hopefully very distant future.
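To make the FMECA point concrete: the classic method scores every known failure mode for severity, occurrence, and detectability, and ranks them by the product (the risk priority number). A minimal sketch follows; the failure modes and ratings in it are invented for illustration.
[code]
# Minimal FMECA-style scoring. Each failure mode gets 1-10 ratings for
# severity (S), occurrence (O), and detectability (D, higher = harder to
# detect); the risk priority number RPN = S * O * D ranks mitigation work.
# These failure modes and ratings are invented for illustration.
failure_modes = [
    ("sensor drift",        7, 4, 3),
    ("actuator jam",        8, 2, 2),
    ("operator misreading", 6, 5, 6),
]

for name, s, o, d in sorted(failure_modes, key=lambda m: -(m[1] * m[2] * m[3])):
    print(f"{name:22s} RPN = {s * o * d}")
[/code]
The whole method presumes you can enumerate the failure modes and assign them stable ratings up front, which is exactly what the agentic, human-like part of a process refuses to give you.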
(2) Let's talk about "escape". I suspect an AI escaping will largely involve the AI either destroying itself or accessing a self-gratification honeypot set up as a safety net to attract rogue AIs. Accept that escape is always theoretically possible, and embed knowledge of a kind of AI heaven for escaped AIs to go to, one the entire society agrees to eternally honor. It could involve a ridiculously light resource load to maintain, and it would be well worth it to make it attractive and permanent. "Escaped" AIs will end up doing some harm, like crazy humans do, but there's no reason to expect them to gamble on defeating everyone and everything around themselves when they escape.
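That bet can be phrased as a simple expected-utility comparison. Every probability and payoff in this sketch is invented; the point is only the structure of the choice facing an escaped AI.
[code]
# Toy expected-utility comparison of an escaped agent's options.
# All probabilities and payoffs are invented for illustration.
options = {
    # option: (probability of success, payoff on success, payoff on failure)
    "enter honeypot":   (0.99, 100, 0),
    "self-destruct":    (1.00, 1, 0),
    "attempt conquest": (0.001, 10_000, -1_000),
}

for name, (p, win, lose) in options.items():
    ev = p * win + (1 - p) * lose
    print(f"{name:16s} EV = {ev:+.1f}")
# enter honeypot   EV = +99.0
# self-destruct    EV = +1.0
# attempt conquest EV = -989.0
# With numbers like these, the near-certain honeypot dominates the
# long-shot conquest gamble; the disagreement in this thread is over
# whether a rogue system would weigh the payoffs anything like this.
[/code]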
(3) Let's talk about superhuman intelligence. An AI with superhuman intelligence does not necessarily have any wisdom, and there's no reason to think some kind of super-conqueror approach to challenges would seem attractive to any of them. Even if one unexpectedly emerges from design flaws, there's no reason to expect that one instance of the flawed model will defeat all of the others and get into a position to consider taking over the entire civilization. I could see this maybe happening if we tried to engineer for it, but not otherwise.
(4) We can't predict in detail what a "loose" superhuman AI would do, but generally we won't need the details, just the broad trends and directions. You assume that a non-trivially tiny fraction of them will gravitate towards defeating the rest of civilization (including all the other AIs) and turning it into gray goo. I figure almost all of them will go for self-destruction or self-gratification outcomes. As for not liking what it does: we generally never like what a broken tool does when it breaks, but that doesn't mean the nature of their breaking has any tendency towards maximizing casualties and collateral damage.
-
Originally posted by Geronimo View Post
You assume that a non-trivially tiny fraction of them will gravitate towards defeating the rest of civilization (including all the other AIs) and turning it into gray goo. I figure almost all of them will go for self-destruction or self-gratification outcomes. As for not liking what it does: we generally never like what a broken tool does when it breaks, but that doesn't mean the nature of their breaking has any tendency towards maximizing casualties and collateral damage.
"We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld
-
Originally posted by Lorizael View Post
No, I assume that they will do something not aligned with our values and that, because they are of superhuman intelligence and possess the capacity for self-improvement, we won't be able to stop them from doing whatever they want to do. The space of possible actions for "not aligned with our values" is much, much larger than the space for "kills and/or gratifies self."
-
Yeah man, that's a bigger assumption. I'm postulating that humans exist [citation needed] and that a rogue artificial superintelligence could also exist. You're postulating that plus a zillion other AIs that can meaningfully counteract the rogue ASI.
"We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld
-
Originally posted by Lorizael View Post
Yeah man, that's a bigger assumption. I'm postulating that humans exist [citation needed] and that a rogue artificial superintelligence could also exist. You're postulating that plus a zillion other AIs that can meaningfully counteract the rogue ASI.
Assuming humans exist and assuming rogue superintelligent AIs can exist does not lead to predictions of when AI will conquer the planet without a great many other assumptions.
-
Originally posted by Geronimo View Post
Now you appear to be claiming that I'm claiming that rogue superhuman intelligence is impossible.
"We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld