I love that if you use the AI on Trump's Truth Social, it will fact-check pretty much everything he posts as a lie
Prediction Thread: When Will The AI Conquer The Planet?
I got the idea to log in here for the first time in years (or decades?) after reading an article on Dead Internet Theory. I wanted to test whether AI cared about long-forgotten, nearly deserted old forums. This does seem to be a haven from AI slop and similar 'content'. However, after skimming a couple of threads and the posts made by Apolyton old-timers, it now seems to me like the complete AI takeover cannot come fast enough...
-
Originally posted by Geronimo View Post
I think the conquest will be a gradual fuzzy line...
Click here if you're having trouble sleeping.
"We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld
-
Originally posted by Lorizael View Post
hmm i think it will be more of a... grey... goo
-
I just think the alignment problem is extremely hard and we're much more likely to create an artificial superintelligence that just doesn't care about us in precisely the same way we do. I think what's likely isn't that AI will be Terminator/Matrix/IHNMAIMS-style rebellious and malevolent (that, too, seems way too human), but just something with a mindset and goals so completely orthogonal to ours that human survival and flourishing simply don't register or factor in at all.
Click here if you're having trouble sleeping.
"We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld
-
Originally posted by Lorizael View Post
I just think the alignment problem is extremely hard and we're much more likely to create an artificial superintelligence that just doesn't care about us in precisely the same way we do. I think what's likely isn't that AI will be Terminator/Matrix/IHNMAIMS-style rebellious and malevolent (that, too, seems way too human), but just something with a mindset and goals so completely orthogonal to ours that human survival and flourishing simply don't register or factor in at all.
-
The problem is the idea of an intelligence explosion. If there's some neat but relatively hard to find trick to exponential self-improvement and an AI learns it, it may not matter what other AIs are around at the time.
Click here if you're having trouble sleeping.
"We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld
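The "exponential self-improvement" worry above can be sketched as a toy compound-growth model. This is purely illustrative: the per-cycle gain factor is an assumption for the sake of the sketch, not a claim about any real system.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each self-improvement cycle multiplies capability by a
# fixed factor; even a modest per-cycle gain compounds explosively.
def capability_after(cycles, gain_per_cycle=1.10, start=1.0):
    """Capability after `cycles` rounds of self-improvement."""
    cap = start
    for _ in range(cycles):
        cap *= gain_per_cycle
    return cap

# A 10% gain per cycle yields roughly a 13,780x increase after 100 cycles.
```

That compounding is the whole argument: the per-cycle gain barely matters once the loop runs long enough, which is why a "relatively hard to find trick" could swamp whatever else is around.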
-
What’s the definition of improvement?
I may be asking a meaningless question here, but what is more important when it comes to AI intelligence, the software (algorithms and heuristics), the data (that informs the decisions) or the hardware (that defines the computational rate)?
Or put another way, what constraints are there on AI’s ability to grow? And which one is the rate-limiting step?
One day Canada will rule the world, and then we'll all be sorry.
-
Originally posted by Lorizael View Post
The problem is the idea of an intelligence explosion. If there's some neat but relatively hard to find trick to exponential self-improvement and an AI learns it, it may not matter what other AIs are around at the time.
A grey goo outcome is theoretically possible, but honestly, if it were likely we'd all have been grey goo for billions of years already, courtesy of ET rogue AIs. If we are not a simulation, then runaway self-improving AI loops are probably not trivial to develop, and maybe almost impossible to maintain. And if we *are* a simulation, I'd bet our simulation won't be allowed to evolve toward some trivial grey goo outcome anyway.
-
Originally posted by Dauphin View Post
What’s the definition of improvement?
I may be asking a meaningless question here, but what is more important when it comes to AI intelligence, the software (algorithms and heuristics), the data (that informs the decisions) or the hardware (that defines the computational rate)?
Or put another way, what constraints are there on AI’s ability to grow? And which one is the rate limiting step.
Having said all that, there are a few things that could be extremely dangerous. For example, it will always be a terrible idea to engineer any self-replicating platform. That's where the tooth-and-claw motivations could find an opening to mutate in and to matter. I don't think there's any movement toward that danger in the foreseeable future, though.
<10 characters right?>
Last edited by Geronimo; August 14, 2025, 11:38. Reason: too--->to is probably not worth risking for one character
-
Re the first part. Rules of evolution would no doubt apply - self-preservation, replication, etc. would become evolutionary pressures - not necessarily intelligence. Who knows what may result.
One day Canada will rule the world, and then we'll all be sorry.
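That point can be sketched with a toy selection model (the replicators, mutation scheme, and resource cap below are my assumptions, nothing more): the only heritable trait is a copy-rate, and the fastest copiers come to dominate without any "intelligence" trait existing at all.

```python
import random

# Toy evolutionary-pressure demo (illustrative only): replicators differ
# only in an integer copy-rate; offspring occasionally mutate that rate.
def run_generations(pop, generations=20, cap=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        offspring = []
        for rate in pop:
            for _ in range(rate):  # higher copy-rate -> more offspring
                child = rate + rng.choice([-1, 0, 0, 0, 1])  # occasional +/-1 mutation
                offspring.append(max(1, child))
        # resource limit: keep at most `cap` random survivors
        pop = rng.sample(offspring, min(cap, len(offspring)))
    return pop

# Starting everyone at rate 2, the average copy-rate drifts upward over
# generations: selection rewards replication itself, nothing else.
```

Mutation here is symmetric (up and down equally likely), so any upward drift in the average comes purely from faster copiers leaving more descendants - exactly the "self-preservation and replication become the pressures" point.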
-
Originally posted by Dauphin View Post
Re the first part. Rules of evolution would no doubt apply - self-preservation, replication, etc. would become evolutionary pressures - not necessarily intelligence. Who knows what may result.