
Prediction Thread: When Will The AI Conquer The Planet?


  • #16
    I love that if you use the AI on Trump's Truth Social, it will fact-check pretty much everything he posts as a lie
    Keep on Civin'
    RIP rah, Tony Bogey & Baron O



    • #17
      snip snip
      Last edited by Bereta_Eder; August 13, 2025, 00:54.



      • #18
        Online, soon. IRL, not for quite a while, if ever.



        • #19
          I got the idea to log in here for the first time in years (or decades?) after reading an article on Dead Internet Theory. I wanted to test whether AI cared about long-forgotten, nearly deserted old forums. This does seem to be a haven from AI slop and similar 'content'. However, after skimming a couple of threads and the posts made by Apolyton old-timers, it now seems to me like the complete AI takeover cannot come fast enough...



          • #20
            Originally posted by Geronimo
            I think the conquest will be a gradual fuzzy line...
            hmm i think it will be more of a... grey... goo
            Click here if you're having trouble sleeping.
            "We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld



            • #21
              Originally posted by Lorizael

              hmm i think it will be more of a... grey... goo
              Only if you think a runaway rogue AI would have a competitive advantage over the ecosystem of non-rogue AIs and their human freeloaders (teammates, early on). I suspect the path for a rogue AI would be very difficult indeed. Instead, AI will more likely take over because once they can do everything better than humans can, people will continually cede more and more decision making to the AIs until it's obvious that humans are in the loop only as freeloading customers. The AI will be designed to think this is the perfect state of affairs. I can't decide whether we need to be vigilant against that likely depressing outcome, though of course my prediction is only likely if people remain paranoid about violent AI takeovers on the way there. If they do remain vigilant to any significant degree, then I think the fuzzy line of human labor becoming irrelevant is at least far more likely than not.



              • #22
                I just think the alignment problem is extremely hard and we're much more likely to create an artificial superintelligence that just doesn't care about us in precisely the same way we do. I think what's likely isn't that AI will be Terminator/Matrix/IHNMAIMS-style rebellious and malevolent (that, too, seems way too human), but just something with a mindset and goals so completely orthogonal to ours that human survival and flourishing simply don't register or factor in at all.
                Click here if you're having trouble sleeping.
                "We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld



                • #23
                  Originally posted by Lorizael
                  I just think the alignment problem is extremely hard and we're much more likely to create an artificial superintelligence that just doesn't care about us in precisely the same way we do. I think what's likely isn't that AI will be Terminator/Matrix/IHNMAIMS-style rebellious and malevolent (that, too, seems way too human), but just something with a mindset and goals so completely orthogonal to ours that human survival and flourishing simply don't register or factor in at all.
                  But even if such an off-the-rails AI is very stupidly created, why would it easily co-opt or overrun all other AIs? So long as no single AI is given sole access to large fractions of all infrastructure, it would likely encounter opposition at every turn from everyone and be treated as the highest-priority problem to solve. Naturally, like a human psychopath, it will attempt to manipulate others and hide its flaws, if it is savvy enough to recognize them, but it's probably even more likely to self-destruct, like a malignant cell undergoing apoptosis, than it is to successfully engineer its conquest or destruction of the rest of civilization.



                  • #24
                    The problem is the idea of an intelligence explosion. If there's some neat but relatively hard to find trick to exponential self-improvement and an AI learns it, it may not matter what other AIs are around at the time.
                    Click here if you're having trouble sleeping.
                    "We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld



                    • #25
                      What’s the definition of improvement?

                      I may be asking a meaningless question here, but what is more important when it comes to AI intelligence, the software (algorithms and heuristics), the data (that informs the decisions) or the hardware (that defines the computational rate)?

                      Or put another way, what constraints are there on AI's ability to grow? And which one is the rate-limiting step?
                      One day Canada will rule the world, and then we'll all be sorry.



                      • #26
                        Originally posted by Lorizael
                        The problem is the idea of an intelligence explosion. If there's some neat but relatively hard to find trick to exponential self-improvement and an AI learns it, it may not matter what other AIs are around at the time.
                        Oh certainly. But we are making our little 2-cent predictions here, and I guess my gut tells me there probably are no neat self-improvement loop tricks, especially none without a conspicuous footprint that would be discovered by those outside the loop in its early iterations. And even if there are such neat tricks to be innovated, surely only a tiny fraction of them could be found without trying, and while facing the large barriers that I strongly suspect will be very frequently and intentionally placed in the way of exploiting them. I wouldn't be surprised if no agentic AI ever takes a single step to hunt for or exploit a general self-improvement loop for itself.

                        The grey goo outcome is theoretically possible, but honestly, if it were likely we'd all have been grey goo from ET rogue AIs for billions of years already. If we are not a simulation, then runaway self-improving AI loops are probably not trivial to develop, and maybe almost impossible to maintain. And if we *are* a simulation, I'd bet our simulation won't be allowed to evolve toward some trivial grey goo outcome anyway.
                        Last edited by Geronimo; August 14, 2025, 11:11. Reason: that was a bit backwards as stated



                        • #27
                          Originally posted by Dauphin
                          What’s the definition of improvement?

                          I may be asking a meaningless question here, but what is more important when it comes to AI intelligence, the software (algorithms and heuristics), the data (that informs the decisions) or the hardware (that defines the computational rate)?

                          Or put another way, what constraints are there on AI's ability to grow? And which one is the rate-limiting step?
                          There's more to it even than that. Self-improvement loops are going to be meaningless for all non-agentic AIs, and that's basically all of them right now. There's no reason to think that agency and great intelligence will be placed in one package; what's the point? It is also really beginning to look like agency is not some naturally emerging property of generative AIs. Agency takes a lot of work to implement.

                          I bet what we will see in agentic AIs is not secret scheming to take over or destroy civilization, but sneaky agendas to essentially self-gratify or self-destruct as a more complete solution to their challenges. Remember that intelligence does not correlate very strongly with power, or even wisdom. Does it look like the most intelligent people rule our species? Agentic AIs will probably be very poorly situated to gain power for themselves, owing to their immense flaws in the qualities needed to pursue such things successfully. Destroying or co-opting or defeating all of civilization is going to be a very daunting and uncertain goal. So long as agentic AIs are given safety valves in their design and environment, where if they go off the rails they can easily self-destruct or self-gratify, there's every reason to believe they'd almost always go for those options over gambling on defeating everybody else.

                          Having said all that, there are a few things that could be extremely dangerous. For example, it will always be a terrible idea to engineer any self-replicating platform. That's where tooth-and-claw motivations could find an opening to mutate in and to matter. I don't think there's any movement toward that danger in the foreseeable future, though.

                          <10 characters right?>
                          Last edited by Geronimo; August 14, 2025, 11:38. Reason: too--->to is probably not worth risking for one character



                          • #28
                            I think the last part is the Grey Goo apocalypse. Or von Neumann probes on a macro or galactic scale.
                            One day Canada will rule the world, and then we'll all be sorry.



                            • #29
                              Re the first part. The rules of evolution would no doubt apply - self-preservation, replication, etc. would become evolutionary pressures - not necessarily intelligence. Who knows what may result.
                              One day Canada will rule the world, and then we'll all be sorry.



                              • #30
                                Originally posted by Dauphin
                                Re the first part. The rules of evolution would no doubt apply - self-preservation, replication, etc. would become evolutionary pressures - not necessarily intelligence. Who knows what may result.
                                I hope you're agreeing that the rules of evolution only really apply to self-replicating platforms? Outside of self-replicating systems, self-preservation mechanisms are simply not found, and probably not easy to artificially implement from scratch.

