  • Do singularity people consider this possibility?

    Why do singularity people assume that the creation of a smarter-than-human AI will result in an intelligence explosion?


    First off, we don't have a good idea of how hard it is to create intelligence, and we have no idea how much harder it gets at the upper levels. Is it so hard to imagine that the difficulty of creating smarter AI scales quickly enough to kill technological-singularity dreams?


    Also, what about the limits of intelligence? What if the upper limit of attainable intelligence isn't much better than what humanity's few super-geniuses have reached? (I'm talking about general intelligence here.)
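
    The difficulty-scaling worry above can be put as a toy model. All the curves below are invented for illustration; nobody knows the real shape of the difficulty function, which is exactly the OP's point.

```python
# Toy model: each generation adds 1/difficulty(c) to capability c.
# How fast difficulty grows with capability decides whether growth
# takes off or slows to a crawl. Made-up curves, not empirical claims.

def capability_after(difficulty, generations=1000, c0=1.0):
    """Run `generations` rounds of self-improvement; return final capability."""
    c = c0
    for _ in range(generations):
        c += 1.0 / difficulty(c)
    return c

easy = capability_after(lambda c: 1.0)       # constant difficulty: linear growth (~1001)
harder = capability_after(lambda c: c)       # linear difficulty: ~sqrt growth (~45)
brutal = capability_after(lambda c: c ** 3)  # cubic difficulty: near-stagnation (~8)
```

    Under this (purely hypothetical) model, the singularity question becomes an empirical question about the shape of the difficulty curve.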
    Modern man calls walking more quickly in the same direction down the same road “change.”
    The world, in the last three hundred years, has not changed except in that sense.
    The simple suggestion of a true change scandalizes and terrifies modern man. -Nicolás Gómez Dávila

  • #2
    I hope that at some point people will stop writing code (software). If there is an AI that is as smart as a human but a million times faster, then it should have enough time to write bug-free, optimised code.
    I am so looking forward to ATI hiring a robot to write some decent drivers...
    Quendelie axan!

    Comment


    • #3
      Why do singularity people assume that the creation of a smarter-than-human AI will result in an intelligence explosion?


      Because if humans become capable of creating an intelligence smarter than humans, then said intelligence will be capable of creating an even smarter intelligence (even if only by epsilon). Ad infinitum. Duh?
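
      One nuance in the "even if only by epsilon" step: a fixed epsilon per generation gives only linear growth; an explosion needs the gains to compound. A toy sketch with invented numbers:

```python
# "Ad infinitum" comes in two flavours. If each AI makes its successor
# smarter by a fixed epsilon, capability grows linearly; if each AI
# improves by a fixed *fraction* of its own intelligence, growth is
# exponential. Toy numbers only.

def generations(step, n=100, c0=1.0):
    c = c0
    for _ in range(n):
        c = step(c)
    return c

additive = generations(lambda c: c + 0.1)  # fixed epsilon: ends near 11
compound = generations(lambda c: c * 1.1)  # proportional gain: ~13,780
```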

      Comment


      • #4
        :drool: I can't wait to find out what AI thinks of us. Keep or Destroy?
        be free

        Comment


        • #5
          And each level up the process should be faster than the previous one, so you get exponential increases over time.

          Plus, your question is flawed in several ways, I think. The core one is what your definitions of intelligence and general intelligence are.

          First off we don't have a good idea of how hard it is to create intelligence
          I think we do have pretty good ideas about that, actually. We can already create lots of different levels of intelligence, and in some areas AIs already surpass the best humans, e.g. chess. It's an exciting field.

          Also, what about the limits of intelligence? What if the upper limit of attainable intelligence isn't much better than what humanity's few super-geniuses have reached? (I'm talking about general intelligence here.)
          Highly intelligent people (again subject to definition), tend to be able to picture or hold many more different variables and problems in their heads simultaneously than 'normal' people. It's easy to see how you can scale up processing power, simultaneous processing and data storage to higher and higher levels. The processes of intelligence are very simple, it's just a question of scale, so why should there be a limit?
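
          The "each level up should be faster" claim above has a concrete mathematical shape: if each generation takes a fixed fraction of the time of the one before it, the total time for infinitely many generations is a convergent geometric series. The numbers below are illustrative, not predictions.

```python
# If generation n takes `speedup` times as long as generation n-1
# (speedup < 1), total elapsed time converges to
# first_gen_years / (1 - speedup): infinitely many generations
# fit inside a finite horizon. Illustrative numbers only.

def time_for(generations, first_gen_years=10.0, speedup=0.5):
    """Total elapsed years for the given number of generations."""
    return sum(first_gen_years * speedup ** n for n in range(generations))

# With a 2x speedup per level, 50 generations already sit just
# under the 20-year ceiling of the series:
fifty = time_for(50)
```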
          Jon Miller: MikeH speaks the truth
          Jon Miller: MikeH is a shockingly revolting dolt and a masturbatory urine-reeking sideshow freak whose word is as valuable as an aging cow paddy.
          We've got both kinds

          Comment


          • #6
            It's not just scale. You can have infinite chess pieces and infinite square tiles. The real question is: can we create intelligence that is not random or meaningless?

            Comment


            • #7
              Some TV show once defined The Singularity as "The Rapture for nerds." Fair synopsis.
              1011 1100
              Pyrebound--a free online serial fantasy novel

              Comment


              • #8
                Originally posted by loin
                A self-replicating/improving AI would design its replacement twice as quickly and with ten times the bugs of a team of human engineers. The Eschaton will be so riddled with viruses that it will solely dedicate its nigh-infinite processing power towards the accumulation of penis enlargement pills.
                .
                <p style="font-size:1024px">HTML is disabled in signatures </p>

                Comment


                • #9
                  Originally posted by FrostyBoy
                  It's not just scale. You can have infinite chess pieces and infinite square tiles. The real question is: can we create intelligence that is not random or meaningless?
                  You can't have infinite pieces or tiles, but that's another discussion.

                  You could have a lot of pieces and a lot of tiles. And finding the best move would still be about running through as many possible moves as you can as fast as you can and evaluating the best one based on some knowledge of the game. So scale up the power and you scale up the 'intelligence'.
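
                  That "run through the moves and evaluate" loop is a fair description of game-tree search; here is a minimal sketch on a trivial game (Nim: take 1-3 objects from a pile, whoever takes the last one wins), purely to illustrate the point.

```python
# Exhaustive game-tree search on Nim. Scaling this same loop to
# bigger games is (mostly) a matter of compute, which is the point
# being made above.

def best_move(pile):
    """Return (score, move): score is +1 if the player to move can force a win."""
    if pile == 0:
        return (-1, None)  # the previous player took the last object and won
    best = (-2, None)
    for take in (1, 2, 3):
        if take <= pile:
            score = -best_move(pile - take)[0]  # opponent's result, negated
            if score > best[0]:
                best = (score, take)
    return best
```

                  Nim theory says a pile that is a multiple of 4 is lost for the player to move, and the brute-force search rediscovers that on its own.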

                  Comment


                  • #10
                    Maybe the real question is whether human intelligence is meaningful or not, which is back to a debate from another thread a few weeks ago.

                    Comment


                    • #11
                      Human 'intelligence' is fairly random, driven much more by primal instinct, hormones, and conditioning than by rationalism.

                      Comment


                      • #12
                        One of the major problems with the whole singularity business is that the brain is relatively robust while a digital computer is relatively brittle (even leaving aside the undecidability of bug removal). Brain cells die and synapses misfire all the time, yet we still manage to more or less function. On the other hand, a single transient memory fault can crash a supercomputer.

                        Re: the difference between human intelligence and computational intelligence, human-computer chess games are a good example of the disconnect: human chess masters look only one or two moves ahead and mostly focus on board patterns (trying to maximize the space covered by a bishop, etc.), while chess-playing supercomputers look ahead many moves, with fancy heuristics to trim the search space.
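
                        The brittleness point is easy to demonstrate in miniature: flip one bit of a number's representation and the value changes drastically, with nothing like the graceful degradation of a brain losing a few neurons.

```python
# One flipped bit in a 64-bit IEEE-754 float: no graceful degradation.
import math
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 representation flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (out,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return out

halved = flip_bit(1.0, 52)    # lowest exponent bit flipped: 1.0 becomes 0.5
blown_up = flip_bit(1.0, 62)  # highest exponent bit flipped: 1.0 becomes inf
```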

                        Comment


                        • #13
                          I think the shaky assumption behind the technological singularity is that an AI designed to create exponentially more intelligent AIs would ever stop doing that and start doing something else (like taking over the world).

                          We create AIs to play games, to predict stock trends, to solve logistical problems, etc., but I'm not sure I ever see us creating an AI whose purpose is to better itself. And if we don't do that, what's going to start the out-of-control acceleration toward a malevolent artificial intelligence?

                          A computer program designed to create computer programs that are better at managing air traffic control patterns is never going to randomly start designing computer programs that are better at engaging in nuclear war.
                          Click here if you're having trouble sleeping.
                          "We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld

                          Comment


                          • #14
                            The whole point of a true AI is that it's conscious and can choose its own goals. We don't necessarily design it with that goal. But it's a human goal, and we'd be creating the AI in our image to some extent, so it's reasonable to assume it might share it, even if only through self-improvement rather than replicating superior versions of itself.

                            Comment


                            • #15
                              I just don't see why scientists would create a conscious, sapient AI with no purpose. Scientists have to get their funding from somewhere, after all.

                              And, even if scientists do manage to create a generalized AI that's smarter than we are, why would they give it access to the stock market and the satellites and all that? If you create a strong, sapient AI, put it on a computer not connected to the internet and feed it information via disposable storage media. Come on, don't be stupid.
                              Click here if you're having trouble sleeping.
                              "We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld

                              Comment
