Why do singularity people assume that the creation of a smarter-than-human AI will result in an intelligence explosion?
First off, we don't have a good idea of how hard it is to create intelligence, and we have no idea how much harder it gets at higher levels. Is it so hard to imagine that the difficulty of creating smarter AI would scale quickly enough to kill technological singularity dreams?
Also, what about the limits of intelligence? What if the upper limit of attainable intelligence isn't much higher than what humanity's few super-geniuses have already reached? (I'm talking about general intelligence here.)