Do singularity people consider this possibility?

  • #91
    Elok, hopefully KH will respond nicely. You two are having a very educational conversation.




    blau, I'm starting to get really tired of watching you pretend to have ANY knowledge of the **** you blab on about.
    12-17-10 Mohamed Bouazizi NEVER FORGET
    Stadtluft Macht Frei
    Killing it is the new killing it
    Ultima Ratio Regum



    • #92
      Originally posted by Elok View Post
      Lori, I don't know enough math to understand the Wiki on ATP (sorry, you're arguing with a '**** here). I just don't have a head for math beyond Pre-calc, and "proving" theorems was always beyond me. But it says early in the article that such programs often have difficulty recognizing invalid formulations. Is there in fact a computer that could step back and say, "no, Penrose triangles are impossible"?
      Elok, the only reason certain tasks like that are "intuitive" to humans is that over the eons we have evolved hyperspecialized circuits in the brain that already know the answer to large classes of problems. It's the equivalent of taking one of these general-purpose AIs and adding some code that says "oh, if you see this case, we already know the answer is X; don't bother running all of your logic on it".
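
      Schematically, something like this (all names and cases below are made up, purely to illustrate):

      ```
      # Toy sketch: a general-purpose solver with hard-wired special
      # cases bolted on, checked before any general reasoning runs.
      KNOWN_ANSWERS = {
          "penrose_triangle_constructible": False,  # "we just already know"
          "object_behind_occluder_persists": True,
      }

      def solve(problem_id, general_solver):
          # Hyperspecialized circuit: cached answers come first...
          if problem_id in KNOWN_ANSWERS:
              return KNOWN_ANSWERS[problem_id]
          # ...and only then do we bother running all of the logic.
          return general_solver(problem_id)

      # e.g. solve("penrose_triangle_constructible", some_prover) -> False
      ```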



      • #93
        1. Is the computer really thinking, or is it just modeling an illusory artifact which is giving the appearance of thought in obedience to a command? Well, I guess it doesn't matter except to Agathon and FakeBoris, so scratch that.


        Strong vs. weak AI argument. Old hat. It has no bearing on how an AI manipulates its external environment (e.g. designs a successor).

        2. Doesn't a perfect simulation necessitate a perfect understanding of how the human mind works? That's a hurdle.


        We will make finer and finer approximations. Nobody will ever have a perfect understanding of any physical system. Eventually we will get to the point where the AI displays human behaviour.

        3. If I'm reading you correctly, you're talking about a massive program here. It'd have to model the behavior of every electron crossing every synapse, and the responsive behavior of every neuron.


        You're assuming that we need a certain level of simulation. What if the "crossover" happens earlier (i.e. the details below some higher level of abstraction don't really matter)?

        I know this is just a thought experiment, but that's unfeasibly huge. Is there enough silicon, or other material, in the earth to accurately model all those trillions of "parts?"


        Errr....yes. We're already at billions of transistors on single chips and trillions of bits on single hard drives.
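
        Rough numbers, to put "enough silicon" in perspective (ballpark figures; the storage cost per synapse is a guess):

        ```
        # Back-of-envelope: commonly cited brain-scale estimates.
        synapses = 1e14            # ~100 trillion synapses
        bytes_per_synapse = 4      # guess: one 32-bit weight each

        state_bytes = synapses * bytes_per_synapse
        print(state_bytes / 1e12, "TB")  # 400.0 TB of synaptic state
        # i.e. a rack of commodity hard drives, not the Earth's crust.
        ```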

        4. The human mind works as part of a human body. Would it simulate a body too, hormones and everything, or just offer rough approximations? Where is this brain-sim getting its sense data from? Or is it a Descartes-bot, sitting in darkness and figuring, "hell, I've got to exist or I wouldn't be able to wonder?"


        Although the Cartesian split is not 100% clean, I think it's reasonable to assume that, if the body comes into it, a much grainier level of simulation would suffice for it than for the brain itself.

        5. As a followup, would such a computer "think" anything but "AAAAH I'M TRAPPED IN A GIGANTIC HEAP OF SILICON"?


        I HAVE NO MOUTH, AND I MUST SCREAM.
        12-17-10 Mohamed Bouazizi NEVER FORGET
        Stadtluft Macht Frei
        Killing it is the new killing it
        Ultima Ratio Regum



        • #94
          Originally posted by Kuciwalker View Post
          Elok, the only reason certain tasks like that are "intuitive" to humans is that over the eons we have evolved hyperspecialized circuits in the brain that already know the answer to large classes of problems. It's the equivalent of taking one of these general-purpose AIs and adding some code that says "oh, if you see this case, we already know the answer is X; don't bother running all of your logic on it".
          I think there are also probably subconscious routines that take experiences and code generalizations into "intuition". I doubt that all of our intuitions are hard-coded at birth...
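
          Schematically (a toy, invented example of experience being compiled into an instant answer):

          ```
          # "Intuition" as generalization cached from experience: answer
          # new cases by recalling the most similar past case, with no
          # explicit reasoning at answer time.
          experiences = []  # (situation, outcome) pairs

          def learn(situation, outcome):
              experiences.append((situation, outcome))

          def intuit(situation):
              nearest = min(experiences,
                            key=lambda e: sum((a - b) ** 2
                                              for a, b in zip(e[0], situation)))
              return nearest[1]

          learn((0.9, 0.1), "safe")
          learn((0.1, 0.9), "dangerous")
          print(intuit((0.2, 0.8)))  # -> "dangerous": learned, not inborn
          ```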
          12-17-10 Mohamed Bouazizi NEVER FORGET
          Stadtluft Macht Frei
          Killing it is the new killing it
          Ultima Ratio Regum



          • #95
            Originally posted by KrazyHorse View Post
            blau, I'm starting to get really tired of watching you pretend to have ANY knowledge of the **** you blab on about.
            Be careful about calling out the kettle.
            No matter where you go, there you are. - Buckaroo Banzai
            "I played it [Civilization] for three months and then realised I hadn't done any work. In the end, I had to delete all the saved files and smash the CD." Iain Banks, author



            • #96
              Originally posted by KrazyHorse View Post
              Hey, Elok: I haven't called you any names (other than "son", which is mildly insulting but not really at the level of "****ing clueless moron").

              The reason I'm getting so frustrated with you is NOT your lack of knowledge of computer science (actually, I'm completely clueless about much of what's considered compsci). It's your failure to read and respond to the points I've raised.

              Nobody is saying that simple hardware acceleration takes us from here -> singularity. What we are saying is that there are no fundamental problems preventing the construction of a "real" AI; one which displays all the precious attributes of humanity you so vehemently defend as unique. Once we have accomplished this (or possibly even less than this?), it's reasonable to assume that such an AI could upgrade itself, either by making a simply better AI (software) or by a hardware upgrade. Hardware upgrades on a true AI would, as far as I can see, be indistinguishable to human beings from increases in intelligence. This is not a question of making an autistic faster at card counting; this is about granting a human-equivalent (!) machine, with ALL of the introspective, intuitive, creative etc. attributes that implies, the ability to think faster (processor) and more deeply (RAM) about problems using more knowledge (hard drive).
              Thanks for chilling. This is tricky for me to think about, but the problem I see is, how do you make a computer learn like a human does? Is it possible? Are there any experiments which have successfully made a computer accumulate knowledge like a person, with a consequent rise in consciousness (assuming that's how it works in people; we still don't know)? I view consciousness as inextricable from the problem because the ability to examine one's own assumptions necessarily implies an awareness of one's own thought processes.

              The closest I've heard about was something weird in an old episode of Smithsonian that I didn't entirely understand: some dude invented a spider-bot with no computer at all, just a batch of transistors crosswired to every leg like all hell. It learned like an infant, thrashing its legs to try and move forward (okay, Lori, I guess that is trial and error, at least in the initial stages). When it ran into obstacles, it would "learn" after a fashion, because the electrons flowing through it would seek the path of least resistance, which was formed by the precise movement of legs that led it over the obstacle. Over repeated trials the path of electrons would sort of burn in somehow...well, I didn't get it enough to describe it properly, but the point is that it was spooky how well this thing learned with no real brain. It was supposedly the closest thing to true learning discovered thus far, and there was nothing logical about it whatsoever. It functioned by way of a bunch of random firings that somehow, in some baffling way, added up to the appearance of learning behavior.
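
              A toy version of how that "burning in" might work, with everything below invented for illustration: patterns that succeed get their resistance lowered, so they fire more often.

              ```
              import random

              # Eight candidate leg-firing patterns, equal "resistance" to start.
              resistance = {p: 1.0 for p in range(8)}
              WORKS = 5  # pretend pattern 5 happens to clear the obstacle

              for trial in range(200):
                  # Lower resistance -> more likely to fire again.
                  weights = [1.0 / r for r in resistance.values()]
                  p = random.choices(list(resistance), weights=weights)[0]
                  if p == WORKS:
                      resistance[p] *= 0.9    # success burns the path in
                  else:
                      resistance[p] *= 1.05   # failure raises resistance a bit

              print(min(resistance, key=resistance.get))  # -> 5, learned by thrashing
              ```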

              Can we be certain we'll move beyond that? We might be able to perfectly replicate a human brain, but to do better would require us to know how a human brain actually works (i.e., completely different from any computer ever built), and improve upon it. Can we actually improve upon it? I don't mean just processing speed; you could improve that forever and the robo-brain would just come to the same damned conclusions a human brain would, only faster. That's assuming it didn't accumulate weird psychoses at a breakneck pace, the way a human brain would if working at that speed without rest. Can we do better than evolution has at making something that flexible, with fewer tradeoffs? I don't see any reason to think so.
              1011 1100
              Pyrebound--a free online serial fantasy novel



              • #97
                blau

                "I'm rubber and you're glue" isn't much of a response.
                12-17-10 Mohamed Bouazizi NEVER FORGET
                Stadtluft Macht Frei
                Killing it is the new killing it
                Ultima Ratio Regum



                • #98
                  Thanks for chilling.


                  Go **** yourself. How's that for chill? I didn't call you any names, despite your claim that I did, and I'm quite calm about this. But if you want to discuss this then you have to read what I write instead of continually raising the SAME POINT that I explained was irrelevant in my first post.
                  12-17-10 Mohamed Bouazizi NEVER FORGET
                  Stadtluft Macht Frei
                  Killing it is the new killing it
                  Ultima Ratio Regum



                  • #99
                    Originally posted by KrazyHorse View Post
                    I think there are also probably subconscious routines that take experiences and code generalizations into "intuition". I doubt that all of our intuitions are hard-coded at birth...
                    I'm generalizing.



                    • Originally posted by KrazyHorse View Post
                      Go **** yourself. How's that for chill?
                      Not very good. The word "chill" generally implies a state of calmness, of being easygoing, whereas the phrase "go **** yourself," regardless of suffixed emoticons, is typically delivered in an angry tone of voice by agitated people.
                      1011 1100
                      Pyrebound--a free online serial fantasy novel



                      • I would go so far as to say that MOST of what we consider intuitive is "learned"...
                        12-17-10 Mohamed Bouazizi NEVER FORGET
                        Stadtluft Macht Frei
                        Killing it is the new killing it
                        Ultima Ratio Regum



                        • People who answer rhetorical questions:
                          12-17-10 Mohamed Bouazizi NEVER FORGET
                          Stadtluft Macht Frei
                          Killing it is the new killing it
                          Ultima Ratio Regum



                          • This is tricky for me to think about, but the problem I see is, how do you make a computer learn like a human does? Is it possible?


                            1) yes, because the human brain is just a really wet, squishy computer
                            2) yes, we have developed learning AIs that individually mimic certain types of human learning (see the sketch below this list)
                            3) probably yes, because there's no reason to believe we can't
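
                            To make (2) concrete, here is a toy perceptron (one of the oldest learning programs) picking up logical OR purely from labeled examples, without being told the rule:

                            ```
                            def train(examples, epochs=20, lr=0.1):
                                w, b = [0.0, 0.0], 0.0
                                for _ in range(epochs):
                                    for (x0, x1), label in examples:
                                        pred = 1 if w[0]*x0 + w[1]*x1 + b > 0 else 0
                                        err = label - pred  # learn only from mistakes
                                        w = [w[0] + lr*err*x0, w[1] + lr*err*x1]
                                        b += lr * err
                                return w, b

                            data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR
                            w, b = train(data)
                            print([1 if w[0]*x0 + w[1]*x1 + b > 0 else 0
                                   for (x0, x1), _ in data])  # -> [0, 1, 1, 1]
                            ```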



                            • Sorry, I wasn't programmed to recognize those. I must be missing some hyperspecialized subprocessor.

                              Er, XPost.
                              1011 1100
                              Pyrebound--a free online serial fantasy novel



                              • Elok: it is a basic principle of computer science that all general-purpose computers (not AIs) are equivalent in theoretical power (the set of problems they can solve, given enough time and memory). A desktop PC is a general purpose computer, and so is a human.
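
                                A sketch of what that equivalence buys you: a PC can simulate any other machine once you hand it the rule table. Below, a minimal Turing-machine interpreter running a made-up machine that flips bits:

                                ```
                                def run_tm(rules, tape):
                                    tape, state, pos = dict(enumerate(tape)), "start", 0
                                    while state != "halt":
                                        state, write, move = rules[(state, tape.get(pos, "_"))]
                                        tape[pos] = write
                                        pos += 1 if move == "R" else -1
                                    return "".join(tape[i] for i in sorted(tape))

                                invert = {  # (state, symbol) -> (new state, write, move)
                                    ("start", "0"): ("start", "1", "R"),
                                    ("start", "1"): ("start", "0", "R"),
                                    ("start", "_"): ("halt", "_", "R"),  # blank: stop
                                }
                                print(run_tm(invert, "10110"))  # -> "01001_"
                                ```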

