The problem with Industry 4.0 is not primarily financial


  • #31
A network of human minds would result in a whole that is more capable than a single human. A brain with a high-bandwidth, high-frequency pathway to lossless memory and computer processing is going to be more capable than a single unaugmented human. GM (genetic modification) holds the potential to increase the size of the human brain as well.

    So we know for sure that a biological platform has more potential than just a single human mind. Even the best of them.



    • #32
      Okay, before I make myself go to bed: A network of human minds would require actually getting multiple human minds to coordinate seamlessly. Not clear that this is possible without what's generally called "crimes against humanity." Which might make the minds less useful anyway. Connecting a brain directly to high-powered computers could just as easily lead to bugs, hangups and errors as any genuine increase in productivity (or just wiki/porn surfing when you're supposed to be working). Increasing the size of the human brain assumes brain function improves with simple scaling, leaving out the necessity of c-sections and various other anatomical/physiological difficulties.
      1011 1100
      Pyrebound--a free online serial fantasy novel



      • #33
        Originally posted by Elok View Post
        1 and 2 don't appear to be addressing what he said in the respective points. I don't think he mentions morality or childrearing in the whole article.
In his first point (i.e. #1) he was arguing against people who say that intelligence is a single dimension. I don't know if anyone seriously even says this. The depiction of intelligence he's arguing against is intentionally and knowingly oversimplified. It's not a roadmap for how intelligence works, but more along the lines of a relative ranking of overall capability.

        Humans aren't necessarily better than apes at everything, but humans could easily wipe out all apes if we wanted to. That's the hierarchy people are talking about. General applied power.

        Not clear what 2 is responding to.
        His #2 misconception: "We’ll make AIs into a general purpose intelligence, like our own."

        It's a matter of will, not possibility. And many in the field of AI see that as the ultimate goal. So the will is there.

        3 we have proven that biological computing has the ability to make something as smart as the smartest humans, no smarter.
        Except at the very least you can augment them (with silicon or quantum computers that do specific things much, much better), you can put them in parallel. GM brains that have more memory and processing power, fewer hormonal distractions, etc.

        And that's at the very least. It is extremely unlikely that the human brain is the absolute pinnacle of what is possible with biological computing.

        4 his point is, again, that we have no evidence that it's possible to increase intellect all that significantly beyond our own.
Sure we do. We have an obvious example of a parallel network of human minds doing things that no individual ever could. The individual has been surpassed by many orders of magnitude by the collective. And that's even though it has been done on a horribly poor network implementation with massive packet loss, terrible bandwidth, mechanical implements often intentionally destroying vast portions of the network, etc.

5 In order for a computer to simulate the human body reliably, it has to understand all relevant aspects of human body systems. Which is to say, it would have to have very nearly solved the problem already (or had same solved for it), which is his point. A crapton of observational and inferential legwork has to be done before the computer can even get started.
His point was that something superintelligent (which is a given in his #5) couldn't solve problems via simulation. We are already able to solve many problems via simulation, as do some expert AIs. A superintelligent entity almost surely would be able to do better.

Also he's ignoring that an AI could have sensory and motor implements under its control to do the legwork. That is, after all, one of the main reasons humans want to develop AI: to have it do the work of controlling all these other tools, bypassing the bottlenecks between tool and mind.



        • #34
          Originally posted by Elok View Post
          Okay, before I make myself go to bed: A network of human minds would require actually getting multiple human minds to coordinate seamlessly. Not clear that this is possible without what's generally called "crimes against humanity." Which might make the minds less useful anyway. Connecting a brain directly to high-powered computers could just as easily lead to bugs, hangups and errors as any genuine increase in productivity (or just wiki/porn surfing when you're supposed to be working). Increasing the size of the human brain assumes brain function improves with simple scaling, leaving out the necessity of c-sections and various other anatomical/physiological difficulties.
          There's really no reason why we'd use human brains. I'm just using it as the absolute baseline to prove "yes, it definitely is possible in the physical realm to have a human-like intelligence operating on a biological computing platform" ... because we already are.

          Biological computing is guaranteed to have a much, much higher ceiling than what an individual human mind is capable of. Again, I use human brains in parallel just as the baseline, already proven.



          • #35
            However, we have also always been able to explain exactly how all our tools worked. This is no longer the case with AI that utilizes deep learning. We are working on this, but things are getting beyond us.
            “It is no use trying to 'see through' first principles. If you see through everything, then everything is transparent. But a wholly transparent world is an invisible world. To 'see through' all things is the same as not to see.”

            ― C.S. Lewis, The Abolition of Man



            • #36
              Originally posted by Broken_Erika View Post
              What do you think will happen when they come up with real life holodecks?

              Spoiler:

              No, I did not steal that from somebody on Something Awful.



              • #37
                Originally posted by pchang View Post
                However, we have also always been able to explain exactly how all our tools worked. This is no longer the case with AI that utilizes deep learning. We are working on this, but things are getting beyond us.
I don't agree with this. All 'deep learning' does is remove the feature extraction from procedural human algorithms and put it into the MLA (machine-learning algorithm). I would argue that this feature extraction is actually clearer than the non-linear classification/regression done by the sophisticated algorithms.

                JM
(Last year I moved from SVM/kNN to DCNNs... I had preferred SVM because for me it was easier to visualize, although I understand that many preferred BDT for those reasons. However, I think it is still the case that NN/BDT/SVM are equivalent once you don't consider feature extraction.)

                (Yes, some groups use really crazy deep networks, but we have seen that these seem equivalent to much simpler deep networks.... )
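The point above, that deep learning mainly relocates feature extraction into the algorithm itself, can be seen on a toy example. This sketch is invented for illustration (the dataset, threshold, and pipeline are not from the thread): the class label depends on signal energy, so a plain SVM separates the classes once a human extracts that feature by hand, while the same SVM fed the raw samples through a linear kernel, with no feature step, does no better than chance.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "raw" data: 64-sample noise signals whose amplitude carries the label.
n = 400
amplitude = rng.uniform(0.5, 2.0, size=n)
X_raw = rng.normal(size=(n, 64)) * amplitude[:, None]
y = (amplitude > 1.25).astype(int)

# Procedural (human) feature extraction: mean energy of each signal.
X_feat = (X_raw ** 2).mean(axis=1, keepdims=True)

Xr_tr, Xr_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_raw, X_feat, y, random_state=0)

# Same classifier family, with and without the hand-built feature step.
acc_feat = SVC().fit(Xf_tr, y_tr).score(Xf_te, y_te)
acc_raw = SVC(kernel="linear").fit(Xr_tr, y_tr).score(Xr_te, y_te)
print(f"SVM on extracted feature: {acc_feat:.2f}")
print(f"linear SVM on raw signal: {acc_raw:.2f}")
```

The gap between the two scores comes entirely from the feature step, not from the classifier; a deep network would learn an energy-like feature internally instead.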
                Jon Miller-
                I AM.CANADIAN
                GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

