Chimpanzees granted 'legal persons' status to defend their rights in court

  • Sort of. We cheat with eons of evolution to be able to recognize certain things; artificial neural networks obviously can't do this.

    Neural networks, genetic algorithms, and other machine learning techniques are typically what we use when we haven't got the first idea how to solve a problem. They are not "good" solutions in the presence of any alternatives.
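
    To make this concrete, here is a toy sketch (my own illustration, plain Python standard library; the objective and the target value 3.7 are arbitrary) of the "we have no first idea how to solve this directly" approach: a genetic algorithm that treats the objective as a black box and finds an answer purely by mutation and selection.

import random

def fitness(x):
    # Stand-in for a black-box objective we don't know how to invert analytically.
    return -(x - 3.7) ** 2

def evolve(pop_size=50, generations=100, mutation_scale=0.5):
    # Start from random guesses -- no domain knowledge whatsoever.
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [s + random.gauss(0, mutation_scale) for s in survivors]
    return max(population, key=fitness)

print(evolve())  # lands near 3.7 without ever "understanding" the problem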

    Comment


    • Originally posted by kentonio View Post
      One major danger with AI isn't that it'll do all the things you program it to do too well, but that its intelligence will allow it to alter its own programming in ways that we never envisaged. Things like pride, anger, hate etc. might not need deliberate programming, but might be emergent behaviour from other intended routines. Even if they are not, however, the real fear is: in a simple computation, why would a machine put our needs ahead of its own?
      Most of those things exist for definite evolutionary reasons. Aggression, for example, is there because sometimes we just need to kill the hell out of someone or something. Thus there are dedicated structures in the brain for stimulating and controlling aggression. It's not just a little software aberration; it's really quite complex. I don't see how such a thought process would unintentionally derive from, say, "figure out how to fold this protein." And "why would it put our needs ahead of its own" assumes that the computer is motivated to be self-interested. Leaving out all that HC says (we're being fanciful and hypothetical here), why would you create a computer with any concept of self-interest?

      The question of personality basically depends on whether you believe in the supernatural. There is no scientific reason why a collection of organic material should be able to achieve some level of thought process that a purely electronic machine couldn't. If you make a computer advanced enough, then what difference is there?
      You're so busy assuming I'm trying to argue for the soul that you don't appear to have actually read what I said. I'm actually arguing something closer to the opposite. Assuming you have little robot Rene Descartes in there thinking and being, it needs more than just thinking. It also needs the desire to turn those thoughts in a particular direction, or else it could conceivably use its vast circuit-power to muse on Hello Kitty or sporks or whatever info happens to flow into its consciousness. That is, it needs to be a person, more or less, with instincts and an agenda to follow. Giving a bot a stable and functional personality would be an entirely different problem from raw cogitation.
      1011 1100
      Pyrebound--a free online serial fantasy novel

      Comment


      • Originally posted by regexcellent View Post
        Sort of. We cheat with eons of evolution to be able to recognize certain things; artificial neural networks obviously can't do this.
        You can however design neural networks with a structure that allows them to converge on a correct function more quickly and accurately.
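
        For example (a rough sketch, assuming PyTorch; the layer sizes are arbitrary): a convolutional layer bakes locality and weight sharing into the structure itself, which is the kind of prior that lets a vision network converge faster and with far fewer parameters than a plain fully-connected layer over the same input.

import torch.nn as nn

# Fully-connected layer over a 28x28 image: every output unit sees every pixel,
# so any spatial structure has to be learned from scratch.
dense = nn.Linear(28 * 28, 128)

# Convolutional layer with 8 small 3x3 filters: locality and weight sharing are
# hard-wired, so far fewer parameters need to be learned.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)

print(sum(p.numel() for p in dense.parameters()))  # 100480 parameters
print(sum(p.numel() for p in conv.parameters()))   # 80 parameters
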
        If there is no sound in space, how come you can hear the lasers?
        ){ :|:& };:

        Comment


        • HC is a good bit wiser on this subject than Musk or Hawking.

          I think that is because he (and I) have done work with 'machine learning' and Musk and Hawking haven't.

          For a possible reason why computers might not be capable of something that humans can do, see Dyson (really one of the most amazing scientists; he is my hero):


          "The superiority of analog-life is not so surprising if you are familiar with the mathematical theory of computable numbers and computable functions. Marian Pour-El and Ian Richards, two mathematicians at the University of Minnesota, proved a theorem twenty years ago that says, in a mathematically precise way, that analog computers are more powerful than digital computers. They give examples of numbers that are proved to be non-computable with digital computers but are computable with a simple kind of analog computer. The essential difference between analog and digital computers is that an analog computer deals directly with continuous variables while a digital computer deals only with discrete variables. Our modern digital computers deal only with zeroes and ones. Their analog computer is a classical field propagating though space and time and obeying a linear wave equation. The classical electromagnetic field obeying the Maxwell equations would do the job. Pour-El and Richards show that the field can be focussed on a point in such a way that the strength of the field at that point is not computable by any digital computer, but it can be measured by a simple analog device. The imaginary situation that they consider has nothing to do with biological information. The Pour-El-Richards theorem does not prove that analog-life will survive better in a cold universe. It only makes this conclusion less surprising."

          JM
          Jon Miller-
          I AM.CANADIAN
          GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

          Comment


          • I don't know that the distinction between analog and digital is super important, given that the human brain operates in a largely discrete fashion--neurons generally fire discretely, not continuously--and given that 64-bit floating point numbers have enough precision for just about anything. However, the manner in which the human brain actually learns is poorly understood and not really conducive to emulation by a computer. Artificial neural networks are inspired by the human brain, but do not operate all that similarly. The human brain, from what I understand, is essentially a highly tuned massively parallel processor.
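
            On the precision point, a quick check (plain Python standard library; the printed values are just IEEE 754 double-precision facts):

import sys

# Machine epsilon for 64-bit (IEEE 754 double) floats: the gap between 1.0 and
# the next representable value -- about 2.22e-16, i.e. roughly 15-16 significant digits.
print(sys.float_info.epsilon)        # 2.220446049250313e-16
print(sys.float_info.dig)            # 15 -- decimal digits that survive a round trip
print(1.0 + sys.float_info.epsilon)  # the smallest float strictly greater than 1.0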

            I should note I don't really know much about human cognition. I know a lot about machine learning, but human learning is not my field.
            If there is no sound in space, how come you can hear the lasers?
            ){ :|:& };:

            Comment


            • JM, were they arguing that the digital computer simply can't determine the density of the field at a point, or that the digital computer can only approximate?

              Because the analog computer, by virtue of being analog, is going to have some error bars on its calculation. If those are as wide or wider than the precision limit of the digital computer, then the digital computer is just as good.
              If there is no sound in space, how come you can hear the lasers?
              ){ :|:& };:

              Comment


              • Originally posted by Hauldren Collider View Post
                JM, were they arguing that the digital computer simply can't determine the density of the field at a point, or that the digital computer can only approximate?

                Because the analog computer, by virtue of being analog, is going to have some error bars on its calculation. If those are as wide or wider than the precision limit of the digital computer, then the digital computer is just as good.
                The points are complicated; I recommend that you study it yourself.

                JM
                Jon Miller-
                I AM.CANADIAN
                GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

                Comment


                • I don't think anyone is saying we currently have computers with human-like learning. So saying we don't have computers with human-like learning now is not much of an argument. It's not unreasonable to suspect that at some point we will have computers with human-like learning potential. How far we are from that is next to impossible to say, since we don't really understand what separates what we have now (hardware that is probably already capable) from it.

                  In any case, it's very easy to say that such an entity would be very dangerous (or rather has a huge range of potential both positive and negative).

                  Comment


                  • My point is that we don't have computers with animal-like learning.

                    There is a huge shift from animal to human.

                    We don't have anything to fear from current development lines.

                    We aren't going to create true AI by accident.

                    JM
                    Jon Miller-
                    I AM.CANADIAN
                    GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

                    Comment


                    • Originally posted by Jon Miller View Post
                      We aren't going to create true AI by accident.

                      JM
                      Basically, this. Maybe we could develop the technology someday, but it wouldn't just be Wolfram Alpha suddenly turning into Skynet.
                      If there is no sound in space, how come you can hear the lasers?
                      ){ :|:& };:

                      Comment


                      • Our intelligence (and that of animals, which in many ways operates the same way, only having undergone different pressures/mutations/permutations) happened by accident. We already have hardware platforms that have the computational power to far surpass human-level intelligence. (In some ways they already do.) It's the software that's missing: the "how" to direct it into being truly self-developing.

                        Mainly because we don't understand the "how" of our own intelligence. It's not inconceivable that a similar accident could happen on a computer platform. Since we don't know exactly what would be required for that to happen, it's not possible to say how far we are from such an accident (or intended effect) being possible to happen.

                        Comment


                        • Originally posted by Hauldren Collider View Post
                          Basically, this. Maybe we could develop the technology someday, but it wouldn't just be Wolfram Alpha suddenly turning into Skynet.
                          I don't think anyone sane is suggesting Wolfram Alpha would become Skynet. The nature of the hypothetical is that a fundamental change in methodology is developed.

                          Comment


                          • We interact with our environment... the world is a computer.

                            Comment


                            • Originally posted by Aeson View Post
                               Our intelligence (and that of animals, which in many ways operates the same way, only having undergone different pressures/mutations/permutations) happened by accident. We already have hardware platforms that have the computational power to far surpass human-level intelligence. (In some ways they already do.) It's the software that's missing: the "how" to direct it into being truly self-developing.
                               Yes and no. A computer processor is largely a serial device. Human brains operate in a massively parallel fashion--out of necessity, because the reaction time of a neuron is extremely slow by electronics standards. The architecture of computer chips is really nothing like that of our brains, and so I don't believe that current hardware could exceed human intelligence. Maybe with a massively parallel architecture. So if you count a large computer cluster, then yeah, maybe.
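
                               To put rough numbers on the parallelism point (every figure below is the usual order-of-magnitude textbook estimate, not a measurement):

# Back-of-envelope comparison; all values are order-of-magnitude estimates.
neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 connections each
firing_rate_hz = 1e2        # neurons are slow: on the order of 100 Hz

brain_events_per_sec = neurons * synapses_per_neuron * firing_rate_hz  # ~1e17

cpu_cores = 16
cpu_clock_hz = 3e9
ops_per_core_per_cycle = 8  # generous SIMD allowance
cpu_ops_per_sec = cpu_cores * cpu_clock_hz * ops_per_core_per_cycle    # ~4e11

print(f"brain: ~{brain_events_per_sec:.0e} synaptic events/s")
print(f"one desktop CPU: ~{cpu_ops_per_sec:.0e} ops/s")
# Each neuron is millions of times slower than a transistor, but there are
# enormously more of them working at once -- hence the cluster caveat above.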

                              Originally posted by Aeson View Post
                               I don't think anyone sane is suggesting Wolfram Alpha would become Skynet. The nature of the hypothetical is that a fundamental change in methodology is developed.
                              As I said before, I don't believe such a methodology is the subject of any current serious research.
                              If there is no sound in space, how come you can hear the lasers?
                              ){ :|:& };:

                              Comment


                              • Originally posted by Hauldren Collider View Post
                                Yes and no. A computer processor is largely a serial device. Human brains operate in a massively parallel fashion--out of necessity, because the reaction time of a neuron is extremely slow by electronics standards. The architecture of computer chips is really nothing like that of our brains, and so I don't believe that current hardware could exceed human intelligence. Maybe with a massively parallel architecture. So if you count a large computer cluster, then yeah, maybe.
                                Um... we already do have a large computer cluster. In fact we may be using it right now to converse.

                                Originally posted by Hauldren Collider View Post
                                As I said before, I don't believe such a methodology is the subject of any current serious research.
                                a) we don't know exactly what that methodology entails, so it's impossible to say whether it is or isn't already the subject of research.

                                b) there are definitely efforts intended to research that methodology (whether or not they are actually getting anywhere with it) going on in academia, and to some extent in industry.

                                c) there's no way for us to know whether there are classified projects by industry or governments with that intent. I would be surprised if there weren't any ongoing anywhere in the world. It could be weaponized (both physically and informationally), after all.

                                Comment
