Chimpanzees granted 'legal persons' status to defend their rights in court


  • Originally posted by Aeson View Post
    Even with 20 years, worry isn't "stupid".
    It is, because it isn't even 20 years away right now.

    20 years is a sane (normal) time scale once the necessary basic-research groundwork is laid. There is no evidence that we have even begun to lay it, and the belief that it will suddenly happen (be complete) in the next few months is irrational.

    It is just as irrational as what other people have argued: that we will do it by accident.

    JM
    Jon Miller-
    I AM.CANADIAN
    GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.



    • Originally posted by Jon Miller View Post
      Huh? You have no basis to say that.

      The 5 years is if I go on the extreme side (about the fastest I think basic research has gone to applied); if you take a more representative time scale, it is 20 years (and, as with fusion, that 20 can continue for decades). That is a representative time from complete basic research to complete application.
      "Now maybe the time scale from the basic research to the real AI will be very short. And maybe the basic research will have some huge advance this summer."

      You're all over the place Jon. You claim it could possibly be as short as 5 years from basic research to implementation. (Though you're still avoiding the self-development and processing power implications.) You say maybe it will have some huge advance this summer. That puts the minimum threshold at less than 20 years ... but then you say 20 years is stupid to even contemplate.

      All while completely disregarding the impact on future generations, or even on the long-lived/young among us.

      The simple reality is that AI holds a tremendous amount of potential. Its development will, by definition, have a self-development aspect, which makes it different from normal development (how much so is very much an unknown; just that at some point it has to become a factor for it to actually work).

      Worrying about how that potential will be used when it is developed is not stupid, even if it will take decades to develop.

      (Would you say worrying about sin is stupid because it may be a long time before the second coming?)

      My problem with your statements is that believing in the most ridiculous case without basis is irrational. You can use them to argue that we are just on the verge of developing anti-gravity, faster than light travel, and contact with aliens.
      You're not even close to comprehending my statements. I have not given one timetable for any of those things, and have repeatedly said that giving a timetable for them is just guessing. That is my position. The fact you can't even address it accurately casts a huge shadow on your ability to predict the future. You can't even predict the present in this thread by reading what is actually said.



      • 5 years, or even 20 years (to go from basic research to reality), aren't something to worry about, because there is no reason to think that there will be some big jump in basic research in the next few months.

        That pretty much never happens and we have no reason to believe it will happen with intelligence.

        Just like I don't worry or consider it a possibility that we will develop FTL travel in a few years because there might be some big jump in basic research in the next few months.

        The fundamental statement was that there was no reason to be concerned about real AI right now, just like there is no reason to be concerned about FTL travel or anti-gravity or making black holes.

        I am not saying it is impossible; it is just that there is no reason (scientifically speaking, I am not talking belief-wise) to expect it to happen.

        I think it is likely we will have the first 'unicorn' born before we get real AI.

        JM
        (If you believe, then that is OK... but then point out that it is because of your non-scientifically-backed belief, which those who don't share it can ignore.)



        • I completely admit my expectations with respect to the 2nd coming are based on belief and not science.

          I think I am following my Christian belief system to its logical conclusion, but it is still based on belief.

          JM



          • Originally posted by Jon Miller View Post
            No, it isn't faith based. It is based on the fact that, in the past, in every part of human advancement, we have known something about where we were going before we got there.

            Often the fundamentals were studied and known to experts well before the (to laymen) shocking advance occurred. The advances didn't come out of left field.

            You believe there is something special about CS/intelligence/etc that means that it won't follow the rules. This could happen.

            Just like aliens could show up in a spaceship. Or Christ could return.

            But those things are faith based, we don't have evidence to make such predictions.

            Unlike the advance of science for the last several hundred years.

            JM
            Do you have a statistical likelihood of your prediction? I don't see how. If not, your belief is faith based.
            I drank beer. I like beer. I still like beer. ... Do you like beer Senator?
            - Justice Brett Kavanaugh



            • Originally posted by Jon Miller View Post
              Just like I don't worry or consider it a possibility that we will develop FTL travel in a few years because there might be some big jump in basic research in the next few months.
              FTL is a rather different subject given what we know about it. My own layman's understanding is that with enough power it's theoretically possible, but survivability would be questionable at best, and we don't have any energy source even close to capable.

              AI on the other hand, we may very well have the processing power already. There doesn't seem to be a well-understood reason why it couldn't exist on modern hardware. (Though there's always the possibility we'll find such a reason.)

              I am not saying it is impossible; it is just that there is no reason (scientifically speaking, I am not talking belief-wise) to expect it to happen.
              You don't have to expect it to happen in a specific timeframe to be able to address it and the issues it will raise. It's not stupid to do so. In fact such a thing is rather fun to do.

              I think it is likely we will have the first 'unicorn' born before we get real AI.
                If you mean something along the lines of a genetically modified horse with a horn on its forehead, then that's not saying much. We could probably have done such a thing before now if we wanted to enough. We've done weirder things with genetic modification already.



              • Originally posted by Jon Miller View Post
                I think it is likely we will have the first 'unicorn' born before we get real AI.
                Why would the worries people have raised over the potential dangers of AI require 'real AI' (if you mean that in the way I believe you do)? Feel free to dumb down your question for the stupid layman of course, as the years I spent studying computer science and software engineering at university were clearly wasted.



                • The Terminator/Matrix style fears I've encountered (I don't know what exactly, say, Stephen Hawking is worried about) would seem to depend on our creating a superhuman intelligence and then trusting it with more power than we would ever give to a flesh-and-blood human being. I'm hard-pressed to think of a scenario where this would actually happen. Are there less far-fetched worries out there?
                  1011 1100
                  Pyrebound--a free online serial fantasy novel



                  • Originally posted by kentonio View Post
                    Why would the worries people have raised over the potential dangers of AI require 'real AI' (if you mean that in the way I believe you do)? Feel free to dumb down your question for the stupid layman of course, as the years I spent studying computer science and software engineering at university were clearly wasted.
                    Because the other 'dangers' are like worrying about a plane engine freezing up. (The point being that they are no different from a door jamming, in 'kind'.)

                    Something to worry a bit about and a reason to do proper maintenance but not a global catastrophe and not 'intelligence'.

                    JM
                    Last edited by Jon Miller; April 24, 2015, 07:20.



                    • Originally posted by Aeson View Post
                      AI on the other hand, we may very well have the processing power already. There doesn't seem to be a well-understood reason why it couldn't exist on modern hardware. (Though there's always the possibility we'll find such a reason.)
                      Why couldn't it happen on the original IBM mainframe?

                      It is a silly question because there is no reason to believe that the computers of today are any more capable of having real intelligence than the computers of 50 years ago.

                      This is one of my points.

                      JM



                      • Originally posted by Elok View Post
                        The Terminator/Matrix style fears I've encountered (I don't know what exactly, say, Stephen Hawking is worried about) would seem to depend on our creating a superhuman intelligence and then trusting it with more power than we would ever give to a flesh-and-blood human being. I'm hard-pressed to think of a scenario where this would actually happen. Are there less far-fetched worries out there?
                        Prof Stephen Hawking, one of the world's leading scientists, warns that artificial intelligence "could spell the end of the human race".

                        Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

                        He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

                        His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.
                        But others are less gloomy about AI's prospects.

                        The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.

                        Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.

                        Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.
                        I suppose this is what Jon is referring to. Given that the subject matter was related to what he was asked about, speaking about his views on it hardly seems "stupid" though. Especially since it seems very well qualified as a response to the hypothetical of a human or human+ AI.
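                        As an aside on the mechanics: the "basic form of AI" the article mentions — next-word suggestion that "learns how the professor thinks" — is, at bottom, frequency counting over text the user has produced. A minimal bigram sketch in Python (purely illustrative; SwiftKey's actual model is more sophisticated and not shown here):

                        ```python
                        from collections import Counter, defaultdict

                        def train_bigrams(text):
                            """Count which word follows which in the training text."""
                            words = text.lower().split()
                            model = defaultdict(Counter)
                            for prev, nxt in zip(words, words[1:]):
                                model[prev][nxt] += 1
                            return model

                        def suggest(model, prev_word, k=3):
                            """Suggest the k most frequently observed continuations."""
                            return [w for w, _ in model[prev_word.lower()].most_common(k)]

                        model = train_bigrams(
                            "the universe is big the universe is old the universe expands"
                        )
                        print(suggest(model, "universe"))  # words most often seen after 'universe'
                        ```

                        The relevant point for this thread: such a system never models meaning; it only replays observed frequencies.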



                        • Yes, I've run into that article, but how? How, exactly, are computers going to displace us, unless somebody's planning to create self-replicating superbots or give them control of nukes, etc.? Has anyone specified?



                           • But the 'basic AI' is the same as what I and HC work with, and it isn't intelligence in the manner that he (or you) think of intelligence at all. It is not really learning in the same way a child would learn; it is just demonstrating pattern matching in a constrained space.

                            Unfortunately I am not as famous as he is so no one asks me what I think.

                            So I post on Apolyton.

                            JM
                            (if it was just the Savas, Kentonios and even Aesons of the world I would probably make no more than 3 posts)
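                            The "pattern matching in a constrained space" point can be made concrete with a toy: a nearest-neighbor classifier (a hypothetical minimal sketch, not what Jon or HC actually work with) can only ever answer with a label already present in its training data — it matches patterns, it never forms a new concept.

                            ```python
                            import math

                            def nearest_neighbor(examples, query):
                                """Label a query point by its closest training example.

                                'examples' is a list of ((x, y), label) pairs. The answer
                                space is constrained: only labels seen in training can
                                ever be returned.
                                """
                                def dist(p, q):
                                    return math.hypot(p[0] - q[0], p[1] - q[1])
                                return min(examples, key=lambda ex: dist(ex[0], query))[1]

                            examples = [((0, 0), "cold"), ((10, 10), "hot")]
                            print(nearest_neighbor(examples, (1, 2)))  # closest to (0, 0) -> "cold"
                            ```

                            A child shown a new kind of thing can invent a category for it; this sketch, by construction, cannot.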



                            • Originally posted by Jon Miller View Post
                              Why couldn't it happen on the original IBM mainframe?

                              It is a silly question because there is no reason to believe that the computers of today are any more capable of having real intelligence than the computers of 50 years ago.

                              This is one of my points.
                              I never said it couldn't ... it's not clear what the minimum specs would be for real intelligence.

                              However, more processing power and more ready access to information are certainly a plus ...



                              • Why?

                                I agree that they would be a plus once you have real machine intelligence. But why are they necessarily a plus or a step forward in creating it?

                                I predict that some portion of the basic research required will not require any physical computers at all.

                                JM
