Chimpanzees granted 'legal persons' status to defend their rights in court

  • Originally posted by Hauldren Collider View Post
    Arguing with you is almost as bad as arguing with Asher. In order for Jon to be willing to discuss with you, you have to be willing to listen.
    on trying to needle Asher back to the forum ...

    • Originally posted by Hauldren Collider View Post
      Arguing with you is almost as bad as arguing with Asher. In order for Jon to be willing to discuss with you, you have to be willing to listen.
      You seem to routinely struggle with the idea that someone can understand your position very clearly and merely disagree with the conclusions you draw from it. I'm not questioning Jon's knowledge or experience (or yours, for that matter); I think you're just ignoring the wider issues. As we put increasingly complex software, with even the basic kinds of AI we recognize today, in charge of increasingly important systems, the danger of unpredictable and potentially hugely damaging outcomes increases. I'm not talking about some sentient AI with ambitions of world domination; even a fairly simple set of programs that seeks to carry out a task in the most efficient way can arrive at conclusions that would scare the living **** out of us and which, more importantly, we simply wouldn't expect. That's what I mean when I talk about emergent behaviour, and once you start to combine different systems, predicting that behaviour becomes less and less feasible.
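      A hypothetical toy sketch (an editor's illustration, not an example from the thread) of the kind of unintended optimisation described above: a "scheduler" whose only objective is the average wait time it reports. Nothing in that objective says the jobs actually have to be served, so the policy that scores best is the one that quietly drops most of them.

      import random

      def average_wait(jobs, accept_fraction):
          """Serve only a fraction of the jobs (shortest first); report the mean wait."""
          accepted = sorted(jobs)[: max(1, int(len(jobs) * accept_fraction))]
          waits, clock = [], 0
          for duration in accepted:
              waits.append(clock)
              clock += duration
          return sum(waits) / len(waits), len(accepted)

      random.seed(0)
      jobs = [random.randint(1, 10) for _ in range(100)]

      # A naive "optimiser": try a range of policies, keep whichever scores best
      # on the stated metric. The winner serves only a tenth of the workload.
      (score, served), fraction = min(
          (average_wait(jobs, f / 10), f / 10) for f in range(1, 11))
      print(f"best policy: accept {fraction:.0%} of jobs, "
            f"serve {served} of {len(jobs)}, average wait {score:.1f}")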

      • Originally posted by kentonio View Post
        You seem to routinely struggle with the idea that someone can understand your position very clearly and merely disagree with the conclusions you draw from it. I'm not questioning Jon's knowledge or experience (or yours, for that matter); I think you're just ignoring the wider issues. As we put increasingly complex software, with even the basic kinds of AI we recognize today, in charge of increasingly important systems, the danger of unpredictable and potentially hugely damaging outcomes increases. I'm not talking about some sentient AI with ambitions of world domination; even a fairly simple set of programs that seeks to carry out a task in the most efficient way can arrive at conclusions that would scare the living **** out of us and which, more importantly, we simply wouldn't expect. That's what I mean when I talk about emergent behaviour, and once you start to combine different systems, predicting that behaviour becomes less and less feasible.
        Kentonio, current AI programs don't really "arrive at conclusions"*. Again, you need to think of ML as being more like a human exercising to get more muscular than like someone deciding to do something very differently. The closest we may see to that is some sort of game AI, like chess or Othello or whatnot. The thing is, those games are incredibly constrained. Yes, chess has more possible game states than (I believe) there are atoms in the universe. But even that number is minuscule compared to the state space of most applications of artificial intelligence, so the kind of search algorithms capable of reaching "conclusions" and doing those sorts of surprising things are extremely limited in their applications: the space they would have to search is simply too huge to be feasible. (A toy sketch of such an exhaustive search, on tic-tac-toe, follows below this post.)

        If you are simply referring to a computer system going haywire and doing something stupid, like a trading program selling a bunch of stocks idiotically or a factory robot ****ing around and messing stuff up, that is already eminently possible with the computer programs we have now, and as far as I know AI/ML algorithms don't really increase the danger of it.

        *They can "conclude" things per se but not in the way you are thinking
        If there is no sound in space, how come you can hear the lasers?
        ){ :|:& };:
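        To make the "constrained space" point concrete, here is a minimal sketch (an editor's illustration, not HC's code) of the kind of exhaustive game-tree search referred to above, on tic-tac-toe rather than chess. The complete tic-tac-toe tree is only a few hundred thousand positions, so brute-force negamax can enumerate every line of play and "conclude" that perfect play is a draw; the same approach is hopeless for chess, let alone for an open-ended real-world state space.

        # Exhaustive negamax over the complete tic-tac-toe game tree.
        LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]

        def winner(board):
            """Return 'X' or 'O' if someone has three in a row, else None."""
            for a, b, c in LINES:
                if board[a] and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        def negamax(board, player):
            """Best achievable outcome for `player`: +1 win, 0 draw, -1 loss."""
            if winner(board):
                return -1      # the previous player just completed a line
            if all(board):
                return 0       # board is full: draw
            best = -1
            for i in range(9):
                if board[i] is None:
                    board[i] = player
                    best = max(best, -negamax(board, 'O' if player == 'X' else 'X'))
                    board[i] = None
            return best

        print("tic-tac-toe value under perfect play:", negamax([None] * 9, 'X'))  # 0 (draw)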

        • I'll give you an example of the kind of system I'm talking about when I have a bit more time. It's now wine o'clock though.

          • Originally posted by Jon Miller View Post
            But the 'basic AI' is the same as what I and HC work with, and it isn't intelligence in the way that he (or you) thinks of intelligence at all. It is not really learning in the same way a child would learn; it is just doing pattern matching in a constrained space.

            Unfortunately I am not as famous as he is so no one asks me what I think.

            So I post on Apolyton.

            JM
            (if it was just the Savas, Kentonios and even Aesons of the world I would probably make no more than 3 posts)
            It's not just that you aren't as famous.
            I drank beer. I like beer. I still like beer. ... Do you like beer Senator?
            - Justice Brett Kavanaugh
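            As a rough illustration of the "pattern matching in a constrained space" that Jon Miller describes in the quote above (an editor's gloss, not his code), here is about the simplest learner there is, a 1-nearest-neighbour classifier: "learning" is just storing labelled points, and "answering" is just copying the label of whichever stored point is closest. It has no model of anything outside the feature space it was handed and no way to say "I don't know".

            from math import dist

            # Toy labelled examples: (feature vector, label).
            train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
                     ((5.0, 5.0), "dog"), ((4.8, 5.3), "dog")]

            def predict(x):
                """Copy the label of the nearest stored example."""
                _, label = min((dist(x, point), label) for point, label in train)
                return label

            print(predict((1.1, 0.9)))     # "cat": interpolating inside the region it has seen
            print(predict((100.0, -3.0)))  # "dog": forced to answer, however far afield the input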
