Chimpanzees granted 'legal persons' status to defend their rights in court


  • Originally posted by kentonio View Post
    Why in their current position would a chimp set out to solve a maths equation, build a rocket or argue over the internet? Put a primitive human in captivity by another species with only the most rudimentary lines of communication between them and see how clever Mr Caveman suddenly seems.
    [Image: unfrozen-caveman-lawyer.jpg]
    No, I did not steal that from somebody on Something Awful.

    • Fantastic news. When is Kentonio applying for citizenship?
      Scouse Git (2) La Fayette Adam Smith Solomwi and Loinburger will not be forgotten.
      "Remember the night we broke the windows in this old house? This is what I wished for..."
      2015 APOLYTON FANTASY FOOTBALL CHAMPION!

      • When are you going to stick your nose into a jar of horseradish mixed with powdered glass and inhale as hard as you can?
        1011 1100
        Pyrebound--a free online serial fantasy novel

        • Originally posted by Hauldren Collider View Post
          Artificial intelligence as a term is a misnomer. There is to my knowledge no serious research currently towards the kind of fanciful science fiction intelligence Kentonio or Elok are describing. In the early days of the field, that was the ultimate goal--a machine that could learn, just like a human. The Turing test and so on. Nowadays the field of Artificial Intelligence is really more of a milieu of algorithms with useful applications in computer vision, speech recognition, language processing, and similar tasks involving interaction with the real world or with human communication. These algorithms run the gamut from simple graph-search algorithms to function optimization to statistical models. Much of the interesting work in AI is really just understanding how to represent input data and reduce the dimensionality of the input into something manageable and interpretable.

          I have studied this subject extensively -- modern artificial intelligence is more or less the intersection of statistics and computer science, my two fields of study.
          It doesn't have to be an intelligence that thinks like a human; it could be an intelligence completely alien to us, one that is simply a product of emergent behaviour. The more complex AI becomes, the higher the likelihood of emergent behaviour appearing. It doesn't need to be a self-aware system; it could be something as simple as a system that concludes, based on the available data, that the most logical course of action is one that as a by-product also ****s humans quite severely.

          Originally posted by Hauldren Collider View Post
          Even machine learning, which has advanced in tremendous strides over the last decade, is not really learning as you or I would think of it. To draw a contrast between how, for example, a human child learns and how a machine learning algorithm learns: let's say you are walking down the street with your child and you see a dog. You point to the dog and say, "doggy!" The child now has a very good idea of what a dog is, from a single positive example and no negative examples. The child may then see a cat, say "doggy!", and you correct him and say "kitty!" The child is probably now able to recognize a dog even if it is a different size or a different breed, and at any angle. By contrast, you could take the best machine learning algorithm with the latest heuristics and sophisticated statistical or nonparametric models like ANNs, random forests, whatever, and train it with 100,000 positive and negative examples, and it'll maybe do better than a coin flip when you ask it "is there a dog in this picture?"

          Why is the child able to do so much better than the computer? Simple: The kid's cheating. We have a million+ years of evolution granting us instincts on how to recognize everyday things and comprehend the world around us. I suspect someday we'll be able to get the same or similar ability onto a computer, but until then advantage humans.
          No, not advantage humans. All you did there is pick out an example of where the programming of humans is able to perform a task more efficiently than a machine currently can. Change that example to something like looking at a vast database of data and picking out the mathematical patterns, and suddenly we're firmly in the 'advantage computers' side of the court.

          Originally posted by Hauldren Collider View Post
          Any program which is capable of "learning" in even the most trivial sense is self-altering. I suspect Kentonio that you are not really familiar with the concept of self-altering programs. It is not like a human waking up one day and saying, "I am tired of being a software engineer. I am going to learn to play piano." It is more like a human exercising on a bench press and getting stronger arms.

          Google search is self-altering; every time you type a phrase into it, it remembers that and uses it in future predictions. This already leads to occasionally surprising behavior that we would be unlikely to anticipate. What is important to understand is that machine learning algorithms are built around extremely concrete goals. Programs optimize themselves towards these goals. They are not abstract. There are algorithms for what statisticians refer to as "unsupervised learning", which is closely related to density estimation, but these too do not "learn" in the human sense. They merely identify patterns in unlabelled data within a set of tuned constraints. Often these algorithms are able to recover useful information, but your computational genomics software isn't going to start looking at a set of alleles and suddenly decide that it wants to get married to that sexy Japanese inflatable lovebot.
          The point here has nothing to do with an AI desiring anything; it's all about emergent behaviour, which you yourself admit occurs. An AI doesn't need to suddenly decide it wants to do something. The problem is that because we DON'T think strictly like computers, we're unlikely to always be able to predict the behaviour that may emerge from a hugely complex system. The more control we pass to these systems, the more we open ourselves up to potentially destructive consequences. We're going to have to do it anyway, because it's the path to advancing as a species, but it's not a bad thing to draw attention to it and tell people they really need to be bloody careful.
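The bench-press analogy above can be made concrete. Here is a minimal sketch (plain Python, invented toy data, not any real ML library) of what "self-altering" means in practice: a program that repeatedly nudges two parameters toward one fixed, concrete objective. The parameters change; the goal never does.

```python
# "Self-altering" software in miniature: gradient descent fitting
# y = w*x + b. The program adjusts its own parameters (w, b) toward
# a single fixed objective (mean squared error); it never invents goals.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]          # generated by y = 2x + 1

w, b = 0.0, 0.0                          # the state that "self-alters"
lr = 0.02                                # learning rate

for _ in range(5000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w                     # all the "learning" there is:
    b -= lr * grad_b                     # parameters drift toward the goal

print(round(w, 2), round(b, 2))          # converges near 2.0 and 1.0
```

That is the whole mechanism: optimisation toward a concrete target, closer to building muscle on a bench press than to deciding to take up piano.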

          • Originally posted by The Mad Monk View Post
            Wait.

            What did you do with all that barbecue sauce I gave you???
            I didn't say I was a vegetarian, I just said there were lots of things he doesn't know about me. /pedant

            • When are you going to stick your nose into a jar of horseradish mixed with powdered glass and inhale as hard as you can?
              You can snort horseradish? You learn something new every day.
              Scouse Git (2) La Fayette Adam Smith Solomwi and Loinburger will not be forgotten.
              "Remember the night we broke the windows in this old house? This is what I wished for..."
              2015 APOLYTON FANTASY FOOTBALL CHAMPION!

              • Originally posted by kentonio View Post
                It doesn't have to be an intelligence that thinks like a human; it could be an intelligence completely alien to us, one that is simply a product of emergent behaviour. The more complex AI becomes, the higher the likelihood of emergent behaviour appearing. It doesn't need to be a self-aware system; it could be something as simple as a system that concludes, based on the available data, that the most logical course of action is one that as a by-product also ****s humans quite severely.
                No. Kentonio, "emergent" behavior is not a thing. At best, what we see are results that are surprising, which is honestly comforting because if ML algorithms didn't come up with results we did not already expect they would be fairly useless.
                No, not advantage humans. All you did there is pick out an example of where the programming of humans is able to perform a task more efficiently than a machine currently can. Change that example to something like looking at a vast database of data and picking out the mathematical patterns, and suddenly we're firmly in the 'advantage computers' side of the court.
                Kentonio, you are missing something here. Everything is a vast database of data. A set of images from a video camera is a bunch of 0s and 1s, just like a bunch of readings from a particle accelerator. The difference lies in whether there is an obvious mathematical interpretation or we have to fall back on probabilistic systems. I can write an algorithm to sort a list of numbers that is provably correct. But no one knows how to write a provably correct algorithm to read handwritten English lettering. We use these machine learning (really statistical) techniques because it's much easier to make something that is probably correct and can improve with training than to come up with something that provably gets the right answer every time.
                If there is no sound in space, how come you can hear the lasers?
                ){ :|:& };:
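The provable/probabilistic split is easy to show on the provable side. Here is a sketch of insertion sort in plain Python: its correctness follows from a simple loop invariant, the kind of end-to-end guarantee nobody knows how to give for a handwriting recogniser.

```python
def insertion_sort(a):
    """Sort a list in place.

    Provable loop invariant: after processing index i, a[:i+1] is a
    sorted permutation of the original first i+1 elements.
    """
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:     # shift larger elements right...
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                   # ...and drop key into its slot
    return a

print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The invariant can be checked at every step, which is exactly what makes the algorithm provable rather than merely probably right.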

                • Originally posted by Elok View Post
                  When are you going to stick your nose into a jar of horseradish mixed with powdered glass and inhale as hard as you can?
                  That's part of the Passover Seder. He's Catholic, remember?
                  Click here if you're having trouble sleeping.
                  "We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld

                  • Originally posted by Hauldren Collider View Post
                    No. Kentonio, "emergent" behavior is not a thing. At best, what we see are results that are surprising, which is honestly comforting because if ML algorithms didn't come up with results we did not already expect they would be fairly useless.
                    Yes, emergent behaviour IS a thing. I thought you knew what you were talking about, for goodness' sake. You see emergent behaviour in even primitive AI systems.

                    Originally posted by Hauldren Collider View Post
                    Kentonio, you are missing something here. Everything is a vast database of data. A set of images from a video camera is a bunch of 0s and 1s, just like a bunch of readings from a particle accelerator. The difference lies in whether there is an obvious mathematical interpretation or we have to fall back on probabilistic systems. I can write an algorithm to sort a list of numbers that is provably correct. But no one knows how to write a provably correct algorithm to read handwritten English lettering. We use these machine learning (really statistical) techniques because it's much easier to make something that is probably correct and can improve with training than to come up with something that provably gets the right answer every time.
                    It's still ridiculously early in the computer age, and it's important to remember that. When I was a kid computing power was ridiculously low, and yet in just three decades we've reached a point where things we never thought possible to compute are everyday tools. I have a freakin' app on my phone that I can point at a foreign-language sign and have it convert that text into English in real time. You think handwriting is going to prove insurmountable longer term?

                    The concerns that people like Hawking are raising are not about the danger of AI today, but about the dangers they foresee as these systems and the computational power behind them drive inexorably forward, and as we place them in charge of increasingly essential systems and infrastructure. Dismiss that at your peril.

                    • Kentonio, I think you may just lack the background to understand what I am saying. Particularly the distinction I drew between ordinary provable algorithms and ML/AI techniques. Yes I know your phone can recognize handwriting. Usually. It's a probabilistic system. Christ, dude, I write these things. I know how they work.
                      If there is no sound in space, how come you can hear the lasers?
                      ){ :|:& };:

                      • Originally posted by Hauldren Collider View Post
                        Kentonio, I think you may just lack the background to understand what I am saying. Particularly the distinction I drew between ordinary provable algorithms and ML/AI techniques. Yes I know your phone can recognize handwriting. Usually. It's a probabilistic system. Christ, dude, I write these things. I know how they work.
                        Of course it's a probabilistic thing, for crying out loud. My point was that we're still at an incredibly early stage in the computing age, and things that we can currently perform only in a relatively primitive way will become more and more reliable as computational power increases. All of which, incidentally, is a complete side road from the larger issue we were discussing.

                        • Oh, and if you don't think emergent behaviour in AI is a 'thing', you might want to take a look at your own background.

                          • Kentonio is ruining this thread. Why do conservatives have to ruin everything?

                            • But he's not a conservative, that's what makes it worse...
                              "Aha, you must have supported the Iraq war and wear underpants made out of firearms, just like every other American!" Loinburger

                              • It's a hobby.
                                No, I did not steal that from somebody on Something Awful.
