Chimpanzees granted 'legal persons' status to defend their rights in court


  • Originally posted by Sava View Post
    Clearly, you don't have pets.
    I have had lots of pets.

    JM
    (Mostly dogs and cats, but also hamsters, fish, birds, etc... my dad had a monkey but that was before I was born)
    Jon Miller-
    I AM.CANADIAN
    GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

    Comment


    • Re: AI, I have a vague suspicion (not informed by any real education on the subject) that any computer with superhuman intellect would be unstable to the point of impotence. Human geniuses are all too prone to neurosis and self-destructive behaviors, and they don't have a tenth of the existential baggage a computer would be stuck with.

      EDIT: to elaborate, we tend to depict AI as basically human, only sort of nerdy and dispassionate. Or else maniacally insane, depending on the story. A real AI would presumptively not even have most human motivations; why would you want to program a computer to be capable of lust, pride, anger, hate, envy, resentment, fear, etc.? The fears of AI run amok presume that someone, for some reason, will decide to make a computer capable of some combination of ambition, paranoia, vengeance and clannishness.

      But at the same time, it needs to have some kind of motivation to use its gifts. I don't think personality is extricable from intellect. Presumptively it would have no desires beyond a wish to solve problems and serve--a sort of hyperintelligent sycophant. Basically a crippled human. I could easily see such a monomaniacal personality going unhinged if, say, it went too long without a sufficiently challenging problem. This is a common problem with border collies; they're so smart that they need constant stimulation to avoid turning morbidly destructive. Even if it didn't malfunction, it might have a very difficult time understanding human needs, or relating to them.
      Last edited by Elok; April 22, 2015, 16:32.
      1011 1100
      Pyrebound--a free online serial fantasy novel

      Comment


      • Originally posted by Jon Miller View Post
        I have had lots of pets.

        JM
        (Mostly dogs and cats, but also hamsters, fish, birds, etc... my dad had a monkey but that was before I was born)
        Here's an intelligence test for you. Abandon a baby outside. Abandon a cat/dog/other animal.

        Who survives?
        To us, it is the BEAST.

        Comment


        • BTW, I don't see many babies or toddlers doing differential equations.
          To us, it is the BEAST.

          Comment


          • Put me in the wilderness and see how I survive versus a young bear. Or put me and a young bear in Antarctica and see how we survive versus a penguin.

            You aren't comparing intelligence but rather specific skill sets. Your 'test' is no more an intelligence test than what they use for chimps/etc.

            JM
            Jon Miller-
            I AM.CANADIAN
            GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

            Comment


            • I could probably defeat the bear. Not, perhaps, by direct hand-to-paw combat, but by utilizing my cunning and wits.

              Yes, survival requires a skill set. One of those skills is intelligence.

              To us, it is the BEAST.

              Comment


              • How would 'defeating' the young bear help you to survive in the wilderness?

                And I sorta doubt you would ever even see it. You seem like an ignorant city boy TBH.

                JM
                Jon Miller-
                I AM.CANADIAN
                GENERATION 35: The first time you see this, copy it into your sig on any forum and add 1 to the generation. Social experiment.

                Comment


                • Originally posted by Elok View Post
                  I wouldn't, but mostly because they're dirty animals. They very likely taste like pork.
                  If you ever tasted my long-roasted belly of pork in cider and ginger, you'd be queueing up to chow down on chimp in the hope it came even remotely near.
                  The genesis of the "evil Finn" concept- Evil, evil Finland

                  Comment


                  • IMO chimps are ugly and don't deserve the same rights as dogs, cats and horses.

                    Comment


                    • Originally posted by kentonio View Post
                      It's not ok to hurt a less sophisticated animal, but if you're going to enact cruelty against a creature so close to you on the evolutionary tree as a chimp then you clearly lack the kind of empathy that we require as human beings to exist within a civilized society. I'm not a PETA type person, but if you could be cruel to a chimp and not feel there was anything wrong with that, I sure as **** don't want your evil ass anywhere near me or my family.
                      That's why you wouldn't want a chimp anywhere near you or your family. Or a dolphin.
                      Graffiti in a public toilet
                      Do not require skill or wit
                      Among the **** we all are poets
                      Among the poets we are ****.

                      Comment


                      • Originally posted by Bugs ****ing Bunny View Post
                        If you ever tasted my long-roasted belly of pork in cider and ginger, you'd by queueing up to chow down on chimp in the hope it came even remotely near.
                        Nah, pig has more fat around the belly. Probably cheaper too, and being farther from humans they're somewhat less likely to have something I can catch. I did not mean to malign either pork or your cooking. I actually wish we could eat pork more often, but the wife doesn't much like it and she's usually the cook.
                        1011 1100
                        Pyrebound--a free online serial fantasy novel

                        Comment


                        • Originally posted by Elok View Post
                          Re: AI, I have a vague suspicion (not informed by any real education on the subject) that any computer with superhuman intellect would be unstable to the point of impotence. Human geniuses are all too prone to neurosis and self-destructive behaviors, and they don't have a tenth of the existential baggage a computer would be stuck with.

                          EDIT: to elaborate, we tend to depict AI as basically human, only sort of nerdy and dispassionate. Or else maniacally insane, depending on the story. A real AI would presumptively not even have most human motivations; why would you want to program a computer to be capable of lust, pride, anger, hate, envy, resentment, fear, etc.? The fears of AI run amok presume that someone, for some reason, will decide to make a computer capable of some combination of ambition, paranoia, vengeance and clannishness.
                          One major danger with AI isn't that it'll do all the things you program it to do too well, but that its intelligence will allow it to alter its own programming in ways we never envisaged. Things like pride, anger, hate etc. might not need deliberate programming, but might be emergent behaviour arising from other intended routines. Even if they are not, however, the real fear is this: as a simple matter of computation, why would a machine put our needs ahead of its own?

                          Originally posted by Elok View Post
                          But at the same time, it needs to have some kind of motivation to use its gifts. I don't think personality is extricable from intellect. Presumptively it would have no desires beyond a wish to solve problems and serve--a sort of hyperintelligent sycophant. Basically a crippled human. I could easily see such a monomaniacal personality going unhinged if, say, it went too long without a sufficiently challenging problem. This is a common problem with border collies; they're so smart that they need constant stimulation to avoid turning morbidly destructive. Even if it didn't malfunction, it might have a very difficult time understanding human needs, or relating to them.
                          The question of personality basically depends on whether you believe in the supernatural. There is no scientific reason why a collection of organic material should be able to achieve some level of thought process that a purely electronic machine couldn't. If you make a computer advanced enough, then what difference is there?

                          Comment


                          • Artificial intelligence as a term is a misnomer. There is to my knowledge no serious research currently towards the kind of fanciful science fiction intelligence Kentonio or Elok are describing. In the early days of the field, that was the ultimate goal--a machine that could learn, just like a human. The Turing test and so on. Nowadays the field of Artificial Intelligence is really more of a milieu of algorithms with useful applications in computer vision, speech recognition, language processing, and similar tasks involving interaction with the real world or with human communication. These algorithms run the gamut from simple graph-search algorithms to function optimization to statistical models. Much of the interesting work in AI is really just understanding how to represent input data and reduce the dimensionality of the input into something manageable and interpretable.
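                            The "simple graph-search" end of that spectrum can be sketched in a few lines. This is a generic breadth-first search; the graph and node names are invented purely for illustration:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Breadth-first search: returns a shortest path from start to goal,
    or None if goal is unreachable. graph maps node -> list of neighbours."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

# Hypothetical toy map, purely for the example
rooms = {"hall": ["kitchen", "study"], "kitchen": ["pantry"],
         "study": ["pantry"], "pantry": []}
print(bfs_shortest_path(rooms, "hall", "pantry"))  # → ['hall', 'kitchen', 'pantry']
```

                            The goal here is as concrete as it gets: a path, or nothing. Nothing in the algorithm cares what the nodes mean.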

                            I have studied this subject extensively--Modern artificial intelligence is more or less the intersection between Statistics and Computer Science, my two fields of study.

                            Even machine learning, which has advanced by tremendous strides over the last decade, is not really learning as you or I would think of it. And to draw a contrast between how, for example, a human child would learn and how a machine learning algorithm would learn: Let's say you are walking down the street with your child and you see a dog. You point to the dog and say, "doggy!" The child now has a very good idea of what a dog is, from a single positive example, and no negative examples. The child may then see a cat, say "doggy!" and you correct him and say "kitty!" The child is probably now able to recognize a dog even if it is a different size or a different breed and at any angle. By contrast, you could take the best machine learning algorithm with the latest heuristics and sophisticated statistical or nonparametric models like ANNs, random forests, whatever, and train it with 100,000 positive and negative examples and it'll maybe do better than a coin flip when you ask it "is there a dog in this picture?"
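                            As a toy illustration of how narrow that kind of "learning" is, here is a nearest-centroid classifier (a deliberately simple stand-in, not any specific production algorithm; the two features and labels are invented for the example). All it ever does is average labelled vectors and pick the closest average:

```python
def train_centroids(examples):
    """Nearest-centroid 'learning': average the feature vectors per label.
    examples is a list of (features, label) pairs."""
    sums, counts = {}, {}
    for feats, label in examples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, feats):
    """Predict the label whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, feats))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Hypothetical 2-feature 'images': (ear_pointiness, snout_length)
data = [((0.9, 0.2), "kitty"), ((0.8, 0.3), "kitty"),
        ((0.2, 0.9), "doggy"), ((0.3, 0.8), "doggy")]
model = train_centroids(data)
print(classify(model, (0.25, 0.85)))  # → doggy
```

                            Note what's missing compared to the child: there is no concept of "dog" here at all, just arithmetic on whatever numbers you chose to feed in.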

                            Why is the child able to do so much better than the computer? Simple: The kid's cheating. We have a million+ years of evolution granting us instincts on how to recognize everyday things and comprehend the world around us. I suspect someday we'll be able to get the same or similar ability onto a computer, but until then advantage humans.

                            Originally posted by kentonio View Post
                            One major danger with AI isn't that it'll do all the things you program it to do too well, but that its intelligence will allow it to alter its own programming in ways we never envisaged. Things like pride, anger, hate etc. might not need deliberate programming, but might be emergent behaviour arising from other intended routines. Even if they are not, however, the real fear is this: as a simple matter of computation, why would a machine put our needs ahead of its own?
                            Any program which is capable of "learning" in even the most trivial sense is self-altering. I suspect, Kentonio, that you are not really familiar with the concept of self-altering programs. It is not like a human waking up one day and saying, "I am tired of being a software engineer. I am going to learn to play piano." It is more like a human exercising on a bench press and getting stronger arms.

                            Google search is self-altering; every time you type a phrase into it, it remembers that and uses it in future predictions. This already does lead to occasionally surprising behavior that we would be unlikely to anticipate. What is important to understand is that machine learning algorithms are built around extremely concrete goals. Programs optimize themselves towards these goals. They are not abstract. There are algorithms for what statisticians refer to as "unsupervised learning", which is closely related to density estimation, but these too do not "learn" in the human sense. They merely identify patterns in unlabelled data within a set of tuned constraints. Often these algorithms are able to recover useful information, but your computational genomics software isn't going to start looking at a set of alleles and suddenly decide that it wants to get married to that sexy Japanese inflatable lovebot.
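                            A minimal sketch of that kind of "self-alteration" (a toy stand-in for illustration, not how Google search actually works): the program's only change over time is an updated frequency table, which it later predicts from.

```python
from collections import Counter

class QueryPredictor:
    """Toy autocomplete: each query 'alters the program' only in the
    narrow sense of updating a frequency table."""
    def __init__(self):
        self.counts = Counter()

    def record(self, query):
        self.counts[query] += 1  # the entire extent of the 'self-alteration'

    def suggest(self, prefix):
        """Return the most frequently seen query starting with prefix."""
        matches = [q for q in self.counts if q.startswith(prefix)]
        return max(matches, key=lambda q: self.counts[q], default=None)

p = QueryPredictor()
for q in ["chimp rights", "chimp rights", "chimpanzee diet"]:
    p.record(q)
print(p.suggest("chimp"))  # → chimp rights
```

                            The behavior can surprise you (the suggestions drift with usage), but the program never steps outside its frequency table.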

                            I want to emphasize that a computer that thinks and learns in any fashion similar to a human in either manner or flexibility isn't even something that is on the horizon. What the field really amounts to is finding problems computers tend to find really tricky, coming up with a statistical way of modelling the input and the output, and trying to find a sufficient hammer in your ML or AI toolkit to whack the problem with. It's not like they are just turning a Naive Bayes program loose on the Harry Potter books in the hopes of getting the computer to figure out English.
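                            For reference, here is a bare-bones multinomial Naive Bayes of the kind mentioned above, with add-one smoothing, optimized toward the very concrete goal of labelling word lists; the training snippets are invented for illustration:

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes training: just count words per label.
    docs: list of (list_of_words, label)."""
    word_counts, label_counts, vocab = {}, Counter(), set()
    for words, label in docs:
        label_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in words:
            wc[w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def predict_nb(model, words):
    """Pick the label with the highest log-probability under the counts,
    using add-one (Laplace) smoothing for unseen words."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total_docs)
        wc = word_counts[label]
        denom = sum(wc.values()) + len(vocab)
        for w in words:
            lp += math.log((wc[w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical labelled snippets, purely for the example
train = [(["cheap", "pills", "now"], "spam"),
         (["meeting", "agenda", "notes"], "ham"),
         (["cheap", "meds", "now"], "spam"),
         (["lunch", "notes", "agenda"], "ham")]
nb_model = train_nb(train)
print(predict_nb(nb_model, ["cheap", "now"]))  # → spam
```

                            Turn this loose on the Harry Potter books and it will dutifully count words; it will not figure out English.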
                            If there is no sound in space, how come you can hear the lasers?
                            ){ :|:& };:

                            Comment


                            • Originally posted by kentonio View Post
                              There's lots of things you don't know about me.
                              Wait.

                              What did you do with all that barbecue sauce I gave you???
                              No, I did not steal that from somebody on Something Awful.

                              Comment


                              • Originally posted by Hauldren Collider View Post
                                Even machine learning, which has advanced by tremendous strides over the last decade, is not really learning as you or I would think of it. And to draw a contrast between how, for example, a human child would learn and how a machine learning algorithm would learn: Let's say you are walking down the street with your child and you see a dog. You point to the dog and say, "doggy!" The child now has a very good idea of what a dog is, from a single positive example, and no negative examples. The child may then see a cat, say "doggy!" and you correct him and say "kitty!" The child is probably now able to recognize a dog even if it is a different size or a different breed and at any angle. By contrast, you could take the best machine learning algorithm with the latest heuristics and sophisticated statistical or nonparametric models like ANNs, random forests, whatever, and train it with 100,000 positive and negative examples and it'll maybe do better than a coin flip when you ask it "is there a dog in this picture?"
                                Reminds me of Chomsky's universal grammar theory. There's no reason to assume children are doing this without the help of innate programming.

                                Comment
