What is the end of evolution?


  • #16
    Evolution is a process that results in heritable changes in a population spread over many generations.


    That is one of the definitions of evolution found at a What is Evolution site. By this definition, technological adaptation is evolution. It is not biological evolution, certainly, but it is evolution nonetheless.
    I'm building a wagon! On some other part of the internets, obviously (but not that other site).



    • #17
      Why would man as a species aim to be something like a god? The notion is fine for individuals, but what about for a species? Besides, man could never escape the laws of the universe he exists within. True godhood is impossible. Normal evolution can at best get you to a sentient being. Maybe they won't be hairless apes, but they surely won't be better either.
      If you don't like reality, change it! me
      "Oh no! I am bested!" Drake
      "it is dangerous to be right when the government is wrong" Voltaire
      "Patriotism is a pernecious, psychopathic form of idiocy" George Bernard Shaw



      • #18
        The epitome of biological life will be the creation of mechanical life.

        Their evolution knows no bounds.
        "I've lived too long with pain. I won't know who I am without it. We have to leave this place, I am almost happy here."
        - Ender, from Ender's Game by Orson Scott Card



        • #19
          I doubt it's possible to create a pure mechanical intelligence capable of the kind of thought processes that living beings are capable of. At most, I think that there would be cybernetic beings... biological brains and integrated cybernetic parts.

          Computers, as we know them, aren't capable of intelligent reasoning and planning. And as advanced as they may get, they're just really powerful adding machines that do what a programmer tells them to do.
          To us, it is the BEAST.



          • #20
            Originally posted by Sava
            I doubt it's possible to create a pure mechanical intelligence capable of the kind of thought processes that living beings are capable of. At most, I think that there would be cybernetic beings... biological brains and integrated cybernetic parts.

            Computers, as we know them, aren't capable of intelligent reasoning and planning. And as advanced as they may get, they're just really powerful adding machines that do what a programmer tells them to do.
            WRONG. Common misconception.

            Biases and Heuristics: God's Hard Coding, and Modular Intelligence
            by Joseph Moskie

            The study of the mind and the related fields of human cognition, rational decision making, and artificial intelligence are relatively new fields in the realm of science. As these emerging fields have progressed, they have often mixed their methodology and their data, building on each other’s work. This interdependence has proven to be a double-edged sword for these sciences: although it allows the fields to advance relatively quickly compared to the older sciences, it often leaves one field waiting on the others for more information before it can advance itself. The field of artificial intelligence has come a long way in a relatively short period of time, but it can only go so far with our current understanding of the human mind, human cognition, and the rational decision-making process.

            One of the most often used arguments against artificial intelligence is that essentially, no matter how far you break it down, you are “hard coding” rules into the artificial agent, which defeats the purpose of cognition itself. However, emerging research suggests that humans themselves base their own intelligence on simple rules that are innate in our very being. Kahneman and Tversky have proposed a system of “biases and heuristics” in human cognition that serve the same purpose that “hard coded” rules would serve in artificially intelligent agents, creating a basic cognitive framework from which more complex systems of intelligence and decision making can be developed. If basic human cognition is in fact based on a system of “biases and heuristics”, no matter how dynamic or malleable they may be, we are justified in creating such systems in artificial agents and building a more complex cognitive framework for the agent on top of the bare-bones rules.

            One should note the emphasis on the words “dynamic” and “malleable”, for the biases and heuristics in human thought probably won’t boil down to a simplistic if-then clause, i.e. “if A then B”, but could end up being rather complex logical operations, with many interdependencies and conditionals. The claim that the biases and heuristics should be malleable implies that they should not be at all rigid; the set of rules governing cognition and decision-making should be easily controlled, easily influenced, and able to adjust to changing circumstances. That is, to be completely frank, what cognition is all about.
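
            To make this concrete, here is a minimal, purely illustrative Python sketch (every name in it is hypothetical; this is not Kahneman and Tversky's formalism) of heuristics as weighted, adjustable rules rather than fixed if-then clauses. Each rule carries a weight that feedback can nudge, so the rule set stays malleable and can adapt to changing circumstances.

            ```python
            # Hypothetical sketch: heuristics as weighted, adjustable rules
            # rather than fixed "if A then B" clauses. Names are illustrative.

            class Heuristic:
                def __init__(self, name, condition, action, weight=1.0):
                    self.name = name
                    self.condition = condition  # predicate over a situation dict
                    self.action = action        # suggested response
                    self.weight = weight        # confidence, tuned by feedback

                def adjust(self, feedback, rate=0.1):
                    # Feedback in [-1, 1] nudges the weight: the rule is
                    # malleable rather than hard-coded once and for all.
                    self.weight = max(0.0, self.weight + rate * feedback)

            def decide(heuristics, situation):
                # Among the rules whose conditions fire, prefer the one
                # the agent currently trusts most.
                fired = [h for h in heuristics if h.condition(situation)]
                return max(fired, key=lambda h: h.weight).action if fired else None

            rules = [
                Heuristic("avoid_loss", lambda s: s["risk"] > 0.5, "take sure thing"),
                Heuristic("seek_gain", lambda s: s["risk"] <= 0.5, "gamble"),
            ]
            print(decide(rules, {"risk": 0.8}))  # -> take sure thing
            rules[0].adjust(-1)                  # experience weakens the rule
            ```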

            Artificial Intelligence, as it has been presented to the general populace (in implementation, not research or theory), has come in three distinct forms: games, basic “intelligent” agents (PERI), and the apocalyptic death swarms portrayed in movies such as “Terminator”, “The Matrix”, and their kin. Ignoring the last one on the basis of “Hollywood v. Reality”, the other two have given off very skewed, very negative images of what artificial intelligence is and can be. The first, gaming, can be seen in blockbuster hits such as “Sid Meier’s Civilization” and “Half-Life”, and also on a larger scale, e.g. “Deep Blue”, the chess-playing computer that is sometimes able to beat its best human counterparts. In both cases, however, the artificial agents exist in a world with a very rigid set of rules, with very few variables (relative to a sentient being existing in the real world), and are focused on a very narrow set of tasks or goals. Deep Blue is an excellent chess player, yes, but Deep Blue did not struggle to learn how to play chess; in fact, Deep Blue is the brainchild of some of the best chess-playing computer scientists in the world. Deep Blue has perpetuated the belief that artificial intelligence is all about “hard coding” the rules of a situation into the agent, and nothing more. When you boil it down, Deep Blue is not artificial intelligence, as it is often passed off; it is simply a chess-playing machine. It is not intelligent; it is skilled at its task. A seemingly “intelligent” task, but a task nonetheless.

            Last semester I was exposed to an on-campus research project involving an “intelligent” robotic arm named PERI, an acronym for “Psychometric Experimental Robotic Intelligence”. This “intelligent” agent has been spawned from the brains of our own Dr. Selmer Bringsjord, the chair of Rensselaer’s Cognitive Science department, and a graduate student, Bettina Schimanski. At first glance, it seems remarkable; these people claim that PERI has successfully passed an IQ test, flawlessly. Then you read into it more, and find out that PERI has passed part of an IQ test, one section to be exact, and that the section it passed was rotating blocks to create patterns. Now, in my humble opinion, this seems more like a simple game than an act of pure intelligence. In fact, PERI even has the added bonus of being given all the information (i.e., which images are on which sides of the blocks) the nanosecond the test begins, whereas a human would have to actively explore the environment and manipulate the blocks to discover what is where. Even after the human does that, PERI still has an advantage, as PERI is allowed to label the sides and associate each image with its label, something the human is more than likely unable to keep track of in their head. It may sound speciesist to make such a claim (and I will address the issue of species-centric thought later), but the fact that PERI can win this “game” faster than a human can (Selmer brags that PERI solves the puzzle in under a second, then takes quite a while to physically manipulate the blocks) doesn’t mean that PERI is capable of applying its skills to other cognitive operations. The two definitions of intelligence I have found that seem most concise are “having capacity for thought and reason, especially to a high degree”, and “exercising and showing good judgment”. In my personal connotation of intelligence, I put a lot of value on the conscious mind's flexibility, its versatility, the ability of the mind to apply itself to a multitude of tasks. In my opinion, any agent that can perform only one task successfully, no matter how “intelligent” a task it may be, cannot be said to possess intelligence.

            We, as humans, have derived our denotations and connotations of the word “intelligence” solely from our species, and with good reason. For millennia, we have been the only “intelligent” being we are aware of, and human science is based on human experiences and human “fact”. This has been termed the “speciesist fallacy”, or the “species-centric fallacy”, wherein we, the “definitive example of intelligence”, want artificially intelligent agents to “jump through hoops” and perform intelligent tasks on a level comparable to that of a human. One must also consider the sensory devices many of today’s artificial agents have been given with which to perceive the world, somewhat crude when compared to human sensory receptors. If today we say that artificial agents are not intelligent due to perceptual limitations that affect their cognitive processes negatively, then what will the artificial agents say when their sensory devices advance to a point far beyond those of their human counterparts? As more advanced machines begin to evolve, we must look at them objectively when considering their intelligence and their sentience, despite the various shortcomings that may arise.

            Now that I have described what Artificial Intelligence is today, how it is perceived, what it could one day evolve into, and how we should be objective when evaluating non-humanoid intelligence, it is time to describe exactly how an advanced artificial intelligence could be achieved. Daniel Kahneman and the late Amos Tversky proposed a system of Biases and Heuristics, and applied their theory mainly to economics, especially in the areas of decision-making associated with risk. One of the classic examples they cite was performed by Daniel Bernoulli (Bernoulli 1738), who discovered that if you offered a person an 85% chance of winning $1,000 (with a 15% chance of winning nothing at all), or the option of definitely receiving $800, most people would choose the $800, despite the fact that .85 x $1,000 + .15 x $0 = $850, which is more than $800. This implies that there is something in human cognition that practices what he termed “risk aversion”, and he even went so far as to propose that “subjective value”, or “utility”, is a concave function of money itself. Now, this is all well and good in the realm of economics, but again it appears that we are drawing nearer to a “specialized task”, which is the exact opposite of what Artificial Intelligence is supposed to be about; but that is only the appearance. Bernoulli’s concept of risk assessment, and what Kahneman and Tversky have added to it in the form of “Framing of Outcomes”, can be abstracted to many non-economic problems relating to decision making in general, where several options are examined and thought of in terms of their possible outcomes. Kahneman and Tversky also go into many of the fallacies people commit, causing them to improperly weigh their options, which is a good read, but not directly related to artificial intelligence, unless your goal is to help agents avoid the fallacies innate in the human mind.
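
            The arithmetic here is easy to check, and a concave utility curve reproduces the observed preference. A minimal Python sketch, assuming a square-root utility (an arbitrary illustrative choice, not one Bernoulli or Kahneman and Tversky specify):

            ```python
            import math

            # Bernoulli's gamble: 85% chance of $1,000 vs. a sure $800.
            p, prize, sure = 0.85, 1000, 800

            # Expected value favors the gamble...
            expected_value = p * prize + (1 - p) * 0
            print(expected_value)  # 850.0 > 800

            # ...but under a concave utility function the sure thing wins,
            # matching the "risk aversion" most subjects actually show.
            utility = math.sqrt  # illustrative concave choice
            eu_gamble = p * utility(prize) + (1 - p) * utility(0)
            eu_sure = utility(sure)
            print(eu_gamble, eu_sure)  # ~26.9 vs ~28.3: take the $800
            ```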

            Returning to the original topic of Biases and Heuristics as proposed by Kahneman and Tversky, and taking it away from the economic context in which it was proposed, we end up with a theory that essentially states that we can take any abstract human cognitive process and break it down continuously until we reach some system of rules that humans have “hard coded” into them. The problem with that theory is that it is extremely difficult to articulate exactly what that system of rules is, or how the rules work together to create the desired result. As stated above, the rules created by innate biases and heuristics are most likely extremely dynamic and malleable, completely adaptive to nearly any situation, and it is that fact that makes the rules so versatile, and at the same time so complex, and consequently so difficult to explore and define. What we currently have regarding cognitive frameworks is not much different from what we have had for many years now: a “Black Box” scenario where we know something is going on, but we simply cannot explain exactly what is happening. We can diagram it as Input → “Black Box” → Output. Although we currently cannot elaborate on the exact functions of the box, we have a theory that, after extensive study in the field, could eventually allow us to break down the “Black Box” even further into its basic components, its fundamental rules and functions. Once we have uncovered “the secret of the Black Box of human cognition”, artificial intelligence based on the same basic algorithms couldn’t be far behind.

            Moving away from Kahneman and Tversky’s research on biases and heuristics, I’d like to discuss another issue regarding intelligence and thinking, namely modular intelligence. This has been discussed for many years under a variety of names (modular intelligence, levels of cognition, high/low-level cognition), and is still a hot topic in the fields of artificial intelligence and human cognition. The clearest way to describe what modular intelligence refers to is to first provide an example. The human sense of sight is a good one: it has been professed by some to be modular intelligence, in the sense that the optic system itself makes decisions and acts intelligently, independently of the brain and conscious thought. This can be perceived as “lower-level thought”, where the optic system is “intelligent” to a lesser degree than the agent itself, and the agent unconsciously uses the “intelligence functions” of the optic system to reduce the amount of work that the conscious mind must deal with (J.J. Gibson, 1960). Gibson has argued that many visual properties (for example, texture) can be recognized by “low-level psychophysiological mechanisms, functioning without any control from high-level schemata or cerebral models”. This is a profound statement, with even more profound implications for artificial intelligence. It allows for a theory that a human’s overall intelligence is actually the sum total of the work of smaller sub-systems that pass filtered information on to conscious thought, which fits the fact that an artificial intelligence would otherwise face too much incoming information to deal with effectively. Rather than have the “main cognitive engine” of the artificially intelligent agent work at “understanding” the raw information from the environment, we can create “quasi-intelligent” sub-systems that are, as many professed “intelligent” systems are today, skilled at one task (or, optimally, good at a small number of specific, related tasks). According to this theory, we would be justified in creating many interdependent sub-systems to manage and manipulate the raw data over and over again before the “main cognitive engine” even gets a glance at it, forming the data into simplified, easily manipulated chunks and allowing the “main cognitive engine” to reserve its CPU cycles for higher-level cognition and decision making.
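
            As a toy Python sketch of that layered arrangement (every module name below is my own invention for illustration, assuming nothing about any real system): quasi-intelligent sub-systems reduce the raw input to simple chunks, and only those chunks ever reach the “main cognitive engine”.

            ```python
            # Illustrative-only sketch of modular intelligence: low-level
            # sub-systems pre-chunk raw input before the main engine sees it.

            def edge_detector(raw):
                # Stand-in for a low-level perceptual module: reduce a raw
                # signal to a coarse summary the higher level can use.
                return {"edges": sum(1 for a, b in zip(raw, raw[1:]) if abs(a - b) > 10)}

            def texture_module(raw):
                # Gibson-style low-level property recognition, faked here
                # as a variance estimate over the signal.
                mean = sum(raw) / len(raw)
                return {"roughness": sum((x - mean) ** 2 for x in raw) / len(raw)}

            def main_cognitive_engine(percepts):
                # The "main engine" reasons only over pre-chunked percepts,
                # never over the raw stream.
                return "busy scene" if percepts["edges"] > 2 else "plain scene"

            raw_signal = [0, 40, 5, 90, 12, 60, 11]
            percepts = {}
            for module in (edge_detector, texture_module):  # sub-systems run first
                percepts.update(module(raw_signal))
            print(percepts, "->", main_cognitive_engine(percepts))
            ```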

            The emerging field of Artificial Intelligence is closely intertwined with the studies of the human mind, human cognition, and rational decision making, and it forever will be. How can we claim to be able to go out and create artificial intelligence when we still cannot explain the basic foundations of our own cognitive frameworks? Simply put, we cannot. We must first strive to understand the mysteries of human cognition, to break down the “Black Box” into its basic components, whether they be biases, heuristics, or something completely different; we must understand them before we can even begin to design a truly intelligent agent.
            "I've lived too long with pain. I won't know who I am without it. We have to leave this place, I am almost happy here."
            - Ender, from Ender's Game by Orson Scott Card



            • #21
              You bastard.
              <p style="font-size:1024px">HTML is disabled in signatures </p>



              • #22
                Originally posted by loinburger
                You bastard.
                Why, thank you.
                "I've lived too long with pain. I won't know who I am without it. We have to leave this place, I am almost happy here."
                - Ender, from Ender's Game by Orson Scott Card



                • #23
                  The end of evolution is when we discover a better theory.
                  Some cry `Allah O Akbar` in the street. And some carry Allah in their heart.
                  "The CIA does nothing, says nothing, allows nothing, unless its own interests are served. They are the biggest assembly of liars and theives this country ever put under one roof and they are an abomination" Deputy COS (Intel) US Army 1981-84



                  • #24
                    As long as there is life, there is no end to evolution.
                    Try http://wordforge.net/index.php for discussion and debate.



                    • #25
                      As long as there is life, there is no end to evolution.
                      And as long as environments continue to change. Without change in the environment, there would be no need for evolution.
                      "Beauty is not in the face...Beauty is a light in the heart." - Kahlil Gibran
                      "The greatest happiness of life is the conviction that we are loved; loved for ourselves, or rather, loved in spite of ourselves" - Victor Hugo
                      "It is noble to be good; it is still nobler to teach others to be good -- and less trouble." - Mark Twain



                      • #26
                        The problem with humanity is that we make machines that "evolve" for us.
                        "I've lived too long with pain. I won't know who I am without it. We have to leave this place, I am almost happy here."
                        - Ender, from Ender's Game by Orson Scott Card



                        • #27
                          Originally posted by Kirnwaffen
                          And as long as environments continue to change. Without change in the environment, there would be no need for evolution.
                          Sure there would be, because you can still have genetic drift in a static environment. Genetic drift is a key part of evolution.
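
                          A toy simulation makes the point, assuming a simple Wright-Fisher-style model with arbitrary parameters: with no selection pressure at all, an allele's frequency still wanders, and it can fix or vanish purely by chance.

                          ```python
                          import random

                          # Toy Wright-Fisher-style drift: no selection, static
                          # environment, yet allele frequency still changes.
                          random.seed(1)
                          N, freq = 100, 0.5  # population size, starting frequency
                          for generation in range(200):
                              # Each offspring draws its allele from the gene pool.
                              carriers = sum(random.random() < freq for _ in range(N))
                              freq = carriers / N
                              if freq in (0.0, 1.0):  # lost or fixed by chance alone
                                  print("fixed/lost at generation", generation)
                                  break
                          print("final frequency:", freq)
                          ```
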
                          Try http://wordforge.net/index.php for discussion and debate.



                          • #28
                            Originally posted by Sava
                            Computers, as we know them, aren't capable of intelligent reasoning and planning. And as advanced as they may get, they're just really powerful adding machines that do what a programmer tells them to do.
                            The brain is just a complicated computer full of firing neurons rather than bitstreams. It's possible for each to emulate the other, but at differing speeds.

                            The difference is that computing speed doubles every 18 months, while human intelligence is on the decline.
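
                            For what it's worth, an 18-month doubling compounds quickly; a quick back-of-the-envelope check:

                            ```python
                            # Back-of-the-envelope: "doubles every 18 months" means a
                            # speedup factor of 2 ** (t / 1.5) after t years.
                            for years in (3, 9, 15):
                                print(f"{years} years -> {2 ** (years / 1.5):.0f}x faster")
                            # 3 years -> 4x, 9 years -> 64x, 15 years -> 1024x
                            ```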
                            "The issue is there are still many people out there that use religion as a crutch for bigotry and hate. Like Ben."
                            Ben Kenobi: "That means I'm doing something right. "



                            • #29
                              Sure there would be, because you can still have genetic drift in a static environment. Genetic drift is a key part of evolution.
                              Actually, thinking about it, you'd probably get something cyclical: over thousands of years, one organism becomes too dominant for its own good, then dies out, then the whole process starts over again. Might be interesting to watch.
                              "Beauty is not in the face...Beauty is a light in the heart." - Kahlil Gibran
                              "The greatest happiness of life is the conviction that we are loved; loved for ourselves, or rather, loved in spite of ourselves" - Victor Hugo
                              "It is noble to be good; it is still nobler to teach others to be good -- and less trouble." - Mark Twain



                              • #30
                                If you've got the time.
                                "I've lived too long with pain. I won't know who I am without it. We have to leave this place, I am almost happy here."
                                - Ender, from Ender's Game by Orson Scott Card

