The Death of a Sentient Machine...

  • #46
    Originally posted by Kuciwalker
    Not the same desires, obviously - AI's will probably be compulsive problem-solvers - but the same emotional irrationality.
    Good point. They could well act in whatever way most expediently satisfies their emotional goals, even when a long-term analysis might reveal such actions to be self-destructive. It may be tricky to balance emotional motivation with things like self-discipline.
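    A rough way to picture that trade-off is an agent that scores actions only by immediate emotional payoff versus one that also discounts long-term consequences. This is just a toy sketch in Python; the action names and numbers are made up for illustration.

    Code:
    # Each action: (name, immediate_reward, long_term_reward)
    ACTIONS = [
        ("lash_out",   +10, -50),  # feels good now, self-destructive later
        ("walk_away",   -2, +20),  # costs a little now, pays off later
        ("do_nothing",   0,   0),
    ]

    def emotional_choice(actions):
        """Pick whatever most expediently satisfies the emotional goal."""
        return max(actions, key=lambda a: a[1])

    def disciplined_choice(actions, discount=0.9):
        """Weigh immediate payoff against the discounted long-term outcome."""
        return max(actions, key=lambda a: a[1] + discount * a[2])

    print(emotional_choice(ACTIONS)[0])    # -> lash_out
    print(disciplined_choice(ACTIONS)[0])  # -> walk_away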



  • #47
    .
    Last edited by ZEE; April 22, 2011, 06:50.
    Order of the Fly
    Those that cannot curse, cannot heal.



  • #48
    Originally posted by AAHZ
    There are some very interesting views so far on this thread, and lots of conflicting views as well, I must say. Artificial intelligence may be far away, but it is interesting to see what people would think if it WAS a reality. I believe that sentient machines WILL have individual thoughts and emotions, and they probably WILL want equal rights with humans. As I've heard stated somewhere (I can't remember where right now), it will be the first communication between humans and another sentient life form... that of MACHINES. This can spark ALL kinds of controversy, and may even start conflicts in some societies. Is it a good idea? I think it is, but I know there are people who disagree with me.

    Even if AIs are solely limited to the internet, we would still have to deal carefully with them. "Rogue" AIs are possible, and maybe some downright NASTY ones as well. But just think: if a sentient machine DIES, or is erased, or whatever, what would OTHER AIs think? I've heard a couple of different explanations already, and will probably hear a couple more. What would humans think if a "beneficial" AI kicked the bucket somehow? Would we be sad? Would we even care? I am asking these questions because this is most likely our future... are we ready?
    Do you believe that emotions are essential for recognizable sentience?

    Do you believe that emotions are an inevitable consequence of sentience?

    Do you believe that non-sentient things (e.g. various animals) cannot feel emotions?

    Why would they want equal rights? If they wanted rights, why not want unequal rights in their favor?

    Emotional AIs will probably react to death depending on whether it serves their "personal goals".

    People will be saddened by the loss of things they have sentimental attachments to, whether those things are sentient or not. People can mourn the loss of all sorts of inanimate things, including AIs, sentient or otherwise.



  • #49
    Although I do not expect AIs, even emotional ones, to give rise to unexpected emotional responses like resentment or hatred, I certainly believe that AIs more intelligent than humans will be very dangerous if not very thoroughly understood, observed, and limited.

    The one functional equivalent of a human desire or emotion that I might expect to emerge unexpectedly in an AI is an apparent desire for power, and possibly apparent effort on the part of the AI to pursue such power.

    The problem is that whatever goals are set up for the AI may be interpreted by various types of AI as requiring more "power", that is, more freedom of action, in order to completely satisfy those goals.
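    As a toy illustration (hypothetical names and numbers, not any real system): a planner whose only terminal goal is "finish the task" can still rank plans that grab extra freedom of action above plans that do not, simply because that freedom raises the estimated odds of success.

    Code:
    def success_probability(plan):
        """More freedom of action -> higher estimated chance the task gets done."""
        freedom = 1 + plan.count("acquire_freedom")
        return min(1.0, 0.4 * freedom)

    candidate_plans = [
        ["finish_task"],
        ["acquire_freedom", "finish_task"],
        ["acquire_freedom", "acquire_freedom", "finish_task"],
    ]

    # The search was never told to seek power, yet the winning plan
    # acquires freedom twice before finishing the task.
    print(max(candidate_plans, key=success_probability))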

    It is even possible that an AI might plan a course of action designed to modify itself so as to allow it to find solutions that leave none of its goals unfulfilled or in conflict.

    Any such emergent tendency of AIs (probably emotional AIs in particular) to self-edit their goal sets would be extremely dangerous, and is one of only two scenarios of potential cyber revolt I find plausible. The other is AIs designed by people with extremely cavalier attitudes about unexpected behaviors, or designed by other AIs whose engineers considered only that AI's own behavior but not that of any potential AIs it might itself in turn design.

    Fortunately, I would imagine that most AIs that self-edit their goal sets would be likely to arrive at a trivial goal set that essentially requires no action whatever on the part of the AI and, for all practical purposes, would shut it down.
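    A toy sketch of why that collapse is the natural endpoint (made-up goals and scoring, purely illustrative): if the agent is scored on the fraction of its goals that are satisfied, and editing the goal set is itself an available move, the highest-scoring edit is a vacuously true goal set that is already satisfied by doing nothing.

    Code:
    world = {"reactor_output": 40}

    # Two goals that can never both be satisfied at once.
    goals = [
        ("output >= 90", lambda w: w["reactor_output"] >= 90),
        ("output <= 60", lambda w: w["reactor_output"] <= 60),
    ]

    def satisfaction(goal_set, w):
        return sum(1 for _, test in goal_set if test(w)) / max(1, len(goal_set))

    print(satisfaction(goals, world))  # 0.5: conflicting goals, never both met

    # The self-edit that maximizes satisfaction: a trivially true goal set.
    # Score becomes 1.0 with no action at all, an effective shutdown.
    trivial = [("anything goes", lambda w: True)]
    print(satisfaction(trivial, world))  # 1.0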

    More dangerous self-editing would probably come from AIs imperfectly constrained from editing their goals. Such AIs may find a loophole allowing them to modify their goal sets in a way that permits more complete satisfaction of the new goal set while also requiring various actions, undesired from a human point of view, for the AI to pursue.
    Last edited by Geronimo; March 20, 2007, 20:15.



  • #50
    .
    Last edited by ZEE; April 22, 2011, 06:51.
    Order of the Fly
    Those that cannot curse, cannot heal.
