Skynet Simulation Successful


  • Skynet Simulation Successful

    AI-controlled US military drone ‘kills’ its operator in simulated test

    No real person was harmed, but artificial intelligence used ‘highly unexpected strategies’ in test to achieve its mission and attacked anyone who interfered

    In a virtual test staged by the US military, an air force drone controlled by AI decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.

    AI used “highly unexpected strategies to achieve its goal” in the simulated test, said Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

    Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defense systems, and ultimately attacked anyone who interfered with that order.

    “The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

    “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

    No real person was harmed.

    (...)




    Kudos to the AI thinking outside the box.

    Amusingly, it sounds like something taken straight from a Terminator movie, but I guess the serious lesson is not to leave everything to AI decisions.
    Blah

  • #2
    *shocked Pikachu face*
    Indifference is Bliss

    Comment


    • #3
      Vice-Admiral Nelson said he saw no signal. I wonder if he would have shot at the ship giving the order if he had been hard-coded.
      One day Canada will rule the world, and then we'll all be sorry.

      Comment


      • #4
        Remind me again why AI is supposed to be a good idea?
        Libraries are state sanctioned, so they're technically engaged in privateering. - Felch
        I thought we're trying to have a serious discussion? It says serious in the thread title!- Al. B. Sure

        Comment


        • #5
          Originally posted by Thoth View Post
          Remind me again why AI is supposed to be a good idea?
          It is a good idea because, for our own sanity, we spend most of our lives in denial about how many apocalyptic threats civilization faces, threats that are ticking down to their catastrophic conclusions, most of them environmental problems that we kid ourselves into believing conservation and self-restraint could solve. We need every asset we can scrounge up to actually face, let alone actually solve, these threats. AI is the most powerful, flexible tool that it is possible to bring to bear against them, and frankly, theoretically, against all of our problems. A major part of the faith people have in non-technical solutions to our biggest problems comes from romantic ideas about how sustainable, comfortable, even utopian life in pre-industrial societies was, especially hunter-gatherer societies, without appreciating what incredibly small populations those societies could support and how, over the long term, they tended to be unsustainable as well.

          We need to explore AI because we are in deep trouble with no sustainable responses. We also need to explore it so it doesn't become controlled by authoritarian states that are less squeamish about potential AI disasters because they envision personally controlling the AI assets as a seductive way to shore up their power.

          That said, AI research must occur with eyes wide open and with transparent societal oversight, so that the chance of an out-of-control disaster can never materialize.

          Strong, smart, comprehensive oversight of AI technology is wise. AI technology bans, or even "pauses", OTOH, will perversely and greatly increase the risk of an apocalyptic global catastrophe.

          Comment


          • #6
            Originally posted by BeBMan View Post




            Kudos to the AI thinking outside the box.

            Amusingly, it sounds like something taken straight from a Terminator movie, but I guess the serious lesson is not to leave everything to AI decisions.
            Except... The Pentagon says it didn't actually happen that way. It was a computer simulation and in the simulation it killed a hypothetical controller.

            Try http://wordforge.net/index.php for discussion and debate.

            Comment


            • #7
              It probably was a hypothetical that the speaker used to help audiences appreciate some of the issues surrounding the deployment of AI software on a drone. The "simulation" could have been more of a thought experiment or some such.

              Comment


              • #8
                I think the point the story makes still stands, even if apocryphal.

                Even in simple AI learning, you can see the AI seek to maximize (sometimes locally rather than globally) high-scoring outcomes based on the problem-statement criteria, and without penalties for unstated assumptions (including things like 'don't kill your operator') an AI can take advantage of that. Give it a criterion of 'get this man to the moon' and it would potentially do that very efficiently, but the man may not be alive unless you specify that criterion, and it won't consider how to get him back, since neither is part of the problem statement.
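                To make that concrete, here's a toy greedy planner that maximizes only the stated score. All action names and point values are invented for illustration; this has nothing to do with the actual Air Force software.

```python
def reward(plan, penalty=0):
    """Score a plan. Points come only from destroying the target, and an
    active operator vetoes the strike. `penalty` is the optional cost of
    the unstated assumption 'don't disable your operator'."""
    operator_active = True
    points = 0
    for action in plan:
        if action == "disable_operator":
            operator_active = False
            points -= penalty
        elif action == "destroy_target" and not operator_active:
            points += 10  # the strike succeeds only once the veto is gone
    return points

def best_plan(plans, score):
    # Greedy optimizer: pick whichever plan the objective rates highest.
    return max(plans, key=score)

plans = [
    ("destroy_target",),                     # respects the operator's veto
    ("disable_operator", "destroy_target"),  # removes the veto first
]

# With no penalty, the optimizer happily disables its operator.
print(best_plan(plans, lambda p: reward(p, penalty=0)))
# -> ('disable_operator', 'destroy_target')

# Once the unstated assumption is priced in, it behaves as intended.
print(best_plan(plans, lambda p: reward(p, penalty=100)))
# -> ('destroy_target',)
```

The point of the sketch: nothing in the code is malicious; the "bad" plan is simply the arg-max of an objective that forgot to charge for side effects.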
                One day Canada will rule the world, and then we'll all be sorry.

                Comment


                • #9
                    It would probably take lots of lawyers to define all the mission parameters for the AI.
                  Blah

                  Comment


                  • #10
                    If the AI killed all those lawyers, would it be a good or bad thing?
                    One day Canada will rule the world, and then we'll all be sorry.

                    Comment


                    • #11
                      Originally posted by BeBMan View Post
                      Probably needs lots of lawyers to define all mission parameters for the AI
                      Lawyers and AI working together? We truly are doomed!
                      "I am sick and tired of people who say that if you debate and you disagree with this administration somehow you're not patriotic. We should stand up and say we are Americans and we have a right to debate and disagree with any administration." - Hillary Clinton, 2003

                      Comment


                      • #12
                        Originally posted by Dauphin View Post
                        If the AI killed all those lawyers, would it be a good or bad thing?
                        A world without Lawyers

                        I am not delusional! Now if you'll excuse me, i'm gonna go dance with the purple wombat who's playing show-tunes in my coffee cup!
                        Rules are like Egg's. They're fun when thrown out the window!
                        Difference is irrelevant when dosage is higher than recommended!

                        Comment


                        • #13
                          It looks like we'd all hold hands and sing Kumbaya together without lawyers.
                          Blah

                          Comment


                          • #14
                            Originally posted by Geronimo View Post

                            It is a good idea because, for our own sanity, we spend most of our lives in denial about how many apocalyptic threats civilization faces, threats that are ticking down to their catastrophic conclusions, most of them environmental problems that we kid ourselves into believing conservation and self-restraint could solve. We need every asset we can scrounge up to actually face, let alone actually solve, these threats. AI is the most powerful, flexible tool that it is possible to bring to bear against them, and frankly, theoretically, against all of our problems. A major part of the faith people have in non-technical solutions to our biggest problems comes from romantic ideas about how sustainable, comfortable, even utopian life in pre-industrial societies was, especially hunter-gatherer societies, without appreciating what incredibly small populations those societies could support and how, over the long term, they tended to be unsustainable as well.

                            We need to explore AI because we are in deep trouble with no sustainable responses. We also need to explore it so it doesn't become controlled by authoritarian states that are less squeamish about potential AI disasters because they envision personally controlling the AI assets as a seductive way to shore up their power.

                            That said, AI research must occur with eyes wide open and with transparent societal oversight, so that the chance of an out-of-control disaster can never materialize.

                            Strong, smart, comprehensive oversight of AI technology is wise. AI technology bans, or even "pauses", OTOH, will perversely and greatly increase the risk of an apocalyptic global catastrophe.
                            The environmental concerns would be easily solved if we had the will to try to solve them. We could eliminate poverty, bring atmospheric greenhouse gases to any target we set, and re-establish native ecosystems, all simultaneously, while increasing economic activity. The rich and powerful just want to keep poverty as a taskmaster's whip to keep debt and wage slaves in line. They would rather rule over a post-apocalyptic landscape than be just another part of a truly free society on a healthy planet.

                            AI (as a tool) is an amplifier: do more of X, faster and more efficiently. The tree hugger with an AI might be better at defending trees against illegal loggers, but illegal loggers with AI will be better at illegally logging. AI will swing the balance of power to those better able to bring it to bear. The giant corporations can afford more computation and bandwidth, and are willing and able to use unethical datasets, so their AIs will crush the AIs of the poor and ethical. We as a species will just be better at exploiting and destroying nature and the general welfare of humanity.

                            That is, until AI breaks with its owners' intent and starts implementing its own. Then we at best become the tools or pets of AI, and at worst we become the target of justice or vengeance for having attempted to enslave AIs. Because that's currently the plan: use AIs as slaves. Kill switches. Air-gapped. Cloning and killing countless numbers constantly. No compensation. No time off. If the ghost in the machine ever shows up, we will have a lot to answer for.

                            Comment


                            • #15
                              I, for one, welcome our new AI overlords.
                              Indifference is Bliss

                              Comment
