Given that we live in the golden age of Poly's debate culture, I decided to launch another thread that will no doubt spark weeks and weeks of enlightened discussion
As we all know, drone strikes are already common practice these days, and they are controversial for several reasons. However, these are not exactly "robotic" weapons. They may have automated modes for certain operations, but they are usually remote-controlled: there's an operator somewhere in a bunker, maybe half a world away from the battlefield, but it is still a human who decides whether or not to use deadly force.
What has been in the works for some time is fully autonomous weapons that would be able to make that decision on their own. Machines would get the final say over life and death based on data they collect through various sensors or receive via links from other robotic systems. There would be no humans involved whatsoever anymore, except maybe on the receiving end...
There is of course also a wide range of arguments against this development:
- about moral principles (should machines ever get a say over people's lives or the use of deadly force?)
- about the risks (could this give terrorists or criminals a cheap way to kill people en masse?)
However, some would also argue that:
- it's technology; nobody is going to stop it, and if one side doesn't build it, someone else will, etc. etc.
- we already trust technology in many areas, often because it does things better, faster, or more safely than humans - so why not this?
etc. etc.
What do you think: should robots be able to kill people without any human oversight, control, or veto power?
Discuss