Robot Warriors

This is going to be a reactionary post.  It may not be well informed.

So we have robots on patrol, eh?  Well, paintballs and pepper spray may be harmless (though as I’m hypersensitive to flavour and smell they could quite literally kill me, but I’m at the far end of the curve), but bullets are not, and it seems there will shortly come a time when robots are armed.  The Guardian says…

Before you scoff, however, take note that other, more fearsome robot soldiers are also on the way. Israel, for instance, is developing its own version, which goes by the name of the Guardium (no relation). This machine, which looks like an angry tractor, is being built by a company called G-Nius, and will also be highly autonomous, with similar features that enable it to move independently and shout at people. Indeed, it is reported to have been used on Israel’s borders already, and includes the potential for a live machine gun that can be programmed to return fire.

This means, to me, that there will be an autonomous thing on the surface of our world that we, we, have made, that has no moral value, no value at all except that of territory, that can kill people indiscriminately.

Why are we allowing this?

You see, I know something about computing and Artificial Intelligence (AI); one of the things I know is that the idea of intelligence combines many factors: environment, intent, evolution, and moral value.  These factors are not all present in practically any robot design today; there is only intent, and possibly environment.  On my undergraduate degree we were asked, once, to compare what we could do with what we should do.  For us it was in relation to security and banking; we saw that banking sites could be broken into or invaded by many means (this is why I don’t bank on the web, though I do shop on the web).  We could do that; should we do it?  No.  We should know about it so that if we get a job in banking security, we know where the holes are.

This is known in the trade as being a “White Hat.”  There are “Grey Hats” (in my secret heart I count myself among them, but I am not one), who hack things and report them to firms because they like hacking things, but have no criminal intent, even though technically what they do is initially illegal.  They are more useful than the insiders working for companies directly, because they do things that are outside the system.  “Black Hats”, as you can see, do these things for personal gain, or mischief, or any number of other reasons, but not for the benefit of others.

In this context it’s difficult to place these robots.  Are they programmed for the benefit of others?  Yes.  Should we be placing in the wild an autonomous or semi-autonomous device that recognises enough of its environment to kill humans, possibly with no intervention on the part of a human operator?  No.

No. No. No.

Some may ask: what is the difference between putting a soldier with a gun in the field to guard a stretch of fence and a robot doing the same thing?

Well, for one thing, a soldier is a human being, with all the environmental, developmental, social and moral baggage that goes with that.  I know that soldiers often have to be, well, de-humanised, because it is a hard thing to kill someone; even just to shoot at someone is hard.  That is why these situations are stressful: to be human is to be around humans, and to know and empathise with other humans.  If we empathise too much we cannot take life, which is occasionally necessary, because there is conflict; but we have a duty to minimise this loss of life.  This stress, this reluctance of people to take life, is why soldiers will often shoot away from a living target.  (I can’t quote the study because I cannot find it, but this is not an academic article, so you will have to take it on faith.)

(You only have to look at a news article like this to see that empathy and feeling fail sometimes.  The Israelis shoot at the Palestinians because the Palestinians shoot at the Israelis, and so it continues, when I’m pretty sure that IN FACT most (99%) Palestinians and Israelis just want to get on with life: get the bins collected, do their shopping, go to school, chat in a café without fear of getting shot or blown up.  There is a territory problem; I don’t understand it, but surely there should be a better, more peaceful way of figuring it out?  I know that people get very angry; anger leads to hate, hate leads to fear, and, well, you get the idea…)

This autonomous machine has no morals, no values, no conscience, no empathy.  That’s fine if it’s just paintballs and pepper spray, but when it’s bullets, rubber or otherwise, then someone will die just because, well, just because they were getting their football.

That isn’t acceptable to me.  We should all be rising up in protest; we should all be appalled.  But maybe that won’t happen until this thing, having arrived with a whimper, goes bang, and someone dies at the hand of no-one but a machine.


A little footnote.

Everything I have just said applies to landmines too.  The only difference is that they don’t move.


As usual I’m a bad proofreader; look out for missed negations.
