"rick++" <rick303@hotmail.com> wrote:
> I wonder if the "killer robot planes" used by the
> United States in Middle East wars count.
> People in these countries claim it's a form
> of terrorism to have planes come "out of the blue"
> and bomb them.  Counter-reports, like a recent
> 60 Minutes story, say the military goes through
> several levels of decision before allowing a kill.
> They take the collateral damage of killing civilians
> seriously.

Not really.  All the UAVs are currently operated remotely by humans, so
it's no different from having a soldier point a gun and pull the trigger,
or the air force dropping bombs.  The difference is that it's so easy to
kill without risking the life of the soldier that the "decision process"
is dangerously one-sided.  If you send an army in to do your killing for
you, you at least give the enemy a chance to kill a few soldiers before
they die.  With the UAVs, the enemy is simply slaughtered at no direct,
immediate risk to the attacking force.  I can't imagine the fear such a
technology must create in the people on the receiving end who have no
defense against it.  There's no doubt in my mind it's a clear and obvious
form of terrorism, even if the intent was not to be so.  The same is true
for any strongly one-sided advantage in any conflict.

Nonetheless, it's not yet an AI issue.  However, once the decisions about
whom to kill are made by some AI technology, we will have reached the
point of having to fear the AIs.  The US military is certainly exploring
those types of technologies.  However, the political disasters that can
result from killing the wrong target can be so great that I doubt we will
see any AI technology good enough to put into service that way.  That is,
other than some simple "look for any living human and kill them all" type
of stuff (which is basically just a smart bomb).

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/