On Mar 27, 12:42 pm, c...@kcwc.com (Curt Welch) wrote:
> "rick++" <rick...@hotmail.com> wrote:
> > I wonder if "killer robot planes" used by the
> > United States in Middle East wars count.
> > People in these countries claim it's a form
> > of terrorism to have planes come "out of the blue"
> > and bomb them.  Counter-reports, like a recent
> > 60 Minutes story, say the military goes through
> > several levels of decision before allowing a kill.
> > They take the collateral damage of killing civilians
> > seriously.
>
> Not really.  All the UAVs are remotely operated by humans currently, so it's
> no different than having a soldier pointing a gun and pulling the trigger or
> the air force dropping bombs.

> Nonetheless, it's not yet an AI issue.  However, once the decisions about
> who to kill and who not to kill are made by some AI technology,
> then we will have reached the point of having to fear the AIs.

A lot of the signal collection and discrimination is delegated to
automated sensors in orbit or on drones, and the data-mining is
routinely black-box computing.  Agreed, a human intervenes in the
decision to send the final bomb or bullet, although the bomb or
bullet itself is then fully computerized.
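
Just to make that concrete, here's a toy sketch in Python of that kind
of pipeline.  Every name, field, and threshold below is hypothetical,
made up for illustration; it's not how any real system works.  The
point is the shape of it: automated stages filter and score a track,
and the human only rules on what the machines have already
pre-selected.

    # Toy human-in-the-loop "kill chain".  All names and numbers
    # are hypothetical, for illustration only.
    from dataclasses import dataclass

    @dataclass
    class SensorTrack:
        track_id: str
        signature_score: float   # automated discrimination output (0..1)
        civilian_risk: float     # estimated collateral-damage risk (0..1)

    def automated_screen(track: SensorTrack,
                         min_signature: float = 0.9,
                         max_risk: float = 0.1) -> bool:
        """Black-box stage: only tracks passing both thresholds
        are ever shown to the human operator."""
        return (track.signature_score >= min_signature
                and track.civilian_risk <= max_risk)

    def human_decision(track: SensorTrack) -> bool:
        """The one genuinely human step: confirm or veto."""
        answer = input(f"Engage {track.track_id} "
                       f"(sig={track.signature_score:.2f}, "
                       f"risk={track.civilian_risk:.2f})? [y/N] ")
        return answer.strip().lower() == "y"

    def engage(track: SensorTrack) -> None:
        # Everything downstream of the human's click is
        # fully computerized.
        if automated_screen(track) and human_decision(track):
            print(f"Weapon released against {track.track_id}.")
        else:
            print(f"{track.track_id}: no engagement.")

    engage(SensorTrack("T-042", signature_score=0.95, civilian_risk=0.04))

Notice the human only ever sees tracks the automated screen has
already passed, so even the "human decision" is shaped by the
machines upstream of it.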

Just because the "A.I." isn't contained in a two-meter humanoid shell
doesn't mean that substantial parts of the system aren't highly
computerized and automated.

So when the bomb hits the wrong target, how much of the decision was
due to humans and how much was computer-aided?  It's not black and
white anymore.