"rick++" <rick303@hotmail.com> wrote:
> On Mar 27, 12:42 pm, c...@kcwc.com (Curt Welch) wrote:
> > "rick++" <rick...@hotmail.com> wrote:
> > > I wonder if "killer robot planes" used by the
> > > United States in Middle East wars count.
> > > People in these countries claim it's a form
> > > of terrorism to have planes come "out of the blue"
> > > and bomb them.  Counter-reports, like a recent
> > > 60 Minutes story, say the military goes through
> > > several levels of decision before allowing a kill.
> > > They take the collateral damage of killing civilians
> > > seriously.
> >
> > Not really.  All the UAVs are currently remotely operated by humans, so
> > it's no different from having a soldier pointing a gun and pulling the
> > trigger, or the air force dropping bombs.
>
> > Nonetheless, it's not yet an AI issue.  However, once the decisions
> > about who to kill and who not to kill are made by some AI technology,
> > then we will have reached the point of having to fear the AIs.
>
> A lot of the signal collection and discrimination is delegated to
> automated sensors in orbit or on drones, and the data mining is
> routinely black-box computing.  Agreed, a human intervenes in the
> decision to send the final bomb or bullet, although the bomb or
> bullet is then fully computerized.
>
> Just because the "A.I." isn't contained in a two-meter humanoid shell
> doesn't mean that substantial parts of the system aren't highly
> computerized and automated.
>
> So when the bomb hits the wrong target, how much of the decision was
> due to humans and how much was computer-aided?  It's not black and
> white anymore.

Yeah, if they are using very sophisticated AI data-mining technology to
calculate the probability that a given target is a valid target (face
recognition combined with other factors, like they do in the movies), then
a huge percentage of the responsibility could in fact fall on the AI.  I
don't know what they are actually doing in that sense.
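
To make concrete the kind of probability fusion I'm imagining, here's a
toy Python sketch.  It's entirely hypothetical (the evidence sources,
numbers, and function names are all made up; I have no idea what any real
system computes): it just combines a prior with a few independent
likelihood ratios in log-odds form, naive-Bayes style.

    import math

    def log_odds(p):
        return math.log(p / (1.0 - p))

    def combine_evidence(prior, likelihood_ratios):
        # Fuse a prior P(valid target) with independent evidence, each
        # given as a likelihood ratio P(e | valid) / P(e | invalid).
        total = log_odds(prior) + sum(math.log(lr)
                                      for lr in likelihood_ratios)
        return 1.0 / (1.0 + math.exp(-total))

    # Made-up numbers: a weak prior, a strong face match, a weaker
    # signals correlation.  Even then the posterior is far from certain.
    p = combine_evidence(0.05, [20.0, 3.0])
    print(round(p, 3))   # ~0.76

The point of the toy is that even several pieces of strong-sounding
evidence can leave real uncertainty, which is exactly what the human in
the loop is there to weigh.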

However, the final decision (I strongly suspect) is left to the human, so
the human can take all final responsibility for trusting, or not trusting,
the quality of the information the AI is giving him.  This is a very
different situation from, say, 5 AIs and 2 humans "voting" on the decision
to attack or not attack, which would create truly distributed
responsibility for the action, along with the AI risks that come with
giving the AIs that much control over that much power.
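
Here's another toy Python sketch, again purely hypothetical, just to pin
down what I mean by that voting arrangement: seven voters, strict
majority rules, so no single party carries the whole responsibility.

    def committee_decision(ai_votes, human_votes):
        # True = attack, False = hold.  A strict majority of all seven
        # voters decides, so responsibility is genuinely distributed.
        votes = list(ai_votes) + list(human_votes)
        return sum(votes) > len(votes) // 2

    # Example: 4 of 5 AIs vote attack, both humans vote hold -- the
    # machines outvote the humans and the strike goes ahead anyway.
    print(committee_decision([True, True, True, True, False],
                             [False, False]))   # True

Note that in the example the four AIs outvote both humans.  That's the
transfer of power I'm worried about: the humans are in the loop but no
longer in control.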

There's no doubt, as you say, that our machines play an increasingly
important role in all our decision processes, and as the machines get
smarter, we will have to watch how much power we give them.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/