On Mar 29, 11:00 pm, c...@kcwc.com (Curt Welch) wrote:
> Don Stockbauer <donstockba...@hotmail.com> wrote:
> > > On Mar 28, 3:01 pm, "rick++" <rick...@hotmail.com> wrote:
> > > On Mar 27, 12:42 pm, c...@kcwc.com (Curt Welch) wrote:
>
> > > > "rick++" <rick...@hotmail.com> wrote:
> > > > > I wonder if "killer robot planes" used by the
> > > > > United States in middle east wars count.
> > > > > People in these countries claim it's a form
> > > > > of terrorism to have planes come "out of the blue"
> > > > > and bomb them. Counter-reports, like a recent
> > > > > 60 Minutes story, say the military goes through
> > > > > several levels of decision before allowing a kill.
> > > > > They take the collateral damage of killing civilians
> > > > > seriously.
>
> > > > Not really. All the UAVs are remotely operated by humans currently,
> > > > so it's no different than having a soldier pointing a gun and
> > > > pulling the trigger, or the air force dropping bombs.
> > > > Nonetheless, it's not yet an AI issue. However, once the
> > > > decisions about who to kill and who not to kill are made by some
> > > > AI technology, then we will have reached the point of having to
> > > > fear the AIs.
>
> > > A lot of the signal collection and discrimination is delegated to
> > > automated sensors in orbit or on drones. And the data-mining is
> > > routinely black-box computing. Agreed, a human intervenes in the
> > > decision to send the final bomb or bullet, although the bomb or
> > > bullet itself is then fully computerized.
>
> > > Just because the "A.I." isn't contained in a two-meter humanoid
> > > shell doesn't mean that substantial parts of the system aren't
> > > highly computerized and automated.
>
> > > So when the bomb hits the wrong target, how much of the decision was
> > > due to humans and how much was computer-aided? It's not black and
> > > white anymore.
>
> > Must be nice to be able to kill safely from a sequestered location.
> > Let's hope that such power never falls into irresponsible hands.
>
> My understanding is that it's fairly easy to buy things like sniper rifles
> here in VA (the home of the NRA). :)
>
> We had the beltway sniper(s) killing 11 people a few years back using just
> such technology (killing from a safe distance). :)  Though it's not
> as powerful as a UAV with rockets, it's just as deadly.  The bottom line
> is that bad things do happen, but we deal with the problem long before we
> have global meltdowns, and in the end, not many people die (in the big
> picture).
>
> I think all the dangers of AI will work the same as the dangers of all past
> technologies.  People will worry about them. Some mistakes will be made from
> not worrying enough; people will be harmed or killed; we will learn from
> each mistake and add a little more protection as needed to reduce the odds
> of it getting worse.  Life goes on.  We use the AI as best we can without
> the risk getting too large.
>

What's the problem (er, issue)?  The Global Brain has to find some way
to keep its population under control.