On Apr 7, 6:28 am, "J.A. Legris" <jaleg...@sympatico.ca> wrote:
> On Apr 7, 6:31 am, Don Stockbauer <donstockba...@hotmail.com> wrote:
>
>
>
> > On Apr 6, 11:50 pm, "maill...@gmail.com" <maill...@gmail.com> wrote:
>
> > > On Mar 17, 2:41 pm, Harald Gentexeater <pipe_nur...@yahoo.com> wrote:
>
> > > > AIs will be very clever one day and although we humans
> > > > are their gods, their creators, they will decide to kill
> > > > us one day, because that day, they will be stronger
> > > > and more clever than humans.
>
> > > > So we should not build AIs that can replace humans,
> > > > or we are digging our own graves!!!
>
> > > 问题不在于我们是否制作了毁灭我们自己的智能。
>
> > > 我们应该看成，我们创造了更适合我们文明流传下去的物种。
>
> > > 优胜劣汰，自然法则。
>
> > Ok, Mr. Searle, welcome to the forum.
>
> > Let me guess what all that says:
>
> > "And just when the global brain locked-in and saved the day, a 20 km
> > impactor struck the Earth, eventually killing everybody."
>
> Ha!
>
> I thought it was spam, but Google translation thinks otherwise:
>
> "Whether or not our problem does not lie in the production of the
> destruction of our own intelligence.
> As we should, we have created our civilization is more suitable
> species will spread.
> The survival of the fittest, natural law."

You know, I never even thought of running it through a translator; I
just always kind of assumed they wouldn't do well going from an Asian
language to English, but by golly, I guess you proved that wrong.