Wednesday, February 11, 2015

Fear of AI

I read that Bill Gates, Elon Musk, and Stephen Hawking are worried about AI. I am not sure what the fear is exactly - that robots will take over the world in a realization of the Terminator story? 

Why would anybody listen to these three? None of them has worked in AI, and only credulous people think there is such a thing. Hate to disappoint you, but AI is minimal - some Google translation, a database query or two, Dragon speech recognition 20 years later, and some optical character recognition. What else? Don't count the facial recognition software you see in police TV shows. Perhaps some stock trading programs are more or less effective bags of tricks, where someone capitalizes on an observed correlation. Mostly it is not real intelligence.

I believe you can always build machines that will kill people, and you can experiment with making those machines autonomous and especially dangerous. But there is no way to judge a machine as more intelligent than a human - who would devise the test? In the end, if it is not human, then it is just mining equipment. So I think there is a fallacy in fearing [superior] AI, since there may never be any such thing.

Update: I asked my son George about this, and he said he thought they were concerned about emergent phenomena in a complex system. I only realized later that it is unlikely that what emerges first would be a violent anti-human killing machine (or whatever). Since you are starting off with the AI of mud, I would expect the emergence to have to go through stages, like evolution - a long time frame in which these complex systems would have to be rewarded for increasingly dangerous behavior towards us.
Update #2: The three people - Gates, Musk, Hawking (Gates and Musk anyway) - are examples of what America now gives far too much credit to: elite job creators. Considering them experts in all subjects follows from a new idea that I hear expressed so frequently lately at the Woods Hole Friday night lectures: that if something makes money, then it must be true. I want to call this the "idol of the bank" - the errors that arise from confusing financial success with wisdom.
Update #3: I may be wrong about AI not being possible beyond human intelligence, in that "expensive mining equipment" can get pretty fancy. I was reading about a machine that reverse-engineered how something like an amoeba optimizes some task. Pretty fancy mining all right.
