The invention of computers--based on the work of Alan Turing in the 1930s and John von Neumann in the 1950s--quickly gave rise to the notion of artificial intelligence, or AI: the claim that such nonhuman machines can exhibit intelligence because they mimic (or so proponents argue) what humans do when they act in ways we regard as evidence of intelligence.
From about the late 1960s to the mid-1980s there was a great deal of excitement and debate among philosophers, psychologists, learning theorists, and others concerning the possibility and status of AI. Mostly there were AI champions and AI detractors, with little middle ground. That controversy seems to have cooled of late, but new developments in computer engineering may now take us past those earlier debates. (Agre & Chapman 1987)
Research in the provocatively named field of artificial intelligence (AI) provokes spirited and divisive arguments from friends and foes alike.
The very concept of a "thinking machine" has provided fodder for the mills of philosophers, science fiction writers, and other thinkers of deep thoughts. Some postulate that it will lead to a frightening future in which superhuman machines rule the earth with humans as their slaves, while others foresee utopian societies supported by mechanical marvels beyond present ken. Cultural icons such as Lieutenant Commander Data, the superhuman android of Star Trek:
The Next Generation, show a popular willingness to accept intelligent machines as realistic possibilities in a technologically advanced future. (Albus 1996)
However, superhuman artificial intelligence is far from the current state of the art and probably beyond the range of projection for even the most optimistic AI researcher. This seeming lack of success has led many to think of the field of artificial intelligence as an overhyped failure--yesterday's news. Where, after all,