Work on AI began in the 1950s, when researchers first tried to use computers to model the way humans think. The new field quickly took off, but early progress stalled because nothing researchers produced truly resembled human intelligence.
Artificial intelligence is defined as the ability of a machine to think for itself.
Scientists and theorists continue to debate whether computers will ever actually be able to think for themselves (Liebowitz). The generally accepted view is that computers will think and do more in the future. AI has grown rapidly in the last ten years, chiefly because of advances in computer architecture. The term "artificial intelligence" was coined in 1956 by a group of scientists at their first meeting on the topic (Liebowitz). Early attempts at AI were neural networks modeled after those in the human brain. Success was minimal because computers of the time lacked the power needed to perform such large calculations.
AI is achieved using a number of different methods; the more popular implementations include neural networks and fuzzy logic. Any of these design structures requires a specialized computer system, often demanding the speed of a supercomputer or a high-end workstation.
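To make one of the methods named above concrete, here is a minimal sketch of a neural network at its smallest scale: a single artificial neuron (a perceptron) trained to compute logical AND. This is an illustrative example, not anything from the essay's sources; the function names, learning rate, and epoch count are all assumptions chosen for clarity.

```python
# Illustrative sketch only: a single-neuron "neural network" (perceptron)
# learning the logical AND function. Names and parameters are hypothetical.

def step(x):
    """Threshold activation: fire (1) if the weighted input is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Nudge weights toward correct outputs using the classic perceptron rule."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for logical AND
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]

w, b = train_perceptron(samples, labels)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for x1, x2 in samples]
```

Even this toy neuron hints at why early AI demanded so much computing power: a useful network needs thousands or millions of such units, each repeating this weight-update arithmetic many times over.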
Even more exotic is the software involved. Since very few applications are pre-written using AI, each company must write its own software to solve its particular problem. An easier way around this obstacle is to design an add-on (Barron 111). To tell that AI is present, we must be able to measure the intelligence being used. As a rough point of reference, a large supercomputer can only simulate a brain about the size of a fly's. It is surprising what a computer can do with that much intelligence.