I will examine the different generations of programming languages and discuss basic facts about each. I will also discuss the fifth-generation language, whose status as a true generation is disputed.
Computers can understand only ones and zeros, or binary language. The ones and zeros represent on and off signals to the computer and are known as bits. First-generation programming instructions were entered through the front panel switches of the computer system. On the earliest computers this was done by changing wires, dials, and switches. Later the bits could be entered using paper tapes, which looked like ticker tape from a telegraph, or punch cards. With these tapes and/or cards the machine was told what, how, and when to do something.
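As a modern illustration of the idea above, the sketch below (in Python, which of course did not exist in that era) shows how a number is nothing more than a pattern of on/off bits, which is exactly how a first-generation programmer had to think about every value.

```python
# A bit is a single one (on) or zero (off); eight bits form a byte.
# Render the number 42 as the bit pattern a first-generation
# programmer would have toggled in on front-panel switches.

value = 42
bits = format(value, "08b")    # the value as eight binary digits
print(bits)                    # 00101010

# Reassemble the number from its individual on/off signals.
total = 0
for bit in bits:
    total = total * 2 + (1 if bit == "1" else 0)
print(total)                   # 42
```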
To write a flawless program, a programmer needed very detailed knowledge of the specific computer he or she worked on. A small mistake could cause the computer to crash.
Originally, no translator was used to compile or assemble first-generation languages. The main benefit of programming in a first-generation language is that the code runs very fast and efficiently, since it is executed directly by the CPU.
Each CPU manufacturer has its own machine language. This means that programs written for one type of CPU will not work on any other type of CPU. PCs, Apples, and even different generations of Pentium processors all have their own machine languages, which are not compatible with one another.
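To make this incompatibility concrete, the sketch below compares the byte patterns for the same conceptual operation, loading the constant 1 into a register, on two unrelated CPU families. The byte values shown are the standard encodings for `mov eax, 1` on 32-bit x86 and `mov r0, #1` on classic 32-bit ARM; the point is simply that the two encodings share nothing.

```python
# The same conceptual operation -- load the constant 1 into a register --
# is encoded as completely different byte patterns on different CPUs.

x86_mov_eax_1 = bytes([0xB8, 0x01, 0x00, 0x00, 0x00])  # x86: opcode B8 + 32-bit immediate
arm_mov_r0_1 = (0xE3A00001).to_bytes(4, "little")       # ARM: one fixed-width 32-bit word

print(x86_mov_eax_1.hex())            # b801000000
print(arm_mov_r0_1.hex())             # 0100a0e3
print(x86_mov_eax_1 == arm_mov_r0_1)  # False
```

Even the basic shape differs: x86 instructions are variable-length, while classic ARM instructions are always exactly four bytes.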
Because first-generation languages were regarded as very user-unfriendly, people set out to look for something faster and easier to understand. The result was the birth of second-generation languages (2GLs) in the mid-1950s. These languages used symbolic names (mnemonics) instead of raw bit patterns and are known as assembly languages; the programs that translate them into machine code are called assemblers.
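The core job of an assembler can be sketched in a few lines: replace each symbolic mnemonic with its numeric opcode. The instruction set below is invented purely for illustration and does not correspond to any real CPU.

```python
# A minimal sketch of what an assembler does: translate human-readable
# mnemonics into the raw numeric opcodes a CPU would execute.
# The opcode table here is a made-up example, not a real instruction set.

OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Turn 'MNEMONIC operand' lines into a flat list of numeric codes."""
    program = []
    for line in lines:
        parts = line.split()
        program.append(OPCODES[parts[0]])           # mnemonic -> opcode
        program.extend(int(p) for p in parts[1:])   # operands pass through
    return program

source = ["LOAD 10", "ADD 20", "STORE 30", "HALT"]
print(assemble(source))  # [1, 10, 2, 20, 3, 30, 255]
```

The programmer works with memorable symbols like LOAD and ADD, while the machine still receives only numbers, which is exactly the convenience second-generation languages introduced.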