Playing games on computers first became possible with the introduction of minicomputers in the late 1950s. Freed from the IBM punch-card bureaucracy, programmers could for the first time explore the possibilities opened up by hands-on interaction with computers. Games were among the first programs attempted by the original "hackers," undergraduate members of MIT's Tech Model Railroad Club. The result, in 1962, was the collaborative development of the first computer game: Spacewar, an early version of what would later become the Asteroids arcade game, played on a $120,000 DEC PDP-1 (Levy, 1984; Wilson, 1992; Laurel, 1993). Computer designer Brenda Laurel points to this early recognition of the centrality of computer games as models of human-computer interaction:
Why was Spacewar the "natural" thing to build with this new technology? Why not a pie chart or an automated kaleidoscope or a desktop? Its designers identified action as the key ingredient and conceived Spacewar as a game that could provide a good balance between thinking and doing for its players.
They regarded the computer as a machine naturally suited for representing things that you could see, control, and play with. Its interesting potential lay not in its ability to perform calculations but in its capacity to represent action in which humans could participate (Laurel, 1993, p. 1).
As computers became more accessible to university researchers through the 1960s and 1970s, several genres of computer games emerged. Programmers developed chess programs sophisticated enough to defeat human players. The first computer text adventure, Adventure, was written in the mid-1970s by Will Crowther and later expanded at Stanford by Don Woods: by typing short phrases, you could control the adventures of a character trekking through a magical landscape while solving puzzles. And in 1970 Scientific American columnist Martin Gardner introduced Americans to LIFE, a simulation of cellular growth patterns devised by British mathematician John Conway.
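Conway's rules are simple enough to state in a sentence: a live cell survives to the next generation if it has two or three live neighbors, and a dead cell comes alive if it has exactly three. A minimal sketch of one generation in Python (the function and variable names are illustrative, not from any particular historical implementation):

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's LIFE.

    `live` is a set of (x, y) coordinates of live cells.
    """
    # Count, for every cell adjacent to a live cell, how many
    # live neighbors it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A three-cell "blinker" oscillates between horizontal and vertical:
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

Patterns like the blinker are what Gardner's readers explored by hand on graph paper; two applications of `step` return the blinker to its original position.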