TEXT 7. Computer generations

The first generation (1951-1959). The first generation of computers is usually thought of as beginning with the UNIVAC I in 1951. First-generation machines used vacuum tubes, their speed was measured in milliseconds (thousandths of a second), and data input and output were usually based on punched cards. First-generation computers typically filled a very large room and were used primarily for research.

In early 1951 the first UNIVAC I became operational at the Census Bureau. When it displaced IBM punched card equipment there, Thomas J. Watson Jr., the son of IBM’s founder, reacted quickly to move IBM into the computer age. The first computer acquired for data processing and record keeping by a business organization was another UNIVAC I, installed in 1954 at General Electric’s Appliance Park in Louisville, Kentucky. The IBM 650 entered service in Boston in late 1954. A comparatively inexpensive machine for that time, it was widely accepted, and it gave IBM the leadership in computer production in 1955.

In the period from 1954 to 1959, many businesses acquired computers for data processing purposes, even though these first-generation machines had been designed for scientific uses. Nonscientists generally saw the computer as an accounting tool, and the first business applications were designed to process routine tasks such as payrolls. The full potential of the computer was underestimated, and many firms used computers simply because it was the prestigious thing to do. But we should not judge the early users of computers too harshly. They were pioneers in the use of a new tool. They had to staff their computer installations with a new breed of workers, and they had to prepare programmes in a tedious machine language. In spite of these obstacles, the computer was a fast and accurate processor of mountains of paper.

The second generation (1959-1964). The invention of the transistor led to computers that were smaller and faster and had greater computing capacity. Second-generation machines, which began to appear in 1959, were about the size of a closet and operated in microseconds (millionths of a second). Internal memory was magnetic, and magnetic tapes and disks as well as punched cards were used for input, output, and storage. Computers were still fairly specialized: although they could now be used for business as well as scientific applications, one computer could not perform both tasks. The practice of writing applications programmes in machine language gave way to the use of higher-level programming languages. And the vacuum tube, with its relatively short life, gave way to the transistor, which had been developed at Bell Laboratories in 1947 by John Bardeen, William Shockley, and Walter Brattain.

The third generation and beyond. There is general agreement that the third generation began in 1964 with the introduction of the IBM System/360, which could handle both scientific and business computing. Computers shrank to the size of a large desk, and processing time shrank to nanoseconds (billionths of a second). Instead of individual transistors, as in the second generation, third-generation computers used integrated circuits, or ICs, which combined hundreds or even thousands of transistors on a single silicon chip. Instead of serving a single operator and doing just one task at a time, the computer could now work with several users simultaneously, each giving it a different task.

Innovation and expansion have continued in the computer industry, but it is hard to point to a specific development that marked the end of the third generation and the beginning of the fourth. Advances in chip design led to further miniaturisation, and ICs gave way to the microprocessor, the so-called "computer on a chip."

The personal computer revolution started in the mid-1970s, and in the last few years more and more PCs have been connected to other PCs and to minicomputer systems.
