Evolution of Electronic Computer Systems

Mechanical computers had two serious drawbacks: computer speed was limited by the inertia of the moving parts, and the transmission of information by mechanical means (gears, levers, etc.) was cumbersome and unreliable. In electronic computers, by contrast, the "moving parts" are electrons, and information can be transmitted by electric currents at speeds approaching the speed of light (300,000 km/s). The triode vacuum tube, invented in 1906 by Lee de Forest, enabled the switching of electrical signals at speeds far exceeding those of any mechanical device. The use of vacuum tube technology marked the beginning of the electronic era in computer design.

In the five decades since 1940, the computer industry has experienced four generations of development. Each computer generation is marked by a rapid change in the implementation of its building blocks: from relays and vacuum tubes (1940s-1950s) to discrete diodes and transistors (1950s-1960s), through small-scale and medium-scale integrated (SSI/MSI) circuits (1960s-1970s) to large-scale and very-large-scale integrated (LSI/VLSI) devices (1970s-1980s). Increases in device speed and reliability and reductions in hardware cost and physical size have greatly enhanced computer performance. However, better devices are not the sole factor contributing to high performance. The division of computer system generations is determined by the device technology, system architecture, processing mode, and languages used. We are currently (1989) in the fourth generation; the fifth generation has not materialized yet, but researchers are working on it.

The First Generation

First-generation machines of the early 1950s were mostly programmed in machine language. Machine language consists of strings of zeros and ones that act as instructions to the computer, specifying the desired electrical states of its internal circuits and memory banks. Obviously, writing a machine language program was extremely cumbersome, tedious, and time consuming. To make programming easier, symbolic languages were developed. Such languages enable instructions to be written with symbolic codes (called mnemonics, or memory aids) rather than strings of ones and zeros. The symbolic instructions are then translated into corresponding binary codes (machine language instructions). The first set of programs, or instructions, telling the computer how to do this translation was developed in 1952 by Dr. Grace M. Hopper at Remington Rand. (She was also instrumental in making COBOL the "official" language of U.S. government computing.) After this breakthrough, most first-generation computers were programmed in symbolic language.
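To make the translation step concrete, the following sketch shows a toy assembler in Python for an invented machine whose 16-bit instruction word holds a 4-bit opcode and a 12-bit address; the mnemonics, opcodes, and word format are illustrative assumptions, not any historical instruction set.

    # Toy assembler: translates symbolic instructions into binary
    # machine words. The three mnemonics, their opcodes, and the
    # 16-bit word layout are invented for illustration only.
    OPCODES = {
        "LOAD":  0b0001,   # copy a memory word into the accumulator
        "ADD":   0b0010,   # add a memory word to the accumulator
        "STORE": 0b0011,   # copy the accumulator into memory
    }

    def assemble(line):
        """Translate one instruction such as 'ADD 11' into a
        16-bit word: 4-bit opcode followed by a 12-bit address."""
        mnemonic, address = line.split()
        word = (OPCODES[mnemonic] << 12) | (int(address) & 0xFFF)
        return format(word, "016b")

    for line in ["LOAD 10", "ADD 11", "STORE 12"]:
        print(line, "->", assemble(line))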

First-generation computers had a central processing unit (CPU) that contained a set of high-speed registers used for temporary storage of data, instructions, and memory addresses. The control unit decoded the instructions, routed information through the system, and provided timing signals. Instructions were fetched and executed in two separate consecutive steps called the fetch cycle and execution cycle, respectively. Together they formed the instruction cycle.
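The two-step instruction cycle can be sketched as a simulator loop over the same invented machine used in the assembler sketch above; the instruction format, accumulator, and program counter here are again illustrative assumptions rather than any real first-generation design.

    # Sketch of the instruction cycle: each pass fetches the word
    # addressed by the program counter (fetch cycle), then decodes
    # and executes it (execution cycle). Invented format as above.
    memory = [0] * 64
    memory[0] = (0b0001 << 12) | 10   # LOAD 10
    memory[1] = (0b0010 << 12) | 11   # ADD 11
    memory[2] = (0b0011 << 12) | 12   # STORE 12
    memory[10], memory[11] = 7, 5     # the two operands

    pc, acc = 0, 0                    # program counter, accumulator
    for _ in range(3):
        instruction = memory[pc]      # fetch cycle
        pc += 1
        opcode = instruction >> 12    # execution cycle: decode...
        addr = instruction & 0xFFF
        if opcode == 0b0001:          # ...then execute LOAD
            acc = memory[addr]
        elif opcode == 0b0010:        # ADD
            acc += memory[addr]
        elif opcode == 0b0011:        # STORE
            memory[addr] = acc

    print(memory[12])                 # prints 12, i.e. 7 + 5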

First-generation computers had numerous design shortcomings, including these:

· Inefficient control of I/O operations resulted in poor overall system performance.

· Address modification schemes were inefficient.

· Because the instruction set was oriented toward numeric computations, programming of nonnumeric and logical problems was difficult.

· Facilities for linking programs, such as instructions for calling subroutines that automatically save the return address of the calling program, were not provided (see the sketch after this list).

· Floating-point arithmetic was not implemented, mainly due to the cost of the hardware needed.
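To illustrate the missing subroutine linkage noted in the fourth point, the sketch below shows, in the same illustrative Python style, what a call instruction that automatically saves the return address provides; on first-generation machines the programmer had to store that address by hand before every jump. The addresses and helper names are hypothetical.

    # Sketch of automatic return-address saving, the facility the
    # fourth shortcoming above says was absent. call() records where
    # execution should resume; ret() jumps back there.
    return_stack = []

    def call(pc, target):
        """Save the address of the next instruction, then jump."""
        return_stack.append(pc + 1)   # return address saved automatically
        return target                 # subroutine entry becomes the new pc

    def ret():
        """Resume the caller at the saved return address."""
        return return_stack.pop()

    pc = call(pc=100, target=200)     # enter the subroutine at 200
    pc = ret()                        # back just after the call
    print(pc)                         # prints 101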
