Saturday, November 27, 2021

Types or classification of computers - 1

The computer has passed through many stages of evolution from the days of the mainframe computers to the era of microcomputers. Computers have been classified based on different criteria. In this post, we shall classify computers based on four popular methods.

Objectives

The objectives of this post are to:

i. Classify computers based on size, type of signal, operations and purpose.

ii. Study the features that differentiate one class of computer from the others.

Categories of Computers

Although there are no industry standards, computers are generally classified in the following ways:

Based on operation or working principle or signal type

There are basically three types of electronic computers. These are the Digital, Analog and Hybrid computers.

Analog computer

It measures rather than counts. This type of computer sets up a model of a system. The common type represents its variables in terms of electrical voltages and sets up a circuit analogous to the equations connecting the variables. The answer is obtained either by using a voltmeter to read the value of the variable required, or by feeding the voltage into a plotting device. Analog computers hold data in the form of physical variables rather than numerical quantities. In theory, an analog computer gives an exact answer, because the answer has not been approximated to the nearest digit; in practice, when we try to read off the answer with a digital voltmeter, we often find that the accuracy is less than what the analog computer itself could have provided.

The analog computer is almost never used in business systems. It is used by scientists and engineers to solve systems of partial differential equations, and for controlling and monitoring systems in such areas as hydrodynamics, rocketry and production. There are two useful properties of this computer once it is programmed:

1. It is simple to change the value of a constant or coefficient and study the effect of such changes.

2. It is possible to link certain variables to a time pulse to study changes with time as a variable, and chart the result on an X-Y plotter.

Analog computers are used to process continuous data and represent variables by physical quantities. Any computer which solves problems by translating physical conditions such as flow, temperature, pressure, angular position or voltage into related mechanical or electrical circuits, as an analog of the physical phenomenon being investigated, is an analog computer: it takes analog quantities as input and produces analog values as output. Thus an analog computer measures continuously. Analog computers are very fast and produce their results very quickly, but those results are only approximately correct. All analog computers are special-purpose computers.
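To make the phrase "a circuit analogous to the equations" more concrete, here is a small Python sketch of my own (not taken from any real analog machine): it digitally emulates the continuous behaviour an analog computer would realize physically, letting a voltage on an assumed RC circuit stand for the variable in the equation dV/dt = -V/(R*C). The component values and step size are illustrative assumptions only.

# Hypothetical sketch: software emulation of the continuous model an analog
# computer sets up physically. A voltage V on an RC circuit represents the
# variable of the equation dV/dt = -V / (R*C).
R, C = 10_000.0, 1e-4        # assumed resistance (ohms) and capacitance (farads)
V = 5.0                      # initial "voltage", i.e. the variable's starting value
dt = 0.001                   # small time step; a real analog machine evolves continuously
for step in range(5000):     # emulate 5 seconds of circuit behaviour
    V += -V / (R * C) * dt   # the circuit "computes" this derivative as a physical effect
print(round(V, 4))           # read the answer off, as a voltmeter would (about 0.0337)

Changing R or C and re-running shows why property 1 above matters: on a real analog machine, studying the effect of a new coefficient is as simple as turning a dial.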

In analog computers, data is recognized as a continuous measurement of a physical property like voltage, speed or pressure. Readings on a dial or graphs are obtained as the output; for example, voltage, temperature and pressure can be measured in this way.

An analog computer is a form of computer that uses continuous physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved.

E.g.: Thermometer, Speedometer, Petrol pump indicator, Multimeter



Figure: Thermometer, Speedometer




Figure: Petrol pump indicator, Multimeter




These systems were the first type to be produced. An analog computer is an electronic machine capable of performing arithmetic functions on numbers which are represented by some physical quantity such as temperature, pressure or voltage. Analog refers to circuits or numerical values that have a continuous range. A popular analog computing device used in the 20th century was the slide rule.

According to the Merriam Webster Dictionary, computers in which continuously variable physical quantities, such as electrical potential, fluid pressure, or mechanical motion, are used to represent (analogously) the quantities in the problem to be solved are called analog computers.

Digital computer

A digital computer represents its variables in the form of digits. It counts rather than measures: the data it deals with, whether numbers, letters or other symbols, are converted into binary form on input to the computer.

The data then undergoes processing, after which the binary digits are converted back to alphanumeric form for output for human use. Because business applications like inventory control, invoicing and payroll deal with discrete values (separate, disunited, discontinuous), they are best processed with digital computers. As a result, digital computers are mostly used in commercial and business places today.

Figure: Digital Computer

A digital computer represents physical quantities with the help of digits or numbers. These numbers are used to perform arithmetic calculations and also to make logical decisions to reach a conclusion, depending on the data received from the user.

These are high speed electronic devices. These devices are programmable. They process data by way of mathematical calculations, comparison, sorting etc. They accept input and produce output as discrete signals representing high (on) or low (off) voltage state of electricity. Numbers, alphabets, symbols are all represented as a series of 1s and 0s.
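As a quick, self-contained illustration of that "series of 1s and 0s" (my own example, not part of the original material), the Python snippet below shows how a number and a letter end up in binary form inside a digital computer:

# A number and a letter, both reduced to eight binary digits (bits).
number = 29
letter = "A"
print(format(number, "08b"))       # 00011101 -> the value 29 as bits
print(format(ord(letter), "08b"))  # 01000001 -> 'A' via its character code, 65

Everything the machine handles, whether payroll figures or the text of an invoice, is ultimately stored and processed in this on/off form.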

A computer that performs calculations and logical operations with quantities represented as digits, usually in the binary number system.

Virtually all modern computers are digital. Digital refers to the processes in computers that manipulate binary numbers (0s or 1s), which represent switches that are turned on or off by electrical current. A bit can have the value 0 or the value 1, but nothing in between. A desk lamp can serve as an example of the difference between analog and digital. If the lamp has a simple on/off switch, then the lamp system is digital, because the lamp either produces light at a given moment or it does not. If a dimmer replaces the on/off switch, then the lamp is analog, because the amount of light can vary continuously from fully on to fully off and across all intensities in between. Digital computers are more common in use, and they will be our focus of discussion.

These computers deal with data in the form of numbers. They mainly operate by counting and performing arithmetic and logical operations on numeric data. Such computers are oriented towards solving many kinds of problems.

Hybrid computer [Analog + Digital]

In some cases, the user may wish to obtain the output from an analog computer as processed by a digital computer, or vice versa. To achieve this, a hybrid machine is set up in which the two are connected and the analog computer may be regarded as a peripheral of the digital computer.

In such a situation, a hybrid system attempts to gain the advantages of both the digital and the analog elements in the same machine. This kind of machine is usually a special-purpose device built for a specific task. It needs a conversion element which accepts analog inputs and outputs digital values; such converters are called digitizers. A converter from digital back to analog form is also needed. The hybrid machine has the advantage of giving real-time response on a continuous basis. Complex calculations can be dealt with by the digital elements, which require a large memory and give accurate results after programming. Hybrid computers are mainly used in aerospace and process control applications.
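To show what the conversion elements do, here is a hedged Python sketch of a digitizer (analog-to-digital converter) and its digital-to-analog counterpart. The 8-bit resolution and the 0 to 5 volt range are assumptions chosen purely for illustration, not values from the post.

# Map a continuous reading onto one of 256 digital levels, and back again.
def digitize(voltage, v_max=5.0, bits=8):
    levels = 2 ** bits - 1                  # 255 steps for an assumed 8-bit converter
    code = round(voltage / v_max * levels)  # nearest digital level
    return max(0, min(levels, code))        # clamp to the valid range

def to_analog(code, v_max=5.0, bits=8):
    return code / (2 ** bits - 1) * v_max   # digital-to-analog: back to a voltage

reading = 3.1416                            # continuous sensor value, in volts
code = digitize(reading)
print(code, round(to_analog(code), 4))      # 160 and roughly 3.1373 volts

The small difference between the original reading and the reconstructed voltage is exactly the quantization that the digital side introduces, which is why the analog side is still valued for continuous, real-time response.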

Figure: Hybrid Computer

Various specially designed computers with both digital and analog characteristics combine the advantages of analog and digital computers when working as a system. Hybrid computers are used extensively in process control systems, where it is necessary to have a close representation of the physical world.

The hybrid system provides the good precision that can be attained with analog computers and the greater control that is possible with digital computers, plus the ability to accept the input data in either form.

Hybrid computers are a combination of analog and digital computers. They combine the speed of analog computers with the accuracy of digital computers. They are mostly used in specialized applications where the input data is in analog form, i.e., measurements, which are converted into digital form for further processing. These computers accept data from sensors and produce output using conventional input/output devices.

Hybrid computers are capable of inputting and outputting both digital and analog signals. A hybrid computer system setup offers a cost-effective method of performing complex simulations. Many instruments used in medical science lie in this category.

A hybrid computer makes use of both analog and digital components and techniques. Such a computer requires analog-to-digital and digital-to-analog converters, which make both analog and digital data usable by it.

Digital computers could not deal with very large numbers, and so a computer with the characteristics of both analog and digital machines was created, known as the hybrid computer.

Post activity:

In this post we covered types or classification of computers in detail. The topic is tricky, so some repetition of sentences is observed, but trust me, they are there for a purpose [they convey other aspects of the topic with the same sentence structure].

If more detailed information is needed, please browse or search the internet for the above terms. All images are from Google search.

Keywords: Computer, type or classification of computer.

…till next post, bye-bye and take care.

For table of content click here

Friday, November 26, 2021

Profile of computer generations

The computer has evolved from a large-sized simple calculating machine to a smaller but much more powerful machine. The evolution of the computer to its current state is defined in terms of generations of computers. Each generation of computer is designed based on a new technological development, resulting in better, cheaper and smaller computers that are more powerful, faster and more efficient than their predecessors. Currently, there are five generations of computers. In the following subsections, we will discuss the generations of computers in terms of:

1. The technology used by them (hardware and software),

2. Computing characteristics (speed, i.e., number of instructions executed per second),

3. Physical appearance, and

4. Their applications.

                                                                    

First Generation (1940 to 1956): Using Vacuum Tubes

Hardware Technology: The first generation of computers used vacuum tubes (Figure) for circuitry and magnetic drums for memory. The input to the computer was through punched cards and paper tapes. The output was displayed as printouts.

Figure: Vacuum tube

Software Technology: The instructions were written in machine language. Machine language uses 0s and 1s for coding of the instructions. The first generation computers could solve one problem at a time.

Computing Characteristics: The computation time was in milliseconds.

Physical Appearance: These computers were enormous in size and required a large room for installation.

Application: They were used for scientific applications as they were the fastest computing device of their time.

Examples: UNIVersal Automatic Computer (UNIVAC), Electronic Numerical Integrator And Computer (ENIAC), and Electronic Discrete Variable Automatic Computer (EDVAC).

The first generation computers used a large number of vacuum tubes and thus generated a lot of heat. They consumed a great deal of electricity and were expensive to operate. The machines were prone to frequent malfunctioning and required constant maintenance. Since first generation computers used machine language, they were difficult to program.

Further Details: Bulky, vacuum-tube based and costly; used assembly language, which was translated to machine-level language for execution. These computers were used mainly for scientific calculations. Examples: ENIAC, EDSAC, EDVAC, UNIVAC.

 

Second Generation (1956 to 1963): Using Transistors

Hardware Technology: Transistors (Figure) replaced the vacuum tubes of the first generation of computers. Transistors allowed computers to become smaller, faster, cheaper, energy efficient and reliable. The second generation computers used magnetic core technology for primary memory. They used magnetic tapes and magnetic disks for secondary storage. The input was still through punched cards and the output using printouts. They used the concept of a stored program, where instructions were stored in the memory of computer.

Figure: Transistors

Software Technology: The instructions were written using the assembly language. Assembly language uses mnemonics like ADD for addition and SUB for subtraction for coding of the instructions. It is easier to write instructions in assembly language, as compared to writing instructions in machine language. High-level programming languages, such as early versions of COBOL and FORTRAN were also developed during this period.

Computing Characteristics: The computation time was in microseconds.

Physical Appearance: Transistors are smaller in size compared to vacuum tubes, thus, the size of the computer was also reduced.

Application: The cost of commercial production of these computers was very high, though less than the first generation computers. The transistors had to be assembled manually in second generation computers.

Examples: PDP-8, IBM 1401 and CDC 1604.

Second generation computers generated a lot of heat but much less than the first generation computers. They required less maintenance than the first generation computers.

Further Details: Smaller than vacuum based computers, but better performance-wise, used transistors instead of vacuum tubes. High level languages such as FORTRAN and COBOL were used. Punched cards continued to be used during this period. Computers, then, were used increasingly in business, industry and commercial organizations. Examples: IBM 7030, Honeywell 400.  

 

Third Generation (1964 to 1971): Using Integrated Circuits

Hardware Technology: The third generation computers used Integrated Circuit (IC) chips (see figure below). In an IC chip, multiple transistors are placed on a silicon chip. Silicon is a type of semiconductor. The use of IC chips increased the speed and efficiency of computers manifold. The keyboard and monitor were used to interact with the third generation computer, instead of punched cards and printouts.

Figure: IC chips

Software Technology: The keyboard and the monitor were interfaced through the operating system. The operating system allowed different applications to run at the same time. High-level languages were used extensively for programming, instead of machine language and assembly language.

Computing Characteristics: The computation time was in nanoseconds.

Physical Appearance: The size of these computers was quite small compared to the second generation computers.

Application: Computers became accessible to a mass audience. Computers were produced commercially, and were smaller and cheaper than their predecessors.

Examples: IBM 370, PDP 11.

The third generation computers used less power and generated less heat than the second generation computers. The cost of the computer reduced significantly, as individual components of the computer were not required to be assembled manually. The maintenance cost of the computers was also less compared to their predecessors.

Further Details: Small Scale Integration and Medium Scale Integration technology were implemented in CPUs, I/O processors, etc. Processors became faster and used magnetic core memories, which were later replaced by RAM and ROM. This is when microprogramming was introduced, as was operating system software. Database management, multi-user applications, and online systems like closed-loop process control, airline reservation, interactive query systems and automatic industrial control emerged during this period. Examples: System 360 Mainframe from IBM, PDP-8 Mini Computer from Digital Equipment Corporation.

 

Fourth Generation (1971 to present): Using Microprocessors

Hardware Technology: They use the Large Scale Integration (LSI) and the Very Large Scale Integration (VLSI) technology. Thousands of transistors are integrated on a small silicon chip using LSI technology. VLSI allows hundreds of thousands of components to be integrated in a small chip. This era is marked by the development of microprocessor.

A microprocessor is a chip containing millions of transistors and components, designed using LSI and VLSI technology. A microprocessor chip is shown in the figure below.

This generation of computers gave rise to the Personal Computer (PC). Semiconductor memory replaced the earlier magnetic core memory, resulting in fast random access to memory. Secondary storage devices like magnetic disks became smaller in physical size and larger in capacity. The linking of computers is another key development of this era.

The computers were linked to form networks that led to the emergence of the Internet. This generation also saw the development of pointing devices like mouse, and handheld devices.

Figure:  Microprocessors

Software Technology: Several new operating systems, like MS-DOS and MS Windows, were developed during this time. This generation of computers supported the Graphical User Interface (GUI). GUI is a user-friendly interface that allows the user to interact with the computer via menus and icons. High-level programming languages are used for writing programs.

Computing Characteristics: The computation time is in picoseconds.

Physical Appearance: They are smaller than the computers of the previous generation. Some can even fit into the palm of the hand.

Application: They became widely available for commercial purposes. Personal computers became available to the home user.

Examples: The Intel 4004 chip was the first microprocessor. The components of the computer like Central Processing Unit (CPU) and memory were located on a single chip.

In 1981, IBM introduced the first computer for home use. In 1984, Apple introduced the Macintosh.

The microprocessor has resulted in the fourth generation computers being smaller and cheaper than their predecessors. The fourth generation computers are also portable and more reliable.

They generate much lesser heat and require less maintenance compared to their predecessors.

GUI and pointing devices facilitate easy use and learning on the computer. Networking has resulted in resource sharing and communication among different computers.

Further Details: Microprocessors were introduced, where a complete processor and a large section of main memory could be implemented in a single chip. CRT screens, laser and ink-jet printers, scanners, etc. were developed, and so were LANs and WANs. C and UNIX were used. Examples: Intel's 8088, 80286, 80386, 80486, etc.; Motorola's 68000, 68030, 68040; Apple II; CRAY I/2/X/MP, etc.

 

Fifth Generation (Present and Next): Using Artificial Intelligence

The goal of fifth generation computing is to develop computers that are capable of learning and self-organization. The fifth generation computers use Super Large Scale Integrated (SLSI) chips that are able to store millions of components on a single chip. These computers have large memory requirements.

This generation of computers uses parallel processing that allows several instructions to be executed in parallel, instead of serial execution. Parallel processing results in faster processing speed. The Intel dual core microprocessor uses parallel processing.
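As a rough illustration of the idea (my own sketch, not tied to any particular fifth generation machine), the Python fragment below splits one computation into four slices and lets them run at the same time on separate processes:

# Parallel processing sketch: four workers each sum the squares of their own slice.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]            # split the work four ways
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))     # the slices execute in parallel
    print(total)                                       # same answer as a serial loop

The four-way split is an arbitrary choice here; the point is simply that several parts of one program execute simultaneously rather than one after another.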

The fifth generation computers are based on Artificial Intelligence (AI). They try to simulate the human way of thinking and reasoning. Artificial Intelligence includes areas like Expert System (ES), Natural Language Processing (NLP), speech recognition, voice recognition, robotics, etc.

Further Details: Computers use extensive parallel processing, multiple pipelines, multiple processors, etc. Portable notebook computers were introduced. They also started using object-oriented languages such as JAVA. Quantum mechanics and nanotechnology, as they become available, may radically change computers for all time.

Examples: IBM notebooks, Pentium PCs (Pentium 1/2/3/4/Dual core/Quad core), SUN workstations, Origin 2000, PARAM 10000, IBM SP/2.

 

Table: Generation of Computer hardware


Figures: Images of computers from all five generations, for comparison

Figure: 1st generation computer

Figure: 2nd generation computer

Figure: 3rd generation computer

Figure: 4th generation computer

Figure: 5th generation computer









Post activity:

In this post we covered the profiles of computer generations in detail. If more detailed information is needed, please browse or search the internet for the above terms. All images are from Google search.

Keywords: Computer, Generations of computer.

…till next post, bye-bye and take care.

For table of content click here

 

 

 

 

Thursday, November 25, 2021

Brief History and Evolution of Computers-3

Generations or historical overview of the computer

A complete history of computing would include a multitude of diverse devices such as the ancient Chinese abacus, the Jacquard loom (1805) and Charles Babbage’s “analytical engine” (1834). It would also include discussion of mechanical, analog and digital computing architectures. As late as the 1960s, mechanical devices, such as the Marchant calculator, still found widespread application in science and engineering. During the early days of electronic computing devices, there was much discussion about the relative merits of analog vs. digital computers. In fact, as late as the 1960s, analog computers were routinely used to solve systems of finite difference equations arising in oil reservoir modeling. In the end, digital computing devices proved to have the power, economics and scalability necessary to deal with large scale computations. Digital computers now dominate the computing world in all areas ranging from the hand calculator to the supercomputer and are pervasive throughout society. Therefore, this brief sketch of the development of scientific computing is limited to the area of digital, electronic computers.

The evolution of digital computing is often divided into generations. Each generation is characterized by dramatic improvements over the previous generation in the technology used to build computers, the internal organization of computer systems, and programming languages. Although not usually associated with computer generations, there has been a steady improvement in algorithms, including algorithms used in computational science. The following history has been organized using these widely recognized generations as mileposts.

3.1 First Generation Electronic Computers (1937 – 1953)

Three machines have been promoted at various times as the first electronic computers. These machines used electronic switches, in the form of vacuum tubes, instead of electromechanical relays. In principle the electronic switches were more reliable, since they had no moving parts that would wear out, but the technology was still new at that time and the tubes were comparable to relays in reliability. Electronic components had one major benefit, however: they could “open” and “close” about 1,000 times faster than mechanical switches.

The earliest attempt to build an electronic computer was by J. V. Atanasoff, a professor of physics and mathematics at Iowa State, in 1937. Atanasoff set out to build a machine that would help his graduate students solve systems of partial differential equations. By 1941, he and graduate student Clifford Berry had succeeded in building a machine that could solve 29 simultaneous equations with 29 unknowns. However, the machine was not programmable, and was more of an electronic calculator.

A second early electronic machine was Colossus, designed by Alan Turing for the British military in 1943. This machine played an important role in breaking codes used by the German army in World War II. Turing’s main contribution to the field of computer science was the idea of the Turing Machine, a mathematical formalism widely used in the study of computable functions. The existence of Colossus was kept secret until long after the war ended, and the credit due to Turing and his colleagues for designing one of the first working electronic computers was slow in coming.

The first general purpose programmable electronic computer was the Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of Pennsylvania. Work began in 1943, funded by the Army Ordnance Department, which needed a way to compute ballistics during World War II. The machine wasn’t completed until 1945, but then it was used extensively for calculations during the design of the hydrogen bomb. By the time it was decommissioned in 1955 it had been used for research on the design of wind tunnels, random number generators, and weather prediction. Eckert, Mauchly, and John von Neumann, a consultant to the ENIAC project, began work on a new machine before ENIAC was finished. The main contribution of EDVAC, their new project, was the notion of a stored program. There is some controversy over who deserves the credit for this idea, but none over how important the idea was to the future of general purpose computers.

ENIAC was controlled by a set of external switches and dials; to change the program required physically altering the settings on these controls. These controls also limited the speed of the internal electronic operations. Through the use of a memory that was large enough to hold both instructions and data, and using the program stored in memory to control the order of arithmetic operations, EDVAC was able to run orders of magnitude faster than ENIAC. By storing instructions in the same medium as data, designers could concentrate on improving the internal structure of the machine without worrying about matching it to the speed of an external control.

Regardless of who deserves the credit for the stored program idea, the EDVAC project is significant as an example of the power of the interdisciplinary projects that characterize modern computational science. By recognizing that functions, in the form of a sequence of instructions for a computer, can be encoded as numbers, the EDVAC group knew the instructions could be stored in the computer’s memory along with numerical data. The notion of using numbers to represent functions was a key step used by Goedel in his incompleteness theorem in 1931, work with which von Neumann, as a logician, was quite familiar. Von Neumann’s background in logic, combined with Eckert and Mauchly’s electrical engineering skills, formed a very powerful interdisciplinary team.

Software technology during this period was very primitive. The first programs were written out in machine code, i.e. programmers directly wrote down the numbers that corresponded to the instructions they wanted to store in memory. By the 1950s programmers were using a symbolic notation, known as assembly language, then hand translating the symbolic notation into machine code. Later programs known as assemblers performed the translation task.

As primitive as they were, these first electronic machines were quite useful in applied science and engineering. Atanasoff estimated that it would take eight hours to solve a set of equations with eight unknowns using a Marchant calculator, and 381 hours to solve 29 equations for 29 unknowns. The Atanasoff-Berry computer was able to complete the task in under an hour. The first problem run on the ENIAC, a numerical simulation used in the design of the hydrogen bomb, required 20 seconds, as opposed to forty hours using mechanical calculators. Eckert and Mauchly later developed what was arguably the first commercially successful computer, the UNIVAC; in 1952, 45 minutes after the polls closed and with 7% of the vote counted, UNIVAC predicted Eisenhower would defeat Stevenson with 438 electoral votes (he ended up with 442).

3.2 Second Generation (1954 – 1962)

The second generation saw several important developments at all levels of computer system design, from the technology used to build the basic circuits to the programming languages used to write scientific applications.

Electronic switches in this era were based on discrete diode and transistor technology with a switching time of approximately 0.3 microseconds. The first machines to be built with this technology include TRADIC at Bell Laboratories in 1954 and TX-0 at MIT’s Lincoln Laboratory. Memory technology was based on magnetic cores which could be accessed in random order, as opposed to mercury delay lines, in which data was stored as an acoustic wave that passed sequentially through the medium and could be accessed only when the data moved by the I/O interface.

Important innovations in computer architecture included index registers for controlling loops and floating point units for calculations based on real numbers. Prior to this accessing successive elements in an array was quite tedious and often involved writing self-modifying code (programs which modified themselves as they ran; at the time viewed as a powerful application of the principle that programs and data were fundamentally the same, this practice is now frowned upon as extremely hard to debug and is impossible in most high level languages). Floating point operations were performed by libraries of software routines in early computers, but were done in hardware in second generation machines.

During this second generation many high level programming languages were introduced, including FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important commercial machines of this era include the IBM 704 and 7094. The latter introduced I/O processors for better throughput between I/O devices and main memory.

The second generation also saw the first two supercomputers designed specifically for numeric processing in scientific applications. The term “supercomputer” is generally reserved for a machine that is an order of magnitude more powerful than other machines of its era. Two machines of the 1950s deserve this title. The Livermore Atomic Research Computer (LARC) and the IBM 7030 (aka Stretch) were early examples of machines that overlapped memory operations with processor operations and had primitive forms of parallel processing.

3.3 Third Generation (1963 – 1972)

The third generation brought huge gains in computational power. Innovations in this era include the use of integrated circuits, or ICs (semiconductor devices with several transistors built into one physical component), semiconductor memories starting to be used instead of magnetic cores, microprogramming as a technique for efficiently designing complex processors, the coming of age of pipelining and other forms of parallel processing, and the introduction of operating systems and time-sharing.

The first ICs were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit (or “chip”), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to 100 devices per chip. Multilayered printed circuits were developed and core memory was replaced by faster, solid state memories. Computer designers began to take advantage of parallelism by using multiple functional units, overlapping CPU and I/O operations, and pipelining (internal parallelism) in both the instruction stream and the data stream. In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to use functional parallelism. By using 10 separate functional units that could operate simultaneously and 32 independent memory banks, the CDC 6600 was able to attain a computation rate of 1 million floating point operations per second (1 Mflops). Five years later CDC released the 7600, also developed by Seymour Cray. The CDC 7600, with its pipelined functional units, is considered to be the first vector processor and was capable of executing at 10 Mflops. The IBM 360/91, released during the same period, was roughly twice as fast as the CDC 6600. It employed instruction look-ahead, separate floating point and integer functional units, and a pipelined instruction stream. The IBM 360-195 was comparable to the CDC 7600, deriving much of its performance from a very fast cache memory. The SOLOMON computer, developed by Westinghouse Corporation, and the ILLIAC IV, jointly developed by Burroughs, the Department of Defense and the University of Illinois, were representative of the first parallel computers. The Texas Instruments Advanced Scientific Computer (TI-ASC) and the STAR-100 of CDC were pipelined vector processors that demonstrated the viability of that design and set the standards for subsequent vector processors.

Early in this third generation, Cambridge and the University of London cooperated in the development of CPL (Combined Programming Language, 1963). CPL was, according to its authors, an attempt to capture only the important features of the complicated and sophisticated ALGOL. However, like ALGOL, CPL was large, with many features that were hard to learn.

In an attempt at further simplification, Martin Richards of Cambridge developed a subset of CPL called BCPL (Basic Combined Programming Language, 1967).

3.4 Fourth Generation (1972 – 1984)

The next generation of computer systems saw the use of large scale integration (LSI – 1,000 devices per chip) and very large scale integration (VLSI – 100,000 devices per chip) in the construction of computing elements. At this scale an entire processor could fit onto a single chip, and for simple systems the entire computer (processor, main memory, and I/O controllers) could fit on one chip. Gate delays dropped to about 1 ns per gate.

Semiconductor memories replaced core memories as the main memory in most systems; until this time the use of semiconductor memory in most systems was limited to registers and cache.

During this period, high speed vector processors, such as the CRAY 1, CRAY X-MP and CYBER 205 dominated the high performance computing scene. 

Computers with large main memory, such as the CRAY 2, began to emerge. A variety of parallel architectures began to appear; however, during this period the parallel computing efforts were of a mostly experimental nature and most computational science was carried out on vector processors. Microcomputers and workstations were introduced and saw wide use as alternatives to time-shared mainframe computers.

Developments in software include very high level languages such as FP (functional programming) and Prolog (programming in logic). These languages tend to use a declarative programming style as opposed to the imperative style of Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a mathematical specification of what should be computed, leaving many details of how it should be computed to the compiler and/or runtime system. These languages are not yet in wide use, but are very promising as notations for programs that will run on massively parallel computers (systems with over 1,000 processors).
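The contrast is easier to see with a tiny example. The declarative languages mentioned above were FP and Prolog; the sketch below uses Python for both styles purely as an illustration of the difference in emphasis, not as a reproduction of those languages:

# Imperative style: spell out *how* to compute the result, step by step.
total = 0
for x in range(1, 11):
    total += x * x

# Declarative/functional style: state *what* is wanted and leave the steps
# to the language and its runtime.
total_declarative = sum(x * x for x in range(1, 11))

print(total, total_declarative)   # both print 385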

Compilers for established languages started to use sophisticated optimization techniques to improve code, and compilers for vector processors were able to vectorize simple loops (turn loops into single instructions that would initiate an operation over an entire vector).
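The following short sketch (my own, using NumPy as a stand-in for what a vectorizing compiler does in hardware) shows the transformation being described: an explicit element-by-element loop replaced by a single operation over the entire vector.

import numpy as np

a = np.arange(1000, dtype=np.float64)
b = np.arange(1000, dtype=np.float64)

# Scalar loop: one addition per iteration.
c_loop = np.empty_like(a)
for i in range(len(a)):
    c_loop[i] = a[i] + b[i]

# Vectorized form: one whole-vector operation.
c_vec = a + b

print(np.array_equal(c_loop, c_vec))   # True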

Two important events marked the early part of this generation: the development of the C programming language and the UNIX operating system, both at Bell Labs. In 1972, Dennis Ritchie, seeking to meet the design goals of CPL and to generalize Thompson’s B, developed the C language. Thompson and Ritchie then used C to write a version of UNIX for the DEC PDP-11. This C-based UNIX was soon ported to many different computers, relieving users from having to learn a new operating system each time they changed computer hardware. UNIX or a derivative of UNIX is now a de facto standard on virtually every computer system.

An important event in the development of computational science was the publication of the Lax report. In 1982, the US Department of Defense (DOD) and the National Science Foundation (NSF) sponsored a panel on Large Scale Computing in Science and Engineering, chaired by Peter D. Lax. The Lax Report stated that aggressive and focused foreign initiatives in high performance computing, especially in Japan, were in sharp contrast to the absence of coordinated national attention in the United States. The report noted that university researchers had inadequate access to high performance computers. One of the first and most visible of the responses to the Lax report was the establishment of the NSF supercomputing centers. Phase I of this NSF program was designed to encourage the use of high performance computing at American universities by making cycles and training on three (and later six) existing supercomputers immediately available. Following this Phase I stage, in 1984 – 1985 NSF provided funding for the establishment of five Phase II supercomputing centers.

The Phase II centers, located in San Diego (San Diego Supercomputing Center); Illinois (National Center for Supercomputing Applications); Pittsburgh (Pittsburgh Supercomputing Center); Cornell (Cornell Theory Center); and Princeton (John von Neumann Center), have been extremely successful at providing computing time on supercomputers to the academic community. In addition they have provided many valuable training programs and have developed several software packages that are available free of charge. These Phase II centers continue to augment the substantial high performance computing efforts at the National Laboratories, especially the Department of Energy (DOE) and NASA sites.

3.5 Fifth Generation (1984 – 1990)

The development of the next generation of computer systems is characterized mainly by the acceptance of parallel processing. Until this time, parallelism was limited to pipelining and vector processing, or at most to a few processors sharing jobs. The fifth generation saw the introduction of machines with hundreds of processors that could all be working on different parts of a single program. The scale of integration in semiconductors continued at an incredible pace; by 1990 it was possible to build chips with a million components, and semiconductor memories became standard on all computers.

Other new developments were the widespread use of computer networks and the increasing use of single-user workstations. Prior to 1985, large scale parallel processing was viewed as a research goal, but two systems introduced around this time are typical of the first commercial products to be based on parallel processing. The Sequent Balance 8000 connected up to 20 processors to a single shared memory module (but each processor had its own local cache).

The machine was designed to compete with the DEC VAX-780 as a general purpose Unix system, with each processor working on a different user’s job. However, Sequent provided a library of subroutines that would allow programmers to write programs that would use more than one processor, and the machine was widely used to explore parallel algorithms and programming techniques.

The Intel iPSC-1, nicknamed “the hypercube”, took a different approach. Instead of using one memory module, Intel connected each processor to its own memory and used a network interface to connect processors. This distributed memory architecture meant memory was no longer a bottleneck and large systems (using more processors) could be built. The largest iPSC-1 had 128 processors. Toward the end of this period, a third type of parallel processor was introduced to the market. In this style of machine, known as data-parallel or SIMD, there are several thousand very simple processors. All processors work under the direction of a single control unit; i.e. if the control unit says “add a to b” then all processors find their local copy of a and add it to their local copy of b.

Machines in this class include the Connection Machine from Thinking Machines, Inc., and the MP-1 from MasPar, Inc. Scientific computing in this period was still dominated by vector processing. Most manufacturers of vector processors introduced parallel models, but there were very few (two to eight) processors in these parallel machines. In the area of computer networking, both wide area network (WAN) and local area network (LAN) technology developed at a rapid pace, stimulating a transition from the traditional mainframe computing environment towards a distributed computing environment in which each user has their own workstation for relatively simple tasks (editing and compiling programs, reading mail) but shares large, expensive resources such as file servers and supercomputers. RISC technology (a style of internal organization of the CPU) and plummeting costs for RAM brought tremendous gains in the computational power of relatively low cost workstations and servers. This period also saw a marked increase in both the quality and quantity of scientific visualization.

3.6 Sixth Generation (1990 to date)

Transitions between generations in computer technology are hard to define, especially as they are taking place. Some changes, such as the switch from vacuum tubes to transistors, are immediately apparent as fundamental changes, but others are clear only in retrospect. Many of the developments in computer systems since 1990 reflect gradual improvements over established systems, and thus it is hard to claim they represent a transition to a new “generation”, but other developments will prove to be significant changes.

In this section, we offer some assessments about recent developments and current trends that we think will have a significant impact on computational science.

This generation is beginning with many gains in parallel computing, both in the hardware area and in improved understanding of how to develop algorithms to exploit diverse, massively parallel architectures. Parallel systems now compete with vector processors in terms of total computing power, and most experts expect parallel systems to dominate the future.

Combinations of parallel/vector architectures are well established, and one corporation (Fujitsu) has announced plans to build a system with over 200 of its high end vector processors.

Manufacturers have set themselves the goal of achieving teraflops (10^12 arithmetic operations per second) performance by the middle of the decade, and it is clear this will be obtained only by a system with a thousand processors or more. Workstation technology has continued to improve, with processor designs now using a combination of RISC, pipelining, and parallel processing. As a result it is now possible to procure a desktop workstation that has the same overall computing power (100 megaflops) as fourth generation supercomputers. This development has sparked an interest in heterogeneous computing: a program started on one workstation can find idle workstations elsewhere in the local network to run parallel subtasks.

One of the most dramatic changes in the sixth generation is the explosive growth of wide area networking. Network bandwidth has expanded tremendously in the last few years and will continue to improve for the next several years. T1 transmission rates are now standard for regional networks, and the national “backbone” that interconnects regional networks uses T3.

Networking technology is becoming more widespread than its original strong base in universities and government laboratories as it is rapidly finding application in K-12 education, community networks and private industry. A little over a decade after the warning voiced in the Lax report, the future of a strong computational science infrastructure is bright.

The human drive to learn required innovations in equipment. Past inventions made future innovations possible: innovations, from graphics capabilities to parallel processing, have filtered down from the supercomputers to the mainframes. We can foresee the future of small computers by watching the developments in the larger machines. Various innovations, along with important points (at a glance), are given in the table below:

 

Post activity:

In this post we covered the historical overview of the computer in detail, without any images. If more detailed information is needed, please browse or search the internet for the above terms.

Keywords: Computer, Generations of computer.

…till next post, bye-bye and take care.

For table of content click here