Operating systems have been evolving through the years. In this excerpt from his book, Modern Operating Systems, Andrew Tanenbaum briefly looks at a few of the highlights. Since operating systems have historically been closely tied to the architecture of the computers on which they run, Dr. Tanenbaum looks at successive generations of computers to see what their operating systems were like. This mapping of operating system generations to computer generations is crude, but it does provide some structure where there would otherwise be none.
The first true digital computer was designed by the English mathematician Charles Babbage (1792–1871). Although Babbage spent most of his life and fortune trying to build his "analytical engine," he never got it working properly because it was purely mechanical, and the technology of his day could not produce the required wheels, gears, and cogs to the high precision that he needed. Needless to say, the analytical engine did not have an operating system.
As an interesting historical aside, Babbage realized that he would need software for his analytical engine, so he hired a young woman named Ada Lovelace, who was the daughter of the famed British poet Lord Byron, as the world's first programmer. The programming language Ada® is named after her.
1.2.1 The First Generation (1945–55) Vacuum Tubes and Plugboards
After Babbage's unsuccessful efforts, little progress was made in constructing digital computers until World War II. Around the mid-1940s, Howard Aiken at Harvard, John von Neumann at the Institute for Advanced Study in Princeton, J. Presper Eckert and John Mauchly at the University of Pennsylvania, and Konrad Zuse in Germany, among others, all succeeded in building calculating engines. The first ones used mechanical relays but were very slow, with cycle times measured in seconds. Relays were later replaced by vacuum tubes. These machines were enormous, filling up entire rooms with tens of thousands of vacuum tubes, but they were still millions of times slower than even the cheapest personal computers available today.
In these early days, a single group of people designed, built, programmed, operated, and maintained each machine. All programming was done in absolute machine language, often by wiring up plugboards to control the machine's basic functions. Programming languages were unknown (even assembly language was unknown). Operating systems were unheard of. The usual mode of operation was for the programmer to sign up for a block of time on the signup sheet on the wall, then come down to the machine room, insert his or her plugboard into the computer, and spend the next few hours hoping that none of the 20,000 or so vacuum tubes would burn out during the run. Virtually all the problems were straightforward numerical calculations, such as grinding out tables of sines, cosines, and logarithms.
By the early 1950s, the routine had improved somewhat with the introduction of punched cards. It was now possible to write programs on cards and read them in instead of using plugboards; otherwise, the procedure was the same.
1.2.2 The Second Generation (1955–65) Transistors and Batch Systems
The introduction of the transistor in the mid-1950s changed the picture radically. Computers became reliable enough that they could be manufactured and sold to paying customers with the expectation that they would continue to function long enough to get some useful work done. For the first time, there was a clear separation between designers, builders, operators, programmers, and maintenance personnel.
These machines, now called mainframes, were locked away in specially air-conditioned computer rooms, with staffs of professional operators to run them. Only big corporations or major government agencies or universities could afford the multimillion-dollar price tag. To run a job (i.e., a program or set of programs), a programmer would first write the program on paper (in FORTRAN or assembler), then punch it on cards. He would then bring the card deck down to the input room and hand it to one of the operators and go drink coffee until the output was ready.
When the computer finished whatever job it was currently running, an operator would go over to the printer and tear off the output and carry it over to the output room, so that the programmer could collect it later. Then he would take one of the card decks that had been brought from the input room and read it in. If the FORTRAN compiler was needed, the operator would have to get it from a file cabinet and read it in. Much computer time was wasted while operators were walking around the machine room.
Given the high cost of the equipment, it is not surprising that people quickly looked for ways to reduce the wasted time. The solution generally adopted was the batch system. The idea behind it was to collect a tray full of jobs in the input room and then read them onto a magnetic tape using a small (relatively) inexpensive computer, such as the IBM 1401, which was very good at reading cards, copying tapes, and printing output, but not at all good at numerical calculations. Other, much more expensive machines, such as the IBM 7094, were used for the real computing. This situation is shown in Fig. 1-1.
After about an hour of collecting a batch of jobs, the tape was rewound and brought into the machine room, where it was mounted on a tape drive. The operator then loaded a special program (the ancestor of today's operating system), which read the first job from tape and ran it. The output was written onto a second tape, instead of being printed. After each job finished, the operating system automatically read the next job from the tape and began running it. When the whole batch was done, the operator removed the input and output tapes, replaced the input tape with the next batch, and brought the output tape to a 1401 for printing off line (i.e., not connected to the main computer).
Figure 1-1 An early batch system. (a) Programmers bring cards to 1401. (b) 1401 reads batch of jobs onto tape. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries output tape to 1401. (f) 1401 prints output.
The structure of a typical input job is shown in Fig. 1-2. It started out with a $JOB card, specifying the maximum run time in minutes, the account number to be charged, and the programmer's name. Then came a $FORTRAN card, telling the operating system to load the FORTRAN compiler from the system tape. It was followed by the program to be compiled, and then a $LOAD card, directing the operating system to load the object program just compiled. (Compiled programs were often written on scratch tapes and had to be loaded explicitly.) Next came the $RUN card, telling the operating system to run the program with the data following it. Finally, the $END card marked the end of the job. These primitive control cards were the forerunners of modern job control languages and command interpreters.
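As an illustration (not reproduced from the book's figure), a deck for compiling and running a short FORTRAN program might be laid out roughly like this, with the run time, account number, and programmer name purely hypothetical:

$JOB, 10, 429754, J. PROGRAMMER
$FORTRAN
   ... FORTRAN source cards ...
$LOAD
$RUN
   ... data cards read by the program ...
$END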
Large second-generation computers were used mostly for scientific and engineering calculations, such as solving the partial differential equations that often occur in physics and engineering. They were largely programmed in FORTRAN and assembly language. Typical operating systems were FMS (the Fortran Monitor System) and IBSYS, IBM's operating system for the 7094.
Figure 1-2 Structure of a typical FMS job.
1.2.3 The Third Generation (1965–1980) ICs and Multiprogramming
By the early 1960s, most computer manufacturers had two distinct, and totally incompatible, product lines. On the one hand there were the word-oriented, large-scale scientific computers, such as the 7094, which were used for numerical calculations in science and engineering. On the other hand, there were the character-oriented, commercial computers, such as the 1401, which were widely used for tape sorting and printing by banks and insurance companies.
Developing and maintaining two completely different product lines was an expensive proposition for the manufacturers. In addition, many new computer customers initially needed a small machine but later outgrew it and wanted a bigger machine that would run all their old programs, but faster.
IBM attempted to solve both of these problems at a single stroke by introducing the System/360. The 360 was a series of software-compatible machines ranging from 1401-sized to much more powerful than the 7094. The machines differed only in price and performance (maximum memory, processor speed, number of I/O devices permitted, and so forth). Since all the machines had the same architecture and instruction set, programs written for one machine could run on all the others, at least in theory. Furthermore, the 360 was designed to handle both scientific (i.e., numerical) and commercial computing. Thus a single family of machines could satisfy the needs of all customers. In subsequent years, IBM has come out with compatible successors to the 360 line, using more modern technology, known as the 370, 4300, 3080, and 3090 series.
The 360 was the first major computer line to use (small-scale) Integrated Circuits (ICs), thus providing a major price/performance advantage over the second-generation machines, which were built up from individual transistors. It was an immediate success, and the idea of a family of compatible computers was soon adopted by all the other major manufacturers. The descendants of these machines are still in use at computer centers today. Nowadays they are often used for managing huge databases (e.g., for airline reservation systems) or as servers for World Wide Web sites that must process thousands of requests per second.
The greatest strength of the "one family" idea was simultaneously its greatest weakness. The intention was that all software, including the operating system, OS/360, had to work on all models. It had to run on small systems, which often just replaced 1401s for copying cards to tape, and on very large systems, which often replaced 7094s for doing weather forecasting and other heavy computing. It had to be good on systems with few peripherals and on systems with many peripherals. It had to work in commercial environments and in scientific environments. Above all, it had to be efficient for all of these different uses.
There was no way that IBM (or anybody else) could write a piece of software to meet all those conflicting requirements. The result was an enormous and extraordinarily complex operating system, probably two to three orders of magnitude larger than FMS. It consisted of millions of lines of assembly language written by thousands of programmers, and contained thousands upon thousands of bugs, which necessitated a continuous stream of new releases in an attempt to correct them. Each new release fixed some bugs and introduced new ones, so the number of bugs probably remained constant in time.
One of the designers of OS/360, Fred Brooks, subsequently wrote a witty and incisive book (Brooks, 1996) describing his experiences with OS/360. While it would be impossible to summarize the book here, suffice it to say that the cover shows a herd of prehistoric beasts stuck in a tar pit. The cover of Silberschatz et al. (2000) makes a similar point about operating systems being dinosaurs.
Despite its enormous size and problems, OS/360 and the similar third-generation operating systems produced by other computer manufacturers actually satisfied most of their customers reasonably well. They also popularized several key techniques absent in second-generation operating systems. Probably the most important of these was multiprogramming. On the 7094, when the current job paused to wait for a tape or other I/O operation to complete, the CPU simply sat idle until the I/O finished. With heavily CPU-bound scientific calculations, I/O is infrequent, so this wasted time is not significant. With commercial data processing, the I/O wait time can often be 80 or 90 percent of the total time, so something had to be done to avoid having the (expensive) CPU be idle so much.
The solution that evolved was to partition memory into several pieces, with a different job in each partition, as shown in Fig. 1-3. While one job was waiting for I/O to complete, another job could be using the CPU. If enough jobs could be held in main memory at once, the CPU could be kept busy nearly 100 percent of the time. Having multiple jobs safely in memory at once requires special hardware to protect each job against snooping and mischief by the other ones, but the 360 and other third-generation systems were equipped with this hardware.
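A rough, back-of-the-envelope model (not from the original text) makes the benefit concrete: if each job waits for I/O a fraction p of the time, and n such jobs sit in memory and wait independently, the CPU is idle only when all n are waiting at once, so

    CPU utilization ≈ 1 − p^n

With p = 0.8, a single job keeps the CPU busy only about 20 percent of the time, but three such jobs already raise utilization to 1 − 0.8^3 ≈ 49 percent, and five jobs to about 67 percent.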
Another major feature present in third-generation operating systems was the ability to read jobs from cards onto the disk as soon as they were brought to the computer room. Then, whenever a running job finished, the operating system could load a new job from the disk into the now-empty partition and run it. This technique is called spooling (from Simultaneous Peripheral Operation On Line) and was also used for output. With spooling, the 1401s were no longer needed, and much carrying of tapes disappeared.
Figure 1-3 A multiprogramming system with three jobs in memory.
Although third-generation operating systems were well suited for big scientific calculations and massive commercial data processing runs, they were still basically batch systems. Many programmers pined for the first-generation days when they had the machine all to themselves for a few hours, so they could debug their programs quickly. With third-generation systems, the time between submitting a job and getting back the output was often several hours, so a single misplaced comma could cause a compilation to fail, and the programmer to waste half a day.
This desire for quick response time paved the way for timesharing, a variant of multiprogramming, in which each user has an online terminal. In a timesharing system, if 20 users are logged in and 17 of them are thinking or talking or drinking coffee, the CPU can be allocated in turn to the three jobs that want service. Since people debugging programs usually issue short commands (e.g., compile a five-page procedure†) rather than long ones (e.g., sort a million-record file), the computer can provide fast, interactive service to a number of users and perhaps also work on big batch jobs in the background when the CPU is otherwise idle. The first serious timesharing system, CTSS (Compatible Time Sharing System), was developed at M.I.T. on a specially modified 7094 (Corbató et al., 1962). However, timesharing did not really become popular until the necessary protection hardware became widespread during the third generation.
After the success of the CTSS system, M.I.T., Bell Labs, and General Electric (then a major computer manufacturer) decided to embark on the development of a "computer utility," a machine that would support hundreds of simultaneous timesharing users. Their model was the electricity distribution system: when you need electric power, you just stick a plug in the wall, and, within reason, as much power as you need will be there. The designers of this system, known as MULTICS (MULTiplexed Information and Computing Service), envisioned one huge machine providing computing power for everyone in the Boston area. The idea that machines far more powerful than their GE-645 mainframe would be sold for a thousand dollars by the millions only 30 years later was pure science fiction. Sort of like the idea of supersonic trans-Atlantic undersea trains now.
†We will use the terms "procedure," "subroutine," and "function" interchangeably in this book.
MULTICS was a mixed success. It was designed to support hundreds of users on a machine only slightly more powerful than an Intel 386-based PC, although it had much more I/O capacity. This is not quite as crazy as it sounds, since people knew how to write small, efficient programs in those days, a skill that has subsequently been lost. There were many reasons that MULTICS did not take over the world, not the least of which is that it was written in PL/I, and the PL/I compiler was years late and barely worked at all when it finally arrived. In addition, MULTICS was enormously ambitious for its time, much like Charles Babbage's analytical engine in the nineteenth century.
To make a long story short, MULTICS introduced many seminal ideas into the computer literature, but turning it into a serious product and a major commercial success was a lot harder than anyone had expected. Bell Labs dropped out of the project, and General Electric quit the computer business altogether. However, M.I.T. persisted and eventually got MULTICS working. It was ultimately sold as a commercial product by the company that bought GE's computer business (Honeywell) and installed by about 80 major companies and universities worldwide. While their numbers were small, MULTICS users were fiercely loyal. General Motors, Ford, and the U.S. National Security Agency, for example, only shut down their MULTICS systems in the late 1990s, 30 years after MULTICS was released.
For the moment, the concept of a computer utility has fizzled out, but it may well come back in the form of massive centralized Internet servers to which relatively dumb user machines are attached, with most of the work happening on the big servers. The motivation here is likely to be that most people do not want to administrate an increasingly complex and finicky computer system and would prefer to have that work done by a team of professionals working for the company running the server. E-commerce is already evolving in this direction, with various companies running e-malls on multiprocessor servers to which simple client machines connect, very much in the spirit of the MULTICS design.
Despite its lack of commercial success, MULTICS had a huge influence on subsequent operating systems. It is described in (Corbató et al., 1972; Corbató and Vyssotsky, 1965; Daley and Dennis, 1968; Organick, 1972; and Saltzer, 1974). It also has a still-active Web site, http://www.multicians.org, with a great deal of information about the system, its designers, and its users.
Another major development during the third generation was the phenomenal growth of minicomputers, starting with the DEC PDP-1 in 1961. The PDP-1 had only 4K of 18-bit words, but at $120,000 per machine (less than 5 percent of the price of a 7094), it sold like hotcakes. For certain kinds of nonnumerical work, it was almost as fast as the 7094 and gave birth to a whole new industry. It was quickly followed by a series of other PDPs (unlike IBM's family, all incompatible) culminating in the PDP-11.
One of the computer scientists at Bell Labs who had worked on the MULTICS project, Ken Thompson, subsequently found a small PDP-7 minicomputer that no one was using and set out to write a stripped-down, one-user version of MULTICS. This work later developed into the UNIX® operating system, which became popular in the academic world, with government agencies, and with many companies.
The history of UNIX has been told elsewhere (e.g., Salus, 1994). Part of that story will be given in Chap. 10. For now, suffice it to say that because the source code was widely available, various organizations developed their own (incompatible) versions, which led to chaos. Two major versions developed: System V, from AT&T, and BSD (Berkeley Software Distribution), from the University of California at Berkeley. These had minor variants as well. To make it possible to write programs that could run on any UNIX system, IEEE developed a standard for UNIX, called POSIX, that most versions of UNIX now support. POSIX defines a minimal system call interface that conformant UNIX systems must support. In fact, some other operating systems now also support the POSIX interface.
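To make the idea of a minimal system call interface concrete, here is a small sketch (not from the original text) of a C program that uses only a handful of POSIX calls (open, read, write, and close) to copy one file to another; the file names are just placeholders:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    /* Open the source file read-only; create or truncate the destination. */
    int in = open("infile", O_RDONLY);
    int out = open("outfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0)
        return 1;

    /* Copy the file one buffer at a time, using only POSIX system calls. */
    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, (size_t) n);

    close(in);
    close(out);
    return 0;
}

Compiled unchanged, such a program should run on System V, BSD, Linux, MINIX, or any other system that implements the POSIX interface, which is exactly the portability the standard was designed to provide.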
As an aside, it is worth mentioning that in 1987, the author released a small clone of UNIX, called MINIX, for educational purposes. Functionally, MINIX is very similar to UNIX, including POSIX support. A book describing its internal operation and listing the source code in an appendix is also available (Tanenbaum and Woodhull, 1997). MINIX is available for free (including all the source code) over the Internet at the URL http://www.cs.vu.nl/~ast/minix.html.
The desire for a free production (as opposed to educational) version of MINIX led a Finnish student, Linus Torvalds, to write Linux. This system was developed on MINIX and originally supported various MINIX features (e.g., the MINIX file system). It has since been extended in many ways but still retains a large amount of underlying structure common to MINIX, and to UNIX (upon which the former was based). Most of what will be said about UNIX in this book thus applies to System V, BSD, MINIX, Linux, and other versions and clones of UNIX as well.
1.2.4 The Fourth Generation (1980–Present) Personal Computers
With the development of LSI (Large Scale Integration) circuits, chips containing thousands of transistors on a square centimeter of silicon, the age of the personal computer dawned. In terms of architecture, personal computers (initially called microcomputers) were not all that different from minicomputers of the PDP-11 class, but in terms of price they certainly were different. Where the minicomputer made it possible for a department in a company or university to have its own computer, the microprocessor chip made it possible for a single individual to have his or her own personal computer.
In 1974, when Intel came out with the 8080, the first general-purpose 8-bit CPU, it wanted an operating system for the 8080, in part to be able to test it. Intel asked one of its consultants, Gary Kildall, to write one. Kildall and a friend first built a controller for the newly released Shugart Associates 8-inch floppy disk and hooked the floppy disk up to the 8080, thus producing the first microcomputer with a disk. Kildall then wrote a disk-based operating system called CP/M (Control Program for Microcomputers) for it. Since Intel did not think that disk-based microcomputers had much of a future, when Kildall asked for the rights to CP/M, Intel granted his request. Kildall then formed a company, Digital Research, to further develop and sell CP/M.
In 1977, Digital Research rewrote CP/M to make it suitable for running on the many microcomputers using the 8080, Zilog Z80, and other CPU chips. Many application programs were written to run on CP/M, allowing it to completely dominate the world of microcomputing for about 5 years.
In the early 1980s, IBM designed the IBM PC and looked around for software to run on it. People from IBM contacted Bill Gates to license his BASIC interpreter. They also asked him if he knew of an operating system to run on the PC. Gates suggested that IBM contact Digital Research, then the world's dominant operating systems company. Making what was surely the worst business decision in recorded history, Kildall refused to meet with IBM, sending a subordinate instead. To make matters worse, his lawyer even refused to sign IBM's nondisclosure agreement covering the not-yet-announced PC. Consequently, IBM went back to Gates asking if he could provide them with an operating system.
When IBM came back, Gates realized that a local computer manufacturer, Seattle Computer Products, had a suitable operating system, DOS (Disk Operating System). He approached them and asked to buy it (allegedly for $50,000), which they readily accepted. Gates then offered IBM a DOS/BASIC package, which IBM accepted. IBM wanted certain modifications, so Gates hired the person who wrote DOS, Tim Paterson, as an employee of Gates' fledgling company, Microsoft, to make them. The revised system was renamed MS-DOS (MicroSoft Disk Operating System) and quickly came to dominate the IBM PC market. A key factor here was Gates' (in retrospect, extremely wise) decision to sell MS-DOS to computer companies for bundling with their hardware, compared to Kildall's attempt to sell CP/M to end users one at a time (at least initially).
By the time the IBM PC/AT came out in 1983 with the Intel 80286 CPU, MS-DOS was firmly entrenched and CP/M was on its last legs. MS-DOS was later widely used on the 80386 and 80486. Although the initial version of MS-DOS was fairly primitive, subsequent versions included more advanced features, including many taken from UNIX. (Microsoft was well aware of UNIX, even selling a microcomputer version of it called XENIX during the company's early years.) CP/M, MS-DOS, and other operating systems for early microcomputers were all based on users typing in commands from the keyboard. That eventually changed due to research done by Doug Engelbart at Stanford Research Institute in the 1960s. Engelbart invented the GUI (Graphical User Interface), pronounced "gooey," complete with windows, icons, menus, and mouse. These ideas were adopted by researchers at Xerox PARC and incorporated into machines they built.
One day, Steve Jobs, who co-invented the Apple computer in his garage, visited PARC, saw a GUI, and instantly realized its potential value, something Xerox management famously did not (Smith and Alexander, 1988). Jobs then embarked on building an Apple with a GUI. This project led to the Lisa, which was too expensive and failed commercially. Jobs' second attempt, the Apple Macintosh, was a huge success, not only because it was much cheaper than the Lisa, but also because it was user friendly, meaning that it was intended for users who not only knew nothing about computers but furthermore had absolutely no intention whatsoever of learning.
When Microsoft decided to build a successor to MS-DOS, it was strongly influenced by the success of the Macintosh. It produced a GUI-based system called Windows, which originally ran on top of MS-DOS (i.e., it was more like a shell than a true operating system). For about 10 years, from 1985 to 1995, Windows was just a graphical environment on top of MS-DOS. However, starting in 1995 a freestanding version of Windows, Windows 95, was released that incorporated many operating system features into it, using the underlying MS-DOS system only for booting and running old MS-DOS programs. In 1998, a slightly modified version of this system, called Windows 98, was released. Nevertheless, both Windows 95 and Windows 98 still contain a large amount of 16-bit Intel assembly language.
Another Microsoft operating system is Windows NT (NT stands for New Technology), which is compatible with Windows 95 at a certain level, but a complete rewrite from scratch internally. It is a full 32-bit system. The lead designer for Windows NT was David Cutler, who was also one of the designers of the VAX VMS operating system, so some ideas from VMS are present in NT. Microsoft expected that the first version of NT would kill off MS-DOS and all other versions of Windows since it was a vastly superior system, but it fizzled. Only with Windows NT 4.0 did it finally catch on in a big way, especially on corporate networks. Version 5 of Windows NT was renamed Windows 2000 in early 1999. It was intended to be the successor to both Windows 98 and Windows NT 4.0. That did not quite work out either, so Microsoft came out with yet another version of Windows 98, called Windows Me (Millennium edition).
The other major contender in the personal computer world is UNIX (and its various derivatives). UNIX is strongest on workstations and other high-end computers, such as network servers. It is especially popular on machines powered by high-performance RISC chips. On Pentium-based computers, Linux is becoming a popular alternative to Windows for students and increasingly many corporate users. (As an aside, throughout this book we will use the term "Pentium" to mean the Pentium I, II, III, and 4.) Although many UNIX users, especially experienced programmers, prefer a command-based interface to a GUI, nearly all UNIX systems support a windowing system called the X Window System, produced at M.I.T. This system handles the basic window management, allowing users to create, delete, move, and resize windows using a mouse. Often a complete GUI, such as Motif, is available to run on top of the X Window System, giving UNIX a look and feel something like the Macintosh or Microsoft Windows, for those UNIX users who want such a thing.
An interesting development that began taking place during the mid-1980s is the growth of networks of personal computers running network operating systems and distributed operating systems (Tanenbaum and Van Steen, 2002). In a network operating system, the users are aware of the existence of multiple computers and can log in to remote machines and copy files from one machine to another. Each machine runs its own local operating system and has its own local user (or users).
Network operating systems are not fundamentally different from single-processor operating systems. They obviously need a network interface controller and some low-level software to drive it, as well as programs to achieve remote login and remote file access, but these additions do not change the essential structure of the operating system.
A distributed operating system, in contrast, is one that appears to its users as a traditional uniprocessor system, even though it is actually composed of multiple processors. The users should not be aware of where their programs are being run or where their files are located; that should all be handled automatically and efficiently by the operating system.
True distributed operating systems require more than just adding a little code to a uniprocessor operating system, because distributed and centralized systems differ in critical ways. Distributed systems, for example, often allow applications to run on several processors at the same time, thus requiring more complex processor scheduling algorithms in order to optimize the amount of parallelism.
Communication delays within the network often mean that these (and other) algorithms must run with incomplete, outdated, or even incorrect information. This situation is radically different from a single-processor system in which the operating system has complete information about the system state.
1.2.5 Ontogeny Recapitulates Phylogeny
After Charles Darwin's book On the Origin of Species was published, the German zoologist Ernst Haeckel stated that "Ontogeny Recapitulates Phylogeny." By this he meant that the development of an embryo (ontogeny) repeats (i.e., recapitulates) the evolution of the species (phylogeny). In other words, after fertilization, a human egg goes through stages of being a fish, a pig, and so on before turning into a human baby. Modern biologists regard this as a gross simplification, but it still has a kernel of truth in it.
Something analogous has happened in the computer industry. Each new species (mainframe, minicomputer, personal computer, embedded computer, smart card, etc.) seems to go through the development that its ancestors did. The first mainframes were programmed entirely in assembly language. Even complex programs, like compilers and operating systems, were written in assembler. By the time minicomputers appeared on the scene, FORTRAN, COBOL, and other high-level languages were common on mainframes, but the new minicomputers were nevertheless programmed in assembler (for lack of memory). When microcomputers (early personal computers) were invented, they, too, were programmed in assembler, even though by then minicomputers were also programmed in high-level languages. Palmtop computers also started with assembly code but quickly moved on to high-level languages (mostly because the development work was done on bigger machines). The same is true for smart cards.
Now let us look at operating systems. The first mainframes initially had no protection hardware and no support for multiprogramming, so they ran simple operating systems that handled one manually loaded program at a time. Later they acquired the hardware and operating system support to handle multiple programs at once, and then full timesharing capabilities.
When minicomputers first appeared, they also had no protection hardware and ran one manually loaded program at a time, even though multiprogramming was well established in the mainframe world by then. Gradually, they acquired protection hardware and the ability to run two or more programs at once. The first microcomputers were also capable of running only one program at a time, but later acquired the ability to multiprogram. Palmtops and smart cards went the same route.
Disks first appeared on large mainframes, then on minicomputers, microcomputers, and so on down the line. Even now, smart cards do not have hard disks, but with the advent of flash ROM, they will soon have the equivalent of it. When disks first appeared, primitive file systems sprang up. On the CDC 6600, easily the most powerful mainframe in the world during much of the 1960s, the file system consisted of users having the ability to create a file and then declare it to be permanent, meaning it stayed on the disk even after the creating program exited. To access such a file later, a program had to attach it with a special command and give its password (supplied when the file was made permanent). In effect, there was a single directory shared by all users. It was up to the users to avoid file name conflicts. Early minicomputer file systems had a single directory shared by all users, and so did early microcomputer file systems.
Virtual memory (the ability to run programs larger than the physical memory) had a similar development. It first appeared in mainframes, then minicomputers and microcomputers, and gradually worked its way down to smaller and smaller systems. Networking had a similar history.
In all cases, the software development was dictated by the technology. The first microcomputers, for example, had something like 4 KB of memory and no protection hardware. High-level languages and multiprogramming were simply too much for such a tiny system to handle. As the microcomputers evolved into modern personal computers, they acquired the necessary hardware and then the necessary software to handle more advanced features. It is likely that this development will continue for years to come. Other fields may also have this wheel of reincarnation, but in the computer industry it seems to spin faster.