Unix’s development is no doubt one of the most important milestones in the history of computing. The operating system not only introduced some of today’s most elementary concepts in information technology, such as the hierarchically structured file system, it has also served as the basis for numerous other systems, like Apple’s macOS and iOS, or the open source Linux. In turn, this has led to the emergence of numerous derivatives, like Ubuntu, Debian, or mobile Android. But how exactly did Unix become one of the most influential operating systems, and why did its development team initially have to record its ideas exclusively on blackboards and notepads?

Multics joint project laid the foundations

In 1965, a working group presented their idea for a new operating system at the Joint Computer Conference. The group consisted of employees from the Massachusetts Institute of Technology (MIT), General Electric, and Bell Laboratories (Bell Labs), AT&T’s research and development department (part of Nokia since 2016). They named the operating system Multiplexed Information and Computing Service, or Multics for short. They pursued completely new approaches, focusing on time-sharing in particular: Multics was among the first systems to allow multiple users to work simultaneously on one computer by sharing the underlying processor’s computing time.

The Multics working group needed a computer with specific capabilities to get their project off the ground: on the one hand, it had to have a clearly structured instruction set so that the high-level IBM programming language PL/I, which was intended for development, could be used. On the other hand, it had to support the planned multi-user operation and work asynchronously to minimize performance losses in memory management. For this reason, the GE-635 and later the GE-645 from General Electric were selected. Development itself was carried out on the multi-user system CTSS, which MIT had developed in the early 1960s and which was already up and running. Delays in the development of the PL/I compiler, financial bottlenecks, internal differences, and growing external pressure eventually led Bell Labs to withdraw from the project in 1969.

Multics becomes Unix

Multics was developed further at MIT and later distributed commercially on Honeywell 6180 machines by Honeywell International Inc., which had acquired General Electric’s computer division (distribution continued until 1986). However, the computer scientist Ken Thompson, at the time an employee at Bell Labs, could not let go of the idea of a multi-user system: together with Dennis Ritchie and a small team at AT&T, he began planning his own system based on Multics principles. But the search for a suitable computer initially proved fruitless – and as Bell Labs resisted purchasing a suitable machine, the developers began recording their notes and progress for a planned file system on notepaper and blackboards.

Finally, a used PDP-7 minicomputer from Digital Equipment Corporation (DEC) was acquired for the planned project. On this computer system, which was “only” the size of a wall unit, the team quickly implemented the file system that had so far existed only on paper, along with valuable software tools like a command line (sh) and an editor (ed) – initially still in an assembly language (hardware-oriented, but simplified for humans). Early on, programs were prepared on a GE mainframe running GECOS (General Electric Comprehensive Operating System) and carried over to the PDP-7 on paper tape. Since the new operating system initially only allowed two users to work simultaneously (unlike Multics), the team named it Unics, punning on its template; the spelling soon settled on the final name Unix.

First B, then C: Unix gets its own high-level programming language

After the Bell Labs team had written Unix and some other elementary programs, it was time to replace the assembly language used for this purpose with a less complex alternative. However, the plan to adapt the pre-existing IBM language Fortran was rejected after a short time. Instead, work began on a language of their own, B, which was strongly oriented towards PL/I – the Multics language – and towards BCPL (Basic Combined Programming Language). Subsequently, Ritchie and his colleagues rewrote some of the system tools in this language until they eventually received a new PDP-11 computer in 1970 and were once again forced to rethink their approach. This was because the new system architecture was not word-oriented like the PDP-7 computer and the programming language B, but byte-oriented instead.

Over the next two years, Bell Labs developed the successor C, whose syntax and other features can be found in numerous modern programming languages like C++, JavaScript, PHP, or Perl. When the language was mature enough in 1973, the development team started rewriting the complete Unix kernel in C. The result was published by the Unix team in the mid-1970s. Since AT&T, as a state-regulated telecommunications monopoly, was not allowed to sell software at the time, Unix (Version 6) – a multi-user system that also allowed several simultaneous processes – was made available to all interested universities practically free of charge, including a C compiler, which made the system usable on almost all platforms.

Hardware-friendly and open source: Unix conquers the developer scene

With the release of Unix to educational institutions, the success of the new operating system quickly became more and more apparent – initially as a plaything in programming circles. Everyday work on the IBM mainframes and PDP machines of the time continued to run on native systems like RSX-11, RT-11, or IAS. For developers, though, the value of the source code that came with the kernel and the individual applications went beyond the learning effect: the low demands Unix made on hardware and its high usability encouraged experimentation and further development. This was particularly well received at the University of California, Berkeley (Thompson’s former university), where the fact that he took up a guest professorship in its newly created computer science faculty in 1976 probably played a significant role.

Bill Joy and Chuck Haley, two graduate students at the time, improved the Pascal system developed by Thompson and programmed a completely new text editor, ex – the predecessor of vi, which can still be found in the standard installation of unixoid systems today. In 1977, under Joy’s direction, a modified variant of Unix appeared that contained the improvements and further developments made up to that point. This Berkeley Software Distribution (BSD), which later integrated the TCP/IP network protocol into the Unix universe and, thanks to its own BSD license, was the first to meet the requirements of a free operating system, is considered one of the most important Unix modifications to date.

The 1980s: commercialization and the Unix wars

In the following years more and more modifications were developed, including ones focused on other aspects, like commercial use. For example, Microsoft acquired a Unix V7 license in 1979 to develop ports for Intel and Motorola processors, among other things. In the following year, the company released Xenix, which was originally planned as a standard operating system for PCs but ended up placing hardware demands that were too high. Microsoft finally placed further development in the hands of the software manufacturer SCO (Santa Cruz Operation) in order to concentrate on OS/2 and the further development of MS-DOS.

Bill Joy also jumped on the bandwagon in 1982 with his newly founded company Sun Microsystems and the proprietary BSD-based system SunOS (the predecessor of Solaris), which was specifically designed for use on servers and workstations.

However, the real battle for Unix fans was fought between AT&T, which by now had received permission for commercial distribution, and the University of California, Berkeley, which could point to valuable innovations thanks to its large number of supporting programmers. AT&T first tried to conquer the market with System III (1981) and then with the newly optimized System V (1983), both of which were based on Unix V7. Berkeley in turn released 4.2BSD at around the same time, for which 1,000 licenses were issued within 18 months. This made it much more popular than the paid System V, which lacked the Fast File System (FFS) and the network capability (thanks to integrated TCP/IP) of Berkeley’s variant.

With System V’s fourth release (1988), AT&T implemented these two and many other BSD features, as well as features from Xenix and SunOS, which led many users to switch to the commercial option.

Thanks, Penguin: Unix becomes a server solution

While different Unix systems initially competed with each other for sales and loyalty, Apple and Microsoft began their rivalry in the personal computer sector and later in the server field. While Microsoft won the race for home PCs, a system based on Unix concepts suddenly appeared on the scene in 1991: Linux, which in the following years won over the server environment. With the freely licensed kernel and the freely available GNU software, the developer Linus Torvalds fulfilled the desire for a competitive open source operating system and won over the market of the time. To this day, numerous Linux distributions like Debian, CentOS, Red Hat, or Ubuntu are used as system software for all kinds of servers, and Ubuntu in particular is becoming more and more popular for home PCs.

Linux is by far not the only important Unix successor in today’s software world: since Mac OS X 10.0 and Mac OS X Server 1.0, Apple’s operating system has used Darwin, a free BSD variant, as its substructure. Berkeley Unix itself is also represented with numerous other free derivatives like FreeBSD, OpenBSD, or NetBSD. And with iOS (same system base as macOS) and Android (based on the Linux kernel), the two most widely used operating systems for mobile devices also belong to the Unix family.

What is Unix? The most important milestone features of the system

When it was introduced, many of Unix’s distinguishing features were absolute novelties that not only influenced the development of unixoid systems and distributions, but were also taken up by competitors Apple and Microsoft in their operating systems. Especially when you take the following characteristics into consideration, Ritchie, Thompson, and their colleagues involved with Unix were pioneers of the modern operating system:

Hierarchical, universal file system

An elementary part of Unix right from the beginning was the hierarchically organized file system, which allows the user to structure files into folders. Any number of subdirectories can be assigned to the root directory, which is marked with a “/”. Following the basic principle of “Everything is a file,” Unix also maps drives, hard disks, terminals, or other computers as device files in the file system. Some derivatives, including Linux, even represent processes and their properties as files in the procfs virtual file system.
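These ideas are easy to observe on any modern unixoid system. A minimal sketch (assuming a Linux machine, where the procfs virtual file system is mounted at /proc):

```shell
ls -ld /                      # the root directory "/" at the top of the hierarchy
ls -l /dev/null               # a device, mapped as a character device file in /dev
head -n 1 /proc/self/status   # on Linux: the running process itself, readable as a file
```

The same commands (minus /proc) behave identically on BSD derivatives and macOS, since the universal file system model is shared across the Unix family.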

Multitasking

Another decisive factor in Unix’s success was the ability to execute several processes or programs simultaneously without them interfering with each other. The operating system was based on the method of preemptive multitasking right from the start: the scheduler (which is part of the operating system kernel) manages the individual processes through a priority system. It was only much later, during the 1990s, that Apple and Microsoft implemented comparable process management solutions.
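A small illustration from the shell (a sketch: `nice` only requests a lower priority, the kernel’s scheduler still makes the actual decisions):

```shell
sleep 1 &                 # first process, started in the background
nice -n 10 sleep 1 &      # second process with lowered priority (nice value 10)
jobs                      # the shell tracks both concurrently running processes
wait                      # block until the scheduler has run both to completion
echo "both processes finished"
```

Both `sleep` processes run at the same time; the total runtime is about one second, not two, because the kernel interleaves them.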

Multi-user system

A system that would allow several users to work simultaneously had already been Multics’ main goal. To achieve this, an owner is assigned to each program and process. Even if Unix was initially limited to two users, this feature was part of the system software right from the start. The advantage of this kind of multi-user system is not just that several people can share the performance of a single processor at the same time, but also the associated rights management: administrators can define access rights and available resources for different users. Initially, however, suitable hardware on the respective computer (such as a terminal for each user) was also a prerequisite.
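Ownership and rights management are still visible on every file today. A minimal sketch (the owner and group shown by `ls -l` will differ on your machine):

```shell
tmp=$(mktemp)          # create a scratch file; it belongs to the current user
chmod 640 "$tmp"       # owner: read+write, group: read, others: no access
ls -l "$tmp"           # permission string -rw-r-----, followed by owner and group
id -un                 # every process also runs on behalf of a specific user
rm "$tmp"              # clean up
```

The three-part permission model (user/group/others) dates back to early Unix and is the basis on which administrators grant or deny resources per user.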

Network capability

With 4.2BSD, Berkeley’s Unix became one of the first operating systems to integrate the internet protocol stack in 1983, providing the foundation for the internet, simple network configuration, and the ability to act as a client or server. In the late 1980s, the fourth release of System V (already mentioned) also added the legendary protocol family to the kernel of the commercial AT&T system. Windows, by contrast, would only support TCP/IP from version 3.11 (1993) onward, with an appropriate extension.

Platform independence

While other operating systems and their applications were still tailored to a specific processor type at the time Unix was created, the Bell Labs team pursued the approach of a portable system right from the start. Although the first language used was an assembly language, the project created a new high-level programming language, B, the predecessor of C, as soon as the basic structure of the system software was in place. The components written in C were, despite the included compiler, still strongly bound to the PDP machine architecture that Ritchie and his colleagues used as the basis for their work. Finally, with the strongly revised Unix V7 (1979), the operating system rightly earned its reputation as a portable system.

The Unix toolbox principle and the shell

Unix systems combine a multitude of useful tools and commands that are usually designed for just a few special tasks; Linux, for example, uses the GNU tools. For general problem solving, the principle is to find answers in a combination of standard tools instead of developing new special-purpose programs. The most important tool has always been the shell (sh), a text-oriented command interpreter that provides extensive programming options. This classic user interface can also be used without a graphical user interface, even if that kind of interface naturally increases user comfort. The shell offers some significant advantages for experienced users:

  • Simplified operation thanks to intelligent auto-completion
  • Copy and paste functions
  • Interactive (direct access) and non-interactive (execution of scripts) states are usable
  • Higher flexibility, since the individual applications (tools, commands) can be combined almost freely
  • Standardized and stable user interface, which is not always guaranteed with a GUI
  • Script work paths are automatically documented
  • Quick and easy implementation of applications
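The toolbox principle in action: instead of writing a dedicated program, small standard tools are chained together with pipes. A sketch that counts which login shells are assigned to the users in /etc/passwd:

```shell
cut -d: -f7 /etc/passwd |  # extract the 7th field: each user's login shell
  sort |                   # group identical shells next to each other
  uniq -c |                # collapse duplicates and count them
  sort -rn                 # list the most common shell first
```

Each tool does one small job; the pipe (`|`) hands one tool’s output to the next, which is exactly the kind of free combination the list above describes.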

Conclusion: if you want to understand how operating systems work, take a look at Unix

The rise of Microsoft and Apple, directly linked to their creators Bill Gates and Steve Jobs, is undoubtedly unparalleled. However, the foundation of these two giant success stories was laid by the pioneering work of Dennis Ritchie, Ken Thompson, and the rest of the Unix team between 1969 and 1974. Unix did not just produce its own derivatives, but also influenced other operating systems with concepts like the hierarchically structured file system, the powerful shell, or high portability. To implement the latter, one of the most influential programming languages in computer history, C, was developed almost in passing.

To appreciate the possibilities of the language and of operating systems in general, there is no better object of study than a Unix system. You do not even have to use one of the classic variants: Linux distributions like Gentoo or Ubuntu have adapted to modern demands without giving up basic features like maximum control over the system. You are somewhat more limited in your possibilities with the beginner-friendly macOS, which masters the balancing act between a powerful Unix base and a well-designed graphical user interface with flying colors.
