# Anatomy of an OS

In the first lecture, we will pose the question “what is an operating system” and give some short, but largely unsatisfactory answers. An operating system is a complex system, built from a number of components. Each of those components is described more easily than the entire operating system, and for this reason, we will attempt to understand an operating system as the sum of its parts.

│ Lecture Overview
│
│ 1. Components
│ 2. Interfaces
│ 3. Classification

After talking about what an operating system is, we will give more details about its components and afterwards move on to the interfaces between those components. Finally, we will look at classifying operating systems: this is another angle that could help us pin down what an operating system is.

│ What is an OS?
│
│ • the «software» that makes the hardware tick
│ • and makes other software easier to write
│
│ Also
│
│ • catch-all phrase for «low-level» software
│ • an «abstraction layer» over the machine
│ • but the boundaries are not always clear

Our first (very approximate) attempt at defining an OS is via its responsibilities towards hardware. Since it sits between hardware and the rest of the software, in some sense, it is what makes the hardware work. Modern hardware is rarely capable of achieving anything useful on its own: it needs to be programmed, and the basic layer of programming is provided by the operating system.

│ What is «not» (part of) an OS?
│
│ • firmware: (very) low-level software
│ ◦ much more «hardware-specific» than an OS
│ ◦ often executes on auxiliary processors
│ • application software
│ ◦ runs «on top» of an operating system
│ ◦ this is what you got the computer for
│ ◦ e.g. games, spreadsheets, photo editing, ...

One approach to understanding what «is» an operating system could be to look at things that are related, but are «not» an operating system, nor a part of one.
There is one additional software-ish layer below the operating system, usually known as «firmware». In a typical computer, many pieces of firmware are present, but most of them execute on «auxiliary» processors – e.g. those in a WiFi card, in the graphics subsystem, in the hard drive, and so on. In contrast, the operating system runs on the main processor. There is one piece of firmware that typically runs on the main CPU: on older systems, it is known as the BIOS; on modern systems, it is simply known as “the firmware”.

In the other direction, on top of an operating system, there is a whole bunch of «application software». While some software of this type might be «bundled» with an operating system, it is not, strictly speaking, a part of it. Application software comprises the programs that you use to get things done: text editors, word processors, but also programming IDEs (integrated development environments), computer games or web applications (dare I say Facebook?). And so on and so forth.

│ What does an OS do?
│
│ • «interact» with the user
│ • «manage» and multiplex «hardware»
│ • «manage» other «software»
│ • «organise» and manage «data»
│ • provide «services» for other programs
│ • enforce «security»

The tasks and duties that the operating system performs are rather varied. On one side, it takes care of the basic interaction with the user: a command interpreter, a graphical user interface, or a batch-mode job processing system with input provided as punch cards. Then there is the hardware, which needs to be managed and shared between individual programs and users. Installation of additional (application) software is another of the responsibilities of an operating system. Organisation and management of data is a major task as well: this is what file systems do. This again includes access control and sharing among users of the underlying hardware which stores the actual bits and bytes.
Finally, there is the third side that the operating system interfaces with: the application software. In addition to the user and the hardware, application programs need operating system services to be able to perform their function. Among other things, they need to interact with users and use hardware resources, after all. It is the operating system that is in charge of both.

## Components

In this section, we will consider what an operating system «consists of», as a means to understand what it «is».

│ What is an OS «made of»?
│
│ • the kernel
│ • system libraries
│ • system daemons / services
│ • user interface
│ • system utilities
│
│ Basically «every OS» has those.

Operating systems are made of a number of components, some more fundamental than others. Basically all of the above are present, in some form, in any operating system (excluding perhaps the smallest, most special-purpose systems). The kernel is the most fundamental and lowest layer of an operating system, while system libraries sit on top of it and use its services. They also broker the services of the kernel to user-level programs and provide additional services (which do not need to be part of the kernel itself).

The remaining layers are mostly made of programs in the usual sense: other than being a part of the operating system, there isn't much to distinguish them from user programs. The first category of such programs are «system daemons» or «system services»: typically long-running programs which react to requests or perform maintenance tasks. The user interface is slightly more complicated, in the sense that it consists of multiple sub-components that align with other parts listed here. The bullet point above summarises those parts of the user interface that are more or less standard programs, like the command interpreter.
│ The Kernel
│
│ • «lowest level» of an operating system
│ • executes in «privileged mode»
│ • manages all the other software
│ ◦ including other OS components
│ • enforces «isolation and security»
│ • provides «low-level services» to programs

The kernel is the lowest and arguably the most important part of an operating system. Its main distinction is that it executes in a special processor mode (often known as «privileged», «monitor» or «supervisor» mode). The main tasks of the kernel are management of basic hardware resources (processor, memory) and specifically providing those resources to other software running on the computer. This includes the rest of the operating system. Another crucial task is enforcement of isolation and security. The hardware typically provides means to isolate individual programs from each other, but it is up to the software (OS kernel) to set up those hardware facilities correctly and effectively. Finally, the kernel often provides the lowest level of various services to the upper layers. Those are provided mainly in the form of «system calls», and mainly relate (directly or indirectly) to hardware access.

│ System Libraries
│
│ • form a layer above the OS kernel
│ • provide «higher-level» services
│ ◦ use kernel services behind the scenes
│ ◦ «easier to use» than the kernel interface
│ • typical example: ‹libc›
│ ◦ provides C functions like ‹printf›
│ ◦ also known as ‹msvcrt› on Windows

One rung above the kernel reside system libraries: among other things, they provide an interface between the kernel and higher levels of the system. The interface provided by the library to the application is also one level of abstraction above kernel services, which typically makes them easier to use.
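To make the layering concrete, here is a minimal sketch, using Python for brevity (the file paths are made up for the example). Python's high-level file objects play a role analogous to C stdio functions like ‹printf› sitting on top of ‹libc›, while ‹os.open› and ‹os.write› are thin wrappers directly over the ‹open› and ‹write› system calls:

```python
import os

# High-level, buffered library interface (analogous to C stdio):
with open("/tmp/osdemo_hi.txt", "w") as f:
    f.write("hello via the library layer\n")

# Low-level interface: os.open/os.write are thin wrappers around the
# open() and write() system calls and operate on raw file descriptors.
fd = os.open("/tmp/osdemo_lo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello via the system call layer\n")
os.close(fd)

# Both paths end up in the same kernel service; the library layer only
# adds conveniences like buffering and text encoding on top.
```

Tracing either variant with a tool like ‹strace› would show the same ‹write› system call being issued in the end.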
Like all libraries, system libraries are «linked» into other programs and effectively become their part: as such, library code executes with the same privileges as application code. For functionality that is not purely computational, system libraries need to communicate with other parts of the operating system: either the kernel, or other privileged components (system services, also known as daemons).

│ System Daemons
│
│ • programs that run in the «background»
│ • they either directly «provide services»
│ ◦ but daemons are different from libraries
│ ◦ we will learn more in later lectures
│ • or perform «maintenance» or periodic «tasks»
│ • or perform tasks «requested by the kernel»

Daemons are long-running system programs: they take care of tasks which need to be done continuously or periodically, but at the same time do not need to reside in the kernel. This includes things like delivery of internet mail, remote access to the system (e.g. a secure shell daemon), synchronisation of the system clock (a network time protocol daemon), configuration and leasing of network addresses (a dynamic host configuration protocol daemon), printer spooling, domain name service, parts of the network file system, hardware health monitoring, system-wide logging, and so on.

│ User Interface
│
│ • mediates user-computer «interaction»
│ • the main «shell» is typically part of the OS
│ ◦ command line on UNIX or DOS
│ ◦ graphical interfaces with a desktop and windows
│ ◦ but also buttons on your microwave oven
│ • also «building blocks» for application UI
│ ◦ buttons, tabs, text rendering, OpenGL...
│ ◦ provided by system libraries and/or daemons

In most systems, application programs cannot directly drive hardware – it is therefore up to the operating system to provide an interface between the user (who operates the hardware) and the application program.
This includes user inputs (like keystrokes, mouse movement, touchpad or touchscreen events and the like) and relaying the outputs of the application to the user (printing text, drawing windows on the screen, audio output, etc.).

│ System Utilities
│
│ • small programs required for OS-related tasks
│ • e.g. system configuration
│ ◦ things like the registry editor on Windows
│ ◦ or simple text editors
│ • filesystem maintenance, daemon management, ...
│ ◦ programs like ‹ls›/‹dir› or ‹newfs› or ‹fdisk›
│ • also bigger programs, like file managers

Not all ‘short-running’ (i.e. non-daemon) programs are application software. There are a number of utilities which aid with the management of the operating system itself, the configuration of services, of the underlying hardware and so on. Those utilities typically use the same type of interface that application software does – whether it is a command-driven interface or a graphical one.

The distinction between bundled application software and system utilities is sometimes blurry: the «file explorer» in Windows is a fairly big and complicated program, but it also serves a rather central role in day-to-day use of the system. It can be imagined, however, that someone would take a program like the Explorer and port it to a different operating system. Perhaps the effort would be non-trivial, but the program would probably not appear out of place in another GUI-based OS.

There are, however, a number of small programs with a clear-cut purpose, which are much easier to classify, like the network configuration tool ‹ifconfig› or the disk partitioning tool ‹fdisk›. Likewise with tools like ‹fsck› (or ‹chkdsk› on Windows), which are quite meaningless outside of the operating system that they came with.

│ Optional Components
│
│ • bundled «application» software
│ ◦ web browser, media player, ...
│ • (3rd-party) «software management»
│ • a «programming» environment
│ ◦ e.g. a C compiler & linker
│ ◦ C header files &c.
│ • source code

It is often the case that an operating system comes bundled with programs which are not an integral part of the operating system, nor are they in any way involved in its normal operation. A typical example would be a small collection of games – a tradition that dates back to the original Berkeley UNIX, if not further into the past, and has been observed by almost all general-purpose operating systems since: the Solitaire and Minesweeper games that come bundled with Windows have an almost iconic status. Of course there is other software that falls into this category: arguably, MS Paint or Windows Media Player is not in any way essential to operate Windows, nor is a web browser.

On UNIX, the software that has traditionally been bundled is of a slightly different nature and stems from the different target audience (and also from a different era): a C compiler, a linker and comparatively advanced source code editors are often found distributed with UNIX-like systems. In some cases, some of those tools were in fact essential, since the user would have had to compile a kernel tailored for their computer. This has not been the case for a while, but most UNIX users expect to have a C compiler available anyway. Along with a C compiler would usually come header files for system libraries and other files needed to build your own programs. As I said, a different era.

Finally, you may get the source code of the operating system: strictly speaking, this is not a software component, but rather a different (human-readable, instead of machine-executable) representation of the same operating system. It is quite useful if you want to learn the low-level details of how computers and operating systems work.

## Interfaces

Another way to look at an operating system is from the point of view of its surroundings – what kinds of interfaces are there between the operating system and other components of a computer?
│ Programming Interface
│
│ • kernel provides «system calls»
│ ◦ «ABI»: Application Binary Interface
│ ◦ defined in terms of «machine instructions»
│ • system libraries provide APIs
│ ◦ Application «Programming» Interface
│ ◦ symbolic / «high-level» interfaces
│ ◦ typically defined in terms of «C functions»
│ ◦ system calls also available as an «API»

The most obvious interface of an operating system (if you are a programmer, anyway) is its API: the Application Programming Interface – as the name suggests, it is the functionality that application programs can obtain from the operating system. Most of this API is provided by «system libraries» – bundles of subroutines in the form of machine code that application programs (and system daemons and system utilities) can call into. Those subroutines often further communicate with the kernel, using a system-specific low-level protocol: this protocol is known as the ABI (Application Binary Interface) of the kernel.

Programmers are, for the most part, not exposed to the details of the ABI. The API is typically described in terms of functions in a higher-level programming language: most often C, sometimes C++ or Objective C, rarely some other language. Programmers use those functions just like functions they have implemented themselves, except that instead of finding definitions in the program, the compiler translates the calls into calls to the corresponding machine-code subroutines stored in system libraries.

│ Message Passing
│
│ • APIs do not always come as C functions
│ • message-passing interfaces are possible
│ ◦ based on «inter-process communication»
│ ◦ possible even «across networks»
│ • form of API often provided by «system daemons»
│ ◦ may be also wrapped by C APIs

Nonetheless, C functions (or C++ or Objective C or other programming language functions) are not the «only» API there is. Often, there are interfaces described in terms of inter-process communication, most often some form of message passing.
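A toy illustration of a message-passing interface, sketched in Python: a connected pair of UNIX-domain datagram sockets stands in for the daemon's listening socket and a client's connection (a real daemon would listen on a well-known socket path instead, and the ‹<13>› prefix merely imitates a syslog-style message format – both are assumptions of this example, not a real protocol):

```python
import socket

# One end stands in for the daemon, the other for a client
# (e.g. a library function wrapping this protocol).
daemon_end, client_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

# The client sends a discrete message instead of calling a C function.
client_end.send(b"<13>example: hello from a client")

# The daemon receives the message and would act on it (e.g. log it).
message = daemon_end.recv(1024)
print(message.decode())

daemon_end.close()
client_end.close()
```

Because the interface is just bytes exchanged over a channel, the client and the daemon need not be written in the same language, and the channel can equally well be a network socket.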
Such message-passing interfaces are often provided by system daemons, e.g. ‹syslogd› (usually reachable via a UNIX domain socket) or the mail daemon (often a TCP socket).

│ Portability
│
│ • some OS tasks require close «HW cooperation»
│ ◦ «virtual memory» and CPU setup
│ ◦ platform-specific «device drivers»
│ • but many do not
│ ◦ «scheduling» algorithms
│ ◦ memory «allocation»
│ ◦ all sorts of management
│ • porting: changing a program to run in a «new environment»
│ ◦ for an OS, typically new hardware

It is desirable that operating systems can run on different hardware platforms: this reduces costs in a number of ways. The best (and cheapest) code is the code that you don't need to write – using the same operating system on different hardware platforms achieves exactly this (to a degree). It also saves resources of application developers, who can target a single operating system and reach a number of different hardware devices, and reduces training costs – users can be migrated from one hardware platform to another without extensive retraining for new software.

While it is basically impossible to write an operating system in an entirely hardware-independent way, many of its components do not need to care about the particulars of the hardware platform. This includes even some of the core kernel components (again, to a degree). A given thread scheduler can often be used without changes (and sometimes without any sort of additional tuning) on a different hardware platform; the same goes for memory allocators, filesystem code and so on. Of course, there are also pieces that are closely tied to particular hardware: the boot sequence (which includes things like setting up the CPU and the virtual memory subsystem), device drivers (which are tied to particular devices) and so on.

Finally, when we talk about portability, porting the operating system itself is not the only concern: it is often desirable to port application programs to run on a different software stack.
In general, portability is the ability of a program to be (easily) adapted to a new environment.

│ Hardware Platform
│
│ • CPU «instruction set» (ISA)
│ • buses, IO controllers
│ ◦ PCI, USB, Ethernet, ...
│ • «firmware», power management
│
│ Examples
│
│ • x86 (ISA) – PC (platform)
│ • ARM – Snapdragon, i.MX 6, ...
│ • m68k – Amiga, Atari, ...

What is a hardware platform, then? It is a loose set of hardware and firmware that is often found together, with a degree of self-compatibility over time. From a software viewpoint, the specifics of the silicon don't matter, of course – only the protocols and interfaces exposed to software.

Perhaps the best-known hardware platform is the PC, dating back to the IBM PC of the early 1980s: it was originally built around the Intel 8088 CPU, one of two graphics adapters, a 5.25" floppy drive, and, rather importantly, the «BIOS» – the firmware of the machine. The components have since been replaced (most of them more than once), but this was a gradual process, essentially providing continuity for the software stack to this day. The PC was atypical in one aspect: it was open, in the sense that companies other than IBM were able to ship hardware that conformed to the same platform and hence could run the same software.

Unlike the PC, which is essentially the only platform of consequence which uses x86 CPUs, other CPU architectures make an appearance in a number of different and incompatible platforms: among the historic examples would be the Motorola 68000-series processors, which were used in Atari and Amiga computers and in Sun and NeXT workstations. A modern-day example would be the ARM CPU architecture, used mainly in mobile and other low-power devices, which appears in a number of different platforms. In this space, essentially each SoC (system on a chip) vendor has their own platform, with their own protocols, firmware and essential hardware devices.
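Even though platforms differ, software sitting on top of the operating system can often remain oblivious to them, querying the details only when it needs to. A small sketch using Python's standard ‹platform› module (the interpreter, a portable runtime itself, hides the ISA and platform underneath):

```python
import platform

# The same script runs unmodified wherever the interpreter does;
# these calls merely report what we are running on.
isa = platform.machine()    # e.g. 'x86_64', 'aarch64', 'armv7l'
system = platform.system()  # e.g. 'Linux', 'Darwin', 'Windows'

print("CPU ISA:", isa)
print("kernel: ", system)
```

The printed values naturally differ from machine to machine – which is precisely the point: the program itself does not.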
│ Platform & Architecture Portability
│
│ • an OS typically supports many «platforms»
│ ◦ Android on many different ARM SoC's
│ • quite often also different «CPU ISAs»
│ ◦ long tradition in UNIX-style systems
│ ◦ NetBSD runs on 15 different ISAs
│ ◦ many of them comprise 6+ different platforms
│ • special-purpose systems are usually less portable

Modern operating systems usually run on many different platforms, and often on multiple different CPUs (instruction sets). For instance, consider the Android OS, which runs on many different platforms (essentially each SoC vendor has their own platform) and 2 different CPU architectures (ARM and x86). The tradition was essentially started by UNIX, which was one of the first operating systems to be written in a ‘high-level’ programming language – one which could be compiled into machine code for different CPUs. Earlier operating systems were usually written in machine-specific assembly.

│ Code Re-Use
│
│ • it makes a lot of sense to re-use code
│ • «majority» of OS code is «HW-independent»
│ • this was not always the case
│ ◦ pioneered by UNIX, which was written in C
│ ◦ typical OS of the time was in machine language
│ ◦ porting was basically ‘writing again’

Portability is a special case of «code re-use» – the code is written once and then used multiple times in different contexts, be it due to changes in hardware, or other aspects of the runtime environment.

│ Application Portability
│
│ • applications «care» more «about the OS» than about HW
│ ◦ apps are written in «high-level languages»
│ ◦ and use system libraries extensively
│ • it is enough to port the OS to new/different HW
│ ◦ most applications can be simply «recompiled»
│ • still a major hurdle (cf. Itanium)

In principle, most applications do not talk to the hardware directly and hence don't care much about the platform, and are written in languages which are not tied to a particular CPU architecture either.
However, they do communicate with the operating system – and in many cases, this communication makes up a significant fraction of the code of the application. Considering this, it should be comparatively easy to port existing applications to new hardware, given that the same operating system runs both on their ‘native’ platform and on the new one. It is usually more than a simple recompile, but even for complicated applications, the effort is not huge. Differences in platform hardware (as opposed to peripherals and things like screen dimensions and resolution) are largely inconsequential for Android applications, and typical UNIX programs run unmodified on dozens of platforms after a simple recompilation.

However, not all ecosystems are like that – consider the first major attempt to migrate PCs to a 64-bit architecture, the Itanium, in the early 2000s. That effort failed for a variety of reasons, but the lack of portability of MS Windows (and of applications targeting MS Windows as their only supported OS) played an important role.

│ Application Portability (2)
│
│ • same application can often run on «many OSes»
│ • especially within the POSIX family
│ • but same app can run on Windows, macOS, UNIX, ...
│ ◦ Java, Qt (C++)
│ ◦ web applications (HTML, JavaScript)
│ • many systems provide the same set of services
│ ◦ differences are mostly in programming interfaces
│ ◦ high-level libraries and languages can hide those

Besides portability of application software to new hardware, which should be essentially free (though sometimes it isn't), applications can be ported to different operating systems. Within the POSIX family, this is often a formality, at a level similar to porting to new hardware – these operating systems are, from the perspective of the application, all very much alike. This is, in fact, the main reason POSIX exists in the first place.
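To illustrate how alike POSIX systems look from the application's perspective, here is a small sketch (in Python for brevity – its ‹os› module is a thin veneer over the POSIX C API here):

```python
import os

# Each of these maps directly onto a POSIX interface (getpid(),
# getcwd(), uname()) and behaves the same on Linux, the BSDs,
# macOS, Solaris and other members of the family.
pid = os.getpid()
cwd = os.getcwd()
sysname = os.uname().sysname  # POSIX uname(); not available on Windows

print("pid:", pid, "cwd:", cwd, "system:", sysname)
```

A C program using the corresponding ‹unistd.h› functions would likewise compile and run unchanged across the POSIX family.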
A more involved process is porting between different operating systems which are not in the same family, say between Windows and POSIX. The APIs are very different, and doing this at the level of each individual application is often completely impractical. Instead, if the ability to run ‘natively’ on different operating systems (outside of POSIX) is desired, applications can be written using a platform-neutral API implemented by a «portable runtime», which translates its own API into calls supported by any given host operating system. Such a portable runtime is, famously, part of the Java programming language (‘write once, run anywhere’). Other such runtimes, for other programming languages, exist: C++ has Qt, and most high-level languages (Python, Haskell, JavaScript, …) have one built in.

│ Abstraction
│
│ • «instruction sets» abstract over CPU details
│ • «compilers» abstract over «instruction sets»
│ • «operating systems» abstract over «hardware»
│ • portable runtimes abstract over operating systems
│ • applications sit on top of the abstractions

While we were discussing portability, a picture of an ‘abstraction tower’ emerged: each layer hides the details of the layers below it, making it possible to largely ignore them when designing on top of the tower (or «stack», as it is often called). This enables portability (since the lower layers are hidden, they can be more easily replaced), but it is by far not the only reason why we build such towers. Arguably, the main reason for abstraction is «hiding complexity» – something as simple as writing ‘hello world’ to a file on disk takes hundreds of thousands of CPU instructions, yet we can do so with a single short command. This is what abstraction gives us.

│ Abstraction Costs
│
│ • more complexity
│ • less efficiency
│ • leaky abstractions
│
│ Abstraction Benefits
│
│ • easier to write and port software
│ • fewer constraints on HW evolution

Of course, abstraction is not free.
Doing things indirectly always costs more: if we reduced the amount of abstraction, writing a file would take thousands or tens of thousands of instructions, instead of hundreds of thousands. But even two thousand instructions are some three orders of magnitude too many to write by hand for such a simple and ubiquitous task. So we accept the overhead.

A sneakier problem is that of «leaky» abstractions: we like to pretend that each abstraction layer completely seals off the layers below, so that we cannot observe them. But this is never quite true: all abstractions are leaky to a degree – they expose details of the layers below. If the surface of a magnetic drive is damaged, this will cause problems all the way up in the application, even though the interface we are using – read and write bytes to a file – is supposed to shield us from low-level issues such as defects on a spinning platter covered in ferromagnetic dust.

│ Abstraction Trade-Offs
│
│ • powerful hardware allows more abstraction
│ • embedded or real-time systems not so much
│ ◦ the OS is smaller & less portable
│ ◦ same for applications
│ ◦ more efficient use of resources

Clearly, there can be ‘too little’ abstraction (think of replacing your 3-line Python script with 25 thousand hand-written assembly instructions). But there can also be too much – especially on constrained hardware, the overhead can be too great. Often, the easiest way to buy efficiency is to reduce the amount of abstraction.

## Classification

Our last attempt at understanding what an operating system is will revolve around various types of operating systems and their differences.

│ General-Purpose Operating Systems
│
│ • suitable for use in «most» situations
│ • «flexible» but «complex» and big
│ • run on both «servers» and «clients»
│ • cut down versions run on «smartphones»
│ • support variety of hardware

The most important and interesting category is ‘general-purpose operating systems’.
This is the category that we will mostly talk about in this course. The systems in it are usually quite flexible (so that they can cover everything that people usually use computers for) but, for the same reason, also quite complex. Often the same operating system will be able to run on both so-called ‘server’ computers (those mainly sitting in data centres, providing services to other computers remotely) and ‘client’ computers – those that interact with users directly.

Likewise, the same operating system can, perhaps in a slimmed-down version, run on a smartphone, or a similarly size- and power-constrained device. All current major smartphone operating systems are of this type. Historically, there were a few more specialised phone operating systems, mainly because at that time, phone hardware was considerably more constrained than it is today. Nonetheless, an OS like Symbian, for instance, could conceivably be used on personal computers, assuming its hardware support was extended.

│ Operating Systems: Examples
│
│ • Microsoft Windows
│ • Apple macOS & iOS
│ • Google Android
│ • Linux
│ • FreeBSD, OpenBSD
│ • MINIX
│ • many, many others

There are many operating systems, even among general-purpose ones. While running the OS itself is not the primary reason for getting a computer (application software is), it does form an important part of the user experience. Of course, it also interfaces with computer hardware and with application programs: not all systems run on all computers, and not all applications run on all operating systems.

│ Special-Purpose Operating Systems
│
│ • «embedded» devices
│ ◦ limited budget
│ ◦ «small», slow, power-constrained
│ ◦ hard or impossible to update
│ • «real-time» systems
│ ◦ must «react» to real-world events
│ ◦ often «safety-critical»
│ ◦ robots, autonomous cars, space probes, ...

Besides general-purpose operating systems, there are other, more constrained systems.
A typical example is «embedded systems», which run on a very small amount of hardware (compared to modern general-purpose computers), with a tight budget and with severe power constraints. In such systems, the comforts afforded by extensive abstraction are not available. This is especially true of «real-time» systems, which add constraints on computation time. Predictable timing is one of the things that are hardest to achieve in the presence of abstraction (in other words, the precise timing of operations is one of the things that leak across abstraction boundaries).

│ Size and Complexity
│
│ • operating systems are usually large and complex
│ • typically «100K and more» lines of code
│ • «10+ million» is quite possible
│ • many thousand man-years of work
│ • special-purpose systems are much smaller

We have mentioned earlier that general-purpose operating systems are usually large and complex. The smallest complete operating systems (if they are not merely educational toys) start around 100 thousand lines of code, but millions of lines are more typical. It is not unheard of for an operating system to contain more than 10 million lines of code. These amounts clearly represent thousands of man-years of work – writing your own operating system, solo, is not very realistic. That said, special-purpose systems are often much smaller: they usually support far fewer hardware devices and they provide simpler and less varied services to the ‘application’ software.

│ Kernel Revisited
│
│ • bugs in the kernel are very bad
│ ◦ system crashes, data loss
│ ◦ «critical» security problems
│ • bigger kernel means more bugs
│ • third-party drivers inside the kernel?

Let's recall that the kernel runs in privileged CPU mode. Any software running in this mode is pretty much all-powerful and can easily circumvent any access restrictions or security protections. It is a well-known fact that the more code you have, the more bugs there are.
Since bugs in the kernel can have far-reaching and catastrophic consequences, it is imperative that there are as few of them as possible. Even more importantly, device drivers often need hardware access, and the easiest (and sometimes the only) way to achieve that is by executing in kernel (privileged) mode. As you may also know, device drivers are often of rather questionable quality: hardware vendors often consider them an afterthought and don't pay too much attention to their software teams. If those drivers then execute in kernel mode, this is a serious problem, and different OS vendors employ different strategies to mitigate it.

Accordingly, we would like to make kernels small and banish as many drivers from the kernel as we can. It is, however, not an easy (or even obviously right) thing to do. There are two main design schools when it comes to kernel ‘size’:

│ Monolithic Kernels
│
│ • lot of code in the kernel
│ • less abstraction, less isolation
│ • «faster» and more efficient
│
│ Microkernels
│
│ • move as much as possible out of kernel
│ • more abstraction, «more isolation»
│ • slower and less efficient

The monolithic kernel is an older and, in some sense, simpler design. A lot of code ends up in the kernel, which is not really a problem until bugs happen. There is less abstraction involved in this design, fewer interfaces and, in general, fewer moving parts for the same amount of functionality. Those traits translate into faster execution and more efficient resource use. Such kernels are called monolithic because everything that a traditional kernel does is performed by a single (monolithic) piece of software.

The opposite of monolithic kernels are microkernels. The kernel proper in such a system is the smallest possible subset of code that must run in privileged mode. Everything that can be banished into user mode (of the processor) is. This design provides a lot more isolation and requires more abstraction.
The interfaces between different parts of the low-level OS services are more complicated, but the subsystems are well isolated from each other, and faults do not propagate nearly as easily. However, operating systems which use this kernel type run more slowly and use resources less efficiently.

│ Paradox?
│
│ • real-time & embedded systems often use microkernels
│ • isolation is good for reliability
│ • efficiency also depends on the «workload»
│ ◦ throughput vs latency
│ • real-time does not necessarily mean fast

Finally, there is a bit of a paradox around microkernels: they are often used in embedded (real-time) systems – some of the most performance-critical software stacks around. However, one thing is more important than performance when it comes to embedded software: reliability. And reliability is where microkernels shine. Additionally, even in hard real-time systems, where we often consider performance to be paramount, raw speed is a bit of a red herring – what is important is latency, and even more important is an upper bound on that latency. Providing one is, however, much easier in a microkernel system, where the code base is small and much easier to reason about.

│ Review Questions
│
│ 1. What are the roles of an operating system?
│ 2. What are the basic components of an OS?
│ 3. What is an operating system kernel?
│ 4. What are API and ABI?