Operating systems

Are microkernels the future of secure OS design?

MINIX 3, and microkernel OSes in general, might show us the way toward the future of secure OS design.

Andrew S. Tanenbaum created the MINIX operating system in the 1980s as an instructional tool. He wanted to help people learn about the design of operating systems, and did so by creating an incredibly simple -- in some respects, too simple for any serious use -- Unix-like OS from scratch, so that his students could examine and modify the source code and even run it on the commodity hardware of the time. He and Al Woodhull co-authored a textbook that focused on the MINIX OS.

Since then, MINIX versions 1.5 and 2.0 have been released as well, with the same goal: to serve as a study aid. By contrast, the current iteration of the MINIX project, known as MINIX 3, is being developed with the aim of providing a production-ready, general-purpose Unix-like operating system. In the Tanenbaum-Torvalds debate, Part II, Andrew Tanenbaum describes the relationship of MINIX 3 to its forebears:

MINIX 1 and MINIX 3 are related in the same way as Windows 3.1 and Windows XP are: same first name. Thus even if you used MINIX 1 when you were in college, try MINIX 3; you'll be surprised. It is a minimal but functional UNIX system with X, bash, pdksh, zsh, cc, gcc, perl, python, awk, emacs, vi, pine, ssh, ftp, the GNU tools and over 400 other programs.

Among Tanenbaum's key priorities in the development of MINIX 3 is the desire to improve operating system reliability to the level people have come to expect from televisions and automobiles:

It is about building highly reliable, self-healing, operating systems. I will consider the job finished when no manufacturer anywhere makes a PC with a reset button. TVs don't have reset buttons. Stereos don't have reset buttons. Cars don't have reset buttons. They are full of software but don't need them. Computers need reset buttons because their software crashes a lot.

He acknowledges the vast differences in the types of software required by these different products, but is optimistic that the shortcomings imposed by the necessarily greater complexity of software in a desktop computer can be mitigated to the point where end users will no longer have to deal with the crashing behavior of operating system software. As he puts it, "I want to build an operating system whose mean time to failure is much longer than the lifetime of the computer so the average user never experiences a crash."

There are already OSs that meet that goal under extremely limited, tightly constrained circumstances, but he intends to provide the same stability and reliability under the harsh, changeable circumstances of general-purpose consumer computer use. It is an ambitious goal, but his approach is well thought out and shows great promise. Given that many of the stability issues we see in operating systems using monolithic kernels result from the fact that a crashing driver can crash the OS kernel itself, his intention to solve the problem by moving drivers out of kernel space may be on the right track.

In fact, if a driver fails to respond to what he calls a "reincarnation server", which monitors the status of active drivers, the reincarnation server can silently swap out the hung or otherwise failing driver process for a fresh process that picks up where the original left off. In most -- if not all -- cases, the user may not even know anything has happened, whereas a monolithic-kernel OS would likely crash outright.
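To make the idea concrete, here is a minimal sketch of such a supervisor in ordinary POSIX C. This is not MINIX 3 code: the driver path is invented, and a real reincarnation server exchanges heartbeat messages with the drivers it watches rather than merely checking whether their processes still exist.

    /* Illustrative watchdog in the spirit of a reincarnation server.
     * NOT actual MINIX 3 code; names and details are invented. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t start_driver(const char *path)
    {
        pid_t pid = fork();
        if (pid == 0) {                /* child: become the driver process */
            execl(path, path, (char *)NULL);
            _exit(127);                /* exec failed */
        }
        return pid;                    /* parent: remember the driver's pid */
    }

    int main(void)
    {
        const char *driver = "./example_driver";  /* hypothetical driver binary */
        pid_t pid = start_driver(driver);

        for (;;) {
            sleep(1);
            int status;
            /* Poll for a dead driver; a real implementation would also
             * detect hangs via missed heartbeat replies. */
            if (waitpid(pid, &status, WNOHANG) == pid) {
                fprintf(stderr, "driver exited; restarting\n");
                pid = start_driver(driver);       /* silent replacement */
            }
        }
    }

The essential point survives even in this toy version: the failure is noticed and repaired by an ordinary supervising process, not by the kernel, and nothing else on the system has to stop while that happens.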

While his primary interest is in reliability, rather than security per se, there are obvious security benefits to his microkernel design for MINIX 3. Without going into too much technical detail, there are mechanisms in place that restrict a driver's access to memory addresses, limiting it to its own assigned address space in a manner that in theory cannot be circumvented by the driver. Drivers all run in user space, keeping them out of kernel space, whereas drivers loaded as part of a monolithic kernel design can typically touch any memory address at all.
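As a hedged illustration of that kind of mediation -- the function and structure names below are invented for this sketch, not the actual MINIX 3 grant interfaces -- the idea is that a process explicitly shares one bounded region of its memory, and every copy made on behalf of a driver is checked against that region before it happens:

    /* Toy model of grant-style mediated memory access.  The real MINIX 3
     * mechanism differs in detail; everything here is illustrative. */
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct grant {
        unsigned char *base;   /* start of the region the owner shared */
        size_t         len;    /* size of that region */
    };

    /* The mediator copies into a granted region only if the request fits
     * entirely inside it; anything else is refused rather than allowed
     * to trample unrelated memory. */
    static int grant_copy(struct grant *g, size_t offset,
                          const void *src, size_t n)
    {
        if (offset > g->len || n > g->len - offset)
            return -1;                            /* out of bounds: refuse */
        memcpy(g->base + offset, src, n);
        return 0;
    }

    int main(void)
    {
        unsigned char buffer[16] = {0};
        struct grant g = { buffer, sizeof buffer };

        /* A well-behaved "driver" writes within the grant... */
        printf("in-bounds copy:     %d\n", grant_copy(&g, 0, "hello", 5));
        /* ...and a misbehaving one is stopped at the boundary. */
        printf("out-of-bounds copy: %d\n", grant_copy(&g, 12, "overflow", 8));
        return 0;
    }

Because every cross-process transfer goes through a check of this sort, a buggy or hostile driver has no way to scribble over memory it was never granted, however badly it misbehaves within its own address space.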

A number of countermeasures have been invented and employed in various OS designs over the years, but keeping drivers out of kernel space entirely certainly seems like the most effective and inviolable protection against driver misbehavior. As a result, this may all but eliminate the possibility of buffer overruns and similar problems in kernel space.

Further, mediated contact between drivers and other user-space systems in MINIX 3 extends the strict privilege separation of Unix-like systems to protect processes other than the kernel process as well. In short, whole classes of common system vulnerabilities may become extinct within the family of OSs that may develop in the wake of MINIX 3.

MINIX 3 itself is still in development, but it is currently a working OS with many of Tanenbaum's intended reliability assurance features already implemented. You can download it from the MINIX 3 website and boot it from a LiveCD, though, as Tanenbaum states, you should install it to a partition on the hard drive of a computer if you want to do anything useful with it.

It will not replace MS Windows, MacOS X, Ubuntu Linux, or FreeBSD right now, but it may well be the future of general-purpose OS design -- especially secure general-purpose OS design.

About

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

38 comments
garegin

While I agree with Dr. T on reliability, he is barking up the wrong tree. Microkernels will only lessen kernel crashes, which are extremely rare on modern OSes. 90% of crashes on Windows, Mac or Linux are not kernel crashes; they are misbehaving applications stalling the OS or other applications. We need better application containment via stricter memory management and sandboxing.

AnsuGisalas

I'm just wondering: if people want to keep running their big bloated OSes, would it be preferable to have them in a VM running on top of something that's secure/stable/viable? EDIT: clarity

Ocie3

Mr. Perrin, thank you for writing it. Note: [i]".... Drivers all run in user space, keeping them out of kernel space, whereas kernels loaded as part of a monolithic kernel design can typically touch any memory addresses at all."[/i] I think you meant "... whereas [i]drivers[/i] loaded as part of a monolithic kernel design can typically touch any memory addresses at all."

Ocie3

TVs, stereos, and cars aren't general purpose computers which execute climate prediction models, calculate orbits for interplanetary spacecraft, or do much of anything else nearly as complex as those tasks. Some people want to consider computers as household appliances, which reminds me that there is what appears to be a toaster in the kitchen of this household that doesn't have any apparent means to lower the slice of bread into position for toasting. Maybe it, like many cars, is operated by a "computer" of some kind. Maybe it needs a reset button but doesn't have one. Which is [i]not[/i] to argue that computers should not have stable operating systems, only to remind Andrew S. Tanenbaum that MINIX is not necessarily only run on stable hardware. Hardware error states typically require that the entire computer system be restored to an original "good" state, thus the reset button (which may also be required if the OS enters an error state). Stereos, TVs, and cars have one of four states: either on/off or operable/inoperable. If it is inoperable, then you'd be lucky if all you had to do to "fix it" is push a reset button! But it seems that the technology in those devices has not advanced far enough to enable us to do that, has it?

Parrotlover77

Every few years, another microkernel miracle appears to save IT from the users. At least this time, it's not HURD getting the unwarranted press! Color me a skeptic, as I will believe it when I see it. But, still, I wish the developers good luck!

jkovacsrtc

Yes, exactly: the microkernel and the Universal Process Model are the future's OS architecture. The QNX Neutrino OS covers these requirements exactly. Everyone can download the full OS/development environment free of charge, with a perpetual license, from www.qnx.com for non-commercial use.

seanferd

I thought it was going to be stuck at 3.1.2a forever. I'm glad to see there has been further development. I must not have checked back for a while, but it can't have been that long. OK, checking the release dates, I swear I never saw any newer releases since I first played with MINIX, and there is no way I haven't looked at that page for 3-4 years. Odd.

alanmcrae

My studies of IT vulnerabilities have led me to conclude that the internet is inherently insecure, that users (aka consumers) exhibit poor security judgment that renders security infrastructure highly ineffective, and that the bad guys have more than enough knowledge & tools to steal any data that has value in the global marketplace. Given these realities, a new class of information products needs to emerge with hardened microkernels and separate accounts & interfaces for user mode & administrative mode. These highly secure devices need to prevent normal daily use from exposing the OS and applications to casual modification by the device user. I mean, heck, you are not able to change the computer system in your car every time you go for a drive, and even a professional airline pilot is not empowered to reprogram the flight control system in a jet airliner while flying. It is the absurd expectations of uninformed consumer IT users that has turned every user into the admin for every device and poked a zillion holes in the security of every network. Hackers love this situation because it cannot be secured against them. What is needed is a new model for the information device that takes into account naive, foolish actions by typical product users. Then we will stand a chance of protecting valuable data from those who would steal it so easily.

wizard57m-cnet

Limited function, tightly controlled systems are one thing, but when you extrapolate to "general-purpose consumer computers" you run into the same problems that other OSes experience...the endless combinations of components, accessories, software and user expectations. If users were of a mind to use one computer for internet, one for pictures and music, one for writing letters, another for working on the family budget...well, you get the picture. As consumers, we want all that in one "computer". Sure, we have gadgets that allow us to interact via "mobile" access, but there again, we no longer accept just a cellphone that makes phone calls, and carry an email device, and yet another for a calculator. That said, I may have to give Minix 3 a trial run, it's been a LONG time since I played with Minix 1.

jkameleon

De-bloatization of the OS, so to speak. I said "not a moment too soon", because time-to-market driven bloatware started to creep into other products as well. Cars have to be reset occasionally, by turning the ignition off and on. Software updates are delivered at their regular maintenance. I'm a proud owner of a TV set, which periodically hooks itself into the home wireless network, and checks for software updates. I've seen it crash a couple of times. Had to unplug it, and plug it back again.

apotheon

Some hypervisors are actually, essentially, microkernel OSes running the VMs on top of those microkernels. In cases where you basically want to do everything in VMs, rather than having a usable OS with VMs inside it, that's a pretty good way to handle things.

apotheon

Thanks for catching that. I'll fix it post-haste.

apotheon

[i]"Hardware error states typically require that the entire computer system be restored to an original "good" state, thus the reset button (which may also be required if the OS enters an error state)."[/i]

That's sorta Tanenbaum's point, though. If the part of the OS that deals with a given piece of hardware can be reloaded without destroying the running state of the rest of the system, you don't need to restart the entire OS.

[i]"Stereos, TVs, and cars have one of four states: either on/off or operable/inoperable. If it is inoperable, then you'd be lucky if all you had to do to "fix it" is push a reset button! But it seems that the technology in those devices has not advanced far enough to enable us to do that, has it?"[/i]

The reason things work that way is that when something is inoperable in a mechanical device, you don't have a software problem; you have a hardware problem. An inoperable CV joint is equivalent to a torn SATA cable -- not a memory access error. You "fix" a memory access error by pressing the reset button. You fix a torn SATA cable by turning off the computer, removing the cable, and replacing it. These are two entirely different classes of problem, and Tanenbaum's work with MINIX 3 is intended to address the software problem, not the hardware problem.

By the way, I fibbed a little when I said that pressing the reset button fixes a memory access error. It doesn't fix the actual problem; it just applies a little analgesic to a pain symptom. It's like taking Tylenol to deal with a little internal bleeding.

The Unix philosophy of software development, coupled with the Unix model of OS design, eliminates a lot of the problems that require a reset button on computers. People who use nothing but MS Windows are generally completely unfamiliar with the idea that restarting the whole computer should essentially be unnecessary. Even if the system doesn't crash or freeze up in a given week, thus requiring a restart, it still needs restarts occasionally when configuration changes are made or when software is installed or updated.

Some of the same is true in the world of Unix-like OSes, but in far fewer cases. You still need to restart a computer when updating a driver that is compiled into the kernel, or when such a driver craps out entirely, though most of the time using driver modules that are loaded at runtime instead of having them compiled into the kernel can work around this problem. You still need to restart a computer when the kernel itself has a problem significant enough that it doesn't recover after limping along for a little bit. You don't need to restart the computer when some userland application goes nuts or crashes, though, unless there's a problem with some driver that it uses, allowing the application to crash or hang the driver, or otherwise pass its problems on to something else. The same goes for the GUI itself, which can be killed and restarted -- or not restarted, if all you need to do for now is some command line work anyway.

The point is this: with MS Windows, the reset button is a definite requirement, because of all the tentacles reaching across the barrier between "user" and "kernel"; with proper Unix-like systems, it's not so much a requirement as a convenience, for those rare occasions when a problem that arises actually causes a problem for the kernel and its compiled-in drivers; for the system Tanenbaum is designing, it won't even make sense to have a reset button, because none of that stuff is tied to the kernel at all.

Just as a few applications can crash in MS Windows without crashing the GUI, so too can basically anything crash in the OS Tanenbaum has set out to create without the OS itself crashing. Hardware, however, is another story. If someone lets the magic smoke out of your motherboard, it's crashing, and that's all there is to it. No reset button will do anything for a problem like that.

apotheon

I wonder how much of a spike in traffic this will bring. I'm pretty sure it'll be nontrivial, at least.

apotheon

HURD is actually, basically just a wrapper around the Mach microkernel -- and it's not a very old microkernel system, anyway. Microkernels are by no stretch a new thing. Minix 3 is just doing interesting things that point to a possible, reasonably new, way to secure our systems against malicious activity.

apotheon

I want to play with it a bit on bare metal, actually. I would quite like to know how much of my day-to-day could be accomplished with Minix 3.

Jaqui

if you remove the microkernel part of the os description, you get most of the BSDs fitting the description fairly well. maybe if they were to improve the "user friendliness" of them until the average shmoe could install and use them there would be a chance for people to really have stable systems. :D

apotheon

There must still be a means for the user -- if it's his/her machine -- to poke around in the guts of the system and modify it at whim. Otherwise, the end user will never develop any greater security awareness than currently possessed. As has been pointed out many times, the user is generally the weakest link in the security of a system, and anything we can reasonably do to improve the strength of that link is a good thing.

wizard57m-cnet

That's why social engineering scams prove profitable to the miscreants. Perhaps, instead of re-engineering the devices, we should re-engineer the network utilized for commerce? I know I'm showing my age, but way back when, no "commercial enterprise" was tolerated on the old BBS systems and early Internet. We were "online" primarily to disseminate knowledge, data, and ideas. A few of us did figure out how to play text-mode games, but at 300 baud any type of advertising junk was dealt with quickly, and usually resulted in the user(s) being banned, if not just the user but the phone number/access point as well. As for securing devices, it cannot be done...period. Apple has tried with their iPhone/iPad; it didn't take long for "jailbreaks" to find their way online, did it? Outside a MAJOR educational effort by all those involved with the internet to teach users how to avoid becoming a victim, the best we can sometimes hope for is minimizing the attack vectors. It's just too difficult to forecast the stupidity of users.

malcolmspiteri

That is exactly what a uKernel does; it isolates each specialised device (sound card, network, graphics) and its associated driver so that they don't disrupt each other. The computer logically becomes a number of "tightly controlled, limited function" systems in one box.

AnsuGisalas

I just wonder... so many security worries coming off big OSes, and so many hardware issues too. I just wonder if it would be more useful to have a very very machine-oriented kernel handling the actual machine, and leaving the user-oriented stuff to a UI-OS running in a VM. Somehow it seems like it would be wasteful (running in effect three systems [Machine OS - VM - User OS] to replace "one" system), but on the other hand, trying to do two completely different things at once can be wasteful too... as evidenced by supabloat. Such an approach could also open up the software field dramatically, blurring lines between OSes if the VM handler can run different environments for different programs... but that may be me going for a very long walk off a short pier, not knowing at all what's entailed. :p

robo_dev

CG Linux, SecureUnix, as well as the UNIX kernels that are part of most security appliances are rock-solid, EAL and NIST certified operating systems. Almost all firewall appliances use some sort of hardened UNIX kernel.

Jaqui

and it's likely changing the kernel, Mach being too slow to develop for them. :D I still think that clearing RAM and allocating it, and scheduling processes, needs to be kernel space myself. Hurd wants to have that all user space.

wizard57m-cnet

the requirements were quite low...a Pentium CPU or equivalent with 16 meg RAM...maybe I should dust off one of my old boxes in the attic, hehe! The biggest drawback with Minix at the time I tinkered was that everything was text based, no problem for me, but for someone else, especially those accustomed to graphics on the internet and WWW, trying to get the hang of Lynx might drive them batty. That's probably where we would notice the shortcomings, with the internet apps. Email, FTP, telnet, NNTP and such would probably be "OK", but take some getting used to. The web browser would probably leave some folks in tears...I've used Lynx quite a bit, even in, are you ready for this...you sure...don't laugh now, darn it...plain old MS DOS! I used to do all my web based email at Yahoo and ZDNet Onebox with Lynx 386 DOS, running on an old IBM PS2 Model 5500SX 386, 5 meg ram, MS DOS 6.22, 29 meg HD, and a super fast 2400 BAUD modem, hehe! It took a bit to get all the little config options just the way I wanted, but once I got Lynx 386 setup, it worked quite well. I even ran a similar setup on a Pentium 166 w/96 meg RAM, but faster modem. I would boot that to plain DOS of Win95, and either browse with Lynx or use Arachne if I wanted graphics. I digress though...Minix may have progressed to the point of supporting a bit more current web standards as far as browser support, I don't know. Guess it's time to re-visit their site!

YetAnotherBob

It's already been done, it's called "Linux". In the extreme form of "User Friendly" Linux, it is only marginally more secure than Windows. There is also a Debian version of BSD. It is set up a little better, but to dumb down the security for dumbed down users, you still lose the major security features.

apotheon

You don't need "user friendly" installation. You need pre-installed systems. Just as average end users don't want to have to fix stuff that breaks in ways a dedicated device (like a garage door opener) doesn't break, average end users don't want to have to install software -- like they don't have to install software on a garage door opener. The average schmoe can already use a BSD Unix system, if it's effectively installed and set up for him or her.

apotheon

If we endeavor to empower users to secure their systems, their systems can become better secured. As a result of that, the security of the online world as a whole will be incrementally improved. I speak of things like offering well-designed password managers to end users, rather than shrugging our shoulders and assuming that we should accommodate the tendency of end users to choose weak passwords. Ultimately, those of us who know better are to blame when we give in to poor security practice as if it were a force of nature that cannot be resisted. I revel in my central air during hot summer months; let us provide the information security equivalent to the end users of the world.

nwallette

Microkernel/monolithic kernel... right there with client-side/server-side. BeOS intended to do this. Their goal was to make the kernel small enough so that it could be "essentially bug free." BeOS 6, which was never officially released after Be sold their assets to Palm before going under, was the first release to put networking back into the kernel. The reason? People were very unhappy with the performance of a TCP/IP stack in user-space. I hear there were similar battles at Microsoft, but with things like the printing subsystem and whatnot. Are we at a point now where the processing power has caught up and made it feasible? Or do context switches and things of that nature mean, regardless of raw horsepower, there will always be relatively substantial delays that bring this concept back down from panacea to hard reality? I love the idea in theory, but it seems that the performance metrics never hold up in practice.
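For a rough sense of the cost in question, the toy program below (plain POSIX C, nothing BeOS- or MINIX-specific, and only a sketch) times round trips between two processes over a pair of pipes -- roughly the kind of context-switch price a user-space service pays on every request. Whether that price still matters on current hardware is exactly the open question.

    /* Crude measurement of cross-process round-trip cost via pipes.
     * Results vary enormously by machine and OS; this only shows that
     * crossing a process boundary costs far more than a function call. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS 100000

    int main(void)
    {
        int to_child[2], to_parent[2];
        if (pipe(to_child) == -1 || pipe(to_parent) == -1) {
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {                       /* child: echo one byte back */
            char c;
            for (int i = 0; i < ROUNDS; i++) {
                if (read(to_child[0], &c, 1) != 1) _exit(1);
                if (write(to_parent[1], &c, 1) != 1) _exit(1);
            }
            _exit(0);
        }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        char c = 'x';
        for (int i = 0; i < ROUNDS; i++) {    /* parent: request/reply loop */
            if (write(to_child[1], &c, 1) != 1 ||
                read(to_parent[0], &c, 1) != 1) {
                perror("pipe I/O");
                return 1;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9 +
                    (end.tv_nsec - start.tv_nsec);
        printf("%.0f ns per round trip\n", ns / ROUNDS);
        return 0;
    }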

robo_dev

The fewer exposed services, the fewer services that can be attacked. And the whole ship depends on water-tight compartments, in effect. Security exploits come from either a flaw in a service, or finding a method for a service or process to break its security boundaries (privilege escalation). My observation is that hardened kernels have very strong and well tested controls to prevent exploitation of services or to prevent a process from breaking into another security layer. They also run only the services and processes that are required, nothing more. Virtual machines, by design, need to have completely watertight barriers or they would not be stable.

apotheon

[i]"I just wonder if it would be more useful to have a very very machine-oriented kernel handling the actual machine, and leaving the user-oriented stuff to a UI-OS running in a VM."[/i]

Let's try this: We'll get BSD jails implemented on MINIX 3. We'll run some hypervisor VM host on "bare metal", with MINIX 3 running in a VM within that VM host. Within that, we'll set up a jail. Inside the jail, we'll run our MINIX 3 end user applications. We could actually get more layers in there than that if we wanted to, but at that point we're already getting into the realm of "Why bother?" The layers have to stop somewhere. It's not just turtles all the way down.

Many people refuse to consider the option of a microkernel OS for performance reasons. They believe, based on some early microkernel experiments, that any microkernel OS must be too slow. Meanwhile, microkernel OS development has proceeded, and we're getting to the point where pretty much everything runs quickly enough that there's not any detectable difference in performance. There are even microkernel RTOSes.

VMs, on the other hand, tend to suffer the actual problem of massive performance penalties. This is because an entire OS is being run in the VM, including a lot of stuff that is already handled by the underlying OS installed on "bare metal" -- the host OS. The host OS provides video card support, for instance, then produces a fake video card interface for the guest OS to think is the real thing and provide its own video card support. Video performance suffers, as a result.

You can get much the same security benefits out of a microkernel OS like MINIX 3 as you can by putting something like MS Windows into a VM on the Xen hypervisor. The difference is that the VM approach doesn't actually separate concerns as much as the microkernel OS does. To get the same kind of separation of concerns between applications, you'd need to run each application in a different VM guest OS; to get the same kind of separation of concerns between drivers and other low-level constructs, you'd need to run each of those in a separate VM as well. You might as well just run a microkernel OS, and save yourself a lot of trouble and performance drag -- unless you specifically need MS Windows or Ubuntu Linux or AerieBSD or whatever other OS you're trying to run in a VM, for some reason.

I guess the short version is that you really can't use a "type 1" hypervisor (a VM hypervisor that runs on "bare metal") with guest OSes as a drop-in replacement for a microkernel OS.

Slayer_

All I got to say to that

apotheon

[i]"It's already been done, it's called 'Linux'."[/i]

Really? Let's compare Slackware Linux, Arch Linux, or Gentoo Linux with PC-BSD some time, and see how well "Linux" stacks up against BSD Unix for user friendliness with that comparison. "Linux" isn't an operating system. If you said "Ubuntu Linux", or "PCLinuxOS", I might have given you a pass. Oh, wait, no I wouldn't. Jaqui was talking about BSD Unix, not Linux-based systems -- so Ubuntu and PCLinuxOS haven't done it either, where "it" is "improve the 'user friendliness' of [BSD Unix systems]". See Jaqui's comment, to which you replied, for reference. A user-obsequious Linux distribution is not the solution to the problem of a user-obsequious BSD Unix system, especially when the latter already exists anyway.

[i]"There is also a Debian version of BSD. It is set up a little better, but to dumb down the security for dumbed down users, you still lose the major security features."[/i]

Really? Set up a little better? How the heck is it set up a little better? Debian GNU/kFreeBSD isn't a BSD Unix. It's just a GNU userland on top of a FreeBSD kernel. When you replace the BSD Unix userland with a GNU userland, it's not really BSD Unix any longer. Even worse, it's a more poorly set up system than FreeBSD. The GNU tools are (for the most part) bloated, inconsistent, and at least sometimes downright badly designed. Did you know that the GNU version of ls can't use -D for time formatting because Stallman and MacKenzie decided, in their infinite wisdom, that ls needed to produce output specifically for Emacs use? How asinine a decision is that? That decision, and others like it, result in having to use a cumbersome and feature-bloated --time-style long option for time formatting instead. It's a direct violation of the Unix philosophy that leads down this road, and that's the standard way of crufting-up otherwise standard Unix tools within the GNU project.

I find the notion that Debian GNU/kFreeBSD is somehow "set up a little better" laughable, at best. It's set up no better than the Debian GNU/Linux distribution -- which, while better than most Linux distributions, is still worse than any of the three main BSD Unix systems.

edit: typo

apotheon

[i]"the bsds feel clunky when compared to windows/mac/linux systems."[/i]

Really? In what way? My FreeBSD system seems no more clunky than an equivalent Linux-based system. In fact, it feels less so when I get under the hood and play around with the sysadmin level of things. Superficially, though, from an end-user's perspective, it is only exactly as clunky as a Linux-based system with the same GUI tools installed.

[i]"You know I like the quality of the bsds, but the typical end user will be like the guy in the other thread ' it don't look pretty so it's garbage'"[/i]

How does that work? Do you think the same of "user friendly" Linux distributions? Install KDE, GNOME, Enlightenment, XFCE . . . whatever, on both of them, and you get the same look and feel. I don't understand what criteria you're using to judge BSD Unix systems so harshly compared to Linux-based systems.

[i]"adding in the dramatic differences between the ports system and the more common windows install, the bsds really do not come across as user friendly."[/i]

You can use a GUI front end to a binary package installation system -- or to the ports themselves if you like. PC-BSD has a PBI package manager that actually mimics MS Windows style installs more closely than anything I've seen on any Linux distribution. When was the last time you used a BSD Unix system? It had to have been before I even started using FreeBSD regularly, because for as long as I've been using it, FreeBSD has been at least as "user friendly" as Linux-based systems, once it's set up for the user.

Jaqui

For actually operating the system, your average user could use it, but they would not be happy. the bsds feel clunky when compared to windows/mac/linux systems. You know I like the quality of the bsds, but the typical end user will be like the guy in the other thread " it don't look pretty so it's garbage" ;) adding in the dramatic differences between the ports system and the more common windows install, the bsds really do not come across as user friendly. if you add the install, they are extremely unfriendly. ;)