Linux

My ideal operating system

Jack Wallen asks, "What is the ideal operating system?" What components from the major operating systems would you combine to create the ultimate OS?

Over the weekend I was running my usual route and doing my usual thinking...about Linux. A strange thought crossed my mind as my music-listening-device (not an iPod thank you very much) jumped from one genre of music to another: What would my ideal operating system consist of? While running, a lot of possibilities crossed my brain: Which kernel, which desktop, which multi-media system, which printing system...the possibilities went on and on. But one issue that I struggled with was the idea that I wanted to have SOMETHING from the Windows operating system within my "ideal OS."

With that in mind I set out to create an OS that included something from all of the MAJOR operating systems. Of course this is just fiction - we all know getting pieces of these OSes to work together simply won't ever happen. Still, you get to see my take on the ideal operating system. Why don't you chime in and let us know what YOUR ideal operating system would look like?

Kernel: Linux. I have to go this route simply because it's the only kernel of the major players that can be customized. And, in order for this to be an IDEAL operating system, you can bet this kernel would have to be customized. Naturally the idea here would be to avoid bloat.

HAL: NetBSD. The NetBSD Hardware Abstraction Layer is one of the cleanest and most portable HALs around.

Network subsystem: Linux. For me this was an obvious choice because of the huge flexibility Linux networking has to offer. And besides, Linux was designed to be online.

Printing subsystem: I have to give this one to Windows. The primary reason for this, and it relates to another category Windows owns, is that so many printers are now all-in-one devices. Yes, Linux can use these devices, but generally speaking it can only use the printing functions. If I have an all-in-one, I want to be able to use all of the features in my hardware. This was a tough one, because with CUPS you can easily set up a print server using Linux.

USB system: This one goes to OS X. For many the USB system just works. But the OS X take on USB is the cleanest and most user-friendly available.

Hardware recognition: Windows 7. There is very little doubt that Windows offers some of the best hardware recognition out there. And it should: most hardware vendors aren't smart enough to create OS-neutral hardware, so they create it for one operating system.

Desktop: Remember, this is MY ideal operating system. So I am going with the combination of Enlightenment E17 and Compiz (the one used for Elive+Compiz). It's a lightweight, fast, user-friendly desktop that has enough eye candy to not only keep up with the Joneses, but (in most cases) blow them away.

Multimedia system (and subsystems): OS X. I will preface this by saying I am not a fan of the iPod. But that doesn't take away from the fact that OS X has an outstanding system for multimedia. It's a rare occasion you can throw a media file at this OS and not have it played.

Package management: Without a doubt this one goes to Ubuntu (or any OS that is based on apt/apt-get). The Synaptic application is one of the finest software installation management tools available. It's simple, reliable, and very user friendly. But I would add, in my "dream OS," that all software vendors would bring their titles to the repositories for my ideal operating system.

Security: OpenBSD. Without a doubt, OpenBSD has the best security of any operating system that is actually connected to a network. With only two remote attack vulnerabilities found in the last decade, how can you argue with this choice?

And there you have my Franken-OS. What does yours look like?

About

Jack Wallen is an award-winning writer for TechRepublic and Linux.com. He’s an avid promoter of open source and the voice of The Android Expert. For more news about Jack Wallen, visit his website getjackd.net.

65 comments
vucliriel

... would have Windows not only of name, but of philosophy, as in 'opening your horizons' instead of the existing jail one has to constantly break out of to make it do what we want it to do! All its applications would be self-contained and completely portable and not install anything in the system. This OS would do the very basic stuff with the hardware and would not get involved with the software like it does now. Its settings would be stored in plain English text files and there would be a single application responsible for translating these text commands into actual computer code. This also means uninstalling would be a simple matter of deleting the folder where the application was installed. It would have NO UAC and NO DRM. In other words, ALL rights and responsibilities would be those of the USER and the user only. No Big Brother arrogance like it is now. Of course, the user would be able to save his work where he wants it, on a clearly identified physical medium instead of some vague concept of location hidden under several layers of obfuscation like User\Username\AppData\Roaming\NeverEndingCascadeOfObfuscatedFolderNames\Etc... One would be able to specify EXACTLY what features to install, and every application, service and snippet of code would be under DIRECT control of the user, not hidden by the OS maker like it is now. For example, one would be able to have a fully functional Windows version that would take up no more than 200MB of hard drive space and occupy no more than 128MB of RAM when fully loaded AND fully functional. Applications would not require 32MB of code to do stuff that a 32K app could do instead. In other words, dump the wrapper, box and container and just keep the actual code. The OS should do what it was designed to do: operate the hardware for the benefit of the user and his software, without interfering with him like it does now. It's sad to see that the same people who decried so loudly the hegemony of IBM just 30 years ago (yes, Mr Jobs and Mr Gates, you know I'm talking about you) have become even worse (that same IBM who, BTW, by its open standards, actually paved the way to the personal computer revolution). These guys should all be put in isolation for a week and forced to watch the movie Tron until they understand what they did.

i.hilliard

I have always thought of Windows printing as back to front. This is because Windows formats the display to match what the printer will print, instead of trying to make all printers print the same. It is this backwards, and one might say broken, approach to printing that has kept Apple as the mainstay of the printing industry.

itchibahn

I've been using FreeBSD for over 10 years, mostly for server applications. IMHO FreeBSD has some of the best support, even for a newbie: a wealth of information, more software packages, and better hardware support. It's much easier to customize the kernel on FreeBSD than OpenBSD. Apple's Mac OS X is based in part on FreeBSD. A little disappointed that FreeBSD wasn't even mentioned... tsk tsk. PCShare.Com

saraboat

Simply MEPIS is stable, safe & easy to customize & use. Cliff

pasivemanagement

I think Linux is great. It gives me 100%+ creative possibilities. I am free to do what I like to do. Of course, I think Linux is not for "dummies"; you need to know what you are doing. What do you think?

kb1493

HAL: Interesting standpoint. So tell me what your level of expertise is on HALs and what work you've done with them to be able to make this kind of conclusion? Network subsystem: Debatable. With TCP/IP, Linux seems to be the best performer. Printing: An all-in-one can be easily used with Linux -- with the proper drivers. Therein lies the rub. After all an all-in-one is a combination of a printer and a scanner. Can't be too difficult for someone of your expertise to figure out, eh Jack? USB system: I'm inclined to agree, but with a caveat. With multimedia devices (cameras and such), OS X has appeared to have the upper hand. With HIDs, they're about evenly matched, as Logitech's devices are supported on both Mac and Windows, and I think Microsoft's keyboards and mice are as well (someone correct me if I'm wrong). And there are tons more devices beyond this. For most consumer devices, I would say that Windows and Mac are evenly matched, with the exception of digital and video cameras, in which case the scale tips toward Macs. Hardware recognition: To say that most vendors aren't smart enough is incorrect (and I think that Logitech, HP, and so many other hardware manufacturers would greatly disagree with you). The correct response would be they didn't see a reason to develop drivers for other operating systems. As Windows-based PCs and Macs carry the vast majority of the computer market, these are the systems most likely to be supported first. Package management: This one is debatable, but then I've also written installers for Windows and seen others written for Windows and Mac, so I'm a little biased toward disagreeing with you on that.

jamman647

I think this is a very good idea. Combine the best features of all the different OSes into one ULTIMATE package.. I like it. Let's get the programmers on it right away!

bookkeeper

First of all, everybody has something to say about just about every OS. With that in mind, I wish there was an OS like this; I would use it. And at least he is thinking outside the box for once.

microface

Windows Vista was and is a resource hog, filled with DRM restrictions. Windows 7 XP Mode sucks to the point that I spent two weekends getting all of my client's obsolete database and machine-interface code working on WINE. Everything works at top speed, as far as my stopwatch can tell, while Windows 7 XP Mode runs at 75% of the speed of native Windows XP Pro. I tried to keep the hardware as close as possible for the few tests that I ran, but was not able to be exact since Windows 7 would not run on a platform that Ubuntu and XP Pro ran on just fine.

narea92

I agree with Torvalds that Linux is getting fat. It should evolve into a micro-kernel for all its inherent benefits. The small speed reduction, of a few percentage points if any, would be compensated for by the excess speed of today's PCs. Gaetano Arena

JohnOfStony

What about the [b]filing system[/b]??? I have yet to encounter a current filing system that doesn't overwrite the previous version of a file if you edit and save. Back in 1985-6 I was using the VMS OS on a DEC VAX computer. This had a superb filing system feature in that if you saved a file with the same name as an existing file, there was no "Are you sure?" dialogue box, because it was unnecessary; the OS [b]never overwrote existing files[/b]. Every filename had 3 parts: the name, the extension and the version number, e.g. fred.txt;12. If you edited fred.txt;12 and saved it, it would be saved as fred.txt;13, so previous versions were available should they be required. To keep storage requirements down - and in the mid-80s storage was limited and expensive - there was a command "purge fred.txt" which would delete all but the latest version, or, if you preferred a few previous versions to be kept, you could "purge fred.txt /keep=3" which would keep the last 3 versions. If you wanted a listing of the contents of a directory (we didn't call them 'folders' then), you only got the name and extension of each file once, regardless of how many versions existed. You could, of course, opt to list everything by adding a parameter to the "dir" command. With today's ultra-cheap, huge storage, this system should be standard in all OSs.
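
For readers who never used VMS, here is a minimal Python sketch of the version-on-save behaviour described above. The function names (versioned_save, purge) are invented for illustration; real VMS did this in the filesystem itself, not in application code.

```python
import re
from pathlib import Path

def versioned_save(directory: Path, name: str, data: str) -> Path:
    """Write a new version 'name;N' instead of overwriting, VMS-style."""
    pattern = re.compile(re.escape(name) + r";(\d+)")
    versions = [int(m.group(1)) for p in directory.iterdir()
                if (m := pattern.fullmatch(p.name))]
    new_path = directory / f"{name};{max(versions, default=0) + 1}"
    new_path.write_text(data)
    return new_path

def purge(directory: Path, name: str, keep: int = 1) -> None:
    """Delete all but the newest 'keep' versions, like VMS 'purge /keep=N'."""
    pattern = re.compile(re.escape(name) + r";(\d+)")
    versions = sorted(
        (p for p in directory.iterdir() if pattern.fullmatch(p.name)),
        key=lambda p: int(p.name.rsplit(";", 1)[1]),
    )
    for old in versions[:-keep]:
        old.unlink()

# Saving repeatedly accumulates fred.txt;1, fred.txt;2, ...;
# purge(d, "fred.txt", keep=3) then mimics "purge fred.txt /keep=3".
```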

paul.froehle

How do you handle cut/paste between applications in your system? This is the main objection I have to changing from Windows.

lastchip

Let's not contaminate this super system with Windows. While I can see your argument is sound, it would be far better to educate the hardware providers. HP for example, has made huge strides in this direction. All-in-all though, a good choice.

JulesLt

A few more things I'd take from OS X - the app programming framework (Cocoa) is superb, although I'd put Qt/KDE as an increasingly close second. I could live without Obj-C, so I'd be going with completing a binding to another language. Grand Central Dispatch and Clang/LLVM (both of which are open source, so possibilities - but are more likely to reach BSD first). I'm very intrigued by the Etoile project (a modern re-imagining of NextStep/Cocoa, filtered through David Chisnall's annoyances with Unix) - a key point being that in Unix everything is a file - what would an operating system look like where everything is an object? And where every object has history. From Solaris - DTrace (already ported to BSD/OS X) and ZFS - as we enter a realm where storage is likely to be spread across 'fast' flash and slow (and possibly remote) volume storage, I want an abstraction from it. One other thing - Etoile's support of Smalltalk as a first-class language on the GNUstep runtime, and Apple's increasing support of Ruby on the Cocoa runtime, also makes me think of something I would take from Windows - the concept of the CLR. The idea of OS-level, language-independent objects - which can be extended using the language of the developer's choice - is a powerful one, and seems to be where things are heading.

ruel24

Just be aware that security is mostly through obscurity in OSes other than Windows. How many attacks are actually thrown at OpenBSD? Also, I'm glad you actually qualified your credit for Ubuntu having apt-get/Synaptic. Apt-get is the product of Debian, and Synaptic was the product of the former Conectiva, which was RPM based and which was bought by Mandrake, making the company Mandriva. Just wanted to give credit where it's due, since Ubuntu seems to be getting credit for everything Linux related these days. Also, I would like to see the kernel act a little like Windows, in that I can boot up a 2001-released version of Windows XP on my current system, which is based on the Intel X58 chipset with a Core i7 processor. I can't boot anything Linux that doesn't have at least kernel 2.6.28. Why not? If Microsoft can do it, so can Linus Torvalds and Co.

Neon Samurai

Not a point that changes your OS assembly, but it would have been more accurate to give Debian credit for the package system, as dpkg/apt/aptitude/synaptic are inherited by Ubuntu from Debian. This is probably more my focusing on the parent rather than the fork distribution where possible, though. In terms of the actual content, it does sound like the best attributes from each platform. Combining the best user habits from each platform has been nothing but beneficial for me, and applying that same approach to the actual platform would be a fantastic reason for a test VM.

TrajMag

Having also spent considerable time at the VMS DCL prompt, IMHO we almost had an OS that was first class. DEC started the port of VMS to the Intel platform. Before it was complete they sold out to Compaq, who swore they would continue. No dice, and the Free VMS org has not completed it either. VMS is rock solid, almost too secure (better not forget or lose your login and permissions) and fast. It just needed applications for the PC. edit - typos

yattwood

I recall the feature you describe; I was called to one of the laboratories at the aerospace company I was working at, to install Oracle, and when I walked in - there was a DEC Alpha, running VMS. I had never used VMS in my life, much less install Oracle on VMS - fortunately, another DBA had the lifesaving book: "VMS for UNIX Users" - I looked up the familiar UNIX command, and there was what I needed for VMS. I recall that this versioning feature came in pretty handy, then. My current shop is Windows/AIX/Solaris/HP-UX; the closest thing to this feature that you've described is the snapshot feature on a Network Appliance....although the NetApp doesn't change the actual file extension, but that of the snapshot, and you are limited in the number of snapshots you can keep by the amount of disk and your snapshot reserve....

Neon Samurai

Well, I'll have to confirm if osX on the iPhone supports cut/paste without a third party utility installed. Desktops and servers all have no issue with that function regardless of installed platform.

zclayton3

Cut and paste is a matter of application implementation. The OS should be, and for the ones he mentioned is, transparent to the user for this. Cut and paste works in all of them, so I don't get the point or concern of your question.

TNT

Bring back the Be, I agree
Its multi-threading base leaves others in disgrace
It's quick and stable, and eminently capable

Neon Samurai

What distributions based on Linux were you trying to use with your i7? It's not impossible that the kernel has issues, but an Intel processor is a very strange place to have such an incompatibility. Maybe the X58 chipset, but that should affect extra functions rather than the core general functions. Here are Ubuntu 8.10 benchmarks; I'm not sure if there was anything special required to get it working. http://www.phoronix.com/scan.php?page=article&item=intel_core_i7&num=3 CentOS 4.7 and 5.2 and Red Hat Enterprise 5.2: http://www.cyberciti.biz/tips/linux-kernel-intel-core-i7.html eRack's Core i7 with around 33 available platforms including various Windows, BSDs and Linux based: http://eracks.com/products/config?sku=i7CORE&session=34 Do you have links to articles or forums where people are having trouble? What distribution are you trying personally? I'm curious about this, as I expected Google to return all kinds of forum posts for "problem running linux on core i7 nehalem".

JulesLt

Unix - including BSD and Linux variants - has spent decades running in what could be considered a very hostile environment (i.e. the world's computer science departments). Equally, back in the day, the only systems that were on-line tended to be mainframes and minis (generally running IBM or VMS OS, later Unix) - and many of these were hacked, despite their relative obscurity and the comparatively tiny number of skilled hackers with the required equipment. Windows - and the early Mac OS - were 'broken by design' in terms of security. Summary - they were originally designed for single user personal computers - Unix was designed, from the start, to support multiple users, while minimising the ability for one user to affect another. Maybe you should read up a little on operating system design, before mouthing off?

Neon Samurai

Do you understand security through obscurity? It does not relate to the number of attacks one is targeted with but to the actual security implementation itself. Security through obscurity is assuming your software is safer because people can't read its source code. They can't see how you wrote it, so it must be more secure. Except that this hides flaws from peer review, which can provide safer software by identifying them. It also has a very short lifespan, because as soon as someone finds a flaw, your protection through obscurity is finished. In this specific case, the software used to find such flaws works against the compiled program anyhow, so the obscurity of hiding your source code is not relevant. Another specific example is encryption, where the general idea is that good cryptography should remain unbroken given that the attacker knows everything except the actual key. This is security through visibility. If I know that AES is being used and how AES works, I still won't get the data because I don't have the encryption key. I can even have the stream of data and it's no good to me. By contrast, obscurity is trusting an encryption system which may have statistical flaws and weak keys known to the attacker, while the end user and peer review are blocked from identifying the problems in the mechanism. Market share is not a measure of software's technical qualities, such as how securely it is designed and how quickly it responds to discovered flaws. Obscurity within the market may be a measure of business success if we were discussing marketing, though. Any machine on the internet is getting the general attacks thrown at it. The important part is what the success rate of those attacks is. Any machine specifically targeted will get custom-crafted attacks based on its specific platform. Obscurity does not provide security. For your processor, Linus and co. have done it as you state, with kernel 2.6.28. The distributions may not have picked that up, so the blame falls to them for still shipping an older kernel. You can compile the later kernel yourself, of course, and there may even be general or specialized distributions which already include 2.6.28 or specific i7 support. Do you really want to compare the list of processor types that one kernel will run on while the other won't?
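
A rough illustration of the "security through visibility" point, as a small Python sketch using only the standard library (the message and key here are made up, not from the thread): HMAC-SHA256 is a completely public algorithm, yet knowing every detail of it does not let an attacker forge a valid tag without the key.

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is public -- no obscurity involved.
# Security rests entirely on the secrecy of the key.
key = secrets.token_bytes(32)

def sign(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer 10 dollars to account 42"
tag = sign(msg)
assert verify(msg, tag)                                    # key holder can authenticate
assert not verify(b"transfer 9999 dollars to account 42", tag)
# An attacker who knows the whole design still can't forge a valid tag
# without the key; that is security by design, not by obscurity.
```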

ruel24

No, I have Linux running fine on my Core i7 - several of them. That's not the point. When a user upgrades hardware, or a potential new user to the Linux platform with very new hardware attempts to install Linux, it can be a hit-and-miss ordeal. In order for me to install Linux on my machine, it has to have kernel 2.6.28 or newer. Now, take this discussion back about a year or so, when the Core i7 was launched and support for the chipset wasn't in the kernel yet. What's a user to do? However, a Windows XP disk bought in 2001 (an eternity in the tech field) can not only boot, but install on the system. The installation disk does not have chipset drivers for the X58 chipset, nor does Windows search the internet for drivers to install on the system. Yet, it works. Linus Torvalds and his army of developers from various companies around the world are absolutely brilliant. However, why is this still an issue after nearly 20 years since the Linux kernel was originally developed? It's becoming less of an issue as drivers and newer kernels are coming out with support at a much faster rate, thanks to the involvement of Intel, but it's still not there. Linux distros often release in 6-month or longer cycles. This means that when the next 6-core or 9-core Intel processor is released, with a newer chipset, there will be a lag time before you can install a Linux distro on the hardware. Now, there are workarounds, such as installing the system on another system or through a virtualization product, updating the kernel, then remastering the LiveCD to install it. It's been done and suggested by many to those who need a newer kernel than is available on the installation media of a given Linux distro. However, even when this new platform gets released, users will still be able to install Windows XP, or maybe even Windows 2000, on it, despite there being a total lack of built-in drivers and support for the chipset.

Thack

I know you weren't having a go at me, but I would just like to pick you up on this: "Windows...... [was] originally designed for single user personal computers." I'm not sure that is really true of the NT family of Windows (which we've all been buying since XP). Dave Cutler of VAX VMS fame was the architect of NT, and he was immensely experienced with "grown up" multi-user operating systems. I do agree, though, that there were some pretty big cock-ups post-Cutler as Microsoft moved NT forward. Like making the XP user default to being an Administrator! That single mistake must have caused an enormous amount of heartache for MS and their users.

ruel24

Security through obscurity simply means that something isn't visible enough to be attacked. It means that an OS isn't popular enough to be a big enough target to attack by hackers. The fact is that all operating systems other than Windows enjoy this status. Simply put, Windows gets attacked more often, and the attackers are getting more and more clever as they attempt it. They're familiar with the platform. Linux, the BSDs and other Unices including OS X, VMS... they all don't see the number of attempts, nor attacks of the level of complexity, that Windows does, because attackers aren't interested in writing code to attack an OS that has little market share, and they simply aren't as familiar with it. The commonality of Windows makes it a bigger target and rich with people who know their way around it. Am I claiming that Linux, Unix, or VMS is less secure? Not in any way. I'm saying that to make bold statements about an OS being so secure because there have been a limited number of successful attacks is naive, because there are also fewer attempts to attack it. As far as the kernel goes, you seem to have missed my point. I can boot a Windows XP disk I bought in 2001 on my current system. I don't need a newer kernel, despite the fact that that kernel predates my hardware by 8 years. Why can't the Linux kernel do the same? Why do we have to have a kernel new enough to support the hardware to boot on the hardware? This is a frustration among new users with new hardware. Like I said, if Microsoft can do it, then the excellent developers working on the Linux kernel should be able to do the same.

Neon Samurai

"Windows, with a kernel developed some 8 years before the new hardware was released, is still bootable and installable on it, despite having no driver support for the chipset. Yet, a much newer Linux distro, with a much newer kernel chokes." "Again, this is a shortcoming of Linux, and something the kernel should strive to be able to do in the future." We don't have enough evidence to state that it's a shortcoming of the kernel. That's why I ask if you have further information on the issue. Was it due to something lacking in the kernel or a kernel developer decision to exclude it? Is there something in the X58/i7 design that used a Windows only hardware call that had to be discovered and added to the Linux kernel? If Intel been testing it's processor against the Linux kernel as they very likely where testing it against the NT kernel, would they have adjusted something in the chip or provided the needed changes to the kernel developers previous to release? If it really is a design decision in Linux then I'll be right there beside you pointing at it and hoping the kernel developers can avoid it in future releases. If it is a design decision by Intel then I'd rather direct my complaints at them to avoid the issue being repeated in the future. I feel the same way about hardware drivers on my Windows box. If hardware vendor written drivers cause Windows to crash on me. I want to hold the hardware vendor responsible for writing better drivers not blindly blame Windows. If a third party application opens my Windows OS up to security vulnerabilities then I want to hold that third party developer responsible. If hardware instability or vulnerabilities are the result of something in Windows itself then I want to hold Windows responsible in hopes that it won't happen in the future. I can't simply say WinXP ran clean on X58/i7 hardware while Linux distributions released more recently didn't is evidence that Linux is broken. "Mandriva 2009.1 Spring is the latest and has X58 chipset support, but it won't be bootable on a newer chipset, when it arrives" This analysis is also premature. Chipset and processor have tended to be an area of stability for the Linux kernel. They build on top of industry standards so it's odd that they would cause issues. If every new chipset and processor update broke the kernel then this would be more accurate but X58/i7 being an issue does not insure that future shipsets are going to have issues. So, let's get back to actually understanding what the issue was before placing blame and predicting the future. Do you have links to article or forums discussing the X58/i7 issue which I can read to understand what information is already known including why there was an issue?

ruel24

I've got Mandriva, as well as Linux Mint, installed. I'm not trying to get any distro installed, per se, but just making a point about how Windows, with a kernel developed some 8 years before the new hardware was released, is still bootable and installable on it, despite having no driver support for the chipset. Yet, a much newer Linux distro, with a much newer kernel chokes. I just threw out specific releases of Linux distros to make the point that Ubuntu 5.10 was released several years ago, yet is newer than my copy of Windows XP by a few years, and Mandriva 2009.1 Spring is the latest and has X58 chipset support, but it won't be bootable on a newer chipset when it arrives. It could be any distro released in those time frames. I wasn't trying to install anything. I understand the point that you're trying to make, that Intel purposely builds the next generation of processors to be bootable by an older version of Windows, but it still doesn't explain why the Linux kernel can't tap into that very same resource. Again, this is a shortcoming of Linux, and something the kernel should strive to be able to do in the future.

Neon Samurai

You assume Windows had to be developed in some way that allowed it to install on any chipset, versus the chipset being developed specifically to work with Windows. I may be missing your point, which can easily be solved with the clarity you eventually provide. You may be ignoring the point that the kernel developers often have to provide support for hardware with absolutely no information from the hardware manufacturers. They can't give you bleeding-edge support in cases where they can't develop the support until after the hardware is on the market and into their labs. When they have been given the bare minimum, interface specs, support has developed very quickly (Bluetooth and USB2 both being examples). In general, the kernel can be run on any chipset and processor that uses industry standards. If it's x86 command set based then there should be no issue, so what is so different about the i7 processor and X58 chipset? The part that interests me is the question of why. Why will the i7 boot Windows XP where another OS had to adjust before it was supported by the processor? Maybe the fault is in the kernel, maybe Intel ensured the chip was compatible with Windows code not available in the open. I'm not willing to jump to either conclusion just to forward a personal opinion. So, to answer my first question in trying to clarify your issue: is it Mandriva 2009.1 that you are trying to install which will not boot? Can you direct me to forums and articles where this issue is discussed in further detail so I can read what information is already known? I'm running a humble Q6600 processor, so I've not had reason to look at support for other chips yet. Since you've got me curious about this now though, let's suspend final judgment and look for what the difference is and what was required to adjust to that difference.

ruel24

I just got the Core i7. It seems you still keep missing the point. The point is not whether there is support for the Core i7 in the Linux kernel, or whether it was there in 2.6.26. It doesn't matter. Try and boot Ubuntu 5.10 on a Core i7. It won't happen. Ubuntu 5.10 is newer than Windows XP circa 2001. However, That 8 year old Windows XP install disk will boot fine on that Core i7 chipset. It'll probably boot fine on Core i9 processors and the X68 (or whatever the name of it is). However, Mandriva 2009 Spring is NOT going to boot on it. It won't install...nothing. Intel did not provide any means of hardware support for a non-existent chipset when Windows XP was launched. The Windows XP disk DOES NOT have any drivers for it when it boots and installs just fine. However, not one single Linux distro prior to whatever kernel began supporting the chipset will even boot. Why not? Why can't the kernel developers make it behave in the very same way, where it can boot and install on virtually any chipset made, without any means of support from the chipset manufacturer, just as Microsoft has done? On future hardware releases, this will save users a lot of frustration.

Neon Samurai

"Also, I would like to see the kernel act a little like Windows, in that I can boot up a 2001 released version of Windows XP on my current system, which is based on the Intel X58 chipset with a Core i7 processor. I can't boot anything Linux that doesn't have, at least, kernel 2.6.28. Why not?" Isn't this saying that your current system is an x58 chipset with i7 processor and that winXP can boot on it but Linux based systems can't? Or, that Linux can only boot on it if you have kernel 2.6.28 or later? I'm seeing reports of 2.6.26 working with it along with several different distributions that happen to be based on the Linux kernel. What distribution is it that requires you to have 2.6.28 or newer? I'm actually trying to look into this further to understand the reason why that may be. If we're just going to "go back a year" and cherry pick examples then can I simply say that my Geforce 8800 doesn't work with win98 so therefor it's crap? If you go back a few years Linux ran perfectly well on ARM processors but winXP didn't; is that remotely relevant now though? "Linus Torvalds and his army of developers form various companies around the world are absolutely brilliant. However, why is this still an issue after nearly 20 years since the Linux kernel was originally developed?" Because they are required to provide support while wearing a blindfold and with both hands tied behind there back. When barred from seeing the hardware interface specs or being provided drivers by the manufacturer, they must reverse engineer hardware support. Where vendors have provided interface specs, support is there seamlessly or under fast development. Where vendors have provided drivers, support is there (nvidia). Where vendors have actually contributed to the open source drivers it's there or developing quickly (AMD/ATI). Where vendors have targeted only windows native support while ignoring all other platforms, suppor can only come as quick as it can be reverse engineer (broadcome wireless). The problem is not the kernel or Xorg developers but the hardware manufacturers. It's also not about an issue you had a year ago but which has already been resolved. The developers have all including begged for the minimum information to build in support for new hardware. Also, distributions, as seporate products provide different degrees of hardware support. If teh distribution you are using doesn't support your hardware then maybe you should look at one with better support. If Intel did the responsible thing and provided specs to the kernel developers at the same time they where developing the drivers for Windows; it would be a non-issue. But, let's blame the second class citizen kernel for the hardware vendor's disregard shall we? I am still interested though, a year ago what distribution and version was it you had such issues with?

kmdennis

>>>Are you saying that there are not buffer overruns to exploit in Linux or Unix? Are you saying that Linux and Unix are foolproof? You're naive... It's a simple fact that the more attempts at attacking an OS, the more likely someone will find a vulnerability and exploit it.

kmdennis

Well, I was about to jump into the fray here, but I see that you did a great job of clearly explaining the differences with really good examples. I am hoping that I do not read further and find he still argues. I really hope that cleared it up for him. Very well done!

FXEF

I run both Windows and Linux boxes on my home network and must say that Linux is more secure with less hassle. Both Linux and Windows are exposed to the _same_ malware and attackers. Needless to say, Linux has never been infected in any way; however, I can't say that for Windows XP. XP has weak security, but Vista is a little better due to the new structure and UAC. Linux is my OS of choice.

phill.erasmus

You know I did not say so - so why do you ask the question? You said, and I quote: "Security through obscurity simply means that something isn't visible enough to be attacked. It means that an OS isn't popular enough to be a big enough target to attack by hackers....." and I responded that "security through obscurity" is a popular myth used by Windows users to defend Windows, plus I pointed you to a URL to prove my point. So please do not make up your own stories as to what I have said but stick to my original response.

Neon Samurai

The number of attempts against a system is not a measure of its security; how it responds to those attempts is. OpenBSD is not an obscure operating system in the server space, yet it resists the pummeling it gets. Say we both have lovely buildings with honking big walls around them. I have 100 people trying to get over my wall and you have 1000 people trying to get over yours, on an hourly average. About 60 people (60%) get over my wall while 100 people (10%) get over your wall each hour. Your building and protective wall are more secure than mine. It's not about the number of attempts but the number of successful attempts. OpenBSD is not invulnerable, but one should consider that with it not being unpopular (servers, network appliances, firewalls, high-value targets like enterprise data centers), the lack of remotely exploitable vulnerabilities is a heck of a lot better than Windows, osX and the various Linux distributions showing higher successful breach rates. It would be naive to assume an OpenBSD system won't ever get broken into, but you can't ignore its security performance so far either.
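
The arithmetic behind the wall analogy, written out as a throwaway Python snippet (the numbers are simply the ones used in the analogy above):

```python
# Success rate, not raw attempt count, is the measure of security here.
walls = {
    "my wall":   {"attempts": 100,  "breaches": 60},
    "your wall": {"attempts": 1000, "breaches": 100},
}
for name, w in walls.items():
    rate = w["breaches"] / w["attempts"]
    print(f"{name}: {w['attempts']} attempts, {w['breaches']} breaches, "
          f"{rate:.0%} breached")
# my wall:   100 attempts,  60 breaches, 60% breached
# your wall: 1000 attempts, 100 breaches, 10% breached
# The wall facing ten times the attempts is still the more secure one.
```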

ruel24

Are you saying that there are not buffer overruns to exploit in Linux or Unix? Are you saying that Linux and Unix are foolproof? You're naive... No one ever stated that Windows is more or less secure than any other OS. I simply stated that someone is very naive to make such a bold statement that OpenBSD is so secure because of only 2 successful remote attacks. How did this thread even become a debate on who's more secure? It's a simple fact that the more attempts at attacking an OS, the more likely someone will find a vulnerability and exploit it. Nothing is perfect, and nothing is foolproof. I clearly stated that, by design, I feel that the proper user/administrator account system, in addition to the more comprehensive permissions system employed by every OS other than Windows, is inherently more secure by design than Windows. However, again... don't be smug and make bold claims of OS security when the number of attempts at attacking it is dwarfed by the number of attempts at attacking Windows.

phill.erasmus

This is a rubbish myth that is being used to defend Windows' weakness in being a target for attacks. I point you to the following website and quote part of their response to this myth: http://www.theregister.co.uk/2004/10/22/security_report_windows_vs_linux/#myth1 "Busting The Myths Myth: There's Safety In Small Numbers Perhaps the most oft-repeated myth regarding Windows vs. Linux security is the claim that Windows has more incidents of viruses, worms, Trojans and other problems because malicious hackers tend to confine their activities to breaking into the software with the largest installed base. This reasoning is applied to defend Windows and Windows applications. Windows dominates the desktop; therefore Windows and Windows applications are the focus of the most attacks, which is why you don't see viruses, worms and Trojans for Linux. While this may be true, at least in part, the intentional implication is not necessarily true..... This reasoning backfires when one considers that Apache is by far the most popular web server software on the Internet. According to the September 2004 Netcraft web site survey, [1] 68% of web sites run the Apache web server. Only 21% of web sites run Microsoft IIS. If security problems boil down to the simple fact that malicious hackers target the largest installed base, it follows that we should see more worms, viruses, and other malware targeting Apache and the underlying operating systems for Apache than for Windows and IIS. Furthermore, we should see more successful attacks against Apache than against IIS, since the implication of the myth is that the problem is one of numbers, not vulnerabilities. Yet this is precisely the opposite of what we find, historically. IIS has long been the primary target for worms and other attacks, and these attacks have been largely successful. The Code Red worm that exploited a buffer overrun in an IIS service to gain control of the web servers infected some 300,000 servers, and the number of infections only stopped because the worm was deliberately written to stop spreading. Code Red.A had an even faster rate of infection, although it too self-terminated after three weeks. Another worm, IISWorm, had a limited impact only because the worm was badly written, not because IIS successfully protected itself. Yes, worms for Apache have been known to exist, such as the Slapper worm. (Slapper actually exploited a known vulnerability in OpenSSL, not Apache). But Apache worms rarely make headlines because they have such a limited range of effect, and are easily eradicated. Target sites were already plugging the known OpenSSL hole. It was also trivially easy to clean and restore infected sites with a few commands, and without as much as a reboot, thanks to the modular nature of Linux and UNIX....."

Neon Samurai

What is the percentage of attempts versus successful attacks? What is the time between newly discovered vulnerabilities and distribution of the fix? What is the severity of discovered vulnerabilities? One can be pretty confident that people who have the skill to write new attacks for Windows can easily get an equally good understanding of other platforms. It's not that someone's going to write a buffer overflow for Windows and not be able to learn osX or Debian. People pointing out flaws in Windows with the hope that those flaws will be fixed are not doing so because they are only or more familiar with Windows. Given the developers that Microsoft has access to, the success rate of attacks should be much lower. I don't mean in terms of counting 100 cases of the same successful breach, I mean in terms of counting BreachA, BreachB, BreachC. I also don't mean OpenBSD is better because it's only had two remote vulnerabilities; I suggest that in markets where OpenBSD competes directly with Windows but is the majority share or the more popular target, it still resists the attempts. It would absolutely be naive to base a security assessment on one metric like how many exploits in its lifetime. It's equally naive to assume that the only reason for Windows' increased failure rate in resisting attacks is its popularity. Popularity does not relate to how secure a system is, only to how many attempts it receives. A secure system should be able to take a thousand different attempts a minute without that popularity making a higher success rate acceptable.

Neon Samurai

It's another matter of correct terminology. You earlier mentioned "professional criminals", which is absolutely right, then later suggest "obvious target for hackers and attackers". You've stated "attackers" already, so there is no need to lump in "hackers", who are primarily ethical, contrary to popular media like Fox News. This suggests that there is something inherently wrong with anyone who is enthusiastic about hacking away at their own thing, be it computers, cars, radios, gardening or cooking. I do get the overall idea though. I'd love to see more real-world tests like the Pwn2Own contest, where osX has proven less secure than both Linux and Windows. OpenBSD also has a proven security record, with something like only two remote exploits in its lifetime. This is a major platform that runs many serious servers connected to public networks. I simply see popularity as being related to the number of attempted attacks, where security is related to how a system resists and responds to those attacks. Obscurity I see as potential icing on top but not something to include as a core security mechanism. Microsoft actually provides a great description. I'd not have stumbled on this if I wasn't looking in response to the earlier posts, so I'm happy for this thread. "Security by obscurity, on the other hand, involves taking some measure that does not stop the attack vector but merely conceals it. For example, you may decide to move the Web server to port 81 instead of 80 so only those who know where to find your Web server will be able to do so. Or so that argument goes. In reality, moving your Web server to port 81 stops only some attacks, and mostly just inconveniences the end user. A competent intruder would simply run a port scanner and a Web banner grabber against a large number of ports to discover Web servers on non-standard ports. As soon as he finds one, he can fire off the exploit against your server because you did not actually eliminate the attack vector, you merely (temporarily) obscured it. Does this mean you should not even try? The answer is that it depends. As with everything else in the Information Security field, it all comes down to risk management. To understand the key factors to consider, we will take a quick look at a few more security by obscurity measures and then discuss one, renaming the Administrator account, in more detail." http://technet.microsoft.com/en-us/magazine/2008.06.obscurity.aspx?pr=msdnblog ".. because you did not actually eliminate the attack vector, you merely (temporarily) obscured it." I stick to something I've said before: obscurity is only valuable to the security of the attacker. When someone sneaks into your home, obscurity may delay their detection; it's misdirection or stealth. From the defender's point of view, your obscured home alarm system is useless unless it's the neighbor's kid that snuck in, because anyone else is going to take the time to look for how the system is installed. Obscurity provides you a very temporary and easily negated advantage. Hiding the house key under the flowerpot is not a good long-term plan. Once that advantage is gone, you're screwed unless you have some very real security mechanisms in place like strong encryption, a large mean dog or trained zombie ninja monkeys. In markets where non-Windows systems are more dominant, we're not seeing an increase in successful break-ins although there is an increase in popularity.

Actual security still comes down to the rate of successful attacks, not the overall number of attacks. Obscurity does not really apply at the user level either. All platforms have users that make bad decisions. Since the user groups are not evenly populated across platforms, we have to take an equally skilled population of users from each platform.

- Social engineering can't be helped; it's a user problem, be it fraudulent email, fraudulent websites or active contact over some other medium.
- How well do the platforms separate one user from another on the same system? We can compare this.
- How well do the platforms separate privilege levels? We can compare this.
- What further mechanisms are in place to protect the user from themselves, and how effective are those? Comparable also, as a form of UAC exists on all.

Again though, I don't see obscurity playing nearly as large a role overall. Obscurity is not about how secure a system is when attacked; it's limited to only how many attacks are attempted. It's not safer to use a more obscure system if that system is wide open to exploitation.

Thack

Yes, and it's probably worth pointing out that although most of the Internet's servers (and many servers belonging to big organisations) run some form of Unix, they are nearly all administered by highly technically savvy IT people. This is in distinct contrast to most home PCs. This is another reason why Unix is attacked less often - the attackers know they are up against people who know as much, or more, than the attackers themselves.

ruel24

Again, I did not claim that Unix and Linux are less secure. As a matter of fact, I think, particularly pre-Sudo (not a Sudo fan), Linux is far more inherently secure than Windows through its use of proper permissions and user/administrator accounts. However, nothing is foolproof. But to simply make a blanket statement that something is secure because of only 2 successful attacks, as in OpenBSD, is naive. Windows sees so many more attempts, and because there are so many people with malicious intent familiar with the platform, the attacks are both elegant and frequent. Other OSes have not seen this intensity of attacks, and therefore we should remain more muted in claims of just how secure they are.

Thack

We could end up arguing about the term "security through obscurity" and miss the point. My view is similar: we don't know how good OS X, Linux, etc, REALLY are in terms of security because it simply isn't worth the "professional criminals" investing that much time and effort into cracking them, when the vast majority of computers on line at any particular time run Windows. Furthermore, Windows is often run by people with less technical savvy than (say) Linux, so the attackers can be reasonably confident that a large number of Windows machines are using default settings in most areas. (That must be true of OS X, too, I would guess). Thus, Windows is the obvious target for hackers and attackers. I'm NOT claiming that OS X or Linux are any more, or less, secure than Windows. I'm simply saying nobody really knows, because they haven't been targeted with anything like the vigour of Windows.

Neon Samurai

"Security through obscurity (sometimes called security by obscurity) is a principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to provide security." http://en.wikipedia.org/wiki/Security_through_obscurity "which attempts to use secrecy (of design, implementation, etc.)" The engineer intended mechanism providing security is keeping how it secured secret. "no one can break my encryption because I didn't tell anyone what algorithm I'm using and I can't see any exploitable vulnerabilities in it." "It means that an OS isn't popular enough to be a big enough target to attack by hackers." "ism't popular enough" is a measure of business success within the market not of any technological attribute including security. Sealing your mobile phone inside a concrete block is not more secure than putting it in a backpack because there are fifty other backpacks with mobile phones in them. It's more secure because a concrete block is far less vulnerable to exploits (eg. unzipping the ziper, cutting the fabric). Given an equal number of attacks, the more secure item will be unsuccessfully attacked more often. A criminal hiding from police is obscurity, he's only safe until they look under the bed and spot him. By contrast, being caught red handed and claiming diplomatic immunity is much more secure because even after being caught, the police can't do anything. I also don't believe it's "hackers" who attack an OS because it's popular enough. I believe it's criminals who attack systems without prior permission. Hacking culture tends to be very respecting of personal property, privacy and licenses/laws. Security and computer enthusiasts are also one small part of many different areas of interest that people apply hacker mind set towards. If the better security of Linux, Unix or VMS is due to there being less attacks, what of the server market where these systems are not the minority but still retain the higher resistance to attacks? Shouldn't OpenBSD fall over if you sneaze at it what with it's security being due to unpopularity? This type of thinking ignores other important considerations like how quickly a platform responds to discovered security issues. I don't believe these systems are involnerable but historically they have responded faster than other platforms and are just as likely to continue the rappid response should popularity continue to increase. As for your processor; with Intel's general backward compatability I can't see why one platform would support it while the other does not especially if no third party driver install was required for Windows. One can't deny that it has the benefit of first pick when hardware vendors choose what platform to support. It's an unfair advantage but an advantage none the less. That was my mistake though, I though you where comparing newest platform versions. My curiosity needs to go do some reading on that issue though as my first guess would be an issue outside of the processor like the chipset being designed to play nice with Windows API calls rather than industry standard compatibility. With less backward compatible hardware the usual issue is a vendor who chose to support only one platform leaving the other platforms to reverse engineer support for the hardware interface. It's a reason I learned long ago to check platform support before buying hardware. (A processor though, odd)

j-mart

You don't actually work in the IT industry, so it's not surprising that you seem to have limited knowledge. If you wish to understand more about OSes and security, you should do some research and read up on some of the many blogs etc. by the TR regulars who make a career of securing computer systems.
