Maybe I’m just being cynical, but I fail to see why Apple’s decision to move to the Intel architecture is such a big deal. Let’s get real here, folks: it’s simply a change in CPUs.

OK, so maybe there’s more to it. Maybe the Macs will drop a touch in price, and/or get a bit faster. Personally, I’ve been wanting to get a Mac for a while, ever since I put together a server on BSD. I loved BSD’s reliability and speed, especially compared to the Windows Server 2003 Enterprise box that it replaced. Put simply, BSD rocks, and the idea of using a computer built on BSD, plus a usable GUI (call me crazy, but I think X Windows, at least in every X environment I’ve worked in, is total garbage to deal with, just a fancy way of handling multiple shell sessions), would be really nice. When Apple announced the Mac mini, I was extremely happy and started saving my pennies. Sure, I could get a pretty decent PC for the same money, but I’d still have the same PC problems. And frankly, the perceived lack of software for Macs doesn’t bother me too much, because my home PC doesn’t do anything that I can’t do on a Mac or a BSD system. So I was already prepared to take the Mac plunge. I’ve even downloaded installers for all of my common software, just waiting to FTP them over the moment I get the machine plugged in.

Am I holding off on getting that Mac mini because I want Intel? Heck no. I’m holding off for the same reason my home PC is an Athlon 1900+ with 256 MB of RAM: I have higher financial priorities at the moment. But I’m looking forward to it.

When Apple announced that they would be switching to Intel chips, everyone made such a big deal about it. I didn’t see the cause for the hoopla then, nor do I see it now. OK, maybe IBM didn’t have the roadmap for the G5 that Apple wanted, and maybe they were having some bad karma with Motorola. But at the end of the day, none of my Mac-using friends ever complained about having a slow Mac; they are all delighted with theirs. The current G5 chips are good enough for now, and moving to Intel is a simple business decision to keep the future as bright as the present.

So I started asking around, trying to find out what the big deal is. My Mac friends could not care less. As long as they’re using Mac OSX, they don’t care if there are ferrets running around inside the box delivering pieces of paper with ones and zeros written on them to each other. They just love the OS. My PC friends are all delighted because they have these grandiose dreams of dual booting.

Sorry folks, I’ve been down the dual-boot route. NT4 and Windows 95. OS/2 Warp and Windows 3.1. Windows 98 and BeOS (yes, I tried BeOS, loved it to death; no applications or drivers, sad to say). Indeed, BeOS was originally designed for the PowerPC, then ported over to x86 when Be couldn’t sell any of their boxes. I did Windows 98 and Windows 2000 for a while too. But at the end of the day, I always despised dual booting. Life is always more miserable when you dual boot. Many advantages of each OS are tied to its file system. NTFS is the cornerstone of NT/XP’s security system. HPFS was integral to OS/2 Warp. And I’m sure that HFS+ plays a large role in OSX’s capabilities. Yes, I’m aware that many of the OSes I’ve listed can read NTFS. But they can’t write to NTFS. No Microsoft OS reads or writes anything other than FAT16/32 and NTFS (actually, NT versions up through 3.51 could handle HPFS, if memory serves). The point is, you’re going to end up sticking a giant FAT32 partition somewhere in your system for your common data files, plus two more system partitions, one NTFS and one HFS+. And to be honest, I 100% hate that idea. I jump through hoops to have only one volume mounted in my system; I don’t like dealing with drive letters (I know that MacOSX doesn’t use drive letters). It’s a pain in the rear to have to figure out which directory a file goes into, not just based upon its contents, but also upon which directory or volume has space remaining.

Plus, dual booting is a huge waste of time and interrupts my workflow. People buy faster computer parts because they don’t want to wait thirty seconds to two minutes for an application to start. But with dual booting, waiting is exactly what you’ll be doing. Need an application that runs on the OS you’re not currently working in? Well, you get to stop EVERYTHING you are doing and reboot. Heaven help you if you miss the boot loader and end up in the wrong OS. Furthermore, isn’t one of the reasons we like our newer operating systems that we have to reboot less often? Every new version of Windows certainly advertises this as a selling point. No one enjoys having to drop everything because something (or a crash) requires a reboot.

OK, now there’s the hypervisor option (Microsoft’s Virtual PC, Xen, VMware, etc.). Call me silly, call me crazy, but a modern OS sucks up a good amount of RAM and CPU time, even the more efficient ones like the *nixes. Unless all of the OSes you’re running use microkernels, or aren’t doing much of anything, you can count on having to double your RAM and increase your CPU requirement by at least 25% in order to have two OSes running on the same hardware simultaneously. And if one OS regularly runs CPU-intensive tasks while you work in the other, go ahead and increase that CPU power by 50% to 100%. Well, what if you are already running a top-of-the-line CPU, or close to it? I guess you’ll just have to suffer performance way below what a single OS would deliver on the same hardware. Not to mention hard drives. Unless you want to have two hard drives (in a hypervisor situation, where each OS has its own volume plus a shared volume for common data, I’d recommend three), each OS is going to be trying to read and write at opposite ends of the disk simultaneously. Hooray. In terms of speed, it’ll be like deliberately forcing myself to use tons of swap space. And we still haven’t overcome the file system issue. I hope you enjoy having all of your data readable by anyone who gets access to your system, because all of that shared data will be sitting on a FAT32 partition. Not to mention that FAT32 is just about the least efficient file system out there, unless you count FAT16. Not to mention the lack of nifty features like NTFS’s built-in compression and encryption, or HFS+’s metadata support (a big selling point for MacOSX), journaling, hard and symbolic links, etc. Or alternatively, you could just have everything on NTFS, and the MacOSX system wouldn’t be able to write to it. Just what I’ve always wanted: a 250 GB CD-ROM.
My other option (probably the best one, sad to say) is to keep the common file system in a native format for one of the two OSes and share it via SMB. Yucky, but at least both OSes will be able to read and write the data, and it retains some of its native file system benefits. So let’s add up this hypervisor nonsense. You are almost doubling your hardware power to achieve the same results, and the only things you’re actually sharing are the peripherals and the optical drive. At that point, doesn’t it almost make sense (except for power consumption) to get a separate Mac and PC plus a KVM switch, purely from a cost standpoint? Going to the high end of the CPU chain (and a motherboard to support all of that RAM and the fancy CPU) is about as expensive as having the two machines sitting next to each other.
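To put rough numbers on the hypervisor math above, here is a tiny sketch. The multipliers are just the rules of thumb I quoted (double the RAM; 25% more CPU normally, 50% more when the second OS is busy), not measured figures:

```python
# Back-of-the-envelope sizing for running two OSes side by side under
# a hypervisor. The multipliers are rough rules of thumb, not
# benchmarks: double the RAM, plus 25-50% more CPU.
def hypervisor_sizing(ram_mb, cpu_ghz, guest_busy=False):
    ram_needed = ram_mb * 2                   # roughly double the RAM
    cpu_factor = 1.5 if guest_busy else 1.25  # 25% idle-ish, 50% busy
    return ram_needed, cpu_ghz * cpu_factor

# A 512 MB / 2.0 GHz single-OS box, with the second OS mostly idle:
print(hypervisor_sizing(512, 2.0))                     # (1024, 2.5)

# The same box when the second OS does regular CPU-intensive work:
print(hypervisor_sizing(512, 2.0, guest_busy=True))    # (1024, 3.0)
```

Either way, you are buying close to two machines’ worth of hardware to get one machine’s worth of responsiveness.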

Even with OSX running natively on Intel hardware, the applications are not running natively on Intel hardware. They are being emulated, via “Rosetta”. I’m not a big fan of CPU emulation; even when it isn’t buggy, it is still slow. Quite frankly, I’d rather have a Mac whose hardware is 10% slower but runs native apps than a Mac with hardware 10% faster that has to emulate another chip for 80% of my apps. That’s just common sense.
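That preference is easy to sanity-check with a weighted average. The 50% emulation penalty below is a hypothetical figure for illustration, not a Rosetta benchmark:

```python
# Effective throughput as a weighted average: native apps run at full
# hardware speed, emulated apps at hardware speed times a penalty
# factor. The 0.5 penalty is a made-up illustrative number.
def effective_speed(hw_speed, native_share, emu_penalty):
    emulated_share = 1.0 - native_share
    return hw_speed * (native_share + emulated_share * emu_penalty)

# Hardware 10% slower, but everything runs natively:
print(round(effective_speed(0.9, native_share=1.0, emu_penalty=0.5), 2))  # 0.9

# Hardware 10% faster, but 80% of apps are emulated at half speed:
print(round(effective_speed(1.1, native_share=0.2, emu_penalty=0.5), 2))  # 0.66
```

Under those assumed numbers, the “slower” all-native machine comes out well ahead.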

Oh yeah, and there’s one more catch: MacOSX x86 will only run on Apple hardware. I’m sure there will be XP drivers for that hardware soon enough; that’s not a concern. But do you honestly think that Apple will stop charging the “Apple Tax” just because they’ve switched to Intel? Sure, G5s are more expensive than x86 chips on a pound-for-pound basis, but not nearly by the same ratio that a Mac is more expensive than an equivalent machine. Compare the Mac mini to some of the low-priced options from Dell and eMachines/Gateway. The Mac mini costs about 20% more, and typically comes with fewer goodies. So yes, the price of a Mac will come down, but by what? $50? Maybe $100? It still puts a PowerMac or even an iBook out of the price range of mortal men. It makes the Mac mini and the eMac slightly more affordable, and that’s it.

But all of my eagerly waiting pals say, “But I won’t use Apple’s hardware. I’m sure someone will release a ‘patch’ to let me run MacOSX x86 on my existing hardware, and someone will have drivers.” Good luck, my friend. First of all, I’m not a big fan of ripping a company off. The profits that Apple makes from their overpriced hardware directly support their continued development of OSX. Deprive Apple of their R&D budget by not buying their hardware, and either the price of OSX goes up (for an operating system that already charges you for minor version upgrades), or they put less money into developing it. Furthermore, if there is one thing I’m very particular about, it’s hacking the internals of my operating system and messing with my device drivers. This is the kind of thing that leads to OS instability. I’m not a big fan of OS instability; otherwise I might still be using Windows 98, which would run a heck of a lot faster than XP does for me. This is why I avoid third-party “system tweak tools” like the plague. This is why I don’t let spyware or rootkits onto my system. This is why I don’t upgrade my drivers unless I’m actually having a problem, or unless the new drivers support something I desperately need. This is why I avoid real-time virus scanning. I avoid these things because an operating system is under enough stress as it is, without some bonehead messing with its internals. Furthermore, is someone who hacks an operating system to make it run in violation of its license agreement someone I trust to give me an otherwise clean and unmodified OS? I think not. People downloading warez and MP3s through P2P services like Gnutella and BitTorrent are getting hammered by viruses, spyware, and the rest. Someone who goes through the effort of cracking an installer could just as easily throw something nasty in there for you as well. I would not trust my OS to come from such a source, and neither should you.

So at the end of the day, where are we? Effectively, using Mac OSX on Intel architecture won’t be any different than it is today. It won’t be much faster, if you want to use PC apps you should still have a PC sitting right next to it, and you’ll still be paying through the nose for Apple’s hardware. As excited as I am to get onto OSX as soon as my wallet allows, I don’t see how this gets me there any differently or any faster.

As a parting shot, to all of those who were actually surprised that Apple had an x86 version in the works, I simply point you to the “Ask the developers” page on Apple’s site (note the date when that page was put together: 2001). Also take a look at the source code tree. Darwin (the underlying OS) has been available on x86 architecture since Day 1. Sure, the GUI isn’t in there, but the OS itself is half the battle. Microsoft shipped PPC versions of NT 3.51 and NT 4 (they also shipped MIPS and Alpha versions!). The new Xbox 360 runs on a PPC chip, on an OS descended from the modified Windows 2000 that powered the original (x86) Xbox. Every OS manufacturer out there keeps a separate port tree for CPUs they don’t officially support; it’s common practice, and a smart one too. It leaves their options open in a way that’s a lot cheaper than suddenly finding themselves without a CPU to build on. Plus, it gives them leverage with the hardware folks.

At the end of the day, no matter how I slice and dice it, I simply fail to see why OSX x86 is such a big deal. Yes, Intel chips are better on power usage, a win for the laptop users out there (by the way, has anyone noticed how far battery technology lags behind power usage?). Yes, Intel has a better roadmap than IBM has for the G5 line. My heart rate might have gone up for a split second if Apple had announced a switch to AMD64 technology, but they didn’t. There is nothing to get worked up over here, and this is certainly not a world-changing event. If anyone can explain in a reasonable way why this is actually worth getting excited about, please let me know; I’d be grateful to concede defeat.