Critical Thinking

By Justin James Contributor ·
Tags: Off Topic

This conversation is currently closed to new comments.

All Comments

Intel Macs, what's the big deal?

by Justin James Contributor In reply to Critical Thinking

<p>Maybe I'm just being cynical, but I fail to see why Apple's decision to move to the Intel architecture is such a big deal. Let's get real here, folks: it's simply a change in CPUs.</p>
<p>OK, so maybe there's more to it. So the Macs may drop a touch in price, and/or get a bit faster. Personally, I've been wanting to get a Mac for a while, ever since I put together a server on BSD. I loved BSD's reliability and speed, especially compared to the Windows Server 2003 Enterprise box that it replaced. Put simply, BSD rocks, and the idea of using a computer built on BSD, plus a <em>usable</em> GUI (call me crazy, but I think X Windows, at least in every X environment I've worked in, is total garbage to deal with, just a fancy way of handling multiple shell sessions), would be really nice. When Apple announced the Mac mini, I was extremely happy, and started saving my pennies. Sure, I could get a pretty decent PC for the same money, but I'd still have the same PC problems. And frankly, the perceived lack of software for Macs doesn't bother me too much, because my home PC doesn't do anything that I can't do on a Mac or a BSD system. So I was already prepared to take the Mac plunge. I've even downloaded installers for all of my common software, just waiting to FTP them over the moment I get it plugged in.</p>
<p>Am I holding off on getting that Mac mini because I want Intel? Heck no. I'm holding off for the same reason my home PC is an Athlon 1900+ with 256 MB RAM: I have higher financial priorities at the moment. But I'm looking forward to it.</p>
<p>When Apple announced that they would be switching to Intel chips, everyone made such a big deal about it. I didn't see the cause for the hoopla then, nor do I see it now. OK, maybe IBM didn't have the roadmap for the G5 that Apple wanted, and maybe they were having some bad karma with Motorola. But at the end of the day, none of my Mac-using friends ever complained about having a slow Mac; they are all delighted with theirs. The current G5 chips are good enough for now, and moving to Intel is a simple business decision to keep the future as bright as the present.</p>
<p>So I started asking around, trying to find out what the big deal is. My Mac friends could not care less. As long as they're using Mac OSX, they don't care if there are ferrets running inside the box handing each other pieces of paper with ones and zeros written on them. They just love the OS. My PC friends are all delighted because they have these grandiose dreams of dual booting.</p>
<p>Sorry folks, I've been down the dual boot route. NT4 & Windows 95. OS/2 Warp and Windows 3.1. Windows 98 & BeOS (yes, I tried BeOS, loved it to death, no applications or drivers, sad to say). Indeed, BeOS was originally designed for the PowerPC, then ported over to x86 when Be couldn't sell any of their boxes. I did Windows 98 and Windows 2000 for a while too. But at the end of the day, I always despised dual booting. Life is always more miserable when you dual boot. Many advantages of each OS are tied to the file system. NTFS is the cornerstone of NT/XP's security system. HPFS was integral to OS/2 Warp. And I'm sure that HFS+ plays a large role in OSX's capabilities. Yes, I'm aware that many of the OS's I've listed can read NTFS. But they can't write to NTFS. No Microsoft OS reads or writes to anything other than FAT16/32 and NTFS (actually, NT 4 may have been able to handle HPFS if memory serves). The point is, you're going to end up sticking a giant FAT32 partition somewhere in your system for your common data files, plus two more system partitions, one for NTFS and one for HFS+. And to be honest, I 100% hate that idea. I jump through hoops to have only one volume mounted in my system; I don't like dealing with drive letters (I know that MacOSX doesn't use drive letters). It's a pain in the rear to have to figure out which directory a file goes into, not just based upon its contents, but also upon which directory or volume or whatever has space remaining.</p>
<p>Plus, dual booting is a huge waste of time and interrupts my workflow. People buy faster computer parts because they don't want to wait thirty seconds to two minutes starting an application. But with dual booting, this is exactly what you'll be doing. Need an application that runs on the OS you're not currently working with? Well, you get to stop EVERYTHING you are doing and reboot. Heaven help you if you miss the boot loader and end up in the wrong OS. Furthermore, isn't one of the reasons we like our newer operating systems that we have to boot less often? Every new version of Windows certainly advertises this as a selling point. No one enjoys having to drop everything because something (or a crash) requires a reboot.</p>
<p>OK, now there's the hypervisor option (Microsoft's Virtual PC, Xen, VMware, etc.). Call me silly, call me crazy, but a modern OS sucks up a good amount of RAM and CPU time, even the more efficient ones like the *nixes. Unless all of the OS's you're running are using microkernels, or aren't doing much of anything, you can count on having to double your RAM and increase your CPU requirement by at least 25% in order to have two OS's running on the same hardware simultaneously. And if you have one OS running CPU-intensive activities on a regular basis while you work on the other OS, you should go ahead and increase that CPU power by 50% to 100%. Well, what if you are already running a top-of-the-line CPU, or close to it? I guess you'll just have to suffer performance way below what a single OS would deliver on the same hardware. Not to mention hard drives. Unless you want to have two hard drives (in a hypervisor situation, where each OS has its own volume, and a shared volume for common data, I'd recommend three drives), each OS is going to be trying to read and write from opposite ends of the disk simultaneously. Hooray. It'll be like deliberately making myself use tons of swap file space in terms of speed. And we still haven't overcome the file system issue. I hope you enjoy having all of your data readable by anyone who gets access to your system, because all of that shared data will be on a FAT32 partition. Not to mention that FAT32 is just about the least efficient file system out there, unless you count FAT16. Not to mention the lack of nifty features like NTFS's built-in compression and encryption, or HFS+'s metadata support (a big selling point for MacOSX), journaling, hard and symbolic links, etc. Or alternatively, you could just have everything in NTFS, and the MacOSX system wouldn't be able to write to it. Just what I've always wanted, a 250 GB CD-ROM disk. My other option (probably the best one, sad to say) is to have the common file system be in a native format for one of the two OS's, and then share it via SMB. Yucky, but at least both OS's will be able to read and write to the data, and maintain some of its native file system benefits. So let's add up this hypervisor nonsense. You are almost doubling your hardware power to achieve the same results, and the only things you're sharing are the peripherals and optical drives. At that point, doesn't it almost make sense (except for power consumption), from a cost standpoint, to get a separate Mac and PC with a KVM switch? Going to the high end of the CPU chain (and the motherboard to support all of this RAM and the fancy CPU) is about as expensive as having the two machines sitting next to each other.</p>
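<p>To put a rough number on that sizing rule, here is a minimal sketch in Python of the arithmetic I'm describing above; the baseline figures are assumptions for illustration, not benchmarks of any particular machine.</p>
<pre>
# Rough sizing illustration for hosting two OS's at once, following the
# rule of thumb above: double the RAM, and add 25%-100% more CPU
# depending on how busy the second OS is. Baselines are assumptions.

SINGLE_OS_RAM_MB = 512      # assumed comfortable RAM for one OS plus apps
SINGLE_OS_CPU_GHZ = 2.0     # assumed CPU budget for one OS plus apps

def sizing_for_two(second_os_busy):
    """Return (ram_mb, cpu_ghz) needed to run two OS's side by side."""
    ram = SINGLE_OS_RAM_MB * 2                    # each OS wants its own working set
    cpu_factor = 2.0 if second_os_busy else 1.25  # +100% if the other OS is crunching, else +25%
    return ram, SINGLE_OS_CPU_GHZ * cpu_factor

print(sizing_for_two(second_os_busy=False))  # (1024, 2.5)
print(sizing_for_two(second_os_busy=True))   # (1024, 4.0)
</pre>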
<p>Even with OSX running natively on Intel hardware, the existing PowerPC applications are not running natively on Intel hardware. They are being emulated (or, more precisely, translated on the fly) via "Rosetta". I'm not a big fan of emulating another chip; even when it isn't buggy, it is still slow. Quite frankly, I'd prefer a Mac that is 10% slower on the hardware running native apps than a Mac running 10% faster emulating another chip for 80% of my apps. That's just common sense.</p>
<p>Oh yeah, and there's one more catch: MacOSX x86 will only run on Apple hardware. I'm sure that there will be XP drivers for this hardware soon enough, that's not a concern. But do you honestly think that Apple will stop charging the "Apple Tax" just because they've switched to Intel? Sure, G5s are more expensive than x86 chips on a pound-for-pound basis, but not nearly by the same ratio that a Mac is more expensive than an equivalent machine. Compare the Mac mini to some of the low-priced options from Dell and eMachines/Gateway. The Mac mini costs about 20% more, and typically comes with fewer goodies. So yes, the price of a Mac will come down, but by what? $50? Maybe $100? It still puts a PowerMac or even an iBook out of the price range of mortal men. It makes the Mac mini and the eMac slightly more affordable, and that's it.</p>
<p>But all of my eagerly-waiting pals say, "but I won't use Apple's hardware, I'm sure someone will release a 'patch' to let me run MacOSX x86 on my existing hardware, and someone will have drivers." Good luck, my friend. First of all, I'm not a big fan of ripping a company off. The profits that Apple makes from their overpriced hardware directly support their continued development of OSX. Deprive Apple of their R&D budget by not buying their hardware, and either the price of OSX goes up (the operating system that charges you for minor version upgrades as it is), or they put less money into developing it. Furthermore, if there is one thing I'm very particular about, it's stuff like hacking the internals of my operating system and messing with my device drivers. This is the kind of thing that leads to OS instability. I'm not a big fan of OS instability, otherwise I might still be using Windows 98, which would run a heck of a lot faster than XP does for me. This is why I avoid third-party "system tweak tools" like the plague. This is why I don't let spyware or rootkits on my system. This is why I don't upgrade my drivers unless I'm actually having a problem, or unless the driver supports something I desperately need to do. This is why I avoid real-time virus scanning. All of this is because an operating system is under enough stress as it is, without some bonehead messing with its internals. Furthermore, is someone who hacks an operating system up to make it run in violation of its license agreement someone I trust to give me an otherwise clean and unmodified OS? I think not. People downloading warez and MP3s through P2P services like Gnutella and BitTorrent are getting hammered by viruses, spyware, and the like. Someone who goes through the effort of cracking an installer could just as easily throw something nasty in there for you as well. I would not trust my OS to come from such a source, and neither should you.</p>
<p>So at the end of the day, where are we? To effectively use Mac OSX on the Intel architecture, it won't be any different than it is today. It won't be too much faster, if you want to use PC apps you should still have a PC sitting right next to it, and you'll still be paying through the nose for Apple's hardware. As excited as I am to get onto OSX as soon as my wallet allows, I don't see how this gets me there any differently or faster.</p>
<p>As a parting shot, to all of those who were actually surprised that Apple had an x86 version in the works, I simply point you to the <a href="http://developer.apple.com/darwin/news/qa20010927.html#x86">"Ask the developers" page</a> on Apple's site (<a href="http://developer.apple.com/darwin/news/DarwinQA_TOC.html">note the date of when this page was put together</a>, 2001). Also take a look at the <a href="http://www.opensource.apple.com/darwinsource/">source code tree</a>. Darwin (the underlying OS) has been available on x86 architecture since Day 1. Sure, the GUI isn't in there, but the OS itself is half the battle. Microsoft shipped PPC versions of NT back in the NT 3.51 and NT 4 days (NT also ran on MIPS and Alpha). The original Xbox runs what is admittedly a modified version of Windows 2000 on an x86 chip, and the new Xbox 360 moves Microsoft's console OS over to a PPC chip. Every OS manufacturer out there keeps a separate port tree for CPUs they don't officially support; it's common practice, and a smart one too. It leaves their options open in a way that's a lot cheaper than if they suddenly find themselves without a chip to build on anymore. Plus, it gives them leverage with the hardware folks.</p>
<p>At the end of the day, no matter how I slice and dice it, I simply fail to see why OSX x86 is such a big deal. Yes, Intel chips are better on power usage, a win for the laptop users out there (BTW, has anyone noticed how battery technology lags so far behind power usage?). Yes, Intel has a better roadmap than IBM has for the G5 line. My heart rate might have gone up for a split second if Apple had announced that they were switching to AMD64 technology, but they aren't. There is nothing to be worked up over here, and this is certainly not a world-changing event. If anyone can explain in a reasonable way why this is actually worth getting excited about, please let me know; I'd be grateful to concede defeat.</p>

Intel Macs, what's the big deal?

by SBD In reply to Intel Macs, what's the bi ...

Intel Macs, what's the big deal?

by salmonslayer In reply to Intel Macs, what's the bi ...

One word -- KVM (okay, so actually it isn't a word but an acronym).

I have also gone down the dual-boot road, and ran into the same roadblocks. For a while I had three OSs on my primary workstation (Linux, OS/2 and Windows 98). I rarely ran OS/2 -- I had some great graphics programs (and still think DeScribe is one of the best word processors out there) but it took too much time to shut down, restart, boot into another OS and then do the work. OS/2 was the first to go. I managed to get a second computer relatively cheap, so now I have two systems connected via a KVM switch. One has Windows XP and one has Linux, and both are networked. The best thing is that I can bounce back and forth with a simple keystroke, can access files from either system, and generally use the best of both worlds. This makes life considerably easier than the old days, and sometimes I really wonder why I bothered with dual-booting at all.

Thin Computing Is Rarely "Thin"

by Justin James Contributor In reply to Critical Thinking

<p>For some reason, a huge number of people on ZDNet, both in TalkBacks and in articles (Mr. Berlind, are you reading?) confuse a central file server with "thin computing".</p>
<p><a href="http://blogs.zdnet.com/BTL/index.php?p=2135&tag=nl.e539">Mr. Berlind's example situation</a> (a really bad one, at that) is simply using a web browser as a thick client application to access a central file server. Especially in the AJAX world, clients get <em>thicker</em>, not <em>thinner</em>! For example, I have been doing some writing on this website right here. Their blogging software puts such a heavy demand on my system at home that it was taking up to thirty seconds for text to appear on the screen. Each keystroke turned my cursor into the "arrow + hourglass" cursor. Granted, my computer at home is no screamer (Athlon 1900+, 256 MB RAM), but that is what AJAX does: it requires an extraordinarily thick client to run. If I compare that AJAXed system to, say, using MS Word (a "thick client" piece of software) or SSHing to a BSD box and running vi (a true "thin client" situation), the AJAX system comes out dead last in terms of CPU usage. And after my system does all of the processing for formatting and whatnot, what happens? It stores the data (via HTTP POST) to a central server, which performs a bit of error checking and then makes a few entries into a database table somewhere. If I compare the CPU usage on my system by the AJAX interface to the CPU usage of the server to file my entry, it sure doesn't look like a "thin client/thick server" situation to me! It looks a heck of a lot closer to a "thick client/dumb file server" story.</p>
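<p>To make the point concrete, here is a minimal sketch (in Python; the field names and database layout are assumptions for illustration, not how this site's blogging software actually works) of the <em>entire</em> server-side job of filing a comment: read the POST, do a bit of error checking, and write one row to a database. Every bit of rendering and AJAX busywork stays on the client.</p>
<pre>
# Minimal sketch of the "thick server" side of an AJAX blog post:
# accept an HTTP POST, do trivial validation, insert one row into a
# database. Field names and schema are illustrative assumptions.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

db = sqlite3.connect("comments.db")
db.execute("CREATE TABLE IF NOT EXISTS comments (author TEXT, body TEXT)")

class CommentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode("utf-8"))
        author = fields.get("author", [""])[0].strip()
        body = fields.get("body", [""])[0].strip()
        if not author or not body:        # the "bit of error checking"
            self.send_response(400)
            self.end_headers()
            return
        db.execute("INSERT INTO comments VALUES (?, ?)", (author, body))
        db.commit()
        self.send_response(200)           # the client does all of the rendering
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CommentHandler).serve_forever()
</pre>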
<p>Some of the comments to this story are already making this mistake. "Oh, I like the idea of having all of my data on a central server." So do I. This is how everyone except for small businesses and home users has been doing it for <em>decades</em>. The fact that most business people stick all of their documents onto their local hard drives is due to a failure of the IT departments to do their jobs properly, for which they should be fired. Why are people in marketing sticking their data on the local drive instead of on the server? Anyone who saves data locally and loses it should be fired too, because if their IT department did the job properly, this would not be possible without some hacking going on. The correct setup (at least in a Windows network, which is what most people are using, even if it's *Nix with Samba on the backend) should be that there is a policy in place which sets "My Documents" to a network drive. This network drive gets backed up every night, blah blah blah. The only things that go onto the local drive should be the operating system, software that needs to be stored locally (or is too large to carry over the network in a timely fashion), and local settings. That's it. And if someone's PC blows up, you can just flash a new image onto a drive and they're up and running again in a few minutes.</p>
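<p>On a real network this is done centrally through Group Policy folder redirection; purely as an illustration of the idea, the Python sketch below (Windows only, with a made-up UNC path) points the per-user "My Documents" shell folder at a network share.</p>
<pre>
# Illustration only: point "My Documents" at a network share by
# rewriting the per-user shell folder value. In practice this belongs
# in Group Policy folder redirection, not a script. The UNC path is a
# made-up example.
import winreg

KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
NETWORK_HOME = r"\\fileserver\home\%USERNAME%"   # hypothetical file server

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY, 0, winreg.KEY_SET_VALUE) as key:
    # REG_EXPAND_SZ so %USERNAME% expands per user at logon
    winreg.SetValueEx(key, "Personal", 0, winreg.REG_EXPAND_SZ, NETWORK_HOME)
</pre>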
<p>Now, once we get away from standard IT best practices, what is "thin client computing"? We're already storing our data on a dumb central server, but doing all of the processing locally. Is AJAX "thin computing"? Not really, since the client needs to be a lot thicker than the server. Is installing Office on a central computer but running it locally "thin computing"? Not at all. Yet people (Mr. Berlind, for starters) seem to think that storing a Java JAR file or two on a central server, downloading it "on demand" and running it locally is thin computing.</p>
<p>Thin computing does not occur until the vast majority of the processing occurs on the central server. That is it. Even browsing the web is not "thin computing". Note that a dinky little Linux or BSD server can dole out hundreds or thousands of requests per minute. I challenge you to have your PC render hundreds or thousands of web pages per minute. Indeed, even a Windows 2003 server can process a hundred complex ASP.Net requests in the amount of time it takes one of its clients to render one of those requests on the screen. I don't call that "thin computing".</p>
<p>Citrix is "thin computing". Windows Terminal Services/Remote Desktop is "thin computing". WYSE green screens are "thin computing". X Terminals (if the "client" {remember, X Windows has "client"  and "server" backwards} is not local) are "thin computing. Note what each one of these systems have in common. They are focused on having the display rendered remotely, then transferred bit-by-bit to the client, which merely replicates what is rendered by the server. All the client does is transfer the user's input directly to the server, which then sends the results of that input to the client, which renders the feedback as a bitmap or text. None of these system require any branching logic or computing or number crunching or whatever by the client. That is thin computing.</p>
<p>Stop confusing a client/server network with "thin computing". Just stop. Too many articles and comments that I have seen do this. They talk about the wonders of thin computing, like being able to have all of the data in a central repository to be backed up, or have all of my user settings in a central repository to follow me wherever I go, or whatever. I really don't see anyone describing anything as "thin computing" that doesn't already exist in the current client/server model. The only thing that seems to be changing is that people are leaving protocols like NFS and SMB for protocols like HTTP. It's a network protocol, folks. Wake up. It's pretty irrelevant how the data gets transferred over the wire, or what metadata is attached or whatever. It does not matter. All that matters is what portion of the processing occurs on the client versus the server, and in all of the situations people are listing, the client is still doing the heavy lifting.</p>

Thin Computing Is Rarely

by Jay Garmon Contributor In reply to Thin Computing Is Rarely ...

Preach on, brother!

Thin Computing Is Rarely

by jdgeek In reply to Thin Computing Is Rarely ...

<p>OK, can we use the term pseudo-thin without raising your ire? There is certainly a noteworthy difference between a computing environment that relies on standards-compliant web clients as the one managed application versus having a separate client for each task. Maybe thin versus thick is not the best description of that difference. Pseudo-thin may not be truly thin, but it is at least thinner on the administrative side, and arguably thinner on the client side. It seems the real difference is in the network intelligence. In pseudo-thin, not only the data but also the logic comes from the server. In this way pseudo-thin is like truly thin, even if the interpretation and rendering take more processing horsepower on the client.</p>
<p>Instead of belittling others, why don't you suggest new terminology? You might go down in history as the guy who first identified the dumb thick client. Dumb... thick... wait, I've got it! It's the Anna Nicole client.</p>

Thin Computing Is Rarely

by Justin James Contributor In reply to Thin Computing Is Rarely ...

<p>Sure, we can say "pseudo-thin". :) You make a very good point, that with a web application, all of the logic is stored and maintained on the central server, even if the execution of the logic occurs on the client's side. That is a large benefit of a web application (as is the fact that the application can be accessed from a wide variety of clients, although cross-platform compatibility is still a huge issue). You can also do the same thing, however, through a centrally managed "push" system like Microsoft Systems Management Server. To me, there really isn't too much difference on a logical basis between an application whose installation gets pushed by a central server to a client and is run from the client's local system when needed, and an application that gets pulled by the client when wanted. They both have their advantages and disadvantages. An application push can send an incredibly rich, native application, and only needs to do it once. Once the initial push is over, the client doesn't need to interact with that server anymore. A pull, on the other hand, forces the application to be relatively lightweight. Imagine trying to pull MS Word or OpenOffice down the pipe every time you want to use it... that would be pretty miserable. Sure, you could cache it, but at that point, the hardware resources needed for a push and a pull are the same, and the two become nearly indistinguishable in terms of functionality, except that with a cached pull model, the break in workflow occurs the first time you try to use the application, whereas a push model interrupts you (or, hopefully, works in the background) at random times like bootup.</p>
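<p>For what it's worth, the "cached pull" model is easy to picture in code. The sketch below (Python; the server URL, file names, and cache location are all made up for illustration) checks the central server for a newer version, downloads the application only when the cached copy is stale, and then runs it locally, which is exactly where it starts to look like a push that happens on first use instead of at bootup.</p>
<pre>
# Sketch of a "cached pull" deployment: fetch the app from a central
# server only when the cached copy is out of date, then run it locally.
# URLs, file names, and paths are illustrative assumptions.
import subprocess
import urllib.request
from pathlib import Path

SERVER = "http://appserver.example.com/tools/editor"   # hypothetical central server
CACHE = Path.home() / ".appcache" / "editor"

def cached_pull_and_run():
    CACHE.mkdir(parents=True, exist_ok=True)
    remote = urllib.request.urlopen(SERVER + "/version.txt").read().decode().strip()
    version_file = CACHE / "version.txt"
    local = version_file.read_text().strip() if version_file.exists() else None

    if local != remote:                                  # cache is stale: pull once
        urllib.request.urlretrieve(SERVER + "/editor.exe", str(CACHE / "editor.exe"))
        version_file.write_text(remote)

    subprocess.run([str(CACHE / "editor.exe")])          # run locally, like a pushed install

cached_pull_and_run()
</pre>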
<p>Sure, I focused a bit on terminology in the blog, but the underlying assumptions are what I'm attacking. People are tossing around phrases like "thin computing", which have a meaning of their own, when what they really mean is "client/server network with advanced functionality" or (to use your term) "pseudo-thin client". Improper usage of terminology leads to miscommunication and misunderstanding. If I used the word "red" where most people use the word "pink", I wouldn't do a very good job working as a salesperson for clothing, particularly over the phone. If people misuse the term "thin computing", they aren't doing a very good job of communicating, especially if they are paid journalists.</p>
<p>J.Ja</p>

Thin Computing Is Rarely

by jdgeek In reply to Thin Computing Is Rarely ...

<p>I agree with you about terminology; sometimes I just can't help playing devil's advocate.</p>
<p>Another significant advantage to the web services approach is a kind of sandboxing. While you are correct that push vs. pull is probably not a major difference, there is a major difference in having only one application (i.e. the browser) run code natively. I believe this significantly decreases your security exposure.</p>
<p>Also, I assume it is easier to deploy non-standard, custom, or lightly used apps using a web service. Although I don't have any experience with SMS, by deploying apps through a web server you move to a two-phase develop-then-deploy model, as opposed to the SMS develop, package, deploy model. I'm not sure how difficult it is to package an application in SMS, but I'm sure that creating a cross-platform package that does not break existing applications is not trivial.</p>
<p>Anyhow, good work on the thin client blog and thanks for an interesting discussion.</p>

Thin Computing Is Rarely

by stress junkie In reply to Thin Computing Is Rarely ...

I agree with J.Ja. There have been many instances when a given term has enjoyed a clear definition for many years, and then one day people start to misuse it. The next thing you know, the original definition is lost. This is the inspiration for the expression "Newbies ruin everything," which I have been saying for a long time.

Thin Computing Is Rarely

by jcagle In reply to Thin Computing Is Rarely ...

I have to disagree with one point made in this article.

The article said no one should be saving files to their local drives, but to the server instead. With a thin client, that may be what you have to do.

However, I've found that working from the network is generally a bad idea. Networks go down sometimes, and they can be slowed down by too many people working from the network.

At my school, they recommend we work from the hard drive. I'm going to school for graphic design and web development. When you start working with Photoshop and Illustrator, you do not want to work over the network. We work from the hard drive and back up to the network server, USB Flash drive, CD-R, etc.

Maybe it won't seem like much of a problem on a smaller network if you're just handling Word documents or something. But I think in most cases, working from the hard drive and backing up to the server, flash drive, CD-R, etc. is the best idea.

Again, this isn't a thin computing situation, just standard IT practice. Trust me, I don't want to be working in Photoshop with a project due, working over the network, and then have the network go down or slow to a crawl because huge files are being pushed across it.
